NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, and the weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
A new optimal seam method for seamless image stitching
NASA Astrophysics Data System (ADS)
Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng
2017-07-01
A novel optimal seam method, which aims to stitch images with overlapping areas more seamlessly, is proposed. Because the traditional gradient-domain optimal seam method measures color difference poorly and fusion algorithms are time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied individually. The proposed method exhibits better performance in eliminating the stitching seam than the traditional gradient optimal seam method, and higher efficiency than the multi-band blending algorithm.
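A standard way to realize such an optimal seam is a dynamic-programming search for a minimum-cost path through an energy map over the overlap region. The sketch below illustrates that general idea only; the energy used here, an absolute V-channel difference between the two overlapping HSV images, and all names are illustrative assumptions, not the paper's actual energy function.

```python
import numpy as np

def optimal_seam(energy):
    """Find a minimal-cost vertical seam through an energy map via
    dynamic programming (one common realization of an 'optimal seam')."""
    h, w = energy.shape
    cost = energy.copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = np.argmin(cost[i - 1, lo:hi]) + lo
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    # Trace the seam back from the cheapest end point in the last row.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 1, 0, -1):
        seam.append(int(back[i, seam[-1]]))
    return seam[::-1]

# Example: energy as absolute V-channel difference of two overlapping
# HSV images (a stand-in for the paper's unspecified energy function).
overlap_a = np.random.rand(64, 32)   # V channel of image A's overlap
overlap_b = np.random.rand(64, 32)   # V channel of image B's overlap
seam = optimal_seam(np.abs(overlap_a - overlap_b))
```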
NASA Astrophysics Data System (ADS)
Zheng, Y.; Chen, J.
2017-09-01
A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Unlike traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated into the traditional framework. Kriging meta-models are built to approximate expensive or black-box functions. By applying Kriging meta-models, the number of function evaluations is decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the areas of the trapezoids formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure of whether the Pareto-optimal solutions have converged to the Pareto front. Illustrative examples indicate that, to obtain Pareto-optimal solutions, the proposed method needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II (NSGA-II), improving both accuracy and computational efficiency. The proposed method is also applied to the design of a deepwater composite riser, in which the structural performance is calculated by numerical analysis. The design aim was to maximize the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off between tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively handle multi-objective optimization with black-box functions.
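For the bi-objective case, the trapezoid index described above can be computed by sorting the non-dominated points along one objective and summing the trapezoid areas they form with that axis. A minimal sketch, assuming a minimization front and this particular area construction:

```python
import numpy as np

def trapezoid_index(front):
    """Trapezoid index for a bi-objective Pareto front: the summed area of
    the trapezoids formed by consecutive non-dominated points and the f1
    axis (a sketch of the paper's convergence measure; the exact
    construction is assumed)."""
    pts = front[np.argsort(front[:, 0])]          # sort by first objective
    f1, f2 = pts[:, 0], pts[:, 1]
    return float(np.sum(0.5 * (f2[:-1] + f2[1:]) * np.diff(f1)))

# As the front converges toward the true Pareto front of a minimization
# problem, this area shrinks and stabilizes between iterations.
front = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
print(trapezoid_index(front))
```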
Mai, Lan-Yin; Li, Yi-Xuan; Chen, Yong; Xie, Zhen; Li, Jie; Zhong, Ming-Yu
2014-05-01
The compatibility of traditional Chinese medicine (TCM) formulae, which contain enormous amounts of information, constitutes a complex component system. Applying mathematical statistics methods to research on the compatibility of TCM formulae has great significance for promoting the modernization of TCM and for improving the clinical efficacy and optimization of formulae. As a tool for quantitative analysis, data inference, and exploring the inherent rules of substances, mathematical statistics methods can be used to reveal the working mechanisms of the compatibility of TCM formulae both qualitatively and quantitatively. By reviewing studies based on the application of mathematical statistics methods, this paper summarizes the field from the perspectives of dosage optimization, efficacy, and changes of chemical components, as well as the rules of incompatibility and contraindication of formulae, and provides references for further studying and revealing the working mechanisms and connotations of TCM.
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, employing the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and efficiently approximates the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle of SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft, and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adapted for their resolution are discussed. This is followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during the design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape; engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture; resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. The design of a structure can also be generated by the traditional method and the stochastic design concept. Merits and limitations of the three methods (the traditional method, the optimization method, and the stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, and the weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite-difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analyses of the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
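The "predicted flow" idea amounts to replacing a full CFD re-analysis at each trial point of the one-dimensional search with a first-order Taylor extrapolation from the current design. A minimal sketch, assuming the flow state vector and its quasi-analytical sensitivities are already available (all names illustrative):

```python
import numpy as np

def predicted_flow(q0, dq_dbeta, dbeta):
    """First-order Taylor 'predicted flow' used during the 1-D search:
    q(beta0 + dbeta) ~= q(beta0) + (dq/dbeta) @ dbeta, avoiding a full
    CFD re-analysis at every trial design (illustrative sketch)."""
    return q0 + dq_dbeta @ dbeta

# q0: flow state at the current design; dq_dbeta: sensitivity matrix from
# the quasi-analytical method (both assumed available from the CFD code).
q0 = np.array([1.0, 0.5, 0.2])
dq_dbeta = np.array([[0.1, -0.3], [0.05, 0.2], [0.0, 0.4]])
print(predicted_flow(q0, dq_dbeta, np.array([0.01, -0.02])))
```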
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). The traditional maximum power point tracking (MPPT) algorithm can easily become trapped at a local maximum power point (MPP) and fail to find the global maximum power point (GMPP). To solve this problem, a global maximum power point tracking (GMPPT) method is developed that combines the traditional MPPT method with the particle swarm optimization (PSO) algorithm. Different tracking algorithms are used under different operating conditions of the PV cells: when the environment changes, the improved PSO algorithm is adopted to perform the global optimal search, and the variable-step incremental conductance (INC) method is adopted to achieve MPPT near the optimal local location. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, a comparative analysis of the tracking performance of the proposed control algorithm and the traditional MPPT method under uniform solar conditions and PSC validates the correctness, feasibility, and effectiveness of the proposed control strategy.
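A rough sketch of the global search stage, assuming a measurable P-V curve with several local peaks; the PSO constants and all names here are illustrative, and the paper's variable-step INC stage would take over near the returned voltage:

```python
import numpy as np

def pso_gmppt(pv_power, v_min, v_max, n=6, iters=30):
    """Sketch of a PSO-based global MPP search over the PV voltage range.
    pv_power(v) is a measured/modelled P-V curve that may have several
    local peaks under partial shading (names are illustrative)."""
    rng = np.random.default_rng(0)
    v = rng.uniform(v_min, v_max, n)          # particle positions (voltages)
    vel = np.zeros(n)
    pbest, pbest_p = v.copy(), np.array([pv_power(x) for x in v])
    for _ in range(iters):
        gbest = pbest[np.argmax(pbest_p)]
        r1, r2 = rng.random(n), rng.random(n)
        vel = 0.6 * vel + 1.8 * r1 * (pbest - v) + 1.8 * r2 * (gbest - v)
        v = np.clip(v + vel, v_min, v_max)
        p = np.array([pv_power(x) for x in v])
        better = p > pbest_p
        pbest[better], pbest_p[better] = v[better], p[better]
    return pbest[np.argmax(pbest_p)]          # hand off to INC near this point

# Toy double-peak P-V curve mimicking partial shading:
curve = lambda v: 100 * np.exp(-((v - 18) / 4) ** 2) + 70 * np.exp(-((v - 30) / 3) ** 2)
print(pso_gmppt(curve, 0.0, 40.0))
```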
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimation of scale parameter) tool is used to optimize the segmentation of the image. The distance matrix and minimum separation distance of all kinds of surface features are then calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency, and has good potential for wider use in damaged-building information extraction. In addition, the new method can be applied to images of damaged buildings at different resolutions, and the optimal observation scale can then be sought through accuracy evaluation. The results suggest that the optimal observation scale for damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm
Zhu, Yongyun
2018-01-01
In this paper, we propose a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained from inertial sensors, usually contain various types of noise, which affects the convergence rate and accuracy of the coarse alignment. Given this drawback, we studied an attitude-determination method named optimal-REQUEST, which is an optimal method for attitude determination based on observation vectors. Compared to the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional method. Within the proposed method, we developed an iterative method for determining the attitude quaternion. We carried out simulation and turntable tests to validate the proposed method's performance. The experimental results showed that the convergence rate of the proposed optimal-REQUEST algorithm is faster and that the coarse alignment's stability is higher. In summary, the proposed method has high applicability to practical systems. PMID:29337895
O'Leary, Kevin J; Devisetty, Vikram K; Patel, Amitkumar R; Malkenson, David; Sama, Pradeep; Thompson, William K; Landler, Matthew P; Barnard, Cynthia; Williams, Mark V
2013-02-01
Research supports medical record review using screening triggers as the optimal method to detect hospital adverse events (AEs), yet the method is labour-intensive. This study compared a traditional trigger tool with an enterprise data warehouse (EDW) based screening method to detect AEs. We created 51 automated queries based on 33 traditional triggers from prior research, and then applied them to 250 randomly selected medical patients hospitalised between 1 September 2009 and 31 August 2010. Two physicians each abstracted records from half the patients using a traditional trigger tool and then performed targeted abstractions for patients with positive EDW queries in the complementary half of the sample. A third physician confirmed the presence of AEs and assessed preventability and severity. The traditional trigger tool and EDW-based screening identified 54 (22%) and 53 (21%) patients with one or more AEs, respectively. Overall, 140 (56%) patients had one or more positive EDW screens (366 positive screens in total). Of the 137 AEs detected by at least one method, 86 (63%) were detected by the traditional trigger tool, 97 (71%) by EDW-based screening, and 46 (34%) by both methods. Of the 11 preventable AEs, 6 (55%) were detected by the traditional trigger tool, 7 (64%) by EDW-based screening, and 2 (18%) by both methods. Of the 43 serious AEs, 28 (65%) were detected by the traditional trigger tool, 29 (67%) by EDW-based screening, and 14 (33%) by both. We found relatively poor agreement between the traditional trigger tool and EDW-based screening, with only approximately a third of all AEs detected by both methods. A combination of complementary methods is the optimal approach to detecting AEs among hospitalised patients.
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied to forecasting the incidence of hepatitis B in Xinjiang, China. Four models, the traditional GM(1,1), the grey Verhulst model (GVM), the original nonlinear grey Bernoulli model (NGBM(1,1)), and the Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1), and Holt-Winters exponential smoothing method.
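For orientation, the sketch below shows the core NGBM(1,1) construction that the PSO layer would tune: fit the grey parameters a and b by least squares from the grey differential equation, then forecast through the time-response function. The exponent is fixed here for illustration, and the data are invented, not the Xinjiang incidence series:

```python
import numpy as np

def ngbm_fit_predict(x0, n_exp, steps=3):
    """Minimal NGBM(1,1) sketch: fit a, b by least squares from the grey
    differential equation x0(k) + a*z1(k) = b*z1(k)**n, then forecast.
    (In the paper's optimized model, PSO tunes parameters such as n_exp;
    here n_exp is fixed for illustration.)"""
    x1 = np.cumsum(x0)                                  # 1-AGO series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, z1 ** n_exp])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    def x1_hat(k):                                      # time-response function
        c = x1[0] ** (1 - n_exp) - b / a
        return (c * np.exp(-a * (1 - n_exp) * k) + b / a) ** (1 / (1 - n_exp))
    ks = np.arange(1, len(x0) + steps)
    x1_pred = np.array([x1_hat(k) for k in ks])
    return np.diff(np.concatenate([[x1[0]], x1_pred]))  # back to x0 scale

incidence = np.array([7.3, 7.9, 8.4, 8.8, 9.5, 10.1])  # illustrative data
print(ngbm_fit_predict(incidence, n_exp=0.5))
```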
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods have been proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design-of-experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from both the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low Earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum-dry-weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included, and recommendations for additional research are provided.
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
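A toy version of such a linear program, assuming invented costs, nutrient contents, and field requirements (not the study's data), can be set up with scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# Toy LP: minimize hauling cost of manure applied to two fields, subject
# to meeting each field's agronomic N requirement and not exceeding the
# manure supply (all numbers illustrative).
cost = [1.5, 2.5]                 # $/tonne hauled to field 1, field 2
# N delivered per tonne is 5 kg; fields need 400 and 300 kg N.
A_ub = [[-5.0, 0.0],              # -5*x1 <= -400  (field 1 N requirement)
        [0.0, -5.0],              # -5*x2 <= -300  (field 2 N requirement)
        [1.0, 1.0]]               # x1 + x2 <= 160 (manure supply, tonnes)
b_ub = [-400.0, -300.0, 160.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)             # tonnes to each field, total cost
```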
Heat transfer comparison of nanofluid filled transformer and traditional oil-immersed transformer
NASA Astrophysics Data System (ADS)
Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong
2018-05-01
Dispersing nanoparticles with high thermal conductivity into transformer oil is an innovative approach to improving the thermal performance of traditional oil-immersed transformers. This mixture, known as a nanofluid, has shown potential for practical application through experimental measurements. This paper compares a nanofluid-filled transformer and a traditional oil-immersed transformer in terms of their computational fluid dynamics (CFD) solutions from the perspective of optimal design. The thermal performance of transformers with the same parameters except for the coolant is compared. A further comparison of heat transfer is then made after minimizing the oil volume and maximum temperature rise of the two transformers. An adaptive multi-objective optimization method is employed to tackle this optimization problem.
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the gamma'(sub i)s and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
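When the relaxation times are fixed (say, one per decade), fitting the Prony series E(t) = E_inf + Σ E_i exp(-t/tau_i) reduces to a linear least-squares problem. A minimal sketch under that common simplification, using non-negative least squares for physically meaningful coefficients; the NASA method additionally treats the fit as a constrained design-optimization problem, which is not shown here:

```python
import numpy as np
from scipy.optimize import nnls

def fit_prony(t, E, taus):
    """Fit E(t) = E_inf + sum_i E_i * exp(-t / tau_i) to relaxation data
    by non-negative least squares, with relaxation times fixed one per
    decade (a common simplification; an illustrative sketch only)."""
    A = np.column_stack([np.ones_like(t)] +
                        [np.exp(-t / tau) for tau in taus])
    coeffs, _ = nnls(A, E)        # enforce physically meaningful E_i >= 0
    return coeffs                 # [E_inf, E_1, ..., E_n]

t = np.logspace(-2, 3, 60)
E_true = 1.0 + 5.0 * np.exp(-t / 0.1) + 2.0 * np.exp(-t / 10.0)
taus = np.logspace(-2, 3, 6)      # fixed decade-spaced relaxation times
print(fit_prony(t, E_true, taus))
```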
Deformation effect simulation and optimization for double front axle steering mechanism
NASA Astrophysics Data System (ADS)
Wu, Jungang; Zhang, Siqin; Yang, Qinglong
2013-03-01
This paper studies the tire-wear problem of heavy vehicles with a double front axle steering mechanism from the perspective of the flexible effect of the steering mechanism, and proposes a structural optimization method that uses both traditional static structural theory and a dynamic structural theory, the Equivalent Static Load (ESL) method, to optimize the key parts. Good simulation and test results show that this method has strong engineering practicality and reference value for the tire-wear problem in double front axle steering mechanism design.
Madu, C N; Quint, D J; Normolle, D P; Marsh, R B; Wang, E Y; Pierce, L J
2001-11-01
To delineate with computed tomography (CT) the anatomic regions containing the supraclavicular (SCV) and infraclavicular (IFV) nodal groups, to define the course of the brachial plexus, to estimate the actual radiation dose received by these regions in a series of patients treated in the traditional manner, and to compare these doses to those received with an optimized dosimetric technique. Twenty patients underwent contrast material-enhanced CT for the purpose of radiation therapy planning. CT scans were used to study the location of the SCV and IFV nodal regions by using outlining of readily identifiable anatomic structures that define the nodal groups. The brachial plexus was also outlined by using similar methods. Radiation therapy doses to the SCV and IFV were then estimated by using traditional dose calculations and optimized planning. A repeated measures analysis of covariance was used to compare the SCV and IFV depths and to compare the doses achieved with the traditional and optimized methods. Coverage by the 90% isodose surface was significantly decreased with traditional planning versus conformal planning as the depth to the SCV nodes increased (P < .001). Significantly decreased coverage by using the 90% isodose surface was demonstrated for traditional planning versus conformal planning with increasing IFV depth (P = .015). A linear correlation was found between brachial plexus depth and SCV depth up to 7 cm. Conformal optimized planning provided improved dosimetric coverage compared with standard techniques.
NASA Astrophysics Data System (ADS)
Ha, Taewoo; Lee, Howon; Sim, Kyung Ik; Kim, Jonghyeon; Jo, Young Chan; Kim, Jae Hoon; Baek, Na Yeon; Kang, Dai-ill; Lee, Han Hyoung
2017-05-01
We have established optimal methods for terahertz time-domain spectroscopic analysis of highly absorbing pigments in powder form based on our investigation of representative traditional Chinese pigments, such as azurite [blue-based color pigment], Chinese vermilion [red-based color pigment], and arsenic yellow [yellow-based color pigment]. To accurately extract the optical constants in the terahertz region of 0.1 - 3 THz, we carried out transmission measurements in such a way that intense absorption peaks did not completely suppress the transmission level. This required preparation of pellet samples with optimized thicknesses and material densities. In some cases, mixing the pigments with polyethylene powder was required to minimize absorption due to certain peak features. The resulting distortion-free terahertz spectra of the investigated set of pigment species exhibited well-defined unique spectral fingerprints. Our study will be useful to future efforts to establish non-destructive analysis methods of traditional pigments, to construct their spectral databases, and to apply these tools to restoration of cultural heritage materials.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are identified. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.
Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank
2017-12-01
Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over large thermal gradients and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We consider a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, the height of segmentation, the hot-side temperature, and the load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full-factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
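The Taguchi step reduces replicated responses to a signal-to-noise ratio and picks each factor's best level from its main effect. A minimal sketch with an invented 4-run array and data, not the paper's 25-experiment design:

```python
import numpy as np

# Sketch of the Taguchi analysis step: convert replicated responses to a
# larger-the-better S/N ratio, then pick the best level of each factor
# from its main effect (array and data below are illustrative).
def sn_larger_better(y):
    return -10.0 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

# 4-run, 3-factor orthogonal array (levels coded 0/1) and measured powers.
oa = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
power = [[4.1, 4.0], [5.2, 5.0], [4.8, 4.7], [6.1, 6.0]]  # two replicates
sn = np.array([sn_larger_better(y) for y in power])

for f in range(oa.shape[1]):
    effects = [sn[oa[:, f] == lvl].mean() for lvl in (0, 1)]
    print(f"factor {f}: best level {int(np.argmax(effects))}")
```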
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer-aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained only to reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, and do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives, MRE and mean classification error (MCE), for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on a geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process that begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current dose-constraint addition, a geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of the voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value that is inevitably caused by constraint addition, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases - head-and-neck, prostate, lung, and oropharyngeal - and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered to be an effective approach to improving product quality at a relatively low cost, and the Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss over the multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of which are concerned with maximizing the signal-to-noise (SN) ratios of the parameter combination. The results reveal two advantages of this approach: the optimal parameter design is the same as that of the traditional Taguchi method for a single quality characteristic, and the optimal design maximizes the reduction of total quality loss over multiple quality characteristics. This paper presents a literature review on solving multi-response problems with the Taguchi method and its successful implementation in various industries.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
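A minimal GA sketch in the spirit described above, sizing truss member areas for minimum weight with a stress-violation penalty; the member data and the constant-force "analysis" are illustrative stand-ins, not the vehicle model:

```python
import numpy as np

# Minimal GA sketch for truss member sizing: minimize weight with a
# stress-violation penalty (all constants illustrative).
rng = np.random.default_rng(1)
n_members, pop_size = 5, 40
lengths = np.array([1.0, 1.0, 1.4, 1.4, 2.0])          # m
forces = np.array([1e4, 8e3, 6e3, 6e3, 1.2e4])         # N (assumed constant)
sigma_allow, rho = 250e6, 2700.0                        # Pa, kg/m^3

def penalized_weight(areas):
    weight = rho * np.sum(lengths * areas)              # kg
    violation = np.maximum(forces / areas - sigma_allow, 0.0)
    return weight + 1e-6 * np.sum(violation)            # stress penalty

pop = rng.uniform(1e-5, 1e-3, (pop_size, n_members))    # areas in m^2
for _ in range(200):
    fit = np.array([penalized_weight(ind) for ind in pop])
    parents = pop[np.argsort(fit)[: pop_size // 2]]     # truncation selection
    a = parents[rng.integers(0, len(parents), pop_size // 2)]
    b = parents[rng.integers(0, len(parents), pop_size // 2)]
    mask = rng.random(a.shape) < 0.5                    # uniform crossover
    kids = np.where(mask, a, b) * rng.normal(1.0, 0.05, a.shape)  # + mutation
    pop = np.vstack([parents, np.clip(kids, 1e-6, None)])
best = pop[np.argmin([penalized_weight(ind) for ind in pop])]
print(best)                                             # near-minimum-weight areas
```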
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
A biomedical photoacoustic (PA) signal is characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among the different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by brute force would cost too much time, which prevents this method from practical use. To find the parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms are selected to search for the optimal parameters of the IMFs: the simulated annealing algorithm, a probabilistic method for approximating the global optimum, and the artificial bee colony algorithm, an optimization method inspired by the foraging behavior of bee swarms. The effectiveness of the proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which shows its potential for future clinical PA imaging de-noising.
Soil quality assessment using weighted fuzzy association rules
Xue, Yue-Ju; Liu, Shu-Guang; Hu, Yue-Ming; Yang, Jing-Feng
2010-01-01
Fuzzy association rules (FARs) can be powerful in assessing regional soil quality, a critical step prior to land planning and utilization; however, traditional FARs mined from soil quality database, ignoring the importance variability of the rules, can be redundant and far from optimal. In this study, we developed a method applying different weights to traditional FARs to improve accuracy of soil quality assessment. After the FARs for soil quality assessment were mined, redundant rules were eliminated according to whether the rules were significant or not in reducing the complexity of the soil quality assessment models and in improving the comprehensibility of FARs. The global weights, each representing the importance of a FAR in soil quality assessment, were then introduced and refined using a gradient descent optimization method. This method was applied to the assessment of soil resources conditions in Guangdong Province, China. The new approach had an accuracy of 87%, when 15 rules were mined, as compared with 76% from the traditional approach. The accuracy increased to 96% when 32 rules were mined, in contrast to 88% from the traditional approach. These results demonstrated an improved comprehensibility of FARs and a high accuracy of the proposed method.
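The weight-refinement step can be pictured as follows: each fired rule votes for a quality score, the assessment is the weighted average of the votes, and the global weights are adjusted by gradient descent against expert labels. Everything below (firing matrix, scores, labels, the finite-difference gradient) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

# Sketch of gradient-descent refinement of global rule weights.
fired = np.array([[1.0, 0.2, 0.0],        # membership degree of each rule
                  [0.0, 0.9, 0.4],        # for each of 4 samples
                  [0.7, 0.0, 0.8],
                  [0.3, 0.5, 0.1]])
rule_score = np.array([0.9, 0.5, 0.2])    # quality grade each rule asserts
label = np.array([0.85, 0.45, 0.5, 0.6])  # expert assessment per sample

def predict(w):
    act = fired * w                       # weighted rule activations
    return (act @ rule_score) / (act.sum(axis=1) + 1e-9)

w, lr = np.ones(3), 0.1                   # global rule weights (to refine)
for _ in range(500):
    loss = np.sum((predict(w) - label) ** 2)
    grad = np.zeros_like(w)               # finite-difference gradient
    for j in range(len(w)):
        w2 = w.copy(); w2[j] += 1e-5
        grad[j] = (np.sum((predict(w2) - label) ** 2) - loss) / 1e-5
    w = np.clip(w - lr * grad, 0.0, None)
print(w)
```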
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
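For reference, the traditional mono-exponential back-extrapolation step that the paper revisits looks like this: fit log-concentration over an early time window, extrapolate to t = 0, and divide the dose by the extrapolated concentration. The data and window below are synthetic:

```python
import numpy as np

def plasma_volume_backextrap(t, conc, dose_mg, fit_window):
    """Traditional mono-exponential back-extrapolation: fit log C(t) over
    a chosen window, extrapolate to t = 0, and take PV = dose / C(0).
    The paper's proposed variant changes the extrapolation rule based on
    its kinetic model; this sketch shows only the classical step."""
    sel = (t >= fit_window[0]) & (t <= fit_window[1])
    slope, intercept = np.polyfit(t[sel], np.log(conc[sel]), 1)
    c0 = np.exp(intercept)            # back-extrapolated concentration
    return dose_mg / c0               # plasma volume (units of dose/conc)

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # minutes post-injection
conc = 8.0 * np.exp(-0.25 * t)                   # mg/L, synthetic decay
print(plasma_volume_backextrap(t, conc, dose_mg=25.0, fit_window=(2, 5)))
```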
Automated Cellient™ cytoblocks: better, stronger, faster?
Prendeville, S; Brosnan, T; Browne, T J; McCarthy, J
2014-12-01
Cytoblocks (CBs), or cell blocks, provide additional morphological detail and a platform for immunocytochemistry (ICC) in cytopathology. The Cellient™ system produces CBs in 45 minutes using methanol fixation, compared with traditional CBs, which require overnight formalin fixation. This study compares the Cellient and traditional CB methods in terms of cellularity, morphology and immunoreactivity, evaluates the potential to add formalin fixation to the Cellient method for ICC studies, and determines the optimal sectioning depth for maximal cellularity in Cellient CBs. One hundred and sixty CBs were prepared from 40 cytology samples (32 malignant, eight benign) using four processing methods: (A) traditional; (B) Cellient (methanol fixation); (C) Cellient using additional formalin fixation for 30 minutes; (D) Cellient using additional formalin fixation for 60 minutes. Haematoxylin and eosin-stained sections were assessed for cellularity and morphology. ICC was assessed on 14 cases with a panel of antibodies. Three additional Cellient samples were serially sectioned to determine the optimal sectioning depth. Scoring was performed by two independent, blinded reviewers. For malignant cases, morphology was superior with Cellient relative to traditional CBs (P < 0.001). Cellularity was comparable across all methods. ICC was excellent in all groups, and the addition of formalin at any stage during the Cellient process did not influence staining quality. Serial sectioning through Cellient CBs showed optimum cellularity at 30-40 μm, with at least 27 sections obtainable. Cellient CBs provide superior morphology to traditional CBs and, if required, formalin fixation may be added to the Cellient process for ICC. Optimal Cellient CB cellularity is achieved at 30-40 μm, which will impact the handling of cases in daily practice.
The research on the mean shift algorithm for target tracking
NASA Astrophysics Data System (ADS)
CAO, Honghong
2017-06-01
The traditional mean shift algorithm for target tracking is effective and highly real-time, but it still has some shortcomings: it easily falls into a local optimum during the tracking process, it is less effective when the object moves fast, and the size of its tracking window never changes, so it fails when the size of the moving object changes. As a result, we propose a new method that uses the particle swarm optimization algorithm to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparative experiments. The experimental results indicate that the proposed method can effectively track the object while adapting the size of the tracking window.
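The core mean shift update moves the tracking window to the centroid of the back-projection weights inside it; the paper's contribution layers PSO restarts and SIFT-based window rescaling on top of this step. A minimal sketch with a synthetic weights map:

```python
import numpy as np

def mean_shift_step(weights_map, window):
    """One mean-shift update: move the window centre to the centroid of
    the back-projection weights inside it (minimal sketch only)."""
    x0, y0, w, h = window
    patch = weights_map[y0:y0 + h, x0:x0 + w]
    ys, xs = np.mgrid[0:h, 0:w]
    total = patch.sum() + 1e-9
    dx = (xs * patch).sum() / total - (w - 1) / 2.0
    dy = (ys * patch).sum() / total - (h - 1) / 2.0
    return (int(round(x0 + dx)), int(round(y0 + dy)), w, h)

# weights_map would come from histogram back-projection of the target model.
weights = np.zeros((60, 60)); weights[30:40, 35:45] = 1.0
win = (20, 20, 20, 20)
for _ in range(10):
    win = mean_shift_step(weights, win)
print(win)   # window centre converges onto the bright blob
```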
Modeling of biological intelligence for SCM system optimization.
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.
Airline Maintenance Manpower Optimization from the De Novo Perspective
NASA Astrophysics Data System (ADS)
Liou, James J. H.; Tzeng, Gwo-Hshiung
Human resource management (HRM) is an important issue in today's competitive airline market. In this paper, we discuss a multi-objective model designed from the De Novo perspective to help airlines optimize their maintenance manpower portfolio. The effectiveness of the model and solution algorithm is demonstrated in an empirical study of the optimization of the human resources needed for airline line maintenance. Both De Novo and traditional multiple objective programming (MOP) methods are analyzed. A comparison of the results with those of traditional MOP indicates that the proposed model and solution algorithm provide better performance and an improved human resource portfolio.
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
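At the heart of the fully stressed family of methods is the stress-ratio resizing rule a_new = a_old·|σ|/σ_allow, iterated with re-analysis until member stresses reach their limits. A minimal sketch, using a statically determinate stand-in for the Integrated Force Method analysis; MFUD's displacement handling is not shown:

```python
import numpy as np

def fsd_resize(areas, stresses, sigma_allow, n_iter=10, analyze=None):
    """Stress-ratio resizing at the heart of fully stressed design:
    a_new = a_old * |sigma| / sigma_allow, iterated to convergence.
    `analyze(areas)` must return member stresses (a stand-in for the
    structural analysis used in the paper)."""
    for _ in range(n_iter):
        areas = areas * np.abs(stresses) / sigma_allow
        stresses = analyze(areas)
    return areas

# Statically determinate stand-in: member force fixed, stress = F / A.
forces = np.array([1.0e4, -6.0e3, 8.0e3])             # N
analyze = lambda a: forces / a
areas = np.full(3, 1.0e-4)                            # m^2, initial guess
print(fsd_resize(areas, analyze(areas), 200e6, analyze=analyze))
```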
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar, and due to the capability limitations of existing radar facilities, a large number of ground-based radars need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (station layout) of a ground-based radar surveillance network is thus a problem that needs to be solved. The traditional method for embattling optimization of a ground-based radar surveillance network is to run detection simulations for all possible stations with cataloged data, make a comprehensive comparative analysis of the various simulation results with a combinational method, and then select an optimal result as the station layout scheme. This method is time-consuming for a single simulation and computationally complex for the combinational analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be solved by the traditional method, and no better way to solve it has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of ground-based radar is simplified and a space-coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space-object orbital motion. After these two simplifications, the computational complexity of target detection is greatly reduced, and simulation results show the correctness of the simplified models. In addition, the detection areas of the ground-based radar network can be easily computed with the simplified models, and the embattling of the ground-based radar surveillance network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle
NASA Technical Reports Server (NTRS)
Benford, Andrew
2003-01-01
The purpose of this paper is to utilize the optimization method of genetic algorithms (GAs) for truss design on a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GAs' capabilities, other traditional optimization methods were used to compare the results obtained by the GAs, first on simple 2-D structures, and eventually on full-scale 3-D truss designs.
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori
2013-01-01
Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VADs), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters, leading to sub-optimal solutions. In this paper, we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a more optimal design. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared to a traditional manual design method.
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability to contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at the timely and effective detection of (un)intentional intrusion events. However, the available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While simple to implement, this assumption may not conform to the fact that nodal contamination probability can vary significantly by region owing to variations in population density and user properties. Furthermore, low computational efficiency is another important factor that has seriously hampered the practical application of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method that explicitly accounts for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than with the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
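As a minimal illustration of the simulated annealing side of such stochastic search, the sketch below anneals a standard multimodal test function (the Rastrigin function, used here as a stand-in for a multimodal design space); the cooling schedule and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    # A standard multimodal test function with many local optima.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x = rng.uniform(-5.12, 5.12, 2)
fx, T = rastrigin(x), 10.0
for step in range(20000):
    cand = np.clip(x + rng.normal(0, 0.5, x.shape), -5.12, 5.12)
    fc = rastrigin(cand)
    # Metropolis criterion: always accept improvements; accept worse moves
    # with a probability that decays as the temperature cools.
    if fc < fx or rng.random() < np.exp((fx - fc) / T):
        x, fx = cand, fc
    T *= 0.9995
print("best found:", x, fx)
```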
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
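For context, a toy sketch of Tikhonov-regularized inversion with a residual-based scan over the regularization parameter is given below; it uses a discrepancy-style criterion as a stand-in for the authors' MRM rule, and the linear system is synthetic rather than a real DOT forward model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ill-posed linear problem y = A x + noise, mimicking a DOT update step.
A = rng.normal(size=(40, 100))             # under-determined sensitivity matrix
x_true = np.zeros(100)
x_true[45:55] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=40)

def tikhonov(lmbda):
    # Tikhonov solution x = (A^T A + lambda I)^{-1} A^T y.
    return np.linalg.solve(A.T @ A + lmbda * np.eye(100), A.T @ y)

# Scan lambda; keep the value whose residual best matches the noise level.
noise_norm = 0.01 * np.sqrt(40)
lambdas = np.logspace(-6, 2, 50)
best = min(lambdas,
           key=lambda l: abs(np.linalg.norm(A @ tikhonov(l) - y) - noise_norm))
print("selected lambda:", best)
```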
Experimental validation of structural optimization methods
NASA Technical Reports Server (NTRS)
Adelman, Howard M.
1992-01-01
The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.
Zhang, Honglei; Ding, Jincheng; Zhao, Zengdian
2012-11-01
Traditional heating and microwave-assisted methods for biodiesel production using a cation ion-exchange resin particle (CERP)/PES catalytic membrane were comparatively studied to identify an economical and effective way to utilize free fatty acids (FFAs) from waste cooking oil (WCO). The optimal esterification conditions of the two methods were investigated, and the experimental results showed that microwave irradiation exhibited a remarkable enhancement of esterification compared with the traditional heating method. The FFA conversion of microwave-assisted esterification reached 97.4% under the optimal conditions of reaction temperature 60°C, methanol/acidified oil mass ratio 2.0:1, catalytic membrane (annealed at 120°C) loading 3 g, microwave power 360 W and reaction time 90 min. The results showed that applying microwave irradiation is a fast, easy and green way to produce biodiesel. Copyright © 2012 Elsevier Ltd. All rights reserved.
Erva, Rajeswara Reddy; Goswami, Ajgebi Nath; Suman, Priyanka; Vedanabhatla, Ravali; Rajulapati, Satish Babu
2017-03-16
The culture conditions and nutritional rations influencing the production of an extracellular antileukemic enzyme by the novel Enterobacter aerogenes KCTC2190/MTCC111 were optimized in shake-flask culture. Process variables such as pH, temperature, incubation time, carbon and nitrogen sources, inducer concentration, and inoculum size were taken into account. In the present study, the highest enzyme activity achieved by the traditional one-variable-at-a-time method was 7.6 IU/mL, a 2.6-fold increase compared to the initial value. The L-asparaginase production was further optimized using response surface methodology, and the validated experimental result at the optimized process variables gave 18.35 IU/mL of L-asparaginase activity, which is 2.4 times higher than the traditional optimization approach. The study establishes E. aerogenes MTCC111 as a potent bacterial source for high yields of the antileukemic drug.
Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.
Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravity forces from two bodies, and the centrifugal force acting on a third body, cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high-efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant shortcomings are: (1) an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small, allowing the algorithm to become trapped in local minima, and (3) gradient information is not always accessible nor always trustworthy for a given problem. To avoid these problems, this dissertation focuses on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon L1 Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic that was originally written to model birds swarming to locate hidden food sources. The PSO method enables exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches. In summary, the results of this dissertation show that the PSO method does, indeed, successfully optimize the low-thrust trajectory transfer problem without the need for an initial guess. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation, by at least an order of magnitude in terms of CPU time. Finally, the PSO method is also used to solve a traditional two-burn impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both Δv and time of flight.
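A generic PSO loop of the kind described can be sketched as follows; the objective here is a cheap placeholder standing in for the low-thrust trajectory cost, and the swarm parameters are common textbook values rather than those used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    # Placeholder objective standing in for a trajectory cost (e.g. Delta-v);
    # a real problem would propagate the low-thrust dynamics here.
    return np.sum((x - 1.5) ** 2) + np.sin(5 * x).sum()

n, dim = 30, 4
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for it in range(300):
    r1, r2 = rng.random((2, n, dim))
    # Standard PSO update: inertia + cognitive pull + social pull.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("global best:", gbest, cost(gbest))
```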
Development and Application of Collaborative Optimization Software for Plate - fin Heat Exchanger
NASA Astrophysics Data System (ADS)
Chunzhen, Qiao; Ze, Zhang; Jiangfeng, Guo; Jian, Zhang
2017-12-01
This paper introduces the design of calculation software for plate-fin heat exchangers, with application examples. Because of the large amount of calculation involved in designing and optimizing heat exchangers, Visual Basic 6.0 was used as the development platform for a basic calculation program that reduces the computational workload. The design case is a plate-fin heat exchanger sized for boiler tail flue gas, and the software is based on the traditional design method for plate-fin heat exchangers. Using the software for the design and calculation of plate-fin heat exchangers effectively reduces the amount of computation while producing results comparable to traditional methods, giving it high practical value.
Handwritten digits recognition using HMM and PSO based on strokes
NASA Astrophysics Data System (ADS)
Yan, Liao; Jia, Zhenhong; Yang, Jie; Pang, Shaoning
2010-07-01
A new method for handwritten digit recognition based on the hidden Markov model (HMM) and particle swarm optimization (PSO) is proposed. The method defines 24 directional strokes, which compensates for the sensitivity of traditional methods to the choice of starting point and also reduces the ambiguity caused by shaking. It makes use of the excellent global convergence of PSO, markedly improving the probability of finding the global optimum and avoiding local minima. Experimental results demonstrate that, compared with traditional methods, the proposed method improves the recognition rate for most handwritten digits.
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patient characteristics and treatment history to improve performance, hence enjoying the advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
Study on light weight design of truss structures of spacecrafts
NASA Astrophysics Data System (ADS)
Zeng, Fuming; Yang, Jianzhong; Wang, Jian
2015-08-01
Truss structures are usually adopted as the main structural form for spacecraft due to their high efficiency in supporting concentrated loads. Light-weight design is now the primary concern during the conceptual design of spacecraft. Implementation of light-weight design on a truss structure typically goes through three processes: topology optimization, size optimization and composites optimization. During each optimization process, an appropriate algorithm is selected, such as the traditional optimality criterion method, a mathematical programming method, or an intelligent algorithm that simulates the growth and evolution processes found in nature. This paper summarizes the implementation of light-weight design on truss structures for spacecraft according to these practical processes and algorithms, combined with engineering practice and commercial software.
Research on cutting path optimization of sheet metal parts based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.
2017-09-01
In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm was proposed in this paper. The cutting path optimization problem of sheet metal parts was taken as the research object, and the essence and optimization goal of the problem were presented. The traditional serial cutting constraint rule was improved, and a cutting constraint rule with cross cutting was proposed. The contour lines of the parts were discretized and a mathematical model of cutting path optimization was established; the problem was thus converted into a selection problem over the contour lines of the parts. The ant colony algorithm was used to solve the problem, and the principle and steps of the algorithm were analyzed.
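A minimal ant colony sketch for ordering cut entry points is shown below; the points, pheromone settings, and transition rule are generic ACO choices and do not reproduce the paper's improved cross-cutting constraint rule.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical entry points of part contours on a metal sheet (x, y in mm).
pts = rng.uniform(0, 500, (12, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(12) * 1e9

tau = np.ones((12, 12))            # pheromone trails
best_len, best_tour = np.inf, None
for it in range(100):
    for ant in range(20):
        tour, unvisited = [0], set(range(1, 12))
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            # Transition rule: pheromone^alpha * (1/distance)^beta.
            w = tau[i, cand] ** 1.0 * (1.0 / d[i, cand]) ** 2.0
            tour.append(rng.choice(cand, p=w / w.sum()))
            unvisited.discard(tour[-1])
        length = sum(d[a, b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= 0.9                     # pheromone evaporation
    for a, b in zip(best_tour, best_tour[1:]):
        tau[a, b] += 1.0 / best_len  # reinforce the best tour found so far
print("best cutting order:", best_tour, "length:", round(best_len, 1))
```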
Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Huo, X.
2017-12-01
Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
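The core adaptive surrogate loop behind such methods can be sketched as follows; the Gaussian process stands in for the surrogate, and the "expensive model" is a cheap placeholder, so this conveys only the flavor of ASMO, not its actual implementation.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)

def expensive_model(x):
    # Stand-in for a costly dynamic-model evaluation (e.g. a CoLM run).
    return np.sin(3 * x[0]) + 0.5 * (x[0] - 0.3) ** 2

X = rng.uniform(-2, 2, (8, 1))              # initial design points
y = np.array([expensive_model(x) for x in X])

for it in range(15):                        # adaptive refinement loop
    gp = GaussianProcessRegressor().fit(X, y)
    # Minimize the cheap surrogate instead of the expensive model.
    res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                   x0=X[np.argmin(y)], bounds=[(-2, 2)])
    X = np.vstack([X, res.x])
    y = np.append(y, expensive_model(res.x))  # one new true evaluation
print("best input:", X[np.argmin(y)], "best output:", y.min())
```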
Intelligent vehicle electrical power supply system with central coordinated protection
NASA Astrophysics Data System (ADS)
Yang, Diange; Kong, Weiwei; Li, Bing; Lian, Xiaomin
2016-07-01
Current research on vehicle electrical power supply systems mainly focuses on electric vehicles (EVs) and hybrid electric vehicles (HEVs). The vehicle electrical power supply systems used in traditional fuel vehicles are rather simple and imperfect; electrical/electronic devices (EEDs) applied in vehicles are usually directly connected to the vehicle's battery. With increasing numbers of EEDs being applied in traditional fuel vehicles, vehicle electrical power supply systems should be optimized and improved so that they can work more safely and more effectively. In this paper, a new vehicle electrical power supply system for traditional fuel vehicles, which accounts for all electrical/electronic devices and complex working conditions, is proposed based on a smart electrical/electronic device (SEED) system. Working as an independent intelligent electrical power supply network, the proposed system is isolated from the electrical control module and communication network, and access to the vehicle system is made through a bus interface. This results in a clean controller power supply with no electromagnetic interference. A new practical battery state of charge (SoC) estimation method is also proposed to achieve more accurate SoC estimation for lead-acid batteries in traditional fuel vehicles, so that the intelligent power system can monitor the status of the battery and detect an over-current state in each power channel. Optimized protection methods are also used to ensure power supply safety. Experiments and tests on a traditional fuel vehicle show that the battery SoC is calculated quickly and accurately enough for battery over-discharge protection. Over-current protection is achieved, and the entire vehicle's power utilization is optimized. For traditional fuel vehicles, the proposed vehicle electrical power supply system is comprehensive and has a unified system architecture, enhancing system reliability and security.
Kernel optimization for short-range molecular dynamics
NASA Astrophysics Data System (ADS)
Hu, Changjun; Wang, Xianmeng; Li, Jianjiang; He, Xinfu; Li, Shigang; Feng, Yangde; Yang, Shaofeng; Bai, He
2017-02-01
To optimize short-range force computations in Molecular Dynamics (MD) simulations, multi-threading and SIMD optimizations are presented in this paper. With respect to multi-threading optimization, a Partition-and-Separate-Calculation (PSC) method is designed to avoid write conflicts caused by using Newton's third law. Serial bottlenecks are eliminated with no additional memory usage. The method is implemented using the OpenMP model. Furthermore, the PSC method is employed on Intel Xeon Phi coprocessors in both native and offload models. We also evaluate the performance of the PSC method under different thread affinities on the MIC architecture. For the SIMD execution, we analyze the performance impact of the "if-clause" of the cutoff radius check in the PSC method. The experimental results show that our PSC method is more efficient than some traditional methods. In double precision, our 256-bit SIMD implementation is about 3 times faster than the scalar version.
Reentry trajectory optimization based on a multistage pseudospectral method.
Zhao, Jiang; Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state transitions. The full glide trajectory consists of several optimal trajectory sequences. Geographic constraints in actual flight, a recent focus of attention, are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight show the feasible application of the multistage pseudospectral method to reentry trajectory optimization.
Vidal, Victoria L; Ohaeri, Beatrice M; John, Pamela; Helen, Delles
2013-01-01
This quasi-experimental study, with a control group and an experimental group, compares the effectiveness of virtual reality simulators in developing the phlebotomy skills of nursing students with that of traditional teaching methods. Performance of actual phlebotomy on a live client was assessed after training, using a standardized form. Findings showed that students who were exposed to the virtual reality simulator performed better on the following performance metrics: pain factor, hematoma formation, and number of reinsertions. This study confirms that using the virtual reality-based system to supplement the traditional method may be the optimal program for training.
DOMe: A deduplication optimization method for the NewSQL database backups
Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng
2017-01-01
Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging database system that is now being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, resulting in a lot of duplicated data. The traditional deduplication method is not optimized for the NewSQL server system and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research pointed out that the future NewSQL server will have thousands of CPU cores, large DRAM and huge NVRAM. Therefore, how to utilize these hardware resources to optimize the performance of data deduplication is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To take advantage of the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, which is the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the performance bottleneck of the fingerprint index in traditional deduplication methods. H-store is used as a typical NewSQL database system to implement the DOMe method. DOMe is experimentally analyzed on two representative backup datasets. The experimental results show that: 1) DOMe can reduce duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing CDC algorithms: given a theoretical server speedup ratio of 20.8, DOMe achieves a speedup of up to 18; 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
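A toy sketch of the two ingredients named above, content-defined chunking and an in-memory fingerprint index, follows; the rolling hash, chunk-size parameters, and single-threaded structure are simplifications (DOMe's fork-join parallelization is omitted).

```python
import hashlib

def cdc_chunks(data, mask=0x3FF, min_size=256):
    """Content-defined chunking: cut where a rolling hash hits a boundary."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF       # toy rolling hash
        if i - start >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

index = {}                                    # in-memory fingerprint index
def deduplicate(backup):
    stored = 0
    for chunk in cdc_chunks(backup):
        fp = hashlib.sha1(chunk).hexdigest()  # chunk fingerprint
        if fp not in index:                   # store only unseen chunks
            index[fp] = chunk
            stored += len(chunk)
    return stored

data = bytes(range(256)) * 64
print("unique bytes stored:", deduplicate(data + data))  # 2nd copy deduped
```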
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim is to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
A predictive machine learning approach for microstructure optimization and materials design
NASA Astrophysics Data System (ADS)
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok
2015-06-01
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms
Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...
2016-09-21
In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalues/eigenvectors of the graph Laplacian, and this is a self-contained module that can be used in conjunction with other graph-Laplacian based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
A Scalable and Robust Multi-Agent Approach to Distributed Optimization
NASA Technical Reports Server (NTRS)
Tumer, Kagan
2005-01-01
Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion," and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents), the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation), the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
Phase-Division-Based Dynamic Optimization of Linkages for Drawing Servo Presses
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Gang; Wang, Li-Ping; Cao, Yan-Ke
2017-11-01
Existing linkage-optimization methods are designed for mechanical presses; few can be directly used for servo presses, so development of the servo press is limited. Based on the complementarity of linkage optimization and motion planning, a phase-division-based linkage-optimization model for a drawing servo press is established. Considering the motion-planning principles of a drawing servo press, and taking account of work rating and efficiency, the constraints of the optimization model are constructed. Linkage is optimized in two modes: use of either constant eccentric speed or constant slide speed in the work segments. The performances of optimized linkages are compared with those of a mature linkage SL4-2000A, which is optimized by a traditional method. The results show that the work rating of a drawing servo press equipped with linkages optimized by this new method improved and the root-mean-square torque of the servo motors is reduced by more than 10%. This research provides a promising method for designing energy-saving drawing servo presses with high work ratings.
Aerodynamic Shape Optimization Using Hybridized Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2003-01-01
An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application.) The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
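For readers unfamiliar with DE, a short usage sketch follows; it relies on SciPy's differential_evolution, with a cheap multimodal surrogate standing in for the Navier-Stokes evaluation, and polish=True as a simple stand-in for the hybridization with a local search.

```python
import numpy as np
from scipy.optimize import differential_evolution

def design_cost(x):
    # Stand-in for an expensive CFD evaluation of an airfoil parameterized
    # by x; a cheap multimodal surrogate is used here instead.
    return np.sum(x**2) + 2.0 * np.sum(1 - np.cos(3 * x))

bounds = [(-2.0, 2.0)] * 6          # six hypothetical shape parameters
result = differential_evolution(design_cost, bounds,
                                mutation=(0.5, 1.0), recombination=0.7,
                                polish=True, seed=0)
# polish=True runs a local gradient-based search on the DE result,
# a simple form of the hybridization strategy described above.
print(result.x, result.fun)
```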
Image gathering and restoration - Information and visual quality
NASA Technical Reports Server (NTRS)
Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.
1989-01-01
A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements in the visual quality obtained by traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.
New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise
NASA Astrophysics Data System (ADS)
Pal'a, Jozef; Ušák, Elemír
2016-03-01
A new method of magnetic Barkhausen noise (MBN) measurement, with optimization of the measured data processing for non-destructive evaluation of ferromagnetic materials, was tested. Using this method, we investigated whether the sensitivity and stability of measurement results can be enhanced by replacing the traditional MBN parameter (root mean square) with a new parameter. In the tested method, a complex set of MBN signals from minor hysteresis loops is measured. Afterward, the MBN data are collected into suitably designed matrices, and the MBN parameters with maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. It was shown that the proposed measuring method and data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes in the applied field amplitude while being noticeably more sensitive to the evaluated variable.
A Design Problem of Assembly Line Systems using Genetic Algorithm under the BTO Environment
NASA Astrophysics Data System (ADS)
Abe, Kazuaki; Yamada, Tetsuo; Matsui, Masayuki
Under the BTO environment, stochastic assembly lines require design methods that shorten not only the production lead time but also the ready time for the line design. We propose a design method for the Assembly Line Systems (ALS) of Yamada et al. (2001) using a Genetic Algorithm (GA) and the Adam-Eve GA, in which all design variables are determined in consideration of constraints such as the line length related to the production lead time. First, an ALS model with a line length constraint is introduced, and an optimal design problem is set to maximize the net reward under a shorter lead time. Next, a simulation optimization method is developed using the Adam-Eve GA and the traditional GA. Finally, an optimal design example is shown and discussed by comparing the two-stage design of Yamada et al. (2001) with both GA designs. It is shown that the Adam-Eve GA is superior to the traditional GA design in terms of computational time, though there is only a slight difference in net reward.
Towards an Optimized Method of Olive Tree Crown Volume Measurement
Miranda-Fuentes, Antonio; Llorens, Jordi; Gamarra-Diezma, Juan L.; Gil-Ribes, Jesús A.; Gil, Emilio
2015-01-01
Accurate crown characterization of large isolated olive trees is vital for adjusting spray doses in three-dimensional crop agriculture. Among the many methodologies available, laser sensors have proved to be the most reliable and accurate. However, their operation is time consuming and requires specialist knowledge, so a simpler crown characterization method is required. To this end, three methods were evaluated and compared with LiDAR measurements to determine their accuracy: the Vertical Crown Projected Area method (VCPA), the Ellipsoid Volume method (VE) and the Tree Silhouette Volume method (VTS). Trials were performed in three different kinds of olive tree plantations: intensive, adapted one-trunked traditional, and traditional. In total, 55 trees were characterized. Results show that all three methods are appropriate for estimating crown volume, reaching high coefficients of determination: R2 = 0.783, 0.843 and 0.824 for VCPA, VE and VTS, respectively. However, discrepancies arise when evaluating tree plantations separately, especially for traditional trees. Here, correlations between LiDAR volume and other parameters showed that the mean vector calculated for the VCPA method had the highest correlation for traditional trees; thus its use in traditional plantations is highly recommended. PMID:25658396
Chen, Shuo; Ong, Yi Hong; Lin, Xiaoqian; Liu, Quan
2015-01-01
Raman spectroscopy has shown great potential in biomedical applications. However, intrinsically weak Raman signals cause slow data acquisition especially in Raman imaging. This problem can be overcome by narrow-band Raman imaging followed by spectral reconstruction. Our previous study has shown that Raman spectra free of fluorescence background can be reconstructed from narrow-band Raman measurements using traditional Wiener estimation. However, fluorescence-free Raman spectra are only available from those sophisticated Raman setups capable of fluorescence suppression. The reconstruction of Raman spectra with fluorescence background from narrow-band measurements is much more challenging due to the significant variation in fluorescence background. In this study, two advanced Wiener estimation methods, i.e. modified Wiener estimation and sequential weighted Wiener estimation, were optimized to achieve this goal. Both spontaneous Raman spectra and surface enhanced Raman spectra were evaluated. Compared with traditional Wiener estimation, two advanced methods showed significant improvement in the reconstruction of spontaneous Raman spectra. However, traditional Wiener estimation can work as effectively as the advanced methods for SERS spectra but much faster. The wise selection of these methods would enable accurate Raman reconstruction in a simple Raman setup without the function of fluorescence suppression for fast Raman imaging. PMID:26203387
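Traditional Wiener estimation as referenced above can be sketched on synthetic data as follows; the filter matrix, band counts, and noise level are invented, and the two advanced variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic training set: full spectra s (200 bands) and the corresponding
# narrow-band measurements r = H s + noise through 8 filter channels.
n_train, n_bands, n_filters = 500, 200, 8
S = rng.random((n_train, n_bands))
H = rng.random((n_filters, n_bands))
R = S @ H.T + 0.01 * rng.normal(size=(n_train, n_filters))

# Traditional Wiener matrix: W = E[s r^T] (E[r r^T])^{-1}.
Csr = S.T @ R / n_train
Crr = R.T @ R / n_train
W = Csr @ np.linalg.inv(Crr)

# Reconstruct a spectrum from a new narrow-band measurement.
s_new = rng.random(n_bands)
r_new = H @ s_new
print("reconstruction error:", np.linalg.norm(W @ r_new - s_new))
```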
Product modular design incorporating preventive maintenance issues
NASA Astrophysics Data System (ADS)
Gao, Yicong; Feng, Yixiong; Tan, Jianrong
2016-03-01
Traditional modular design methods lead to product maintenance problems, because the module form of a system is created according to either the function requirements or manufacturing considerations. To solve these problems, a new modular design method is proposed that considers not only the traditional function-related attributes but also maintenance-related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then the reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model is used to evaluate the differences among the modules of the product so that the optimal modules of the product can be established. After that, a multi-objective optimization problem based on metrics for preventive maintenance interval difference degree and preventive maintenance economics is formulated for modular optimization. A multi-objective GA is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to illustrate the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed. These experiments investigate the impacts of these parameters on the optimal modularity strategies and the structure of the modules. The research proposes a new modular design method, which may help to improve the maintainability of products in modular design.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
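A minimal least-squares PCE sketch with plain Monte Carlo sampling is given below for orientation; the model, order, and sample count are arbitrary, and none of the reviewed or proposed sampling strategies are implemented here.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(7)

def model(x):
    # Stand-in for an expensive model with a standard normal input.
    return np.exp(0.3 * x) + 0.1 * x**2

order, n_samples = 6, 100            # oversampling ratio ~ 14
xi = rng.standard_normal(n_samples)  # Monte Carlo sampling of the input
y = model(xi)

# Least-squares PCE: solve min ||Psi c - y|| for the expansion coefficients,
# where Psi holds probabilists' Hermite polynomials evaluated at the samples.
Psi = hermevander(xi, order)
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# The mean of the output is the zeroth coefficient; compare with Monte Carlo.
print("PCE mean:", c[0], "MC mean:", y.mean())
```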
Robust Frequency Invariant Beamforming with Low Sidelobe for Speech Enhancement
NASA Astrophysics Data System (ADS)
Zhu, Yiting; Pan, Xiang
2018-01-01
Frequency invariant beamformers (FIBs) are widely used in speech enhancement and source localization. There are two traditional optimization methods for FIB design. The first is convex optimization, which is simple, but the frequency invariant characteristic of the beam pattern is poor over a frequency band of five octaves. The least squares (LS) approach using a spatial response variation (SRV) constraint is another optimization method. Although it can provide good frequency invariant properties, it usually cannot be used in speech enhancement because it lacks a weight norm constraint, which is related to the robustness of a beamformer. In this paper, a robust wideband beamforming method with a constant beamwidth is proposed. The frequency invariant beam pattern is achieved by solving an optimization problem with the SRV constraint to cover the speech frequency band. With control of the sidelobe level, the frequency invariant beamformer (FIB) can prevent distortion from interference arriving from undesired directions. The approach is implemented in the time domain by placing tapped delay lines (TDL) and finite impulse response (FIR) filters at the output of each sensor, which is more convenient than the Frost processor. By invoking the weight norm constraint, the robustness of the beamformer is further improved against random errors. Experimental results show that the proposed method has a constant beamwidth and almost the same white noise gain as the traditional delay-and-sum (DAS) beamformer.
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the noise effect on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and the restored image exhibits better subjective quality and superior objective evaluation values.
NASA Astrophysics Data System (ADS)
Messud, J.; Dinh, P. M.; Reinhard, P.-G.; Suraud, Eric
2009-10-01
We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent “generalized SIC-OEP.” A straightforward approximation, using the spatial localization of one set of orbitals, leads to the “generalized SIC-Slater” formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk estimates from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality, because maximum and minimum values in the data may strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risks to determine a more realistic optimized portfolio. For illustration purposes, sectorial index data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
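The static baseline the paper contrasts against can be sketched as a point-estimate mean-variance problem; the synthetic returns below stand in for the FTSE Bursa Malaysia sectorial indices, and the Sharpe-style objective is one common choice, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Synthetic daily returns for five hypothetical sector indices.
returns = rng.normal(0.0004, 0.01, (750, 5))
mu, cov = returns.mean(axis=0), np.cov(returns.T)

def neg_sharpe(w):
    # Static mean-variance objective built from point estimates.
    return -(w @ mu) / np.sqrt(w @ cov @ w)

n = 5
res = minimize(neg_sharpe, np.ones(n) / n,
               bounds=[(0, 1)] * n,                     # long-only weights
               constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1})
print("optimal weights:", res.x.round(3))
```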
Clustering PPI data by combining FA and SHC method.
Lei, Xiujuan; Ying, Chao; Wu, Fang-Xiang; Xu, Jin
2015-01-01
Clustering is one of the main methods used to identify functional modules from protein-protein interaction (PPI) data. Nevertheless, traditional clustering methods may not be effective for clustering PPI data. In this paper, we propose a novel method for clustering PPI data that combines the firefly algorithm (FA) and a synchronization-based hierarchical clustering (SHC) algorithm. First, the PPI data are preprocessed via spectral clustering (SC), which transforms the high-dimensional similarity matrix into a low-dimensional one. Then the SHC algorithm is used to perform clustering. In the SHC algorithm, hierarchical clustering is achieved by continuously enlarging the neighborhood radius of synchronized objects; however, it is difficult for the hierarchical search to find the optimal synchronization neighborhood radius, and its efficiency is low. We therefore adopt the firefly algorithm to determine the optimal threshold of the synchronization neighborhood radius automatically. The proposed algorithm is tested on the MIPS PPI dataset. The results show that our proposed algorithm outperforms traditional algorithms in precision, recall and f-measure.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
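A schematic of the three-step idea, crude sensitivity screening followed by downhill simplex tuning of only the sensitive parameters, is sketched below; the skill function is a cheap placeholder for a real GCM evaluation metric, and the screening threshold is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def model_skill(params):
    # Stand-in for a full GCM evaluation metric (lower is better); a real
    # application would run the model and score it against observations.
    p = np.asarray(params)
    return (p[0] - 0.7) ** 2 + 5 * (p[1] - 0.2) ** 2 + 0.001 * p[2] ** 2

default = np.array([0.5, 0.5, 0.5])

# Step 1: crude one-at-a-time sensitivity screening around the defaults.
base = model_skill(default)
sens = []
for k in range(default.size):
    p = default.copy()
    p[k] += 0.1
    sens.append(abs(model_skill(p) - base))
sensitive = [k for k in range(default.size) if sens[k] > 0.01]

# Steps 2-3: tune only the sensitive parameters with the downhill simplex.
def objective(sub):
    p = default.copy()
    p[sensitive] = sub
    return model_skill(p)

res = minimize(objective, default[sensitive], method='Nelder-Mead')
print("sensitive parameters:", sensitive, "tuned values:", res.x)
```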
Topology-changing shape optimization with the genetic algorithm
NASA Astrophysics Data System (ADS)
Lamberson, Steven E., Jr.
The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. It involves changing the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which is something traditional methods cannot do. It carries a significantly higher computational burden, on the order of 100 times that of SIMP, although it is able to offset this with parallel computing.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
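The three steps can be sketched as follows; `skill_score` is a hypothetical stand-in for the comprehensive evaluation metric (each call would in reality be a full GCM run), and the screening threshold and scan ranges are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def skill_score(p):
    """Stand-in for the comprehensive evaluation metric of a model run."""
    return (p[0] - 1.2) ** 2 + 0.01 * (p[1] - 3.0) ** 2 + (p[2] - 0.5) ** 2

p0 = np.array([1.0, 2.0, 1.0])                 # default parameter values

# Step 1: one-at-a-time screening to find the sensitive parameters.
sens = []
for i in range(len(p0)):
    dp = np.zeros_like(p0)
    dp[i] = 0.1 * max(abs(p0[i]), 1.0)
    sens.append(abs(skill_score(p0 + dp) - skill_score(p0 - dp)))
sensitive = [i for i, s in enumerate(sens) if s > 0.1 * max(sens)]

# Step 2: coarse scan to pick a good initial value for each sensitive parameter.
start = p0.copy()
for i in sensitive:
    grid = np.linspace(p0[i] - 1.0, p0[i] + 1.0, 9)
    vals = [skill_score(np.where(np.arange(len(p0)) == i, g, start)) for g in grid]
    start[i] = grid[int(np.argmin(vals))]

# Step 3: downhill simplex (Nelder-Mead) over the sensitive parameters only.
def reduced(q):
    p = start.copy()
    p[sensitive] = q
    return skill_score(p)

res = minimize(reduced, start[sensitive], method="Nelder-Mead")
print("tuned values for sensitive parameters:", res.x)
```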
Efficient Optimization of Low-Thrust Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul
2007-01-01
A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. They are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
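As background, Pareto-optimality over (flight time, fuel) pairs reduces to a non-dominance filter; the sketch below, with made-up candidate values, shows the test an optimizer's solution archive would apply.

```python
def pareto_front(solutions):
    """Keep (flight_time, fuel) pairs not dominated by any other pair.

    A solution dominates another if it is no worse in both objectives
    and strictly better in at least one.
    """
    front = []
    for i, (t_i, f_i) in enumerate(solutions):
        dominated = any(
            (t_j <= t_i and f_j <= f_i) and (t_j < t_i or f_j < f_i)
            for j, (t_j, f_j) in enumerate(solutions) if j != i)
        if not dominated:
            front.append((t_i, f_i))
    return sorted(front)

# Toy candidate trajectories: (flight time [days], fuel [kg]).
candidates = [(300, 120), (280, 150), (320, 100), (280, 140), (350, 100)]
print(pareto_front(candidates))   # [(280, 140), (300, 120), (320, 100)]
```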
A predictive machine learning approach for microstructure optimization and materials design
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; ...
2015-06-23
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic, and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, the multi-objective design requirement, and non-uniqueness of solutions. These challenges render traditional search-based optimization methods ineffective in terms of both search efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection, and classification algorithms is developed. In conclusion, experiments with five design problems that involve identification of microstructures satisfying both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods, with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
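A minimal sketch of such a framework, with synthetic stand-ins for the microstructure descriptors and the property model (the actual features, labels, and algorithms in the paper will differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# 1) Random data generation: hypothetical 20-D microstructure descriptors,
#    labeled 1 when a toy property model meets the design target.
X = rng.random((2000, 20))
y = ((X[:, 0] + 0.5 * X[:, 3] - 0.8 * X[:, 7]) > 0.4).astype(int)

# 2) Feature selection and 3) classification, chained as a single pipeline.
clf = make_pipeline(SelectKBest(f_classif, k=8),
                    RandomForestClassifier(n_estimators=200, random_state=0))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# The trained classifier can then screen large batches of random candidate
# microstructures far faster than evaluating the property model directly.
```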
NASA Astrophysics Data System (ADS)
Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu
2016-09-01
In this research work, a multi-response optimization technique has been developed using the traditional desirability analysis and the non-traditional particle swarm optimization technique (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters, such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP), and wire feed (WF), on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict MRR and SR over a wide range of input parameters. The optimization of the multiple responses has been carried out to satisfy the priorities of multiple users by using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
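The desirability-function step can be sketched as below; the response ranges, weights, and the weighted-geometric-mean form are illustrative assumptions about how customer priorities might enter.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, w=1.0):
    """Desirability for a response to maximize (e.g., MRR)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** w

def d_smaller_is_better(y, lo, hi, w=1.0):
    """Desirability for a response to minimize (e.g., SR)."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** w

def composite_desirability(mrr, sr, w_mrr=1.0, w_sr=1.0):
    """Weighted geometric mean; weights encode a customer's priorities."""
    d1 = d_larger_is_better(mrr, lo=2.0, hi=12.0)    # assumed MRR range, mm^3/min
    d2 = d_smaller_is_better(sr, lo=1.0, hi=5.0)     # assumed Ra range, um
    return (d1 ** w_mrr * d2 ** w_sr) ** (1.0 / (w_mrr + w_sr))

# A productivity-focused customer weights MRR twice as heavily as finish:
print(composite_desirability(mrr=9.0, sr=2.8, w_mrr=2.0, w_sr=1.0))
```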
EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.
Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos
2015-01-01
Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing, tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other multi- and single-objective optimization methods. A significant performance enhancement over traditional techniques can be inferred from the results.
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiograms, cardiac computerized tomography scans, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key-block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key-block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven to be effective at heart disease detection.
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate, and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for such complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.
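One common form of parameter automation is a linearly decreasing inertia weight; the sketch below applies it to a toy three-unit dispatch with a quadratic fuel-cost model and a demand-balance penalty (the paper's PSO variants and constraint handling are more elaborate).

```python
import numpy as np

def pso_dispatch(cost, lb, ub, n_particles=40, n_iter=200, seed=0):
    """PSO with a linearly decreasing inertia weight (parameter automation)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for k in range(n_iter):
        w = 0.9 - 0.5 * k / n_iter        # inertia: explore early, exploit late
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy 3-unit fuel cost a_i + b_i*P + c_i*P^2 with a demand-balance penalty.
a = np.array([100.0, 120.0, 80.0])
b = np.array([7.0, 6.5, 8.0])
c = np.array([0.010, 0.012, 0.008])
demand = 450.0

def cost(P):
    return float(np.sum(a + b * P + c * P ** 2) + 1e4 * (P.sum() - demand) ** 2)

g, f = pso_dispatch(cost, lb=np.array([50.0] * 3), ub=np.array([200.0] * 3))
print("dispatch:", np.round(g, 1), "cost:", round(f, 1))
```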
Unconventional bearing capacity analysis and optimization of multicell box girders.
Tepic, Jovan; Doroslovacki, Rade; Djelosevic, Mirko
2014-01-01
This study deals with unconventional bearing capacity analysis and the procedure of optimizing a two-cell box girder. A generalized model which enables the local stress-strain analysis of multicell girders was developed based on the principle of cross-sectional decomposition. The applied methodology is verified using experimental data (Djelosevic et al., 2012) for traditionally formed box girders. The qualitative and quantitative evaluation of results obtained for the two-cell box girder is realized through comparative analysis using the finite element method (FEM) and the ANSYS v12 software. The deflection functions obtained by the analytical and numerical methods were found to be consistent, with a maximum deviation not exceeding 4%. Multicell box girders are rationally designed support structures characterized by much lower susceptibility of their cross-sectional elements to buckling and higher specific capacity than traditionally formed box girders. The developed local stress model is applied to optimizing the cross section of a two-cell box girder. The authors point to the advantages of implementing the local stress model in the optimization process and conclude that the technological reserve of bearing capacity amounts to 20% at the same girder weight and constant load conditions.
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce the CPU time by one-third relative to solving the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
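The paper's approximation is analytical rather than numerical, but the role such cheap constraint gradients play in an optimizer can be illustrated with a forward-difference stand-in applied to toy behavior constraints g_i(x) <= 0:

```python
import numpy as np

def approx_constraint_grad(g, x, rel_step=1e-3):
    """Forward-difference approximation of the constraint Jacobian dg/dx.

    g(x) returns a vector of behavior-constraint values; each column of the
    result is the sensitivity of all constraints to one design variable.
    """
    g0 = np.asarray(g(x))
    grad = np.zeros((g0.size, x.size))
    for j in range(x.size):
        h = rel_step * max(abs(x[j]), 1.0)
        xp = x.copy()
        xp[j] += h
        grad[:, j] = (np.asarray(g(xp)) - g0) / h
    return grad

# Toy two-variable sizing problem with stress-like constraints g_i(x) <= 0.
def g(x):
    return np.array([1.0 / x[0] - 0.8, 1.0 / (x[0] + x[1]) - 0.5])

x = np.array([2.0, 1.0])
print(approx_constraint_grad(g, x))
```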
Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.
Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P
2009-07-29
Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, clustering approaches to delineate molecular operational taxonomic units have been applied for this purpose, using arbitrary choices of distance threshold values and clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.
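A minimal version of clustering optimization: sweep candidate linkage settings and distance thresholds, and keep the combination that best agrees with reference labels. Here the agreement measure is the adjusted Rand index on synthetic data; the paper's agreement measure and reference data differ.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Toy stand-in for an ITS distance matrix: 3 "species" in 5-D feature space.
X = np.vstack([rng.normal(m, 0.3, (20, 5)) for m in (0.0, 2.0, 4.0)])
reference = np.repeat([0, 1, 2], 20)             # reference species labels

best = None
for method in ("single", "average", "complete"):  # candidate cluster settings
    Z = linkage(pdist(X), method=method)
    for t in np.linspace(0.1, 5.0, 50):           # candidate thresholds
        labels = fcluster(Z, t=t, criterion="distance")
        score = adjusted_rand_score(reference, labels)
        if best is None or score > best[0]:
            best = (score, method, t)
print("best agreement %.2f with %s linkage at threshold %.2f" % best)
```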
Zhang, Yu-xin; Cheng, Zhi-feng; Xu, Zheng-ping; Bai, Jing
2015-01-01
In order to solve problems of the traditional power transformer fault diagnosis approach based on dissolved gas analysis (DGA), such as complex operation, carrier-gas consumption, and a long test period, this paper proposes a new method that measures the content of five characteristic gases dissolved in transformer oil (CH4, C2H2, C2H4, C2H6, and H2) by photoacoustic spectroscopy and computes the three ratios C2H2/C2H4, CH4/H2, and C2H4/C2H6. Support vector machine models were constructed using cross-validation over five support vector machine formulations and four kernel functions, and heuristic algorithms were used to optimize the penalty factor c and kernel parameter g, so as to establish the SVM model with the highest fault diagnosis accuracy and the fastest computing speed. Particle swarm optimization and genetic algorithms, two types of heuristic algorithm, were comparatively studied in this paper for optimization accuracy and speed. The simulation results show that the SVM model composed of C-SVC, the RBF kernel function, and the genetic algorithm obtains 97.5% accuracy on the test sample set and 98.3333% accuracy on the training sample set, and the genetic algorithm was about two times faster than particle swarm optimization. The method described in this paper has many advantages, such as simple operation, non-contact measurement, no carrier-gas consumption, and high stability and sensitivity; the results show that it can replace traditional gas-chromatography-based transformer fault diagnosis and meets practical engineering needs.
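The (c, g) search can be illustrated with a cross-validated grid over an RBF-kernel C-SVC; a GA or PSO, as used in the paper, would search this same space against the same cross-validation objective. The features and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical three-ratio features (C2H2/C2H4, CH4/H2, C2H4/C2H6) and
# fault-class labels; real data would come from the photoacoustic measurements.
X = rng.random((300, 3))
y = rng.integers(0, 4, 300)

grid = GridSearchCV(
    SVC(kernel="rbf"),                         # C-SVC with the RBF kernel
    param_grid={"C": 2.0 ** np.arange(-5, 11, 2),
                "gamma": 2.0 ** np.arange(-11, 4, 2)},
    cv=5)                                      # 5-fold cross-validation objective
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```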
Molecular Taxonomy of Phytopathogenic Fungi: A Case Study in Peronospora
Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M. Teresa; Martín, María P.
2009-01-01
Background Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, clustering approaches to delineate molecular operational taxonomic units have been applied for this purpose, using arbitrary choices of distance threshold values and clustering algorithms. Methodology Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. Conclusions A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence. PMID:19641601
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
Ding, Tao; Li, Cheng; Huang, Can; ...
2017-01-09
Here, in order to solve the reactive power optimization problem over joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves on traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for optimization-based inversion of potential field data, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. It can analyze the search results in real time and improve the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
Zhu, Qing-Xia; Cao, Yong-Bing; Cao, Ying-Ying; Lu, Feng
2014-04-01
A novel facile method for on-site detection of antihypertensive chemicals (e.g., nicardipine hydrochloride, doxazosin mesylate, propranolol hydrochloride, and hydrochlorothiazide) adulterated into traditional Chinese medicines for hypertension, using thin layer chromatography (TLC) combined with surface-enhanced Raman spectroscopy (SERS), is reported in the present paper. Analytes and pharmaceutical matrices were separated by TLC, and the SERS method was then used to qualitatively identify trace substances on the TLC plate. After optimizing the colloidal silver concentration and the developing solvent, and exploring the limits of detection (LOD), the established TLC-SERS method was used to examine real antihypertensive Chinese pharmaceuticals. The results showed that this method had good specificity for the four chemicals and high sensitivity, with a limit of detection as low as 0.005 μg. Finally, two of the ten antihypertensive drugs were found to be adulterated with chemicals. This simple and fast method enables rapid detection of chemicals illegally added to antihypertensive Chinese pharmaceuticals and has good prospects for on-site screening of Chinese pharmaceuticals for adulteration.
NASA Astrophysics Data System (ADS)
Cheng, Longjiu; Cai, Wensheng; Shao, Xueguang
2005-03-01
An energy-based perturbation and a new idea of taboo strategy are proposed for structural optimization and applied to a benchmark problem, i.e., the optimization of Lennard-Jones (LJ) clusters. It is shown that the energy-based perturbation is much better than the traditional random perturbation in both convergence speed and searching ability when combined with a simple greedy method. By tabooing the most widespread funnel instead of the visited solutions, the hit rate of other funnels can be significantly improved. Global minima of LJ clusters of up to 200 atoms are found with high efficiency.
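A sketch of the energy-based idea: compute per-atom LJ energies and re-seed the worst-bound atom, rather than displacing a randomly chosen one. The shell-placement rule below is an illustrative assumption, not the paper's exact move.

```python
import numpy as np

def lj_site_energies(pos):
    """Per-atom Lennard-Jones energy (reduced units, epsilon = sigma = 1)."""
    n = len(pos)
    e = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        d[i] = np.inf                             # exclude self-interaction
        e[i] = np.sum(4.0 * (d ** -12 - d ** -6))
    return e

def energy_based_perturbation(pos, rng, shell=1.1):
    """Move the highest-energy (least favourably bound) atom to a random
    point on an outer shell of the cluster."""
    e = lj_site_energies(pos)
    worst = int(np.argmax(e))
    center = pos.mean(axis=0)
    radius = np.max(np.linalg.norm(pos - center, axis=1))
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    new = pos.copy()
    new[worst] = center + shell * radius * u
    return new

rng = np.random.default_rng(0)
pos = rng.random((13, 3)) * 2.0
print("total E before:", lj_site_energies(pos).sum() / 2.0)  # pairs counted twice
new_pos = energy_based_perturbation(pos, rng)
```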
Analysis and optimization of cyclic methods in orbit computation
NASA Technical Reports Server (NTRS)
Pierce, S.
1973-01-01
The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.
Generation of structural topologies using efficient technique based on sorted compliances
NASA Astrophysics Data System (ADS)
Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2018-01-01
Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention as large computational capability has become available to designers. This progress is stimulated by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical, or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper an engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on a special function utilizing information about the compliance distribution within the design space. To cope with engineering problems, the algorithm has been combined with the structural analysis system ANSYS.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-19
Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality, because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data of FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
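The contrast can be sketched as follows: a single mean-variance optimization on full-sample moments versus averaging optimal weights over bootstrap resamples of the returns (one common way to make the inputs stochastic; the paper's exact distributional treatment may differ).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(1000, 5))    # toy daily sector-index returns

def optimal_weights(returns, risk_aversion=5.0):
    """Long-only mean-variance weights for one sample of return data."""
    mu, cov = returns.mean(axis=0), np.cov(returns.T)
    n = len(mu)
    obj = lambda w: -(w @ mu - risk_aversion * w @ cov @ w)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
                   constraints=cons)
    return res.x

# Static: one optimization on the full-sample moments.
w_static = optimal_weights(R)

# Stochastic (resampled): average optimal weights over bootstrap samples,
# reducing sensitivity to extreme observations in the history.
w_bootstrap = np.mean(
    [optimal_weights(R[rng.integers(0, len(R), len(R))]) for _ in range(50)],
    axis=0)
print(np.round(w_static, 3), np.round(w_bootstrap, 3))
```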
Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon
2004-01-01
This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
Optimization of the design of Gas Cherenkov Detectors for ICF diagnosis
NASA Astrophysics Data System (ADS)
Liu, Bin; Hu, Huasi; Han, Hetong; Lv, Huanwen; Li, Lan
2018-07-01
A design method which combines a genetic algorithm (GA) with Monte-Carlo simulation is established and applied to two different types of Cherenkov detectors, namely the Gas Cherenkov Detector (GCD) and the Gamma Reaction History (GRH) diagnostic. To accelerate the optimization program, open Message Passing Interface (MPI) is used in the Geant4 simulation. Compared with the traditional optical ray-tracing method, the performance of these detectors has been improved with the optimization method. The efficiency of the GCD system, with a threshold of 6.3 MeV, is enhanced by ∼20% and the time response is improved by ∼7.2%. For the GRH system, with a threshold of 10 MeV, the efficiency is enhanced by ∼76% in comparison with previously published results.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
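An accelerated proximal-gradient iteration of this flavor (here FISTA-style, on a toy sparse least-squares problem rather than a CRF likelihood) attains the O(1/k^2) rate cited above:

```python
import numpy as np

def accelerated_l1_descent(grad_f, L, lam, x0, n_iter=200):
    """Nesterov-style accelerated proximal gradient for f(x) + lam*||x||_1.

    grad_f: gradient of the smooth loss; L: its Lipschitz constant.
    The momentum term combines the current and historical gradients,
    matching the O(1/k^2) rate of optimal first-order methods.
    """
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        g = z - grad_f(z) / L                              # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)      # momentum lookahead
        x, t = x_new, t_new
    return x

# Toy sparse least squares: recover a sparse w from y = A @ w_true.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
w_true = np.zeros(50)
w_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
y = A @ w_true
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
w = accelerated_l1_descent(lambda w: A.T @ (A @ w - y), L, lam=1.0,
                           x0=np.zeros(50), n_iter=500)
print("nonzeros found:", np.flatnonzero(np.abs(w) > 0.05))
```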
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xingyu; Samulyak, Roman, E-mail: roman.samulyak@stonybrook.edu; Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
Assessing the economics of processing end-of-life vehicles through manual dismantling.
Tian, Jin; Chen, Ming
2016-10-01
Most dismantling enterprises in a number of developing countries, such as China, usually adopt the "manual + mechanical" dismantling approach to process end-of-life vehicles. However, the automobile industry does not have a clear indicator to reasonably and effectively determine the degree of manual dismantling for end-of-life vehicles. In this study, five different dismantling scenarios and an economic system for end-of-life vehicles were developed based on the actual situation of end-of-life vehicles. The fuzzy analytic hierarchy process was applied to set the weights of direct costs, indirect costs, and sales, and to obtain an optimal manual dismantling scenario. Results showed that although the traditional method of "dismantling to the end" guarantees the highest recycling rate, it is not the best among all the scenarios. The profit gained in the optimal scenario is 100.6% higher than that in the traditional scenario. The optimal manual dismantling scenario shows that enterprises should select suitable parts to process through manual dismantling; doing so maximizes economic profit and improves dismantling speed.
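For background, the crisp AHP weight computation looks like the sketch below; the paper uses a fuzzy variant, and the pairwise judgments here are hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix (principal
    eigenvector), plus the consistency ratio used to accept or revise
    the judgments."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri

# Hypothetical judgments comparing direct costs, indirect costs, and sales.
A = [[1,     3,   1 / 2],
     [1 / 3, 1,   1 / 5],
     [2,     5,   1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```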
An efficiency study of the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw
1995-01-01
The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum weight optimization of structural systems subject to strength and displacement constraints, as well as size side constraints, is investigated. SAND allows an optimization to take place in one single operation, as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present possible cures for any inherent deficiencies.
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight is adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial is used to fit the bias field, the polynomial parameters are optimized globally, and finally the bias field is estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
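A minimal sketch of the adaptation rule: estimate how tightly the swarm's fitness values have collapsed, then raise the inertia weight to restore exploration when convergence looks premature. The indicator below is one simple choice, not necessarily the paper's exact measure.

```python
import numpy as np

def convergence_degree(f_vals):
    """Premature-convergence indicator in [0, 1]: near 1 when the swarm's
    fitness values cluster around the best value, near 0 when spread out."""
    f_avg, f_best, f_worst = f_vals.mean(), f_vals.min(), f_vals.max()
    if f_worst == f_best:
        return 1.0
    return 1.0 - (f_avg - f_best) / (f_worst - f_best)

def adaptive_inertia(f_vals, w_min=0.4, w_max=0.9):
    """Raise inertia (more exploration) as the swarm starts to converge."""
    return w_min + (w_max - w_min) * convergence_degree(f_vals)

f = np.array([1.01, 1.02, 1.00, 1.015])   # partially converged fitness values
print(adaptive_inertia(f))                # larger w pushes renewed exploration
```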
NASA Astrophysics Data System (ADS)
Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung
2016-12-01
In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess noteworthy shortcomings, such as trapping of solutions at local extrema, a limited number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for optimal design of MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
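A sketch of DE with mixed discrete variables handled by rounding at evaluation time; the brake objective is a toy stand-in for the FEA-based torque/mass evaluation.

```python
import numpy as np

def de_mixed(cost, lb, ub, discrete, pop=30, gens=100, F=0.7, CR=0.9, seed=0):
    """Differential evolution with discrete variables snapped to integers.

    discrete: boolean mask; those coordinates are rounded before evaluation,
    approximating quantities such as coil-turn counts.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, (pop, len(lb)))

    def snap(x):
        y = x.copy()
        y[discrete] = np.round(y[discrete])
        return y

    f = np.array([cost(snap(x)) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lb, ub)
            cross = rng.random(len(lb)) < CR
            trial = np.where(cross, mutant, X[i])
            ft = cost(snap(trial))
            if ft < f[i]:                      # greedy selection
                X[i], f[i] = trial, ft
    best = np.argmin(f)
    return snap(X[best]), f[best]

# Toy brake objective: maximize a torque proxy, penalize a mass proxy.
cost = lambda x: -(x[0] * x[2]) + 0.1 * (x[1] ** 2)   # x = [radius, width, turns]
x, fv = de_mixed(cost, lb=[0.02, 0.005, 50], ub=[0.08, 0.02, 400],
                 discrete=np.array([False, False, True]))
print(x, fv)
```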
Optimization of hydraulic turbine governor parameters based on WPA
NASA Astrophysics Data System (ADS)
Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao
2018-01-01
The parameters of a hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, thus affecting the regulation capacity and the power quality of the power grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to set. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, given the good global optimization capability of WPA. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and suppresses overshoot.
NASA Astrophysics Data System (ADS)
Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.
2018-04-01
The optimal estimation method (OEM) has a long history of use in passive remote sensing but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter, and DIAL lidars.
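For reference, the standard OEM retrieval (in the Rodgers formulation commonly used for such work; this is textbook background rather than a detail from the abstract) finds the state x minimizing a cost that balances fit to the raw measurements y, through the forward model F, against departure from the a priori state x_a:

```latex
\hat{x} = \arg\min_{x}\;
  \left[y - F(x)\right]^{T} S_y^{-1} \left[y - F(x)\right]
  + \left(x - x_a\right)^{T} S_a^{-1} \left(x - x_a\right)
```

Here S_y is the measurement covariance (e.g., of the raw photon counts) and S_a the a priori covariance; propagating both through the solution is what yields the full random and systematic uncertainty budget mentioned above.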
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy method is proposed. First, a new weight-assignment model is established, based on compatible-matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: when the compatible-matrix analysis achieves the consistency requirement and differences remain between the subjective and objective weights, both proportions are moderately adjusted; the fuzzy evaluation matrix is then built on this basis for performance evaluation. Simulation experiments show that, compared with the traditional entropy and compatible-matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
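The entropy value method's objective weights, and a simple blend with subjective AHP weights, can be sketched as follows; the decision matrix and the 50/50 blending factor are illustrative.

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the entropy value method.

    X: (m alternatives) x (n indicators) decision matrix, larger = better.
    """
    P = X / X.sum(axis=0)                                   # share per alternative
    m = X.shape[0]
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)    # entropy per indicator
    d = 1.0 - E                                             # divergence degree
    return d / d.sum()

def combined_weights(w_subj, w_obj, alpha=0.5):
    """Moderately blend subjective AHP weights with objective entropy weights."""
    w = alpha * np.asarray(w_subj) + (1 - alpha) * np.asarray(w_obj)
    return w / w.sum()

X = np.array([[0.7, 120, 3.2],
              [0.9,  95, 4.1],
              [0.6, 150, 2.8]])                 # toy mining-project indicators
w_obj = entropy_weights(X)
print(np.round(combined_weights([0.5, 0.3, 0.2], w_obj), 3))
```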
Cheng, Xu-Dong; Feng, Liang; Gu, Jun-Fei; Zhang, Ming-Hua; Jia, Xiao-Bin
2014-11-01
Chinese medicine prescriptions embody the wisdom of traditional Chinese medicine (TCM) clinical treatment decisions, which are based on the differentiation of symptoms and signs. Chinese medicine prescriptions are also the basis of the secondary development of TCM. The study of prescriptions helps in understanding the material basis of their efficacy and pharmacological mechanisms, which is an important guarantee for the modernization of traditional Chinese medicine. Currently, there is no systematic treatment of the methods and technology system for basic research on Chinese medicine prescriptions. This paper focuses on how to build an effective technology system for prescription research. Based on the "component structure" theory, a technology system containing a four-step method, namely prescription analysis, material basis screening, material basis analysis and optimization, and verification, is proposed. The technology system analyzes the material basis at three levels, namely Chinese medicine pieces, constituents, and compounds, which reflect the overall efficacy of Chinese medicine. Ideas of prescription optimization and remodeling are introduced into the system. The technology system combines existing research with new techniques and methods, and is used to explore research approaches suitable for material-basis research and prescription remodeling. The system provides a reference for the secondary development of traditional Chinese medicine and for industrial upgrading.
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement
Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-01-01
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is first designed. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, which contain only transient components, can be achieved through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is first designed. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, which contain only transient components, can be achieved through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
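With the basis fixed to the identity, an MM iteration of this type reduces to repeated soft-thresholding of the raw signal itself, so no reconstruction step is needed; the sketch below shows this on a toy impulse-in-noise signal. It omits the impulse-preserving factor of the paper's objective and keeps only the l1 penalty.

```python
import numpy as np

def mm_sparse_enhance(x, lam=1.0, mu=2.0, n_iter=50):
    """Sparsity-promoting MM iteration with the dictionary fixed to identity.

    Approximately minimizes 0.5*||x - s||^2 + lam*||s||_1 by majorizing the
    quadratic term (mu >= 1) and soft-thresholding; the result keeps only
    large (impulsive) components of the raw signal.
    """
    s = np.zeros_like(x)
    for _ in range(n_iter):
        g = s + (x - s) / mu                         # gradient step on majorizer
        s = np.sign(g) * np.maximum(np.abs(g) - lam / mu, 0.0)
    return s

# Toy faulty-bearing signal: periodic impulses buried in Gaussian noise.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.4, 4096)
sig[::512] += 3.0                                    # fault impulses
enhanced = mm_sparse_enhance(sig)
print("surviving samples:", np.count_nonzero(enhanced))
```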
OPTIMIZING NIST SEQUENTIAL EXTRACTION METHOD FOR LAKE SEDIMENT (SRM4354)
Traditionally, measurements of radionuclides in the environment have focused on the determination of total concentration. It is clear, however, that total concentration does not describe the bioavailability of contaminating radionuclides. The environmental behavior depends on spe...
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary mixed pixels, ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Evolution of Stored-Product Entomology: Protecting the World Food Supply.
Hagstrum, David W; Phillips, Thomas W
2017-01-31
Traditional methods of stored-product pest control were initially passed from generation to generation. Ancient literature and archaeology reveal hermetic sealing, burning sulfur, desiccant dusts, and toxic botanicals as early control methods. Whereas traditional nonchemical methods were subsequently replaced by synthetic chemicals, other traditional methods were improved and integrated with key modern pesticides. Modern stored-product integrated pest management (IPM) makes decisions using knowledge of population dynamics and threshold insect densities. IPM programs are now being fine-tuned to meet regulatory and market standards. Better sampling methods and insights from life histories and ecological studies have been used to optimize the timing of pest management. Over the past 100 years, research on stored-product insects has shifted from being largely concentrated within 10 countries to being distributed across 65 countries. Although the components of IPM programs have been well researched, more research is needed on how these components can be combined to improve effectiveness and assure the security of postharvest food as the human population increases.
An optimization framework for measuring spatial access over healthcare networks.
Li, Zihao; Serban, Nicoleta; Swann, Julie L
2015-07-17
Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however, these methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access. Questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared to the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system effects arising from congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful for targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models can capture differences in access between rural and urban areas.
Supersonic transport grid generation, validation, and optimization
NASA Technical Reports Server (NTRS)
Aaronson, Philip G.
1995-01-01
The ever-present demand for reduced flight times has renewed interest in High Speed Civil Transports (HSCT). The need for an HSCT becomes especially apparent when the long-distance, over-sea, high-growth Pacific rim routes are considered. Crucial to any successful HSCT design are minimal environmental impact and economic viability. Vital is the transport's aerodynamic efficiency, ultimately affecting both the environmental impact and the operating cost. Optimization, including numerical optimization, coupled with the use of computational fluid dynamics (CFD) technology, has offered and will continue to offer significant improvement beyond traditional methods.
Optimized extreme learning machine for urban land cover classification using hyperspectral imagery
NASA Astrophysics Data System (ADS)
Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam
2017-12-01
This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
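For illustration, a kernel ELM reduces to a single regularized linear solve once C and σ are fixed. The sketch below replaces the firefly search with a plain grid search over those two hyperparameters, and the toy data are an assumption standing in for hyperspectral pixels.

```python
# Minimal kernel-ELM sketch with RBF kernel; FA is swapped for grid search
# purely for illustration.
import numpy as np
from scipy.spatial.distance import cdist

def rbf(A, B, sigma):
    return np.exp(-cdist(A, B, "sqeuclidean") / (2 * sigma ** 2))

def kelm_fit(X, y, C, sigma):
    T = np.eye(y.max() + 1)[y]                        # one-hot targets
    K = rbf(X, X, sigma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)  # output weights

def kelm_predict(Xte, Xtr, beta, sigma):
    return rbf(Xte, Xtr, sigma).dot(beta).argmax(axis=1)

# Hypothetical toy data; real inputs would be hyperspectral pixel vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)
best_C, best_sigma = max(
    ((C, s) for C in (1, 10, 100) for s in (0.5, 1.0, 2.0)),
    key=lambda cs: (kelm_predict(X, X, kelm_fit(X, y, *cs), cs[1]) == y).mean())
```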
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
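A toy version of the selection-crossover-mutation loop described above (binary strings rather than a loading-pattern encoding) might look like this:

```python
# Minimal GA sketch: tournament selection, one-point crossover, bit-flip
# mutation. All parameter values are illustrative.
import numpy as np
rng = np.random.default_rng(0)

def ga(fitness, n_bits=20, pop_size=40, n_gen=100, p_mut=0.01):
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(n_gen):
        f = np.array([fitness(ind) for ind in pop])
        # Tournament selection: survival of the fittest.
        picks = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(f[picks[:, 0]] > f[picks[:, 1]],
                               picks[:, 0], picks[:, 1])]
        # One-point crossover between consecutive parent pairs.
        children = parents.copy()
        for i, c in enumerate(rng.integers(1, n_bits, pop_size // 2)):
            children[2*i, c:], children[2*i+1, c:] = \
                parents[2*i+1, c:].copy(), parents[2*i, c:].copy()
        # Bit-flip mutation: small random changes.
        pop = children ^ (rng.random(children.shape) < p_mut)
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = ga(lambda b: b.sum())   # toy objective: maximize the number of ones
```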
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
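A minimal sketch of the QR-pivot selection step follows, under the assumption of a snapshot matrix with states in rows: the column-pivoted QR of the transposed POD basis ranks state indices, and the first r pivots become the sensor locations.

```python
# Sketch: sensor placement by pivoted QR on POD modes (synthetic data).
import numpy as np
from scipy.linalg import qr, svd

X = np.random.default_rng(0).normal(size=(500, 100))  # states x snapshots
U, s, Vt = svd(X, full_matrices=False)
r = 10
Psi = U[:, :r]                       # leading POD modes
# Column-pivoted QR of Psi^T ranks state indices by information content;
# the first r pivots are the point-sensor locations.
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]
# Reconstruct the full state from r point measurements y = x[sensors].
x = X[:, 0]
a = np.linalg.solve(Psi[sensors], x[sensors])  # modal coefficients
x_hat = Psi @ a                                # full-state estimate
```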
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and accurate modeling of the physical response, including the enforcement of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that resemble well the results of previous 2D and density-based studies.
Robust Airfoil Optimization in High Resolution Design Space
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon L.
2003-01-01
The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet the resulting airfoil shape is fairly smooth, and (3) it allows the user to make a trade-off between the level of optimization and the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils, but these new shapes provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that our strategy produces airfoils better or equal to any designs produced by traditional design methods.
Immersed Boundary Methods for Optimization of Strongly Coupled Fluid-Structure Systems
NASA Astrophysics Data System (ADS)
Jenkins, Nicholas J.
Conventional methods for the design of tightly coupled multidisciplinary systems, such as fluid-structure interaction (FSI) problems, traditionally rely on manual revisions informed by a loosely coupled linearized analysis. These approaches are inaccurate for a multitude of applications, and they require an intimate understanding of the assumptions and limitations of the procedure in order to soundly optimize the design. Computational optimization, in particular topology optimization, has been shown to yield remarkable results for problems in solid mechanics using density interpolation schemes. In the context of FSI, however, well-defined boundaries play a key role in both the design problem and the mechanical model. Density methods neither accurately represent the material boundary nor provide a suitable platform to apply appropriate interface conditions. This thesis presents a new framework for shape and topology optimization of FSI problems that uses the Level Set Method (LSM) to describe the geometry evolution in the optimization process. The Extended Finite Element Method (XFEM) is combined with a fictitiously deforming fluid domain (stationary arbitrary Lagrangian-Eulerian method) to predict the FSI response. The novelty of the proposed approach lies in the fact that the XFEM explicitly captures the material boundary defined by the level set iso-surface. Moreover, the XFEM provides a means to discretize the governing equations, and weak immersed boundary conditions are applied with Nitsche's method to couple the fields. The flow is predicted by the incompressible Navier-Stokes equations, and a finite-deformation solid model is developed and tested for both hyperelastic and linear elastic problems. Transient and stationary numerical examples are presented to validate the FSI model and numerical solver approach. For the optimization of FSI problems, the parameters of the discretized level set function are defined as explicit functions of the optimization variables, and the parametric optimization problem is solved by nonlinear programming methods. The gradients of the objective and constraints are computed by the adjoint method for the global monolithic fluid-solid system. Two types of design problems are explored for optimization of the fluid-structure response: 1) the internal structural topology is varied, preserving the fluid-solid interface geometry, and 2) the fluid-solid interface is manipulated directly, which leads to simultaneously configuring both the internal structural topology and the outer mold shape. The numerical results show that the LSM-XFEM approach is well suited for designing practical applications, while at the same time reducing the requirement for highly refined mesh resolution compared to traditional density methods. However, these results also emphasize the need for a more robust embedded boundary condition framework. Further, the LSM can exhibit greater dependence on the initial design seeding and can impede design convergence. In particular, for the strongly coupled FSI analysis developed here, the thinning and eventual removal of structural members can cause jumps in the evolution of the optimization functions.
Airbreathing hypersonic vehicle design and analysis methods
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.
1996-01-01
The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.
Optimal and Miniaturized Strongly Coupled Magnetic Resonant Systems
NASA Astrophysics Data System (ADS)
Hu, Hao
Wireless power transfer (WPT) technologies for communication and recharging devices have recently attracted significant research attention. Conventional WPT systems based either on far-field or near-field coupling cannot provide simultaneously high efficiency and long transfer range. The Strongly Coupled Magnetic Resonance (SCMR) method was introduced recently, and it offers the possibility of transferring power with high efficiency over longer distances. Previous SCMR research has only focused on how to improve its efficiency and range through different methods. However, the study of optimal and miniaturized designs has been limited. In addition, no multiband and broadband SCMR WPT systems have been developed and traditional SCMR systems exhibit narrowband efficiency thereby imposing strict limitations on simultaneous wireless transmission of information and power, which is important for battery-less sensors. Therefore, new SCMR systems that are optimally designed and miniaturized in size will significantly enhance various technologies in many applications. The optimal and miniaturized SCMR systems are studied here. First, analytical models of the Conformal SCMR (CSCMR) system and thorough analysis and design methodology have been presented. This analysis specifically leads to the identification of the optimal design parameters, and predicts the performance of the designed CSCMR system. Second, optimal multiband and broadband CSCMR systems are designed. Two-band, three-band, and four-band CSCMR systems are designed and validated using simulations and measurements. Novel broadband CSCMR systems are also analyzed, designed, simulated and measured. The proposed broadband CSCMR system achieved more than 7 times larger bandwidth compared to the traditional SCMR system at the same frequency. Miniaturization methods of SCMR systems are also explored. Specifically, methods that use printable CSCMR with large capacitors, novel topologies including meandered, SRRs, and spiral topologies or 3-D structures, lower the operating frequency of SCMR systems, thereby reducing their size. Finally, SCMR systems are discussed and designed for various applications, such as biomedical devices and simultaneous powering of multiple devices.
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
McConnel, M B; Galligan, D T
2004-10-01
Optimization programs are currently used to aid in the selection of bulls to be used in herd breeding programs. While these programs offer a systematic approach to the problem of semen selection, they ignore the impact of volume discounts. Volume discounts are discounts that vary depending on the number of straws purchased. The dynamic nature of volume discounts means that, in order to be adequately accounted for, they must be considered in the optimization routine. Failing to do this creates a missed economic opportunity because the potential benefits of optimally selecting and combining breeding company discount opportunities are not captured. To address these issues, an integer program was created which used binary decision variables to incorporate the effects of quantity discounts into the optimization program. A consistent set of trait criteria was used to select a group of bulls from 3 sample breeding companies. Three different selection programs were used to select the bulls, 2 traditional methods and the integer method. After the discounts were applied using each method, the integer program resulted in the lowest cost portfolio of bulls. A sensitivity analysis showed that the integer program also resulted in a low cost portfolio when the genetic trait goals were changed to be more or less stringent. In the sample application, a net benefit of the new approach over the traditional approaches was a 12.3 to 20.0% savings in semen cost.
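A hypothetical toy version of such an integer program is sketched below using scipy's MILP interface. The two companies, per-straw prices, and volume thresholds are all illustrative assumptions; binary variables decide whether a company's discount tier is activated, which mirrors the dynamic volume-discount effect described above.

```python
# Sketch: semen purchase with tiered volume discounts as a MILP.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Per company j: x1_j straws at full price, x2_j at the discounted price,
# and a binary b_j = 1 when the volume threshold Q_j is met (then all
# straws from company j are discounted). All numbers are made up.
full, disc = [12.0, 11.0], [9.0, 8.0]   # $/straw
Q, M, demand = [50, 40], 200, 80
c = np.array(full + disc + [0.0, 0.0])  # vars: [x1_0, x1_1, x2_0, x2_1, b0, b1]
A = [[1, 1, 1, 1,      0,      0],      # total straws >= demand
     [1, 0, 0, 0, Q[0]-1,      0],      # full-price straws only while b0 = 0
     [0, 1, 0, 0,      0, Q[1]-1],
     [0, 0, 1, 0,     -M,      0],      # discounted straws only when b0 = 1
     [0, 0, 0, 1,      0,     -M],
     [0, 0, 1, 0,  -Q[0],      0],      # a discounted purchase must meet Q
     [0, 0, 0, 1,      0,  -Q[1]]]
lb = [demand, -np.inf, -np.inf, -np.inf, -np.inf, 0, 0]
ub = [np.inf, Q[0]-1, Q[1]-1, 0, 0, np.inf, np.inf]
res = milp(c=c, constraints=LinearConstraint(A, lb, ub),
           integrality=np.ones(6), bounds=Bounds(0, [np.inf]*4 + [1, 1]))
# res.x holds straw counts and tier indicators; res.fun is the minimum cost.
```

A real model would add rows enforcing the herd's genetic trait goals as linear constraints on the straw counts.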
Search of exploration opportunity for near earth objects based on analytical gradients
NASA Astrophysics Data System (ADS)
Ren, Y.; Cui, P. Y.; Luan, E. J.
2008-01-01
The problem of searching for exploration opportunities for near Earth objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived by combining the calculus of variations with the theory of the state-transition matrix. Then, initial guesses are generated randomly in the search space, and the performance index is optimized from these initial guesses under the guidance of the analytical gradients. This method not only keeps the global-search property of the traditional method, but also avoids the blindness of the traditional exploration opportunity search; hence, the computing speed can be increased greatly. Furthermore, by using this method, the search precision can be controlled effectively.
Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC
NASA Astrophysics Data System (ADS)
Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang
2018-01-01
Since a wind turbine is a complex nonlinear and strongly coupled system, the traditional PI control method can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed within proper ranges. Simulation results show that the proposed pitch control method can effectively modify the amplification coefficient when it is not suitable and keep the variations of pitch rate and rotor speed within proper ranges.
Image Edge Tracking via Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Li, Ruowei; Wu, Hongkun; Liu, Shilong; Rahman, M. A.; Liu, Sanchi; Kwok, Ngai Ming
2018-04-01
A good edge plot should use continuous thin lines to describe the complete contour of the captured object. However, the detection of weak edges is a challenging task because of the associated low pixel intensities. Ant Colony Optimization (ACO) has been employed by many researchers to address this problem. The algorithm is a meta-heuristic method developed by mimicking the natural behaviour of ants. It uses iterative searches to find optimal solutions that cannot be found via traditional optimization approaches. In this work, ACO is employed to track and repair broken edges obtained via the conventional Sobel edge detector to produce a result with more connected edges.
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
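As an illustration of the nonlinear least squares step, the sketch below fits a two-parameter gamma pdf to synthetic unit-hydrograph ordinates (toy data, not the Lighvan records):

```python
# Sketch: fit a gamma pdf to UH ordinates by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma

t = np.arange(1, 25, dtype=float)   # hours after the unit rainfall pulse
rng = np.random.default_rng(0)
uh = gamma.pdf(t, a=3.0, scale=2.5) + rng.normal(0, 0.002, t.size)

def resid(p):
    a, scale = p
    return gamma.pdf(t, a=a, scale=scale) - uh

fit = least_squares(resid, x0=[2.0, 2.0], bounds=([0.1, 0.1], [20, 20]))
a_hat, scale_hat = fit.x            # fitted shape and scale parameters
```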
NASA Astrophysics Data System (ADS)
Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda
2014-05-01
Cyber physical systems (CPS) have recently emerged as a new technology that can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the problem of DSM in the manufacturing center is solved. The results of the experiment show the self-adaptive CPSO energy optimization method enhances optimization by 5% compared with the traditional PSO optimization method.
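A generic PSO loop of the kind employed for the DSM cost minimization is sketched below; the quadratic cost is a stand-in for the paper's load-and-price model, and all coefficients are illustrative.

```python
# Sketch: particle swarm optimization with inertia, cognitive, and social
# terms. The cost function is a toy stand-in.
import numpy as np

def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)  # inertia + memory + social
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pcost
        pbest[better], pcost[better] = x[better], f[better]
        g = pbest[pcost.argmin()].copy()             # global best
    return g

g = pso(lambda z: np.sum(z**2), dim=4)               # toy convex cost
```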
Study of the motion of optimal bodies in soil by a grid method
NASA Astrophysics Data System (ADS)
Kotov, V. L.; Linnik, E. Yu
2016-11-01
The paper presents a method for calculating optimal body shapes with an axisymmetric numerical method based on Godunov's scheme and Grigoryan's model of an elastoplastic soil medium. Two problems are solved for determining the generatrix of a body of revolution of given length and base radius: finding the body of minimum resistance and the body of maximum penetration depth. Numerical calculations are carried out by a modified method of local variations, which significantly reduces the number of operations for different representations of the generatrix. Using a quadratic local-interaction model for preliminary assessments significantly simplifies the search for the optimal body. A qualitative similarity is noted between the convergence of the numerical optimization based on the local-interaction model and that based on continuum mechanics. The optimal bodies are compared with absolutely optimal bodies, which possess the minimum penetration resistance that cannot be improved upon under the given geometric constraints. It is shown that a conical striker with a variable vertex angle, equal at each penetration velocity to the angle of the absolutely optimal minimum-resistance body, has a final penetration depth only 12% greater than that of the absolutely optimal maximum-depth body.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Simonetto, Andrea
This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany the power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.
Miao, Zhidong; Liu, Dake; Gong, Chen
2017-10-01
Inductive wireless power transfer (IWPT) is a promising power technology for implantable biomedical devices, where the power consumption is low and the efficiency is the most important consideration. In this paper, we propose an optimization method for impedance matching networks (IMN) to maximize the IWPT efficiency. The IMN at the load side is designed to achieve the optimal load, and the IMN at the source side is designed to deliver the required amount of power (no more, no less) from the power source to the load. The theoretical analyses and design procedure are given. An IWPT system for an implantable glaucoma therapeutic prototype is designed as an example. Compared with the efficiency of the resonant IWPT system, the efficiency of our optimized system increases by a factor of 1.73. Moreover, the efficiency of our optimized IWPT system is 1.97 times higher than that of the IWPT system optimized by the traditional maximum power transfer method. All the discussions indicate that the optimization method proposed in this paper can achieve high efficiency and long working time when the system is powered by a battery.
Optimization of multicast optical networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng
2007-11-01
In this letter, aiming to obtain the best multicast performance of an optical network in which the video conference information is carried by a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problems of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing a novel numerical solution designed with an effective genetic algorithm. The result shows that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then gives either a new local optima and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increase in dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
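The database-model loop described above can be sketched with a Gaussian-process surrogate standing in for the neural network or response-surface model; the 1-D objective below is a toy stand-in for a high-fidelity simulation, and the optimistic lower-confidence-bound rule is one common acquisition choice among several.

```python
# Sketch: surrogate-driven design loop. Fit a GP to the database, search the
# surrogate, evaluate the candidate, and refine the database model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

expensive = lambda x: np.sin(3 * x) + 0.3 * x**2   # stand-in "CFD" objective
X = np.linspace(-2, 2, 5).reshape(-1, 1)           # initial database
y = expensive(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_new = grid[(mu - sd).argmin()]               # optimistic candidate
    X = np.vstack([X, [x_new]])                    # grow the database
    y = np.append(y, expensive(x_new))             # refine the model next pass
best = X[y.argmin()]
```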
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
Compact Termination for Structural Soft-goods
NASA Technical Reports Server (NTRS)
Wilkes, Robert, Jr.
2013-01-01
Glass fiber is unique in its ability to withstand atomic oxygen and ultraviolet radiation in space environments. However, glass fiber is also difficult to terminate by traditional methods without significantly decreasing its strength. Glass fiber products are especially sensitive to bend radius, and do not work very well with traditional 'sewn loop on pin' type connections. As with most composites, getting applied loads from a metallic structure into the webbing without stress concentrations is the key to a successful design. A potted end termination has been shown in some preliminary work to outperform traditional termination methods. It was proposed to conduct a series of tensile tests on structural webbing or cord to determine the optimum potting geometry, and to then estimate the weight and volume savings over traditional sewn-over-a-pin connections. During the course of the investigation into potted end terminations for glass fiber webbing, a new and innovative connection was developed that has lower weight, reduced fabrication time, and superior thermal tolerance compared to the metallic end terminations that were to be optimized in the original proposal. This end termination essentially transitions the flexible glass fiber webbing into a rigid fiberglass termination, which can be bolted or fastened with traditional methods.
Multidimensional optimal droop control for wind resources in DC microgrids
NASA Astrophysics Data System (ADS)
Bunker, Kaitlyn J.
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option that does not require communication between microgrid components. Eliminating the communication system as a single point of failure is especially important in remote, islanded microgrids, which are considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface of higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage, and the wind speed at the current time. An approach for optimizing this droop control surface in order to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high-dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high-dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example, containing an energy storage device as well as multiple sources and loads. Finally, the optimal high-dimension droop control method is applied with a solar resource, using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
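A sketch of the two-variable droop lookup is given below; the surface values and the voltage/wind ranges are illustrative assumptions, not the optimized surface from the dissertation.

```python
# Sketch: a droop *surface* over (bus voltage, wind speed) instead of the
# traditional one-variable droop line.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

v_bus = np.linspace(46.0, 50.0, 5)     # assumed dc bus voltage range [V]
wind = np.linspace(3.0, 12.0, 10)      # assumed wind speed range [m/s]
V, W = np.meshgrid(v_bus, wind, indexing="ij")
# Toy surface: inject more power as the bus sags and as more wind power
# (~ cube of wind speed) becomes available.
P = np.clip((50.0 - V) * 2.0, 0, None) * (W / 12.0) ** 3
droop = RegularGridInterpolator((v_bus, wind), P)
setpoint = droop([[47.5, 9.0]])        # power setpoint at this operating point
```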
Narayanaswami, Pushpa; Gronseth, Gary; Dubinsky, Richard; Penfold-Murray, Rebecca; Cox, Julie; Bever, Christopher; Martins, Yolanda; Rheaume, Carol; Shouse, Denise; Getchius, Thomas S D
2015-08-13
Evidence-based clinical practice guidelines (CPGs) are statements that provide recommendations to optimize patient care for a specific clinical problem or question. Merely reading a guideline rarely leads to implementation of recommendations. The American Academy of Neurology (AAN) has a formal process of guideline development and dissemination. The last few years have seen a burgeoning of social media such as Facebook, Twitter, and LinkedIn, and newer methods of dissemination such as podcasts and webinars. The role of these media in guideline dissemination has not been studied. Systematic evaluation of dissemination methods and comparison of the effectiveness of newer methods with traditional methods is not available. It is also not known whether specific dissemination methods may be more effectively targeted to specific audiences. Our aim was to (1) develop an innovative dissemination strategy by adding social media-based dissemination methods to traditional methods for the AAN clinical practice guidelines "Complementary and alternative medicine in multiple sclerosis" ("CAM in MS") and (2) evaluate whether the addition of social media outreach improves awareness of the CPG and knowledge of CPG recommendations, and affects implementation of those recommendations. Outcomes were measured by four surveys in each of the two target populations: patients and physicians/clinicians ("physicians"). The primary outcome was the difference in participants' intent to discuss use of complementary and alternative medicine (CAM) with their physicians or patients, respectively, after novel dissemination, as compared with that after traditional dissemination. Secondary outcomes were changes in awareness of the CPG, knowledge of CPG content, and behavior regarding CAM use in multiple sclerosis (MS). Response rates were 25.08% (622/2480) for physicians and 43.5% (348/800) for patients. Awareness of the CPG increased after traditional dissemination (absolute difference, 95% confidence interval: physicians 36%, 95% CI 25-46, and patients 10%, 95% CI 1-11) but did not increase further after novel dissemination (physicians 0%, 95% CI -11 to 11, and patients -4%, 95% CI -6 to 14). Intent to discuss CAM also increased after traditional dissemination but did not change after novel dissemination (traditional: physicians 12%, 95% CI 2-22, and patients 19%, 95% CI 3-33; novel: physicians 11%, 95% CI -1 to -21, and patients -8%, 95% CI -22 to 8). Knowledge of CPG recommendations and behavior regarding CAM use in MS did not change after either traditional dissemination or novel dissemination. Social media-based dissemination methods did not confer additional benefit over print-, email-, and Internet-based methods in increasing CPG awareness and changing intent in physicians or patients. Research on audience selection, message formatting, and message delivery is required to utilize Web 2.0 technologies optimally for dissemination.
NASA Technical Reports Server (NTRS)
Schredder, J. M.
1988-01-01
A comparative analysis was performed, using both the Geometrical Theory of Diffraction (GTD) and traditional pathlength error analysis techniques, for predicting RF antenna gain performance and pointing corrections. The NASA/JPL 70 meter antenna with its shaped surface was analyzed for gravity loading over the range of elevation angles. Also analyzed were the effects of lateral and axial displacements of the subreflector. Significant differences were noted between the predictions of the two methods, in the effect of subreflector displacements, and in the optimal subreflector positions to focus a gravity-deformed main reflector. The results are of relevance to future design procedures.
Optimization of a tensegrity wing for biomimetic applications
NASA Astrophysics Data System (ADS)
Moored, Keith W., III; Taylor, Stuart A.; Bart-Smith, Hilary
2006-03-01
Current attempts to build fast, efficient, and maneuverable underwater vehicles have looked to nature for inspiration. However, they have all been based on traditional propulsive techniques, i.e. rotary motors. In the current study a promising and potentially revolutionary approach is taken that overcomes the limitations of these traditional methods: morphing structure concepts with integrated actuation and sensing. Inspiration for this work comes from the manta ray (Manta birostris) and other batoid fish. These creatures are highly maneuverable but are also able to cruise at high speeds over long distances. In this paper, the structural foundation for the biomimetic morphing wing is a tensegrity structure. A preliminary procedure is presented for developing morphing tensegrity structures that include actuating elements. A shape optimization method is used that determines the actuator placement and actuation amount necessary to achieve the measured biological displacement field of a ray. Lastly, an experimental manta ray wing is presented that measures the static and dynamic pressure field acting on the ray's wings during a normal flapping cycle.
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
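The nonlinear energy operator chosen above is simple enough to state in a few lines: psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded at a multiple of its mean (a common heuristic; the specific multiplier here is an assumption).

```python
# Sketch: NEO spike detection on a synthetic trace.
import numpy as np

def neo_detect(x, k=8.0):
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1]**2 - x[:-2] * x[2:]  # nonlinear energy operator
    psi[0] = psi[-1] = 0.0
    thr = k * psi.mean()                     # threshold: multiple of mean NEO
    return np.flatnonzero(psi > thr)         # candidate spike sample indices

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)
x[2000:2005] += np.array([2.0, 6.0, 9.0, 5.0, 2.0])  # injected spike shape
spikes = neo_detect(x)
```

The squaring-plus-lag structure is what makes NEO attractive for hardware: it needs only two multiplies and a subtraction per sample.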
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
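As a sketch of an FIM-based criterion, the snippet below computes the Fisher Information Matrix for the Verhulst-Pearl logistic model via finite-difference sensitivities and brute-forces a small D-optimal set of three sampling times; the parameter values, noise level, and candidate grid are illustrative assumptions.

```python
# Sketch: D-optimal choice of sampling times for the logistic growth model
#   x(t) = K / (1 + (K/x0 - 1) * exp(-r t)).
import numpy as np
from itertools import combinations

def logistic(t, th):
    K, r, x0 = th
    return K / (1 + (K / x0 - 1) * np.exp(-r * t))

def fim(times, th, sigma=1.0, h=1e-6):
    # Sensitivity matrix by central finite differences (rows: times,
    # columns: parameters K, r, x0).
    S = np.stack([(logistic(times, th + h * e) - logistic(times, th - h * e))
                  / (2 * h) for e in np.eye(3)], axis=1)
    return S.T @ S / sigma**2

th = np.array([17.5, 0.7, 0.1])     # illustrative parameter values
cands = np.linspace(0.5, 20, 40)    # candidate sampling times
best = max(combinations(cands, 3),
           key=lambda ts: np.linalg.det(fim(np.array(ts), th)))
```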
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
Development of a novel and highly efficient method of isolating bacteriophages from water.
Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang
2017-08-01
Bacteriophages are widely used in the treatment of drug-resistant bacteria and the improvement of food safety through bacterial lysis. However, limited investigation of bacteriophages restricts their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophages from water, based on electropositive silica gel particles (the ESPs method). To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration of bacteriophage using bacteriophage f2. Quantitative tests showed that the recovery of the ESPs method reached over 90%. Qualitative tests demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10^0 PFU/100 L). Using host bacteria comprising 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. Results showed that the ESPs method was significantly superior to the traditional method. The ESPs method isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher, respectively, than those of the traditional method. The developed ESPs method is characterized by high isolation efficiency, efficient handling of large water sample volumes, and low requirements on water quality.
Hou, Yu-Lan; Wu, Shuang; Wang, Hua; Zhao, Yong; Liao, Peng; Tian, Qing-Qing; Sun, Wen-Jian; Chen, Bo
2013-01-01
A novel rapid method for the detection of illicit beta2-agonist additives in health foods and traditional Chinese patent medicines was developed using the desorption corona beam ionization mass spectrometry (DCBI-MS) technique. The DCBI conditions, including temperature and sample volume, were optimized according to the resulting mass spectral intensity. Matrix effects on the 9 beta2-agonist additives were not significant in the proposed rapid determination procedure. All 9 target molecules were detected within 1 min. Quantification was achieved based on the typical fragment ion in the MS2 spectrum of each analyte. The method showed good linearity in the range of 1-100 mg·L⁻¹ for all analytes. The relative deviation values were between 14.29% and 25.13%. Ten health foods and traditional Chinese patent medicines with claimed antitussive and antiasthmatic effects, obtained from local pharmacies, were analyzed; all were negative by the proposed DCBI-MS method. Without tedious sample pretreatment, the developed DCBI-MS method is simple, rapid, and sensitive for the qualification and semi-quantification of illicit beta2-agonist additives in health foods and traditional Chinese patent medicines.
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Blanchard, Robert C.; Kirsch, Michael F.; Fowler, Wallace T.
2007-01-01
On January 14, 2005, ESA's Huygens probe separated from NASA's Cassini spacecraft, entered the Titan atmosphere, and landed on its surface. As part of the NASA Engineering and Safety Center's independent technical assessment of the Huygens entry, descent, and landing (EDL), and under an agreement with ESA, NASA provided the results of all EDL analyses and associated findings to the Huygens project team prior to probe entry. In return, NASA was provided the flight data from the probe so that trajectory reconstruction could be performed and simulation models assessed. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using two independent approaches: a traditional method and a POST2-based method. Results from both approaches are discussed in this paper.
New Horizons for Ninhydrin: Colorimetric Determination of Gender from Fingerprints.
Brunelle, Erica; Huynh, Crystal; Le, Anh Minh; Halámková, Lenka; Agudelo, Juliana; Halámek, Jan
2016-02-16
In the past century, forensic investigators have universally accepted fingerprinting as a reliable identification method based on pictorial comparison. One of the most traditional detection methods uses ninhydrin, a chemical that reacts with amino acids in the fingerprint content to produce the blue-purple color known as Ruhemann's purple. It has recently been demonstrated that the amino acid content of fingerprints can be used to differentiate between male and female fingerprints. Here, we present a modified approach to the traditional ninhydrin method, combined with an optimized extraction protocol and the concept of determining gender from fingerprints. In doing so, we are able to focus on the biochemical material rather than exclusively on the physical image.
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce its washed-out appearance. Experimental results show that the proposed method produces a better enhanced image than traditional methods. Moreover, the enhanced image is free from several side effects such as a washed-out appearance, information loss, and gradation artifacts.
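The histogram-partitioning step lends itself to a short sketch. Below is a minimal illustration, assuming a three-component mixture fitted to the CIE L* channel; the function name, component count, and grid resolution are illustrative choices, not the authors' implementation.

```python
# Sketch: fit a Gaussian mixture to the lightness values and split the
# histogram at intersections of adjacent weighted components.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def partition_lightness(L_values, n_components=3):
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(L_values.reshape(-1, 1))
    order = np.argsort(gmm.means_.ravel())
    mu = gmm.means_.ravel()[order]
    sd = np.sqrt(gmm.covariances_.ravel()[order])
    w = gmm.weights_[order]

    grid = np.linspace(0.0, 100.0, 2001)  # CIE L* range
    cuts = []
    for i in range(n_components - 1):
        a = w[i] * norm.pdf(grid, mu[i], sd[i])
        b = w[i + 1] * norm.pdf(grid, mu[i + 1], sd[i + 1])
        between = (grid > mu[i]) & (grid < mu[i + 1])
        # the intersection is where dominance flips between the two components
        flip = np.where(np.diff(np.sign(a[between] - b[between])) != 0)[0]
        if flip.size:
            cuts.append(grid[between][flip[0]])
    return cuts  # interval boundaries for the per-interval transfer functions
```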
Grane, Camilla
2018-01-01
Highly automated driving will change drivers' behavioural patterns. Traditional methods used for assessing manual driving will only be applicable to the parts of human-automation interaction where the driver intervenes, such as hand-over and take-over situations. Therefore, driver behaviour assessment will need to adapt to the new driving scenarios. This paper aims at simplifying the process of selecting appropriate assessment methods. Thirty-five papers were reviewed to examine potential and relevant methods. The review showed that many studies still rely on traditional driving assessment methods. A new method, the Failure-GAM²E model, intended to aid assessment selection when planning a study, is proposed and exemplified in the paper. Failure-GAM²E includes a systematic step-by-step procedure defining the situation, failures (Failure), goals (G), actions (A), subjective methods (M), objective methods (M) and equipment (E). The use of Failure-GAM²E in a study example resulted in a well-reasoned assessment plan, a new way of measuring trust through feet movements and a proposed Optimal Risk Management Model. Failure-GAM²E and the Optimal Risk Management Model are believed to support the planning process for research studies in the field of human-automation interaction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal atlas construction through hierarchical image registration
NASA Astrophysics Data System (ADS)
Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey; Torigian, Drew A.
2016-03-01
Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Criteria other than gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows for the addition of new subjects to the atlas so that it can evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis), which allow it to be separated into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region and by comparing it, for rigid registration, to a number of traditional methods using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
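A hedged sketch of the graph construction, assuming the images are already registered and resampled to a common grid; the mean-squared-difference weighting stands in for any of the similarity measures named above.

```python
# Sketch: complete graph over subjects weighted by pairwise dissimilarity,
# reduced to a minimum spanning tree as the atlas backbone.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def atlas_mst(images):  # images: list of equally-shaped arrays
    n = len(images)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            msd = np.mean((images[i] - images[j]) ** 2)
            dist[i, j] = dist[j, i] = msd
    return minimum_spanning_tree(dist).toarray()  # nonzero entries = tree edges
```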
Palsis, John A; Brehmer, Thomas S; Pellegrini, Vincent D; Drew, Jacob M; Sachs, Barton L
2018-02-21
In an era of mandatory bundled payments for total joint replacement, accurate analysis of the cost of procedures is essential for orthopaedic surgeons and their institutions to maintain viable practices. The purpose of this study was to compare traditional accounting and time-driven activity-based costing (TDABC) methods for estimating the total costs of total hip and knee arthroplasty care cycles. We calculated the overall costs of elective primary total hip and total knee replacement care cycles at our academic medical center using traditional and TDABC accounting methods. We compared the methods with respect to the overall costs of hip and knee replacement and the costs for each major cost category. The traditional accounting method resulted in higher cost estimates. The total cost per hip replacement was $22,076 (2014 USD) using traditional accounting and was $12,957 using TDABC. The total cost per knee replacement was $29,488 using traditional accounting and was $16,981 using TDABC. With respect to cost categories, estimates using traditional accounting were greater for hip and knee replacement, respectively, by $3,432 and $5,486 for personnel, by $3,398 and $3,664 for space and equipment, and by $2,289 and $3,357 for indirect costs. Implants and consumables were derived from the actual hospital purchase price; accordingly, both methods produced equivalent results. Substantial cost differences exist between accounting methods. The focus of TDABC only on resources used directly by the patient contrasts with the allocation of all operating costs, including all indirect costs and unused capacity, with traditional accounting. We expect that the true costs of hip and knee replacement care cycles are likely somewhere between estimates derived from traditional accounting methods and TDABC. TDABC offers patient-level granular cost information that better serves in the redesign of care pathways and may lead to more strategic resource-allocation decisions to optimize actual operating margins.
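As a quick consistency check on the reported hip figures, the three per-category gaps should account for the entire headline difference, since implant and consumable costs were identical under both methods:

```python
# Hip replacement, 2014 USD, figures as reported above.
trad_hip, tdabc_hip = 22076, 12957
gaps = {"personnel": 3432, "space_equipment": 3398, "indirect": 2289}
assert sum(gaps.values()) == trad_hip - tdabc_hip  # 9119 == 9119
```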
Mixture experiment methods in the development and optimization of microemulsion formulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing improvement of their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the region of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of the ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., the one having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro) and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.
Thermal-Aware Test Access Mechanism and Wrapper Design Optimization for System-on-Chips
NASA Astrophysics Data System (ADS)
Yu, Thomas Edison; Yoneda, Tomokazu; Chakrabarty, Krishnendu; Fujiwara, Hideo
Rapid advances in semiconductor manufacturing technology have led to higher chip power densities, which places greater emphasis on packaging and temperature control during testing. For system-on-chips, peak power-based scheduling algorithms have been used to optimize tests under specified power constraints. However, imposing power constraints does not always solve the problem of overheating due to the non-uniform distribution of power across the chip. This paper presents a TAM/Wrapper co-design methodology for system-on-chips that ensures thermal safety while still optimizing the test schedule. The method combines a simplified thermal-cost model with a traditional bin-packing algorithm to minimize test time while satisfying temperature constraints. Furthermore, for temperature checking, thermal simulation is done using cycle-accurate power profiles for more realistic results. Experiments show that even a minimal sacrifice in test time can yield a considerable decrease in test temperature as well as the possibility of further lowering temperatures beyond those achieved using traditional power-based test scheduling.
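A rough sketch of the scheduling idea under stated assumptions: greedy bin-packing of core tests into test sessions, with a crude power-density proxy standing in for the paper's simplified thermal-cost model and cycle-accurate simulation. All names and the cost proxy are illustrative.

```python
# Sketch: pack (width, time, power) core tests into TAM sessions, rejecting
# placements whose proxy thermal cost would exceed a limit.
def schedule(tests, tam_width, thermal_limit):
    """tests: list of (name, width, time, power); returns list of sessions."""
    sessions = []
    for name, width, time, power in sorted(tests, key=lambda t: -t[3]):
        placed = False
        for s in sessions:
            thermal_cost = (s["power"] + power) / tam_width  # crude proxy
            if s["width"] + width <= tam_width and thermal_cost <= thermal_limit:
                s["width"] += width
                s["time"] = max(s["time"], time)
                s["power"] += power
                s["tests"].append(name)
                placed = True
                break
        if not placed:
            sessions.append({"width": width, "time": time,
                             "power": power, "tests": [name]})
    return sessions
```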
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in considerably more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is then translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to reform this problem into an optimal control problem, and the whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning: in the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead point problem effectively.
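A bare-bones APF step for intuition, assuming a point-mass UAV, a single goal, and point obstacles; the additional control force, slack variables, and optimal-control reformulation described above are not reproduced.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0, step=0.1):
    force = k_att * (goal - pos)  # attractive term
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 0 < d < d0:  # repulsion only inside the influence radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (pos - obs)
    # dead points occur where the summed force vanishes; the added control
    # force in the method above is what resolves them
    return pos + step * force / (np.linalg.norm(force) + 1e-9)
```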
Scott, WE; Weegman, BP; Balamurugan, AN; Ferrer-Fabrega, J; Anazawa, T; Karatzas, T; Jie, T; Hammer, BE; Matsumoto, S; Avgoustiniatos, ES; Maynard, KS; Sutherland, DER; Hering, BJ; Papas, KK
2014-01-01
Background: Porcine islet xenotransplantation is emerging as a potential alternative for allogeneic clinical islet transplantation. Optimization of porcine islet isolation in terms of yield and quality is critical for the success and cost effectiveness of this approach. Incomplete pancreas distension and inhomogeneous enzyme distribution have been identified as key factors limiting viable islet yield per porcine pancreas. The aim of this study was to explore the utility of magnetic resonance imaging (MRI) as a tool to investigate the homogeneity of enzyme delivery in porcine pancreata. Traditional and novel methods for enzyme delivery aimed at optimizing enzyme distribution were examined. Methods: Pancreata were procured from Landrace pigs via en bloc viscerectomy. The main pancreatic duct was then cannulated with an 18-gauge winged catheter and MRI performed at 1.5 T. Images were collected before and after ductal infusion of chilled MRI contrast agent (gadolinium) in physiological saline. Results: Regions of the distal aspect of the splenic lobe and portions of the connecting lobe and bridge exhibited reduced delivery of solution when traditional methods of distension were utilized. Use of alternative methods of delivery (such as selective re-cannulation and distension of identified problem regions) resolved these issues, and MRI was successfully utilized as a guide and assessment tool for improved delivery. Conclusion: Current methods of porcine pancreas distension do not consistently deliver enzyme uniformly or adequately to all regions of the pancreas. Novel methods of enzyme delivery should be investigated and implemented for improved enzyme distribution. MRI serves as a valuable tool to visualize and evaluate the efficacy of current and prospective methods of pancreas distension and enzyme delivery. PMID:24986758
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force, and other factors on the force measurement are theoretically analysed. A measurement correction method is then proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
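A toy sketch of the correction idea, assuming a one-hidden-layer network whose weights are selected by a rudimentary genetic algorithm rather than backpropagation; layer size, GA settings, and input features are assumptions, not the calibrated model from the paper.

```python
import numpy as np
rng = np.random.default_rng(0)

def predict(w, X, h=8):
    """Tiny MLP: weight vector w unpacked into one hidden layer of width h."""
    n_in = X.shape[1]
    W1 = w[:n_in * h].reshape(n_in, h)
    b1 = w[n_in * h:n_in * h + h]
    W2 = w[n_in * h + h:n_in * h + 2 * h]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def ga_train(X, y, h=8, pop=60, gens=200, sigma=0.3):
    dim = X.shape[1] * h + 2 * h + 1
    P = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        fit = np.array([np.mean((predict(w, X, h) - y) ** 2) for w in P])
        elite = P[np.argsort(fit)[:pop // 4]]                      # selection
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        P = np.vstack([elite, kids + rng.normal(0, sigma, kids.shape)])  # mutation
    fit = np.array([np.mean((predict(w, X, h) - y) ** 2) for w in P])
    return P[np.argmin(fit)]  # best weight vector found
```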
Optimizing pressurized liquid extraction of microbial lipids using the response surface method.
Cescut, J; Severac, E; Molina-Jouve, C; Uribelarrea, J-L
2011-01-21
Response surface methodology (RSM) was used to determine the optimum extraction parameters for maximum lipid extraction yield from yeast. Total lipids were extracted from the oleaginous yeast Rhodotorula glutinis using pressurized liquid extraction (PLE). The effects of the extraction parameters on lipid extraction yield were studied using a second-order central composite design. The optimal condition was three cycles of 15 min at 100°C with a ratio of 144 g of hydromatrix per 100 g of dry cell weight. Different analysis methods were used to compare the optimized PLE method with two conventional methods (the Soxhlet and modified Bligh and Dyer methods) under efficiency, selectivity and reproducibility criteria, using gravimetric analysis, GC with flame ionization detection, high-performance liquid chromatography with evaporative light scattering detection (HPLC-ELSD) and thin-layer chromatographic analysis. For each sample, the lipid extraction yield with optimized PLE was higher than that obtained with the reference methods (recoveries of 78% and 85% for the Soxhlet and Bligh and Dyer methods, respectively, relative to the PLE method). Moreover, the use of PLE reduced analysis time by a factor of 10 and solvent consumption by 70% compared with traditional extraction methods. Copyright © 2010 Elsevier B.V. All rights reserved.
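The response-surface fitting step itself is generic and can be sketched compactly: a least-squares fit of a full second-order model in two coded factors, from which the stationary point is located. The two-factor restriction and the function names are illustrative.

```python
import numpy as np

def fit_quadratic(X, y):
    """X: (n, 2) design points; y: responses. Returns b0,b1,b2,b12,b11,b22."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    _, b1, b2, b12, b11, b22 = coef
    B = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(B, [-b1, -b2])  # where the fitted gradient is zero
```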
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. On this basis, a novel wavelet neural network (WNN) modeling method is proposed that minimizes the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is used as the performance index for optimizing the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on the MSE criterion. Moreover, the proposed method yields a more desirable estimate, with a modeling error PDF that approximates a tall, narrow Gaussian distribution.
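A sketch of the shaping criterion in one dimension, assuming scalar modeling errors; the 2D (time and space) formulation and the gradient-descent update of the WNN parameters are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def pdf_shaping_loss(errors, target_sigma=0.1, grid=None):
    grid = np.linspace(-1.0, 1.0, 401) if grid is None else grid
    est = gaussian_kde(errors)(grid)            # data-driven KDE of the errors
    target = norm.pdf(grid, 0.0, target_sigma)  # tall, narrow target PDF
    dx = grid[1] - grid[0]
    return float(np.sum((est - target) ** 2) * dx)  # quadratic deviation
```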
Overcoming Communication Restrictions in Collectives
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian K.
2004-01-01
Many large distributed systems are characterized by having a large number of components (e.g., agents, neurons) whose actions and interactions determine a world utility which rates the performance of the overall system. Such collectives are often subject to communication restrictions, making it difficult for components that try to optimize their own private utilities to take actions that also help optimize the world utility. In this article we address that coordination problem and derive four utility functions which present different compromises between how aligned a component's private utility is with the world utility and how readily that component can determine the actions that optimize its utility. The results show that the utility functions specifically derived to operate under communication restrictions outperform both traditional methods and previous collective-based methods by up to 75%.
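One utility from this general family that is easy to state is the difference utility, which rewards a component with the world utility minus the world utility obtained when that component's action is clamped to a fixed default; whether it coincides with any of the four derived functions is not claimed here.

```python
def difference_utility(G, actions, i, default=0):
    """G: world utility function over the joint action list."""
    clamped = list(actions)
    clamped[i] = default           # remove component i's effect
    return G(actions) - G(clamped)
```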
Bi, Wentao; Tian, Minglei; Row, Kyung Ho
2012-01-01
This study highlights the application of a two-step extraction method for the extraction and separation of oxymatrine from Sophora flavescens Ait. extract, using silica-confined ionic liquids as the sorbent. The optimized silica-confined ionic liquid was first mixed with the plant extract to adsorb oxymatrine; simultaneously, interferences such as matrine were removed. The resulting suspension was then loaded into a cartridge for solid-phase extraction. Through these two steps, the target compound was adequately separated from interferences with 93.4% recovery. In comparison with traditional solid-phase extraction, this method accelerates loading and reduces the use of organic solvents during washing. Moreover, the optimization of the loading volume is simplified to the optimization of the solid/liquid ratio. Copyright © 2011 Elsevier B.V. All rights reserved.
Research on illumination uniformity of high-power LED array light source
NASA Astrophysics Data System (ADS)
Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei
2018-06-01
Uniform illumination is one of the most important problems that must be solved in the application of high-power LED arrays. A numerical optimization algorithm is applied to obtain the LED array arrangement for which the light intensity on the target surface is most evenly distributed. An evaluation function is set up from the standard deviation of the illuminance function, and the particle swarm optimization algorithm is then utilized to optimize different arrays; the resulting light intensity distribution is obtained by the optical ray tracing method. Finally, a hybrid array is designed and the optical ray tracing method is applied to simulate it. The simulation results, which are consistent with traditional theoretical calculation, show that the algorithm introduced in this paper is reasonable and effective.
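A sketch of the evaluation function under an assumed inverse-square Lambertian LED model: illuminance over a target grid scored by its standard deviation, which a PSO would then minimize over the LED positions. The mode number m and the geometry are illustrative.

```python
import numpy as np

def illuminance_std(led_xy, z, grid, m=1.0):
    """led_xy: (x, y) LED positions; z: target-plane distance; grid: 1-D axis."""
    gx, gy = np.meshgrid(grid, grid)
    E = np.zeros_like(gx)
    for x, y in led_xy:
        r2 = (gx - x) ** 2 + (gy - y) ** 2 + z ** 2
        cos_t = z / np.sqrt(r2)
        E += cos_t ** m * cos_t / r2  # Lambertian emission times projection
    return E.std()  # the PSO evaluation function to minimize
```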
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large-magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problems associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image-similarity block-match metric and physical modeling combinations. PMID:24694135
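The block-matching kernel such frameworks build on can be illustrated compactly; this normalized cross-correlation search over a local window is a generic stand-in, not the authors' metric or implementation.

```python
import numpy as np

def match_block(fixed, moving, top_left, size, search):
    """Find the best match for one fixed-image block in a search window."""
    i0, j0 = top_left
    blk = fixed[i0:i0 + size, j0:j0 + size].astype(float)
    blk = (blk - blk.mean()) / (blk.std() + 1e-9)
    best, best_ij = -np.inf, (i0, j0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            i, j = i0 + di, j0 + dj
            if i < 0 or j < 0 or i + size > moving.shape[0] or j + size > moving.shape[1]:
                continue
            cand = moving[i:i + size, j:j + size].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            ncc = np.mean(blk * cand)  # normalized cross-correlation
            if ncc > best:
                best, best_ij = ncc, (i, j)
    return best_ij, best
```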
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search stages and search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
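For orientation, the baseline pheromone update that such variants modify looks roughly as follows; how IPVACO adapts the volatilization coefficient and IGPACO reweights the global deposit is not reproduced here.

```python
import numpy as np

def update_pheromone(tau, paths, costs, rho=0.1, Q=1.0):
    """tau: pheromone matrix; paths: node lists; costs: path costs."""
    tau *= (1.0 - rho)                 # volatilization
    for path, cost in zip(paths, costs):
        for i, j in zip(path, path[1:]):
            tau[i, j] += Q / cost      # cheaper paths deposit more pheromone
    return tau
```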
Xun-Ping, W; An, Z
2017-07-27
Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results: The method (SOPA) proposed in this study had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
Theory and Computation of Optimal Low- and Medium- Thrust Orbit Transfers
NASA Technical Reports Server (NTRS)
Goodson, Troy D.; Chuang, Jason C. H.; Ledsinger, Laura A.
1996-01-01
This report presents new theoretical results which lead to new algorithms for the computation of fuel-optimal multiple-burn orbit transfers of low and medium thrust. Theoretical results introduced herein show how to add burns to an optimal trajectory and show that the traditional set of necessary conditions may be replaced with a much simpler set of equations. Numerical results are presented to demonstrate the utility of the theoretical results and the new algorithms. Two indirect methods from the literature are shown to be effective for the optimal orbit transfer problem with relatively small numbers of burns. These methods are the Minimizing Boundary Condition Method (MBCM) and BOUNDSCO. Both of these methods make use of the first-order necessary conditions exactly as derived by optimal control theory. Perturbations due to Earth's oblateness and atmospheric drag are considered. These perturbations are of greatest interest for transfers that take place between low Earth orbit altitudes and geosynchronous orbit altitudes. Example extremal solutions including these effects and computed by the aforementioned methods are presented. An investigation is also made into a suboptimal multiple-burn guidance scheme. The FORTRAN code developed for this study has been collected together in a package named ORBPACK. ORBPACK's user manual is provided as an appendix to this report.
Adaptive feature selection using v-shaped binary particle swarm optimization.
Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong
2017-01-01
Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
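The bit-update rule that distinguishes the V-shaped variant can be sketched in a few lines; |tanh(v)| is one common V-shaped choice and may differ from the exact transfer function used above.

```python
import numpy as np
rng = np.random.default_rng(1)

def v_transfer(v):
    return np.abs(np.tanh(v))  # large |v| -> high flip probability

def update_bits(x, v):
    """x: binary feature mask; v: velocities. Flip bits, don't saturate them."""
    flip = rng.random(x.shape) < v_transfer(v)
    return np.where(flip, 1 - x, x)
```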
Improving processes through evolutionary optimization.
Clancy, Thomas R
2011-09-01
As systems evolve over time, their natural tendency is to become increasingly more complex. Studies on complex systems have generated new perspectives on management in social organizations such as hospitals. Much of this research appears as a natural extension of the cross-disciplinary field of systems theory. This is the 18th in a series of articles applying complex systems science to the traditional management concepts of planning, organizing, directing, coordinating, and controlling. In this article, I discuss methods to optimize complex healthcare processes through learning, adaptation, and evolutionary planning.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on the feedback formulation and has become one of the most widely preferred multiple suppression methods. However, differences are apparent between the predicted multiples and those in the source seismic records, so conventional adaptive multiple subtraction methods are often barely able to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses the multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time-window FK filtering. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method with those of the extended Wiener filtering technique. It is well suited to suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
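The Wiener-filter subtraction at the core of such schemes can be sketched for a single trace: estimate a short matching filter by least squares and subtract the shaped multiples. This is the standard single-channel formulation, not the paper's extended variant.

```python
import numpy as np
from scipy.linalg import toeplitz

def adaptive_subtract(d, m, flen=11):
    """d: recorded trace; m: predicted multiple trace; flen: filter length."""
    pad = np.zeros(flen - 1)
    M = toeplitz(np.concatenate([m, pad]), np.zeros(flen))  # convolution matrix
    f, *_ = np.linalg.lstsq(M, np.concatenate([d, pad]), rcond=None)
    return d - (M @ f)[:len(d)]  # estimate of the primaries
```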
A Mixed-Methods, Multiprofessional Approach to Needs Assessment for Designing Education
ERIC Educational Resources Information Center
Moore, Heidi K.; McKeithen, Tom M.; Holthusen, Amy E.
2011-01-01
Like most hospital units, neonatal intensive care units (NICUs) are multidisciplinary and team-based. As a result, providing optimal nutritional care to premature infants involves using the knowledge and skills of several types of professionals. Using traditional needs assessment methodologies to effectively understand the educational needs…
The trade-off between morphology and control in the co-optimized design of robots.
Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya
2017-01-01
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques.
NASA Astrophysics Data System (ADS)
Keum, Jongho; Coulibaly, Paulin
2017-07-01
Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
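The entropy machinery can be sketched under a simple equal-width quantization (one of the cases compared above); the epsilon-dominance optimization loop over candidate station sets is omitted.

```python
import numpy as np

def quantize(x, bins=10):
    edges = np.linspace(x.min(), x.max(), bins + 1)
    return np.clip(np.digitize(x, edges) - 1, 0, bins - 1)

def joint_entropy(columns):  # columns: list of quantized integer series
    codes = np.ravel_multi_index(columns, [c.max() + 1 for c in columns])
    p = np.bincount(codes).astype(float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))  # bits; maximized over station sets

def total_correlation(columns):  # redundancy to be minimized
    return sum(joint_entropy([c]) for c in columns) - joint_entropy(columns)
```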
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to take some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Target identification on the sea battlefield is a prerequisite for judging enemy intent in modern naval warfare. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical sea-battlefield targets. Different from the traditional single-input/single-output identification method, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
Spectrally optimal illuminations for diabetic retinopathy detection in retinal imaging
NASA Astrophysics Data System (ADS)
Bartczak, Piotr; Fält, Pauli; Penttinen, Niko; Ylitepsa, Pasi; Laaksonen, Lauri; Lensu, Lasse; Hauta-Kasari, Markku; Uusitalo, Hannu
2017-04-01
Retinal photography is a standard method for recording retinal diseases for subsequent analysis and diagnosis. However, the currently used white light or red-free retinal imaging does not necessarily provide the best possible visibility of different types of retinal lesions, important when developing diagnostic tools for handheld devices, such as smartphones. Using specifically designed illumination, the visibility and contrast of retinal lesions could be improved. In this study, spectrally optimal illuminations for diabetic retinopathy lesion visualization are implemented using a spectrally tunable light source based on digital micromirror device. The applicability of this method was tested in vivo by taking retinal monochrome images from the eyes of five diabetic volunteers and two non-diabetic control subjects. For comparison to existing methods, we evaluated the contrast of retinal images taken with our method and red-free illumination. The preliminary results show that the use of optimal illuminations improved the contrast of diabetic lesions in retinal images by 30-70%, compared to the traditional red-free illumination imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hodge, Brian S; Cho, Gyu-Jung
Voltage regulation devices have traditionally been installed and utilized to support distribution voltages. Installations of distributed energy resources (DERs) in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile of a feeder; therefore, in the distribution system planning stage of the optimal operation and dispatch of voltage regulation devices, possible high penetrations of DERs should be considered. In this paper, we model the IEEE 34-bus test feeder, including all essential equipment. An optimization method is adopted to determine the optimal siting and operation of the voltage regulation devices in the presence of distributed solar power generation. Finally, we verify the optimal configuration of the entire system through the optimization and simulation results.
Optimized stereo matching in binocular three-dimensional measurement system using structured light.
Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong
2014-09-10
In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as a large amount of sample computation and strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (quick k-nearest neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity computation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
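To make the two-stage idea concrete, here is a hedged sketch: the sample library is first shrunk to class-wise representatives (using KMeans as a simple stand-in for the CURE clustering the paper uses), and queries then vote among the k nearest representatives only. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_representatives(X, y, per_class=20):
    """Shrink the sample library by replacing each class with cluster
    centers (a KMeans stand-in for the paper's CURE-based step)."""
    reps, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10).fit(Xc)
        reps.append(km.cluster_centers_)
        labels.append(np.full(k, c))
    return np.vstack(reps), np.concatenate(labels)

def qknn_predict(x, reps, rep_labels, k=5):
    """Classify x by majority vote among its k nearest representatives,
    so far fewer distances are computed than against the full library."""
    d = np.linalg.norm(reps - x, axis=1)
    nearest = rep_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```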
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
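One common reading of PCA-based stacking, sketched below under stated assumptions: traces in an NMO-corrected gather are weighted by their loading on the first principal component, computed with a truncated SVD to avoid the full-decomposition bottleneck the authors mention. This illustrates the idea, not the paper's exact algorithm.

```python
import numpy as np
from scipy.sparse.linalg import svds

def pca_stack(gather):
    """gather: (n_traces, n_samples) array of NMO-corrected traces.
    Weight each trace by its loading on the first principal component,
    so traces dominated by noise contribute less than in a plain mean."""
    X = gather - gather.mean(axis=0)
    # A rank-1 truncated SVD extracts the dominant component far more
    # cheaply than a full eigendecomposition on massive gathers.
    u, s, vt = svds(X.astype(float), k=1)
    weights = np.abs(u[:, 0])
    weights /= weights.sum()
    return weights @ gather
```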
Autofocus method for automated microscopy using embedded GPUs.
Castillo-Secilla, J M; Saval-Calvo, M; Medina-Valdès, L; Cuenca-Asensi, S; Martínez-Álvarez, A; Sánchez, C; Cristóbal, G
2017-03-01
In this paper we present a method for autofocusing images of sputum smears taken from a microscope, which combines finding the optimal focus distance with an algorithm for extending the depth of field (EDoF). Our multifocus fusion method produces a unique image in which all the relevant objects of the analyzed scene are well focused, independently of their distance to the sensor. This process is computationally expensive, which makes its automation infeasible on traditional embedded processors. For this purpose, a low-cost optimized implementation is proposed using the limited-resource embedded GPU integrated on a cutting-edge NVIDIA system-on-chip. Extensive tests performed on different sputum smear image sets show the real-time capabilities of our implementation while maintaining the quality of the output image.
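As an illustration of the focus-distance search (the EDoF fusion step is omitted), a standard sharpness score such as the variance of the Laplacian can rank a stack of frames. This metric is a common choice in autofocus work and is assumed here; it is not necessarily the paper's measure.

```python
import cv2
import numpy as np

def focus_measure(image):
    """Variance of the Laplacian: a common sharpness score that peaks
    near the optimal focus distance."""
    return cv2.Laplacian(image, cv2.CV_64F).var()

def best_focus(stack):
    """stack: list of grayscale frames taken at increasing focus
    distances; returns the index of the sharpest frame."""
    scores = [focus_measure(f) for f in stack]
    return int(np.argmax(scores))
```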
Smith, Aaron Douglas; Lockman, Nur Ain; Holtzapple, Mark T
2011-06-01
Nutrients are essential for microbial growth and metabolism in mixed-culture acid fermentations. Understanding the influence of nutrient feeding strategies on fermentation performance is necessary for optimization. For a four-bottle fermentation train, five nutrient contacting patterns (single-point nutrient addition to fermentors F1, F2, F3, and F4 and multi-point parallel addition) were investigated. Compared to the traditional nutrient contacting method (all nutrients fed to F1), the near-optimal feeding strategies improved exit yield, culture yield, process yield, exit acetate-equivalent yield, conversion, and total acid productivity by approximately 31%, 39%, 46%, 31%, 100%, and 19%, respectively. There was no statistical improvement in total acid concentration. The traditional nutrient feeding strategy had the highest selectivity and acetate-equivalent selectivity. Total acid productivity depends on carbon-nitrogen ratio.
NASA Astrophysics Data System (ADS)
Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling
2017-09-01
The oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, which employs an optical frequency division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduces the measured RMSE to 0.0965, a notable improvement in the accuracy of oxygen saturation measurement. The method can also simplify the circuit and reduce the number of components required. Furthermore, it has great reference value for improving the signal-to-noise ratio of other physiological signals.
Jin, Junchen
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998
Liu, Ying-Pei; Liang, Hai-Ping; Gao, Zhong-Ke
2015-01-01
In order to improve the performance of voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) in the rectifier side. Firstly, we deduce the high frequency transient mathematical model of VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We ignore the tracking differentiator in the ADRC controller aiming to improve the system dynamic response speed. On this basis, we derive the mathematical model of ADRC controller optimized by LSSVM for direct current voltage loop. Finally we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ the time-frequency representation methods, i.e., Wigner-Ville distribution (WVD) and adaptive optimal kernel (AOK) time-frequency representation, to demonstrate our proposed method performs better than the traditional method from the perspective of energy distribution in time and frequency plane.
Sun, Meng; Lin, Yuanyuan; Zhang, Jie; Zheng, Shaohua; Wang, Sicen
2016-03-01
A rapid analytical method based on online solid-phase extraction with high-performance liquid chromatography and mass spectrometry has been established and applied to the determination of tannin compounds that may cause adverse effects in traditional Chinese medicine injections. Different solid-phase extraction sorbents have been compared and the elution buffer was optimized. The performance of the method was verified by evaluation of recovery (≥40%), repeatability (RSD ≤ 6%), linearity (r(2) ≥ 0.993), and limit of quantification (≤0.35 μg/mL). Five tannin compounds, gallic acid, cianidanol, gallocatechin gallate, ellagic acid, and penta-O-galloylglucose, were identified with concentrations ranging from 3.1-37.4 μg/mL in the analyzed traditional Chinese medicine injections. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An improved local radial point interpolation method for transient heat conduction analysis
NASA Astrophysics Data System (ADS)
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation using the penalty function method according to optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented to demonstrate the validity and accuracy of the present approach in comparison with traditional thin plate spline (TPS) radial basis functions.
Self-similarity Clustering Event Detection Based on Triggers Guidance
NASA Astrophysics Data System (ADS)
Zhang, Xianfei; Li, Bicheng; Tian, Yuxuan
Traditional methods for Event Detection and Characterization (EDC) treat event detection as a classification problem, taking words as training samples for the classifier, which leads to an imbalance between positive and negative samples. This approach also suffers from data sparseness when the corpus is small. Instead of classifying events with words as samples, this paper clusters events when judging event types. It uses self-similarity, under the guidance of event triggers, to converge on the value of K in the K-means algorithm, thereby optimizing the clustering algorithm. Then, combined with named entities and their relative position information, the new method further pinpoints the event type. The new method avoids the dependence on event templates found in traditional methods, and its event detection results can be used in automatic text summarization, text retrieval, and topic detection and tracking.
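A hedged sketch of the K-selection step: grow K until the clustering objective stops improving by more than a tolerance, a simple stand-in for the trigger-guided self-similarity convergence the paper describes. The function name and the 5% tolerance are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_k(X, k_max=10, tol=0.05):
    """Increase K until the within-cluster objective (inertia) improves
    by less than `tol` relative to the previous step, then keep the
    previous K. A simple convergence-based stand-in for the paper's
    trigger-guided self-similarity criterion."""
    prev = None
    for k in range(1, k_max + 1):
        inertia = KMeans(n_clusters=k, n_init=10).fit(X).inertia_
        if prev is not None and (prev - inertia) / prev < tol:
            return k - 1
        prev = inertia
    return k_max
```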
Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M
2016-10-01
To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses. © 2016 The Society for Applied Microbiology.
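For concreteness, the traditional (direct) calculation and its sensitivity to measurement error can be sketched as follows. The 0.511 g/g factor is the standard theoretical ethanol yield from glucose; the sample numbers are hypothetical. The indirect by-product balance depends on details not given in the abstract, so it is not reproduced here.

```python
def direct_efficiency(ethanol_g, sugar_consumed_g):
    """Traditional (direct) method: ethanol produced over sugar consumed,
    as a percentage of the theoretical yield of 0.511 g ethanol per g
    glucose."""
    return 100.0 * ethanol_g / (sugar_consumed_g * 0.511)

base = direct_efficiency(38.0, 100.0)       # ~74.4%, hypothetical inputs
perturbed = direct_efficiency(38.0, 102.0)  # a 2% error in one measured
print(base - perturbed)                     # input shifts the estimate ~1.5 points
```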
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
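The core ISLOCE idea, a cheap parametric surrogate standing in for an expensive subsystem model during system-level optimization, can be sketched as below. The toy subsystem function, network size, and bounds are all hypothetical; the paper's framework trains neural-network approximations of real subsystem models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

# Hypothetical expensive subsystem model: mass as a function of two
# design variables (a stand-in for a real tank-sizing code).
def subsystem_mass(x):
    return (x[0] - 1.0) ** 2 + 0.5 * np.sin(3 * x[1]) + x[1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))        # sampled design points
y = np.array([subsystem_mass(x) for x in X])

# Train the neural-network approximation of the subsystem.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# The system-level optimizer queries the cheap surrogate, not the model,
# so it can run in the background of a concurrent design session.
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)])
print(res.x, res.fun)
```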
Design of optimized piezoelectric HDD-sliders
NASA Astrophysics Data System (ADS)
Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.
2010-04-01
As storage data density in hard-disk drives (HDDs) increases for constant or miniaturizing sizes, precision positioning of HDD heads becomes a more relevant issue to ensure that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirement of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this matter, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD sliders, by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists of generating optimal structures that provide maximal displacements, appropriate structural stiffness and resonance phenomena avoidance. The requirements are achieved by applying formulations to maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results that confirm the feasibility of this approach.
NASA Astrophysics Data System (ADS)
Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd
2018-04-01
Remanufacturing is a sustainable strategic planning approach that restores an end-of-life product to as-new performance, with a warranty equal to or better than that of the original product. To quantify the advantages of this strategy, every process must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of end-of-life crankshafts based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting, taking the critical parameters into account. By implementing the optimization, the remanufacturer produced lower cost and less waste during production, with a higher potential to gain profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method for revealing the criticality of parameters. When the method was applied to the MAN engine model, 5 out of 6 crankpins were critical and required grinding, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR1401.29 to MYR1251.29, while the Caterpillar engine model saw no change because no parameters were deemed critical. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be reached after observing the potential profit to be gained. The significance of the results lies in promoting sustainability by reducing the re-melting of damaged parts, ensuring the consistent benefit of returned cores.
NASA Astrophysics Data System (ADS)
Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel
2013-06-01
To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan
2002-07-01
In high performance liquid chromatography, it is necessary to apply multi-composition gradient elution for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practical separations, the separation selectivity of samples can be effectively adjusted by using a ternary mobile phase. For the optimization of these parameters, the retention equation of the samples must first be obtained. Traditionally, several isocratic experiments are used to get the retention equation of each solute. However, this is time-consuming, especially for the separation of complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was proposed based on the migration rule of the solute in the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; then the optimal separation conditions are searched according to the hierarchical chromatography response function, which acts as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of the retention equation of each solute. For a ternary mobile phase, only four linear gradient runs are needed to get the coefficients of the retention equation. Then the retention times of the solutes under an arbitrary mobile phase composition can be predicted. The initial optimal mobile phase composition is obtained by resolution mapping for all of the solutes. A hierarchical chromatography response function is used to evaluate the separation efficiencies and search for the optimal elution conditions. In the subsequent optimization, the migration distance of the solute in the column is considered to decide the mobile phase composition and the sustaining time of the later steps, until all the solutes are eluted. Thus the first stepwise gradient elution conditions are predicted. If the resolution of the samples under the predicted optimal separation conditions is satisfactory, the optimization procedure is stopped; otherwise, the coefficients of the retention equation are adjusted according to the experimental results under the previously predicted elution conditions, and new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, satisfactory separation conditions can be found after only six experiments using the proposed method. In comparison with the traditional optimization method, the time needed to finish the optimization procedure can be greatly reduced. The method has been validated by its application to the separation of several samples, such as amino acid derivatives and aromatic amines, in which satisfactory separations with the predicted resolution were obtained.
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization(IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
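A hedged sketch of the NAPSO idea applied to SVM parameter tuning: a particle swarm searches log(C), log(gamma), with a simulated-annealing acceptance of worse personal bests and a natural-selection step that respawns the worst particles near the global best. The coefficients, cooling schedule, and cross-validation objective are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    C, gamma = np.exp(params)                  # search in log space
    model = SVR(C=C, gamma=gamma)
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()

def napso_svm(X, y, n_particles=12, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-3, 3, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    g = pbest[np.argmin(pcost)].copy()
    T = 1.0                                    # annealing temperature
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, -3, 3)
        cost = np.array([fitness(p, X, y) for p in pos])
        # SA acceptance: occasionally keep a worse personal best to
        # escape local optima; probability decays with temperature.
        accept = (cost < pcost) | (rng.random(n_particles)
                                   < np.exp(-(cost - pcost) / T))
        pbest[accept], pcost[accept] = pos[accept], cost[accept]
        worst = np.argsort(pcost)[-2:]         # natural-selection step:
        pos[worst] = g + 0.1 * rng.standard_normal((2, 2))
        g = pbest[np.argmin(pcost)].copy()
        T *= 0.9
    return np.exp(g)                           # best (C, gamma)
```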
Optimal information networks: Application for data-driven integrated health in populations
Servadio, Joseph L.; Convertino, Matteo
2018-01-01
Development of composite indicators for integrated health in populations typically relies on a priori assumptions rather than model-free, data-driven evidence. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, instead considering only individual correlations. In addition, a unified method for assessing integrated health statuses of populations is lacking, making systematic comparison among populations impossible. We propose the use of maximum entropy networks (MENets) that use transfer entropy to assess interrelatedness among selected variables considered for inclusion in a composite indicator. We also define optimal information networks (OINs) that are scale-invariant MENets, which use the information in constructed networks for optimal decision-making. Health outcome data from multiple cities in the United States are applied to this method to create a systemic health indicator, representing integrated health in a city. PMID:29423440
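Transfer entropy, the quantity the MENets are built on, can be estimated for two discretized series with a simple plug-in histogram estimator. The sketch below assumes first-order histories and quantile binning, which are common choices but not necessarily the authors'.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Histogram estimate of T(X -> Y): how much the past of X reduces
    uncertainty about the next value of Y beyond Y's own past."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    trip = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y_now, x_now)
    pair_yy = Counter(zip(yd[1:], yd[:-1]))
    pair_yx = Counter(zip(yd[:-1], xd[:-1]))
    single_y = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in trip.items():
        # p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ], in counts
        te += (c / n) * np.log2((c * single_y[y0]) /
                                (pair_yy[(y1, y0)] * pair_yx[(y0, x0)]))
    return te
```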
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it; Alfonso, L.
2016-06-08
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
Simultaneously optimizing dose and schedule of a new cytotoxic agent.
Braun, Thomas M; Thall, Peter F; Nguyen, Hoang; de Lima, Marcos
2007-01-01
Traditionally, phase I clinical trial designs are based upon one predefined course of treatment while varying among patients the dose given at each administration. In actual medical practice, patients receive a schedule comprised of several courses of treatment, and some patients may receive one or more dose reductions or delays during treatment. Consequently, the overall risk of toxicity for each patient is a function of both the actual schedule of treatment and the differing doses used at each administration. Our goal is to provide a practical phase I clinical trial design that more accurately reflects actual medical practice by accounting for both dose per administration and schedule. We propose an outcome-adaptive Bayesian design that simultaneously optimizes both dose and schedule in terms of the overall risk of toxicity, based on time-to-toxicity outcomes. We use computer simulation as a tool to calibrate design parameters. We describe a phase I trial in allogeneic bone marrow transplantation that was designed and is currently being conducted using our new method. Our computer simulations demonstrate that our method outperforms any method that searches for an optimal dose but does not allow schedule to vary, both in terms of the probability of identifying optimal (dose, schedule) combinations, and the numbers of patients assigned to those combinations in the trial. Our design requires greater sample sizes than those seen in traditional phase I studies due to the larger number of treatment combinations examined. Our design also assumes that the effects of multiple administrations are independent of each other and that the hazard of toxicity is the same for all administrations. Our design is the first for phase I clinical trials that is sufficiently flexible and practical to truly reflect clinical practice by varying both the dose and the timing and number of administrations given to each patient.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-loss transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
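To illustrate how structure-aware computation cuts the cost of covariance propagation, here is a sketch for the special case of a block-diagonal transition matrix, where (F P F^T)_ij = F_i P_ij F_j^T can be formed block by block instead of as one dense n^3 product. This is a generic example of exploiting sparsity, not the paper's offline-derived scheme.

```python
import numpy as np

def predict_block_diag(x, P, F_blocks, Q):
    """Kalman time update x <- F x, P <- F P F^T + Q when F is
    block-diagonal with blocks F_blocks. Each covariance block is
    updated independently, avoiding the dense triple product."""
    sizes = [B.shape[0] for B in F_blocks]
    idx = np.cumsum([0] + sizes)
    x_new, P_new = x.copy(), P.copy()
    for i, Fi in enumerate(F_blocks):
        si = slice(idx[i], idx[i + 1])
        x_new[si] = Fi @ x[si]
        for j, Fj in enumerate(F_blocks):
            sj = slice(idx[j], idx[j + 1])
            P_new[si, sj] = Fi @ P[si, sj] @ Fj.T
    return x_new, P_new + Q
```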
Mixture experiment methods in the development and optimization of microemulsion formulations.
Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing improvement of their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the region of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of the ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., the one having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kyzar, Kathleen; Jimerson, Jo Beth
2018-01-01
Evidence around adolescent learning and development is clear: School-family partnerships matter. However, traditional methods for engaging families that narrowly define who is involved and what constitutes involvement fall short of promoting optimal outcomes. Meaningful family engagement practices involve reciprocal, two-way interactions between…
USDA-ARS?s Scientific Manuscript database
Campylobacter jejuni (C. jejuni) is one of the most common causes of gastroenteritis in the world. Given the potential risks to human, animal and environmental health the development and optimization of methods to quantify this important pathogen in environmental samples is essential. Two of the mos...
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The Study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
NASA Astrophysics Data System (ADS)
Xuan, Li; He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai
2016-09-01
Multi-conjugate adaptive optics (MCAO) systems have been investigated and used in large-aperture optical telescopes for high-resolution imaging with a large field of view (FOV). The atmospheric tomographic phase reconstruction and the projection of the three-dimensional turbulence volume onto wavefront correctors, such as deformable mirrors (DMs) or liquid crystal wavefront correctors (LCWCs), is a very important step in the data processing of an MCAO controller. In this paper, a method based on the wavefront reconstruction performance of MCAO is presented to evaluate the optimized configuration of multiple laser guide stars (LGSs) and the reasonable conjugation heights of LCWCs. Analytical formulations are derived for the different configurations and are used to generate optimized parameters for MCAO. Several examples are given to demonstrate our LGS configuration optimization method. Compared with traditional methods, our method has minimum wavefront tomographic error, which will be helpful for achieving higher imaging resolution over a large FOV in MCAO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
NASA Astrophysics Data System (ADS)
Saverskiy, Aleksandr Y.; Dinca, Dan-Cristian; Rommel, J. Martin
The Intra-Pulse Multi-Energy (IPME) method of material discrimination mitigates the main disadvantages of the traditional "interlaced" approach: ambiguity caused by sampling different regions of cargo, and reduction of the effective scanning speed. A novel concept of creating multi-energy probing pulses using a standing-wave structure allows maintaining a constant energy spectrum while changing the time duration of each sub-pulse, and thus enables adaptive cargo inspection. Depending on the cargo density, the dose delivered to the inspected object is optimized for best material discrimination, maximum material penetration, or lowest dose to cargo. A model based on Monte-Carlo simulation, together with experimental reference points, was developed for the optimization of inspection conditions.
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
A streamlined artificial variable free version of simplex method.
Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad
2015-01-01
This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis that is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.
A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.
Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei
2014-10-01
Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then selects suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets. The experimental data are then evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. Copyright © 2014. Published by Elsevier Inc.
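A minimal sketch of the PWCF prediction step: the predicted difficulty is a weighted average of peer ratings in which each weight is a function of the rater's performance level. The exponential kernel below is a hypothetical choice; the paper specifies only that the weight depends on performance.

```python
import numpy as np

def pwcf_predict(peer_ratings, trainee_performance):
    """peer_ratings: list of (rating, rater_performance) pairs for one
    target case. Returns a performance-weighted average: each rating is
    weighted by how close the rater's performance level was to the
    trainee's when the rating was made (a hypothetical kernel; plain
    collaborative filtering would use equal weights instead)."""
    ratings = np.array([r for r, _ in peer_ratings], dtype=float)
    perf = np.array([p for _, p in peer_ratings], dtype=float)
    w = np.exp(-np.abs(perf - trainee_performance))
    return float(np.dot(w, ratings) / w.sum())
```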
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
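For reference, a compact DE/rand/1/bin loop is sketched below; in the aerodynamic setting, f would wrap a Navier-Stokes evaluation and the population members could be evaluated in parallel, with the paper's efficiency improvements layered on top. Population size and control parameters are conventional defaults, not the paper's settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Classic DE/rand/1/bin with greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: perturb one random member by a scaled difference
            a, b, c = pop[rng.choice([j for j in range(pop_size)
                                      if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, keeping at least one mutant gene
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                  # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[np.argmin(cost)], cost.min()
```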
A Multi-agent Based Cooperative Voltage and Reactive Power Control
NASA Astrophysics Data System (ADS)
Ishida, Masato; Nagata, Takeshi; Saiki, Hiroshi; Shimada, Ikuhiko; Hatano, Ryousuke
In order to maintain the system voltage within the optimal range and prevent voltage instability phenomena before they occur, a variety of phase modifying equipment is installed in optimal locations throughout the power system network and a variety of voltage and reactive power control methods are employed. The proposed system divides the traditional voltage and reactive power control method into two subproblems: “voltage control”, which adjusts the secondary bus voltage of substations, and “reactive power control”, which adjusts the primary bus voltage. In this system, two types of agents are installed in substations in order to coordinate “voltage control” and “reactive power control”. In order to verify the performance of the proposed method, it has been applied to a model network system. The results confirm that our proposed method is able to cope with violent fluctuations in load.
[Lateral chromatic aberrations correction for AOTF imaging spectrometer based on doublet prism].
Zhao, Hui-Jie; Zhou, Peng-Wei; Zhang, Ying; Li, Chong-Chong
2013-10-01
A user-defined surface function method is proposed to model the acousto-optic interaction of an AOTF based on the wave-vector matching principle. Assessment experiments show that this model can achieve an accurate ray trace of the AOTF diffracted beam. In addition, an AOTF imaging spectrometer presents large residual lateral color when the traditional chromatic aberration correction method is adopted. In order to reduce the lateral chromatic aberrations, a method based on a doublet prism is proposed. The optical material and the angle of the prism are optimized automatically using global optimization with the help of the user-defined AOTF surface. Simulation results show that the proposed method offers great convenience for AOTF imaging spectrometers, reducing the lateral chromatic aberration to less than 0.0003 degrees, an improvement of one order of magnitude, with the spectral image shift effectively corrected.
Gao, JianZhao; Tao, Xue-Wen; Zhao, Jia; Feng, Yuan-Ming; Cai, Yu-Dong; Zhang, Ning
2017-01-01
Lysine acetylation, as one type of post-translational modification (PTM), plays key roles in cellular regulation and can be involved in a variety of human diseases. However, it is often costly and time-consuming to identify lysine acetylation sites with traditional experimental approaches. Therefore, effective computational methods should be developed to predict the acetylation sites. In this study, we developed a position-specific method for epsilon lysine acetylation site prediction. Sequences of acetylated proteins were retrieved from the UniProt database. Various kinds of features, such as the position specific scoring matrix (PSSM), amino acid factors (AAF), and disorder, were incorporated. A feature selection method based on mRMR (Maximum Relevance Minimum Redundancy) and IFS (Incremental Feature Selection) was employed. Finally, 319 optimal features were selected from the total of 541 features. Using the 319 optimal features to encode peptides, a predictor was constructed based on dagging. As a result, an accuracy of 69.56% with an MCC of 0.2792 was achieved. We analyzed the optimal features, which suggested some important factors determining the lysine acetylation sites. Analysis of the optimal features thus provided insights into the mechanism of lysine acetylation, providing guidance for experimental validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
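The feature-selection stage can be sketched as a greedy mRMR ranking followed by IFS. Below, relevance is mutual information and redundancy is approximated by absolute correlation, a common simplification rather than the exact criterion used in the study.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_rank(X, y, n_select):
    """Greedy mRMR: repeatedly pick the feature with maximum relevance
    to the labels minus its mean redundancy (here, absolute correlation)
    with the already-selected set."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean()
                  for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# IFS would then train the classifier on the top-1, top-2, ... feature
# subsets and keep the subset with the best cross-validated accuracy.
```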
Ultrasound image edge detection based on a novel multiplicative gradient and Canny operator.
Zheng, Yinfei; Zhou, Yali; Zhou, Hao; Gong, Xiaohong
2015-07-01
To achieve fast and accurate segmentation of ultrasound images, a novel edge detection method for speckle-noised ultrasound images is proposed, based on the traditional Canny operator and a novel multiplicative gradient operator. The proposed technique combines a new multiplicative gradient operator of non-Newtonian type with the traditional Canny operator to generate the initial edge map, which is subsequently optimized by a following edge-tracing step. To verify the proposed method, we compared it with several other edge detection methods known for good robustness to noise, in experiments on simulated and in vivo medical ultrasound images. Experimental results showed that the proposed algorithm is fast enough for real-time processing, with an edge detection accuracy of 75% or more. Thus, the proposed method is well suited for fast and accurate edge detection of medical ultrasound images. © The Author(s) 2014.
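One way to read "multiplicative gradient of non-Newtonian type" is a ratio-of-neighbors gradient, which is more stable under the multiplicative speckle noise of ultrasound than an additive difference. The sketch below computes such a gradient in the log domain and feeds it to the classical Canny detector; the log-domain construction and the thresholds are assumptions for illustration, not the paper's exact operator.

```python
import cv2
import numpy as np

def multiplicative_gradient(img, eps=1.0):
    """Ratio-based gradient computed as a log-domain Sobel gradient:
    log(a) - log(b) = log(a/b), so additive differences of the log image
    correspond to multiplicative ratios of intensities."""
    log_img = np.log(img.astype(np.float64) + eps)
    gx = cv2.Sobel(log_img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(log_img, cv2.CV_64F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def detect_edges(img):
    """Initial edge map: Canny run on the multiplicative-gradient image
    (threshold values are illustrative guesses)."""
    return cv2.Canny(multiplicative_gradient(img), 50, 150)
```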
Potential benefits of genomic selection on genetic gain of small ruminant breeding programs.
Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Elsen, J M
2013-08-01
In conventional small ruminant breeding programs, only pedigree and phenotype records are used to make selection decisions but prospects of including genomic information are now under consideration. The objective of this study was to assess the potential benefits of genomic selection on the genetic gain in French sheep and goat breeding designs of today. Traditional and genomic scenarios were modeled with deterministic methods for 3 breeding programs. The models included decisional variables related to male selection candidates, progeny testing capacity, and economic weights that were optimized to maximize annual genetic gain (AGG) of i) a meat sheep breeding program that improved a meat trait of heritability (h(2)) = 0.30 and a maternal trait of h(2) = 0.09 and ii) dairy sheep and goat breeding programs that improved a milk trait of h(2) = 0.30. Values of ±0.20 of genetic correlation between meat and maternal traits were considered to study their effects on AGG. The Bulmer effect was accounted for and the results presented here are the averages of AGG after 10 generations of selection. Results showed that current traditional breeding programs provide an AGG of 0.095 genetic standard deviation (σa) for meat and 0.061 σa for maternal trait in meat breed and 0.147 σa and 0.120 σa in sheep and goat dairy breeds, respectively. By optimizing decisional variables, the AGG with traditional selection methods increased to 0.139 σa for meat and 0.096 σa for maternal traits in meat breeding programs and to 0.174 σa and 0.183 σa in dairy sheep and goat breeding programs, respectively. With a medium-sized reference population (nref) of 2,000 individuals, the best genomic scenarios gave an AGG that was 17.9% greater than with traditional selection methods with optimized values of decisional variables for combined meat and maternal traits in meat sheep, 51.7% in dairy sheep, and 26.2% in dairy goats. The superiority of genomic schemes increased with the size of the reference population and genomic selection gave the best results when nref > 1,000 individuals for dairy breeds and nref > 2,000 individuals for meat breed. Genetic correlation between meat and maternal traits had a large impact on the genetic gain of both traits. Changes in AGG due to correlation were greatest for low heritable maternal traits. As a general rule, AGG was increased both by optimizing selection designs and including genomic information.
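The deterministic backbone of such comparisons is the breeder's equation, AGG = i * r * sigma_a / L: genomic information mainly raises the selection accuracy r for young candidates and can shorten the generation interval L. The numbers below are hypothetical and chosen only to show the mechanics, not taken from the study.

```python
def annual_genetic_gain(intensity, accuracy, sigma_a, generation_interval):
    """Breeder's equation: AGG = i * r * sigma_a / L, in units of the
    genetic standard deviation per year."""
    return intensity * accuracy * sigma_a / generation_interval

# Hypothetical scenario: same selection intensity and generation
# interval, genomic information lifts accuracy from 0.45 to 0.60.
print(annual_genetic_gain(1.8, 0.45, 1.0, 5.5))   # ~0.147 sigma_a / year
print(annual_genetic_gain(1.8, 0.60, 1.0, 5.5))   # ~0.196 sigma_a / year
```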
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
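For readers unfamiliar with DE, a minimal sketch of the canonical DE/rand/1/bin variant is given below; in the aerodynamic setting each call to the objective `f` would be a Navier-Stokes solve, which is why the efficiency and parallelization ideas reviewed in the paper matter. The population size, F, and CR values are generic defaults, not the paper's settings.

```python
import numpy as np

def differential_evolution(f, bounds, np_pop=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomial crossover, then greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    pop = lo + rng.random((np_pop, d)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(np_pop):
            a, b, c = pop[rng.choice([j for j in range(np_pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True           # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 4)
```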
Bi-directional evolutionary optimization for photonic band gap structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Fei; School of Civil Engineering, Central South University, Changsha 410075; Huang, Xiaodong, E-mail: huang.xiaodong@rmit.edu.au
2015-12-01
Toward an efficient and easy-to-implement optimization for photonic band gap structures, this paper extends the bi-directional evolutionary structural optimization (BESO) method to maximizing photonic band gaps. Photonic crystals are assumed to be periodically composed of two dielectric materials with different permittivities. Based on finite element analysis and sensitivity analysis, BESO starts from a simple initial design without any band gap and gradually redistributes the dielectric materials within the unit cell so that the resulting photonic crystal possesses a maximum band gap between two specified adjacent bands. Numerical examples demonstrate that the proposed optimization algorithm can successfully obtain band gaps from the first to the tenth band for both transverse magnetic and transverse electric polarizations. Some optimized photonic crystals exhibit novel patterns markedly different from traditional designs of photonic crystals.
Optimal chroma-like channel design for passive color image splicing detection
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin
2012-12-01
Image splicing is one of the most common image forgeries in daily life, and powerful image manipulation tools are making it ever easier to perform. Several methods have been proposed for image splicing detection, all of which work on certain existing color channels. However, splicing artifacts vary across color channels, so the selection of the color model is important for image splicing detection. In this article, instead of choosing among existing color models, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% for the optimized NEO threshold versus 93.5% for the traditional NEO threshold.
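The NEO itself and the usual thresholding practice are standard and compact; a minimal sketch follows. The fixed scaling factor C is exactly the quantity the paper replaces with an automatically optimized value, and the injected spike and C = 8 here are illustrative assumptions.

```python
import numpy as np

def neo_detect(x, C=8.0):
    """Nonlinear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded
    at a multiple C of its mean value (the common approximation criticized
    above); returns the sample indices that cross the threshold."""
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]
    thr = C * psi.mean()
    return np.flatnonzero(psi > thr) + 1

rng = np.random.default_rng(1)
sig = rng.normal(0, 1, 5000)
sig[1000:1005] += np.array([2, 8, 12, 8, 2])   # injected spike for illustration
print(neo_detect(sig))
```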
A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo
Zhao, Luning; Neuscamman, Eric
2017-05-17
We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott insulators' optical band gaps.
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained using the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs with positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
Optimizing Metabolite Production Using Periodic Oscillations
Sowa, Steven W.; Baldea, Michael; Contreras, Lydia M.
2014-01-01
Methods for improving microbial strains for metabolite production remain the subject of constant research. Traditionally, metabolic tuning has been mostly limited to knockouts or overexpression of pathway genes and regulators. In this paper, we establish a new method to control metabolism by inducing optimally tuned time-oscillations in the levels of selected clusters of enzymes, as an alternative strategy to increase the production of a desired metabolite. Using an established kinetic model of the central carbon metabolism of Escherichia coli, we formulate this concept as a dynamic optimization problem over an extended, but finite time horizon. Total production of a metabolite of interest (in this case, phosphoenolpyruvate, PEP) is established as the objective function and time-varying concentrations of the cellular enzymes are used as decision variables. We observe that by varying, in an optimal fashion, levels of key enzymes in time, PEP production increases significantly compared to the unoptimized system. We demonstrate that oscillations can improve metabolic output in experimentally feasible synthetic circuits. PMID:24901332
Human anatomy: let the students tell us how to teach.
Davis, Christopher R; Bates, Anthony S; Ellis, Harold; Roberts, Alice M
2014-01-01
Anatomy teaching methods have evolved as the medical undergraduate curriculum has modernized. Traditional teaching methods of dissection, prosection, tutorials and lectures are now supplemented by anatomical models and e-learning. Despite these changes, the preferences of medical students and anatomy faculty towards both traditional and contemporary teaching methods and tools are largely unknown. This study quantified medical student and anatomy faculty opinion on various aspects of anatomical teaching at the Department of Anatomy, University of Bristol, UK. A questionnaire was used to explore the perceived effectiveness of different anatomical teaching methods and tools among anatomy faculty (AF) and medical students in year one (Y1) and year two (Y2). A total of 370 preclinical medical students entered the study (76% response rate). Responses were quantified and intergroup comparisons were made. All students and AF were strongly in favor of access to cadaveric specimens and supported traditional methods of small-group teaching with medically qualified demonstrators. Other teaching methods, including e-learning, anatomical models and surgical videos, were considered useful educational tools. In several areas there was disharmony between the opinions of AF and medical students. This study emphasizes the importance of collecting student preferences to optimize teaching methods used in the undergraduate anatomy curriculum. © 2013 American Association of Anatomists.
A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components
NASA Astrophysics Data System (ADS)
Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun
2017-10-01
This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As requirements for the high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee accurate control of the spray parameters defined by users (e.g., scanning speed, spray distance, scanning step, etc.) to achieve coating thickness homogeneity, but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Two types of meander trajectories are then generated by performing different connection methods. Additionally, this paper presents an approach for introducing heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating) and helps form a uniform temperature field by optimizing the time sequence of the path curves. The influence of the two different robot trajectories on the heat transfer process is estimated by coupled FEM models, which demonstrates the effectiveness of the presented optimization approach.
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Traditional optimization theory has predominantly been applied to this problem, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
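The objective being maximized is Otsu's between-class variance, which is standard; a minimal sketch for an arbitrary number of thresholds is given below. Evaluating this function over every threshold combination is the exhaustive search that makes the traditional approach expensive and that the flower pollination algorithm replaces; the random histogram here is only an illustration.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance of the classes induced by the given thresholds;
    multilevel Otsu segmentation seeks the thresholds maximizing this value."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0, *sorted(thresholds), len(hist)]
    var_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_b += w * (mu - mu_total) ** 2
    return var_b

hist = np.bincount(np.random.default_rng(2).integers(0, 256, 10000), minlength=256)
print(otsu_objective(hist, thresholds=[85, 170]))
```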
[Traditional and modern approaches to culture of preimplantation mammalian embryos in vitro].
Brusentsev, E Iu; Igonina, T N; Amstislavskiĭ, S Ia
2014-01-01
This review covers the basic principles and methods of in vitro culture of preimplantation mammalian embryos. The features of the in vitro development of embryos of various animal species, with allowance for the composition of the nutrient media, are described, with special attention paid to those species that have traditionally been considered laboratory animals (i.e., mice, rats, and hamsters). The effects of suboptimal culture conditions for preimplantation embryos on the formation of the phenotype of the individuals developed from these embryos are discussed. New approaches to optimizing the conditions for the development of preimplantation mammalian embryos in vitro are analyzed.
Intravenous catheter training system: computer-based education versus traditional learning methods.
Engum, Scott A; Jeffries, Pamela; Fisher, Lisa
2003-07-01
Virtual reality simulators allow trainees to practice techniques without consequences, reduce the potential risk associated with training, minimize animal use, and help to develop standards and optimize procedures. Current intravenous (IV) catheter placement training methods utilize plastic arms; however, their lack of variability can diminish the educational stimulus for the student. This study compares the effectiveness of an interactive, multimedia, virtual reality computer IV catheter simulator with a traditional laboratory experience in teaching IV venipuncture skills to both nursing and medical students. A randomized, pretest-posttest experimental design was employed. A total of 163 participants (70 baccalaureate nursing students and 93 third-year medical students beginning their fundamental skills training) were recruited. The students ranged in age from 20 to 55 years (mean 25). Fifty-eight percent were female, and 68% perceived themselves as having average computer skills (25% declaring excellence). The methods of IV catheter education compared included a traditional method of instruction involving a scripted self-study module with a 10-minute videotape, instructor demonstration, and hands-on experience using plastic mannequin arms. The second method involved an interactive multimedia, commercially made computer catheter simulator program utilizing virtual reality (CathSim). The pretest scores were similar between the computer and the traditional laboratory groups. There was a significant improvement in cognitive gains, student satisfaction, and documentation of the procedure in the traditional laboratory group compared with the computer catheter simulator group. Both groups were similar in their ability to demonstrate the skill correctly. Conclusions: This evaluation and assessment was an initial effort to assess new teaching methodologies related to intravenous catheter placement and their effects on student learning outcomes and behaviors. Technology alone is not a solution for stand-alone IV catheter placement education. The traditional learning method was preferred by students. The combination of these two methods of education may further enhance the trainee's satisfaction and skill acquisition level.
Real Time Optimal Control of Supercapacitor Operation for Frequency Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish
2016-07-01
Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors through power electronics interfaces for power compensation is a proven, effective technique. For applications such as frequency restoration, however, the cost of supercapacitor maintenance as well as the energy loss in the power electronics interfaces must be addressed, and it is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding-horizon optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.
On advanced configuration enhance adaptive system optimization
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei
2017-10-01
This work aims to find an effective method for structuring and enhancing adaptive systems with complex functionality, and to establish a universally applicable solution for prototyping and optimization. As the most attractive component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, suffering from polarization dependence and a narrow working waveband. An advanced configuration based on a polarized beam splitter with an optimized energy-splitting method is used to overcome these problems effectively. With a global algorithm, the bandwidth is amplified more than fivefold compared with that of traditional designs. Simulation results show that the system can meet the application requirements for MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration; the results demonstrate their effectiveness.
Transmission Scheduling and Routing Algorithms for Delay Tolerant Networks
NASA Technical Reports Server (NTRS)
Dudukovich, Rachel; Raible, Daniel E.
2016-01-01
The challenges of data processing, transmission scheduling and routing within a space network present a multi-criteria optimization problem. Long delays, intermittent connectivity, asymmetric data rates and potentially high error rates make traditional networking approaches unsuitable. The delay tolerant networking architecture and protocols attempt to mitigate many of these issues, yet transmission scheduling is largely manually configured and routes are determined by a static contact routing graph. A high level of variability exists among the requirements and environmental characteristics of different missions, some of which may allow for the use of more opportunistic routing methods. In all cases, resource allocation and constraints must be balanced with the optimization of data throughput and quality of service. Much work has been done researching routing techniques for terrestrial-based challenged networks in an attempt to optimize contact opportunities and resource usage. This paper examines several popular methods to determine their potential applicability to space networks.
Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C
2017-06-01
The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009 to 2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic-level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.
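SMAOM itself is beyond a short sketch, but the DEA baseline it is benchmarked against is compact. A minimal sketch of the input-oriented, variable-returns-to-scale DEA efficiency score follows, posed as a linear program; the toy input/output data are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_efficiency(X, Y, k):
    """Input-oriented VRS DEA score of unit k. X: (n_units, n_inputs),
    Y: (n_units, n_outputs). Decision variables are [theta, lambda_1..n];
    minimize theta subject to a composite peer dominating unit k."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[[k]].T, X.T]            # sum_j lam_j X[j,i] <= theta X[k,i]
    A_out = np.c_[np.zeros((s, 1)), -Y.T]   # sum_j lam_j Y[j,r] >= Y[k,r]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    A_eq = np.r_[0.0, np.ones(n)][None, :]  # VRS convexity: sum lam = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])   # toy inputs (3 units, 2 inputs)
Y = np.array([[1.0], [1.0], [1.0]])                  # toy outputs (1 output)
print([round(dea_vrs_efficiency(X, Y, k), 3) for k in range(3)])
```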
da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro
2018-04-01
Sida tuberculata (ST) is a Malvaceae species widely distributed in Southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory and antimicrobial agent. Additionally, this species is chemically characterized mainly by flavonoids, alkaloids and phytoecdysteroids. The present work aimed to optimize the extractive technique and to validate an UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken Design (BBD) was used in the method optimization. The extractive methods tested were static and dynamic maceration, ultrasound, ultra-turrax and reflux. In the Box-Behnken design, three parameters were evaluated at three levels (-1, 0, +1): particle size, time and plant:solvent ratio. In the method validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy and robustness were evaluated. The results indicated static maceration as the best technique for maximizing the 20HE peak area in the ST extract. The optimal extraction from response surface methodology was achieved with a granulometry of 710 nm, 9 days of maceration and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA analytical method proved fully viable, being selective, linear, precise, accurate and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of the extractive method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.
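The three-factor Box-Behnken design and the quadratic response surface fitted to it are standard; a minimal sketch follows. The coded design matrix is the textbook one (edge midpoints of the factor cube plus centre points), while the mock response `y` merely stands in for the measured 20HE peak areas.

```python
import numpy as np
from itertools import combinations

def box_behnken_3():
    """Coded 3-factor Box-Behnken design: +/-1 combinations on each pair of
    factors with the third held at 0, plus three centre points (15 runs)."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0.0, 0.0, 0.0]
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0.0, 0.0, 0.0]] * 3
    return np.array(runs)

def quadratic_design_matrix(X):
    """Full quadratic model: intercept, linear, two-factor interaction, square terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

X = box_behnken_3()    # coded factors: particle size, time, plant:solvent ratio
rng = np.random.default_rng(3)
y = 0.4 + 0.1*X[:, 1] - 0.05*X[:, 0]**2 + rng.normal(0, 0.01, len(X))  # mock responses
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)  # fitted surface
```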
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation, and this stochastic property makes them robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units and compares them with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators in understanding market performance and making better decisions. A traditional optimization model may not be enough to represent the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial life techniques such as the Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
An Integrated Method for Airfoil Optimization
NASA Astrophysics Data System (ADS)
Okrent, Joshua B.
Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed differs from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts, which, by focusing on a single airfoil family, inherently limited the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global rather than a local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena, design criteria, and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be identified for a given set of nominal operational conditions from a broad design space with the use of minimal computational resources, on both an absolute and a relative scale compared with traditional analysis techniques. Aerodynamicists, program managers, aircraft configuration specialists, and anyone else in charge of aircraft configuration, design studies, and program-level decisions might find the evaluation and optimization method proposed of interest.
Norman, Matthew R.
2014-11-24
New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.
NASA Astrophysics Data System (ADS)
Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.
2018-05-01
In this paper, the regression equations method for the design of construction materials was studied. Regression and polynomial equations representing the correlations between the studied parameters were proposed. The logic design and software interface of the regression equations method focus on parameter optimization to provide an energy-saving effect at the design stage of autoclave aerated concrete, considering the replacement of traditionally used quartz sand by a coal-mining by-product such as argillite. A mathematical model represented by a quadratic polynomial for the design of experiments was obtained using calculated and experimental data. This allowed the estimation of the relationship between the composition and the final properties of the aerated concrete. The response surface, graphically presented as a nomogram, allowed the estimation of concrete properties in response to variation of the composition within the x-space. The optimal range of argillite content was obtained, leading to a reduction in raw material demand and the development of the target plastic strength of the aerated concrete, as well as a reduction in curing time before autoclave treatment. Overall, this method allows the design of autoclave aerated concrete with the required performance without additional resource and time costs.
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integrated design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, preventing efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of the optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. The Wiener filter algorithm is then adopted to process the simulated images, and mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit of the imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integrated design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integrated design strategy has obvious advantages in simplifying structure and reducing cost while simultaneously obtaining high resolution images, which gives it a promising perspective for industrial application.
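The restoration step described is classical Wiener filtering; a minimal frequency-domain sketch follows. The separable Hanning-window PSF, the constant K standing in for the noise-to-signal power ratio, and the noise level are illustrative assumptions, not the paper's optical model.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter W = conj(H) / (|H|^2 + K). In a joint
    optical-digital design, the lens may be allowed a poorer PSF as long as
    this restoration step recovers resolution, with MSE as the shared merit."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rng = np.random.default_rng(4)
scene = rng.random((128, 128))
psf = np.outer(np.hanning(7), np.hanning(7)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
restored = wiener_deconvolve(blurred + rng.normal(0, 0.01, scene.shape), psf)
mse = np.mean((restored - scene) ** 2)   # the evaluation criterion used above
```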
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-06-01
Inconel-718 is in high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting this alloy due to its low thermal conductivity, low elasticity and high chemical reactivity at elevated temperature. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials are likewise difficult to meet with traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing through appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on a hybrid approach of multiple regression analysis and a genetic algorithm. A comparison of the optimization results with the experimental results shows improvements of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rian, D.T.; Hage, A.
1994-12-31
A numerical simulator is often used as a reservoir management tool. One of its main purposes is to aid in the evaluation of the number of wells, well locations and start times for wells. Traditionally, the optimization of a field development is done by a manual trial-and-error process. In this paper, an example of an automated technique is given. The core of the automation process is the reservoir simulator Frontline. Frontline is based on front-tracking techniques, which makes it fast and accurate compared to traditional finite difference simulators. Due to its CPU efficiency, the simulator has been coupled with an optimization module, which enables automatic optimization of well locations, number of wells and start-up times. The simulator was used as an alternative method in the evaluation of waterflooding in a North Sea fractured chalk reservoir. Since Frontline, in principle, is 2D, Buckley-Leverett pseudo functions were used to represent the third dimension. The areal full-field simulation model was run with up to 25 wells for 20 years in less than one minute of Vax 9000 CPU time. The automatic Frontline evaluation indicated that a peripheral waterflood could double incremental recovery compared to a central pattern drive.
Li, Tao; Su, Chen
2018-06-02
Rhodiola is an increasingly widely used traditional Tibetan and traditional Chinese medicine in China. The composition profiles of its bioactive compounds vary considerably among species, which makes it crucial to identify authentic Rhodiola species accurately so as to ensure safe clinical application. In this paper, a nondestructive, rapid, and efficient method for the classification of Rhodiola was developed using Fourier transform near-infrared (FT-NIR) spectroscopy combined with chemometric analysis. A total of 160 batches of raw spectra were obtained by FT-NIR from four different species of Rhodiola: Rhodiola crenulata, Rhodiola fastigiata, Rhodiola kirilowii, and Rhodiola brevipetiolata. After excluding outliers, the performances of 3 sample dividing methods, 12 spectral preprocessing methods, 2 wavelength selection methods, and 2 modeling evaluation methods were compared. The results indicated that one combination was superior to the others in authenticity identification analysis: FT-NIR with sample set partitioning based on joint x-y distances (SPXY), standard normal variate transformation (SNV) + Norris-Williams (NW) + 2nd derivative, competitive adaptive reweighted sampling (CARS), and the kernel extreme learning machine (KELM). The accuracy (ACCU), sensitivity (SENS), and specificity (SPEC) of the optimal model were all 1, showing that this combination of FT-NIR and chemometric methods had the best authenticity identification performance. The classification performance of the partial least squares discriminant analysis (PLS-DA) model was slightly lower than that of the KELM model, with PLS-DA results of ACCU = 0.97, SENS = 0.93, and SPEC = 0.98, respectively. It can be concluded that FT-NIR combined with chemometric analysis has great potential in the authenticity identification and classification of Rhodiola, which can provide a valuable reference for the safety and effectiveness of its clinical application. Copyright © 2018 Elsevier B.V. All rights reserved.
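Of the preprocessing steps named above, SNV is the simplest to show: each spectrum is centred and scaled by its own statistics to suppress multiplicative scatter effects before derivative filtering, wavelength selection, and classification. A minimal sketch with mock spectra (the dimensions are assumptions) follows.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: transform each spectrum (row) to zero mean
    and unit standard deviation using its own statistics."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

rng = np.random.default_rng(5)
raw = rng.random((160, 700)) * rng.uniform(0.5, 2.0, (160, 1))  # mock NIR spectra
corrected = snv(raw)   # each row now has mean 0 and SD 1
```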
Path Planning for Robot based on Chaotic Artificial Potential Field Method
NASA Astrophysics Data System (ADS)
Zhang, Cheng
2018-03-01
Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path-planning method for robots based on a chaotic artificial potential field. The planner adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining the improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
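The base artificial potential field is classical and compact; a minimal sketch of one gradient step follows. The gains, influence radius, and step size are illustrative assumptions, and the chaotic perturbation that the paper adds to escape local minima is deliberately omitted here.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One descent step on U = 0.5*k_att*|pos-goal|^2 plus, for each obstacle
    within distance d0, the repulsive term 0.5*k_rep*(1/d - 1/d0)^2. Local
    minima of this field are exactly what the chaotic term perturbs away."""
    f = -k_att * (pos - goal)                          # attractive force
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < d0:
            f += k_rep * (1/d - 1/d0) / d**3 * diff    # repulsive force
    return pos + step * f / (np.linalg.norm(f) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]
for _ in range(400):                                   # roll the planner forward
    pos = apf_step(pos, goal, obstacles)
```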
Longevity and optimal health: working toward an integrative methodology.
Oz, Mehmet; Tallent, Jeremy
2009-08-01
Efforts to foster a research dialogue between traditions as seemingly divergent as Western biomedicine and Indo-Tibetan medical and self-regulatory practice require a carefully conceived set of methodological guidelines. To approach a useful methodology, some specific structural differences between traditions must be negotiated, for example the Indo-Tibetan emphasis on holism in medicine and ethics, which appears to run contrary to Western trends toward specialization in both clinical and research contexts. Certain pitfalls must be avoided as well, including the tendency to appropriate elements of either tradition in a reductionistic manner. However, research methods offering creative solutions to these problems are now emerging, successfully engendering quantitative insight without subsuming one tradition within the terms of the other. Only through continued, creative work exploring both the potentials and limitations of this dialogue can collaborative research insight be attained, and an appropriate and useful set of methodological principles be approached.
Optimal control of motorsport differentials
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Massaro, M.; Purdy, D. J.; Velenis, E.; Assadian, F.; Moore, A. P.; Halley, M.
2015-12-01
Modern motorsport limited slip differentials (LSD) have evolved to become highly adjustable, allowing the torque bias that they generate to be tuned in the corner entry, apex and corner exit phases of typical on-track manoeuvres. The task of finding the optimal torque bias profile under such varied vehicle conditions is complex. This paper presents a nonlinear optimal control method which is used to find the minimum time optimal torque bias profile through a lane change manoeuvre. The results are compared to traditional open and fully locked differential strategies, in addition to considering related vehicle stability and agility metrics. An investigation into how the optimal torque bias profile changes with reduced track-tyre friction is also included in the analysis. The optimal LSD profile was shown to give a performance gain over its locked differential counterpart in key areas of the manoeuvre where a quick direction change is required. The methodology proposed can be used to find both optimal passive LSD characteristics and as the basis of a semi-active LSD control algorithm.
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes existing traffic flow prediction algorithms and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probabilities of the road sections upstream of the target road and then predicts the traffic flow at the next time step using the traffic flow equation. The Newton interior-point method is used to obtain the optimal parameter values. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to perform well: it obtains the optimal parameter values faster and has higher prediction accuracy, making it suitable for real-time traffic flow prediction. PMID:27872637
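The flow-conservation idea behind the prediction step is a weighted sum; a minimal sketch follows. The probabilities and flows below are illustrative numbers, and in the paper the transfer probabilities are fitted from historical data via the Newton interior-point optimization rather than fixed as here.

```python
import numpy as np

transfer_prob = np.array([0.6, 0.3, 0.1])      # upstream links -> target link
upstream_flow = np.array([120.0, 80.0, 40.0])  # vehicles observed at time t
predicted = transfer_prob @ upstream_flow      # predicted flow on target road at t+1
print(f"predicted flow: {predicted:.0f} vehicles")
```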
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, the simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For the modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can narrow the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while being fast.
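The ELM half of the hybrid is simple enough to show: hidden-layer weights are drawn at random and only the output weights are solved for by least squares, which is why training is so fast. A minimal sketch follows; the mock cutting-parameter data and performance function are assumptions for illustration.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random hidden layer, closed-form output
    weights via a least-squares fit (no back-propagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (200, 3))            # e.g. coded speed, feed, depth of cut
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]     # mock performance function
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)          # model used inside the PSO search
```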
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face involves a semi-analytical function that is not suitable for conventional inversion schemes because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model creates difficulty in obtaining an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions consisting of either the minimum or the maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems. These methods suffer from difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization (PSO) has become the focus of modern global optimization research; inspired by the social behaviour of bird flocks, it appears to be a reliable and powerful algorithm for complex engineering applications. PSO, which does not depend on an initial model and is a non-derivative stochastic process, appears to be capable of searching all possible solutions in the model space, around either local or global optimum points.
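The canonical PSO update, which needs no derivatives and no initial model, is compact; a minimal sketch follows. The toy quadratic misfit in (transmissivity, storage coefficient) space is an assumption standing in for the semi-analytical water-level response described above.

```python
import numpy as np

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm: velocities blend inertia, a pull toward each
    particle's personal best, and a pull toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = lo + rng.random((n, len(lo))) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Toy misfit; the real objective would compare modelled and observed water levels.
best, err = pso(lambda p: (p[0] - 2e-4)**2 + (p[1] - 1e-3)**2,
                [(1e-6, 1e-2), (1e-6, 1e-2)])
```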
Structural Optimization in automotive design
NASA Technical Reports Server (NTRS)
Bennett, J. A.; Botkin, M. E.
1984-01-01
Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.
Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.
Zhang, Jianguang; Jiang, Jianmin
2018-02-01
While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded in the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.
Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.
Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian
2011-01-13
Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.
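A minimal sketch of the kind of loop described, a GA searching a combinatorial library whose fitness oracle may fail (mirroring failed syntheses), is given below. The population size, operators, and `mock_assay` are illustrative assumptions rather than the paper's actual algorithm, which additionally supports retest suggestions for unreliable measurements.

```python
import numpy as np

def ga_library_search(fitness, n_positions, n_monomers, pop=24, gens=10, seed=0):
    """Toy GA over a combinatorial library: a candidate is a tuple of
    building-block indices; fitness may return None (failed synthesis)."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, n_monomers, (pop, n_positions))
    for _ in range(gens):
        scores = []
        for c in P:
            v = fitness(tuple(c))
            scores.append(-np.inf if v is None else v)   # missing data -> worst score
        scores = np.array(scores)
        # Tournament selection, single-point crossover, point mutation.
        children = []
        while len(children) < pop:
            i, j, k, l = rng.integers(0, pop, 4)
            a = P[i] if scores[i] >= scores[j] else P[j]
            b = P[k] if scores[k] >= scores[l] else P[l]
            cut = rng.integers(1, n_positions)
            child = np.r_[a[:cut], b[cut:]]
            if rng.random() < 0.3:
                child[rng.integers(n_positions)] = rng.integers(n_monomers)
            children.append(child)
        P = np.array(children)
    return P

def mock_assay(c):   # stand-in for synthesis + screening of one compound
    return None if c == (0, 0) else -(c[0] - 5) ** 2 - (c[1] - 3) ** 2

final_pop = ga_library_search(mock_assay, n_positions=2, n_monomers=10)
```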
Optimal solutions for the evolution of a social obesity epidemic model
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef
2017-06-01
In this work, a novel modification of the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of the social obesity epidemic model. The incidence of excess weight and obesity in the adult population, and the prediction of its behavior in the coming years, are analyzed using the modified algorithm. The proposed method increases the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way of choosing the optimal value of the auxiliary parameter, via minimizing the total residual error, is considered. The graphical comparison of the obtained results with the standard HPM explicitly reveals the accuracy and efficiency of the developed scheme.
Research on Fault Rate Prediction Method of T/R Component
NASA Astrophysics Data System (ADS)
Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu
2017-07-01
The T/R component is an important part of a large phased-array radar antenna; because such components are numerous and have a high fault rate, fault prediction for them is of considerable significance. To address the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original model, introduces an optimization factor to optimize the background value, and adds a linear term to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, a simple solution procedure, and a wider scope of application.
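The improved model itself is not given in the abstract, but the baseline GM(1,1) it modifies is standard, and a compact sketch follows. The background-value weight lam is exposed as the kind of quantity the paper's optimization factor would tune; lam = 0.5 recovers the classical model. The sample series is invented.

```python
import numpy as np

def gm11_forecast(x0, steps=3, lam=0.5):
    """Classical GM(1,1): fit on series x0 and forecast `steps` ahead.
    `lam` is the background-value weight (0.5 in the standard model)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated series (1-AGO)
    z1 = lam * x1[1:] + (1 - lam) * x1[:-1]     # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)              # inverse AGO back to the series
    x0_hat[0] = x1_hat[0]
    x0_hat[1:] = np.diff(x1_hat)
    return x0_hat[n:]

print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.79], steps=2))
```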
Vector-model-supported approach in prostate plan optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung
Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model that retrieves similar radiotherapy cases was developed based on the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters; the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could therefore be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced the planning time and iteration number by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer a much shortened planning time and iteration number without compromising plan quality.
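The retrieval step lends itself to a compact illustration. The sketch below is not the authors' implementation: it assumes each past case is summarized by a numeric feature vector extracted from DICOM data and uses cosine similarity to pick the reference case whose planning parameters seed the new plan; all names and shapes are hypothetical.

```python
import numpy as np

# Hypothetical reference database: one feature vector (e.g. target volume,
# organ overlaps, patient separation) and one parameter set per past case.
reference_features = np.random.rand(100, 6)               # 100 treated cases
reference_parameters = [{"case": i} for i in range(100)]  # stand-in params

def retrieve_similar_case(test_features):
    """Return planning parameters of the most similar reference case."""
    f = np.asarray(test_features, dtype=float)
    refs = reference_features
    sims = refs @ f / (np.linalg.norm(refs, axis=1) * np.linalg.norm(f))
    best = int(np.argmax(sims))                           # cosine similarity
    return reference_parameters[best]

params = retrieve_similar_case(np.random.rand(6))         # seeds the optimizer
```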
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method achieves better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
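As a sketch of the overall scheme rather than the NAPSO variant itself (the natural-selection and annealing steps are omitted), the loop below tunes an RBF-SVM's C and gamma with a bare-bones particle swarm, scoring each particle by cross-validated error on stand-in data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = rng.random((80, 4)), rng.random(80)   # stand-in sensor-error data

def error(params):                            # fitness: cross-validated MSE
    C, gamma = np.exp(params)                 # search in log space
    svr = SVR(C=C, gamma=gamma)
    return -cross_val_score(svr, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

n, dim = 12, 2
pos = rng.uniform(-3, 3, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([error(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(20):                           # plain PSO velocity/position update
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best C, gamma:", np.exp(gbest))
```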
Optimizing area under the ROC curve using semi-supervised learning
Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M.
2014-01-01
Receiver operating characteristic (ROC) analysis is a standard methodology to evaluate the performance of a binary classification system. The area under the ROC curve (AUC) is a performance metric that summarizes how well a classifier separates two classes. Traditional AUC optimization techniques are supervised learning methods that utilize only labeled data (i.e., the true class is known for all data) to train the classifiers. In this work, inspired by semi-supervised and transductive learning, we propose two new AUC optimization algorithms hereby referred to as semi-supervised learning receiver operating characteristic (SSLROC) algorithms, which utilize unlabeled test samples in classifier training to maximize AUC. Unlabeled samples are incorporated into the AUC optimization process, and their ranking relationships to labeled positive and negative training samples are considered as optimization constraints. The introduced test samples will cause the learned decision boundary in a multidimensional feature space to adapt not only to the distribution of labeled training data, but also to the distribution of unlabeled test data. We formulate the semi-supervised AUC optimization problem as a semi-definite programming problem based on the margin maximization theory. The proposed methods SSLROC1 (1-norm) and SSLROC2 (2-norm) were evaluated using 34 (determined by power analysis) randomly selected datasets from the University of California, Irvine machine learning repository. Wilcoxon signed rank tests showed that the proposed methods achieved significant improvement compared with state-of-the-art methods. The proposed methods were also applied to a CT colonography dataset for colonic polyp classification and showed promising results. PMID:25395692
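For context, the quantity these algorithms maximize has a simple empirical form: the AUC equals the fraction of correctly ordered (positive, negative) score pairs, i.e. the normalized Mann-Whitney U statistic. A minimal sketch:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the fraction of correctly ordered (positive, negative)
    pairs, with ties counted as half."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1]))  # 11/12 ~ 0.917
```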
Designing Industrial Networks Using Ecological Food Web Metrics.
Layton, Astrid; Bras, Bert; Weissburg, Marc
2016-10-18
Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on the unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and provide guidelines for designing sustainable industry networks.
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluating reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with the local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.
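The hybridization idea can be illustrated compactly. The sketch below is a generic stand-in, not any particular Bat/Cuckoo/Firefly variant: candidate centroid sets are scored by within-cluster sum of squares, a toy swarm drifts toward the best candidate, and the winner seeds a final K-means run (scikit-learn accepts an explicit init array).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in (0, 4, 8)])  # 3 blobs
k = 3

def sse(centroids):                      # within-cluster sum of squares
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.min(axis=1).sum()

# Toy "swarm": candidate centroid sets drift toward the best set found so far.
particles = X[rng.choice(len(X), (10, k))]
for _ in range(30):
    best = particles[int(np.argmin([sse(p) for p in particles]))]
    particles = best + rng.normal(0, 0.3, particles.shape)
    particles[0] = best                  # elitism: keep the incumbent

km = KMeans(n_clusters=k, init=best, n_init=1).fit(X)
print("final within-cluster SSE:", round(km.inertia_, 2))
```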
Chen, Xianglong; Zhang, Bingzhi; Feng, Fuzhou; Jiang, Pengcheng
2017-01-01
The kurtosis-based indexes are usually used to identify the optimal resonant frequency band. However, kurtosis can only describe the strength of transient impulses, and cannot differentiate impulse noise from the repetitive transient impulses cyclically generated in bearing vibration signals. As a result, it may lead to inaccurate results in identifying resonant frequency bands, in demodulating fault features and hence in fault diagnosis. In view of those drawbacks, this manuscript redefines the correlated kurtosis based on kurtosis and the auto-correlation function, and puts forward an improved correlated kurtosis based on the squared envelope spectrum of bearing vibration signals. Meanwhile, this manuscript proposes an optimal resonant band demodulation method, which can adaptively determine the optimal resonant frequency band and accurately demodulate transient fault features of rolling bearings, by combining the complex Morlet wavelet filter and the particle swarm optimization algorithm. Analysis of both simulation data and experimental data reveals that the improved correlated kurtosis can effectively remedy the drawbacks of kurtosis-based indexes and that the proposed optimal resonant band demodulation is more accurate in identifying the optimal central frequencies and bandwidths of resonant bands. Improved fault diagnosis results in experiments verify the validity and advantage of the proposed method over the traditional kurtosis-based indexes. PMID:28208820
Lightweight structure design for supporting plate of primary mirror
NASA Astrophysics Data System (ADS)
Wang, Xiao; Wang, Wei; Liu, Bei; Qu, Yan Jun; Li, Xu Peng
2017-10-01
A topological optimization design for the lightweight supporting plate of a primary mirror is presented in this paper. The supporting plate is topologically optimized under the conditions of determined shape, loads and environment, and the optimal structure is obtained. The primary mirror considered here has a diameter of 450 mm and is made of SiC; SiC/Al is the preferred choice for the supporting material. Six points of axial relative displacement are used as constraints in the optimization. The supporting plate model is established and its parameters are set; after analyzing the force exerted by the primary mirror on the supporting plate, forces and constraints are applied to the model. Modal analysis and static analysis of the supporting plate are then calculated. The topological optimization mathematical model of the continuum structure is created with the variable-density method. The maximum deformation of the supporting plate surface under the gravity of the mirror and the first modal frequency are assigned as response variables, and the entire volume of the supporting structure is taken as the objective function. The structures before and after optimization are analyzed using the finite element method. Results show that the fundamental frequency of the optimized structure increases by 29.85 Hz and its displacement is smaller compared with the traditional structure.
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In the photoelectrical tracking system, Bayer images are decoded with a traditional CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is the serial part, the second is the task-parallelism part, and the last is the data-parallelism part, including inverse quantization, the inverse discrete wavelet transform (IDWT) and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques, while the data-parallelism part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the two-dimensional serial IDWT as a one-dimensional parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
Head, Linden; Nessim, Carolyn; Boyd, Kirsty Usher
2017-01-01
Background Bilateral prophylactic mastectomy (BPM) has shown breast cancer risk reduction in high-risk/BRCA+ patients. However, priority of active cancers coupled with inefficient use of operating room (OR) resources presents challenges in offering BPM in a timely manner. To address these challenges, a rapid access prophylactic mastectomy and immediate reconstruction (RAPMIR) program was innovated. The purpose of this study was to evaluate RAPMIR with regards to access to care and efficiency. Methods We retrospectively reviewed the cases of all high-risk/BRCA+ patients having had BPM between September 2012 and August 2014. Patients were divided into 2 groups: those managed through the traditional model and those managed through the RAPMIR model. RAPMIR leverages 2 concurrently running ORs with surgical oncology and plastic surgery moving between rooms to complete 3 combined BPMs with immediate reconstruction in addition to 1–2 independent cases each operative day. RAPMIR eligibility criteria included high-risk/BRCA+ status; BPM with immediate, implant-based reconstruction; and day surgery candidacy. Wait times, case volumes and patient throughput were measured and compared. Results There were 16 traditional patients and 13 RAPMIR patients. Mean wait time (days from referral to surgery) for RAPMIR was significantly shorter than for the traditional model (165.4 v. 309.2 d, p = 0.027). Daily patient throughput (4.3 v. 2.8), plastic surgery case volume (3.7 v. 1.6) and surgical oncology case volume (3.0 v. 2.2) were significantly greater in the RAPMIR model than the traditional model (p = 0.003, p < 0.001 and p = 0.015, respectively). Conclusion A multidisciplinary model with optimized scheduling has the potential to improve access to care and optimize resource utilization. PMID:28234588
Optimal design of a beam-based dynamic vibration absorber using fixed-points theory
NASA Astrophysics Data System (ADS)
Hua, Yingyu; Wong, Waion; Cheng, Li
2018-05-01
The addition of a dynamic vibration absorber (DVA) to a vibrating structure could provide an economic solution for vibration suppression if the absorber is properly designed and located on the structure. A common design of the DVA is a sprung mass because of its simple structure and low cost. However, the vibration suppression performance of this kind of DVA is limited by the ratio between the absorber mass and the mass of the primary structure. In this paper, a beam-based DVA (beam DVA) is proposed and optimized for minimizing the resonant vibration of a general structure. The vibration suppression performance of the proposed beam DVA depends on the mass ratio, the flexural rigidity and the length of the beam. In comparison with the traditional sprung-mass DVA, the proposed beam DVA shows more flexibility in vibration control design because it has more design parameters. With proper design, the beam DVA's vibration suppression capability can outperform that of the traditional DVA under the same mass constraint. The general approach is illustrated using a benchmark cantilever beam as an example. The receptance theory is introduced to model the compound system consisting of the host beam and the attached beam-based DVA. The model is validated through comparisons with the results from Abaqus as well as the Transfer Matrix Method (TMM). Fixed-points theory is then employed to derive the analytical expressions for the optimum tuning ratio and damping ratio of the proposed beam absorber. A design guideline is then presented to choose the parameters of the beam absorber. Comparisons are finally presented between the beam absorber and the traditional DVA in terms of the vibration suppression effect. It is shown that the proposed beam absorber can outperform the traditional DVA by following this proposed guideline.
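The beam-DVA optimum expressions derived in the paper are not reproduced in the abstract; for orientation, fixed-points theory applied to the classical sprung-mass DVA with mass ratio $\mu$ gives the well-known (Den Hartog) optimum tuning and damping ratios

\[
f_{\mathrm{opt}} = \frac{1}{1+\mu}, \qquad
\zeta_{\mathrm{opt}} = \sqrt{\frac{3\mu}{8(1+\mu)^{3}}},
\]

and the beam absorber generalizes this design space by adding the beam's flexural rigidity and length as free parameters.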
Regel, Anne; Lunte, Susan
2013-01-01
Traditional fabrication methods for polymer microchips, the bonding of two substrates together to form the microchip, can make the integration of carbon electrodes difficult. We have developed a simple and inexpensive method to integrate graphite/PMMA composite electrodes (GPCEs) into a PMMA substrate. These substrates can be bonded to other PMMA layers using a solvent-assisted thermal bonding method. The optimal composition of the GPCEs for electrochemical detection was determined using cyclic voltammetry with dopamine as a test analyte. Using the optimized GPCEs in an all-PMMA flow cell with flow injection analysis, it was possible to detect 50 nM dopamine under the best conditions. These electrodes were also evaluated for the detection of dopamine and catechol following separation by microchip electrophoresis (ME). PMID:23670816
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jun -Sang; Ray, Atish K.; Dawson, Paul R.
A shrink-fit sample is manufactured from a Ti-8Al-1Mo-1V alloy to introduce a multiaxial residual stress field in the disk of the sample. A set of strain and orientation pole figures is measured at various locations across the disk using synchrotron high-energy X-ray diffraction. Two approaches, the traditional sin²Ψ method and the bi-scale optimization method, are taken to determine the stresses in the disk based on the measured strain and orientation pole figures, to explore the range of solutions that are possible for the stress field within the disk. While the stress components computed using the sin²Ψ method and the bi-scale optimization method have similar trends, their magnitudes are significantly different. It is suspected that the local texture variation in the material is the cause of this discrepancy.
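The sin²Ψ analysis is textbook material and easy to sketch: for a biaxial surface stress state the measured lattice strain varies linearly with sin²Ψ, and the slope of that line yields the in-plane stress component. The elastic constants and strain values below are illustrative only.

```python
import numpy as np

# Illustrative sin^2(psi) analysis: lattice strain vs. tilt angle psi.
E, nu = 110e9, 0.32                      # approximate Ti-alloy elastic constants
psi = np.deg2rad([0, 15, 25, 35, 45])
strain = np.array([1.0e-4, 1.8e-4, 3.1e-4, 4.7e-4, 6.4e-4])  # "measured"

# Linear fit: strain = m * sin^2(psi) + c ; slope m = (1 + nu) / E * sigma_phi
m, c = np.polyfit(np.sin(psi) ** 2, strain, 1)
sigma_phi = m * E / (1 + nu)
print(f"in-plane stress ~ {sigma_phi / 1e6:.0f} MPa")
```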
NASA Astrophysics Data System (ADS)
Perez, Luis
Dye-sensitized solar cells (DSSCs) have the potential to replace traditional and cost-inefficient crystalline silicon or ruthenium solar cells. This can only be accomplished by optimizing DSSC energy efficiency. One of the major components in a dye-sensitized solar cell is the porous layer of titanium dioxide. This layer is coated with a molecular dye that absorbs sunlight. The research conducted for this paper focuses on the different methods used to dye the porous TiO2 layer with ferritin-encapsulated quantum dots. Multiple anodes were dyed using a method known as SILAR, which involves deposition through alternate immersion in two different solutions. The efficiencies of DSSCs with ferritin-encapsulated lead sulfide dye deposited using SILAR were subsequently compared against the efficiencies produced by cells using the traditional immersion method. It was concluded that both methods resulted in similar efficiencies (≈0.074%); however, the SILAR method dyed the TiO2 coating significantly faster than the immersion method. On a related note, our experiments concluded that conducting 2 SILAR cycles yields the highest possible efficiency for this particular binding method. National Science Foundation.
Research progress on the brewing techniques of new-type rice wine.
Jiao, Aiquan; Xu, Xueming; Jin, Zhengyu
2017-01-15
As a traditional alcoholic beverage, Chinese rice wine (CRW) with high nutritional value and unique flavor has been popular in China for thousands of years. Although traditional production methods had been used without change for centuries, numerous technological innovations in the last decades have greatly impacted on the CRW industry. However, reviews related to the technology research progress in this field are relatively few. This article aimed at providing a brief summary of the recent developments in the new brewing technologies for making CRW. Based on the comparison between the conventional methods and the innovative technologies of CRW brewing, three principal aspects were summarized and sorted, including the innovation of raw material pretreatment, the optimization of fermentation and the reform of sterilization technology. Furthermore, by comparing the advantages and disadvantages of these methods, various issues are addressed related to the prospect of the CRW industry. Copyright © 2016 Elsevier Ltd. All rights reserved.
Li, Min; Yu, Bing-bing; Wu, Jian-hua; Xu, Lin; Sun, Gang
2013-01-01
Purpose As Doppler ultrasound has been proven to be an effective tool to predict and compress the optimal pulsing windows, we evaluated the effective dose and diagnostic accuracy of coronary CT angiography (CTA) incorporating Doppler-guided prospective electrocardiograph (ECG) gating, which presets pulsing windows according to Doppler analysis, in patients with a heart rate >65 bpm. Materials and Methods 119 patients with a heart rate >65 bpm who were scheduled for invasive coronary angiography were prospectively studied, and patients were randomly divided into traditional prospective (n = 61) and Doppler-guided prospective (n = 58) ECG gating groups. The exposure window of traditional prospective ECG gating was set at 30%–80% of the cardiac cycle. For the Doppler group, the length of diastasis was analyzed by Doppler. For lengths greater than 90 ms, the pulsing window was preset during diastole (during 60%–80%); otherwise, the optimal pulsing intervals were moved from diastole to systole (during 30%–50%). Results The mean heart rates of the traditional ECG and the Doppler-guided group during CT scanning were 75.0±7.7 bpm (range, 66–96 bpm) and 76.5±5.4 bpm (range: 66–105 bpm), respectively. The results indicated that whereas the image quality showed no significant difference between the traditional and Doppler groups (P = 0.42), the radiation dose of the Doppler group was significantly lower than that of the traditional group (5.2±3.4mSv vs. 9.3±4.5mSv, P<0.001). The sensitivities of CTA applying traditional and Doppler-guided prospective ECG gating to diagnose stenosis on a segment level were 95.5% and 94.3%, respectively; specificities 98.0% and 97.1%, respectively; positive predictive values 90.7% and 88.2%, respectively; negative predictive values 99.0% and 98.7%, respectively. There was no statistical difference in concordance between the traditional and Doppler groups (P = 0.22). Conclusion Doppler-guided prospective ECG gating represents an improved method in patients with a high heart rate to reduce effective radiation doses, while maintaining high diagnostic accuracy. PMID:23696793
Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu
2015-05-01
A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safe, stable and economical operation of hydroelectric installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e., determine the PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, which differs from traditional single-objective optimization methods, is designed to take care of settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e., a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, compared with two other prominent multi-objective algorithms, the Non-dominated Sorting Genetic Algorithm II (NSGAII) and the Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO-based approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under no-load or load conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Stanley, Douglas O.; Unal, Resit; Joyner, C. R.
1992-01-01
The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies are presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of each of the parameters with each other. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies as compared to traditional single-variable parametric trade studies is also discussed.
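Orthogonal-array experimentation of the kind described above is easy to illustrate. The sketch below uses the standard Taguchi L8(2^7) array to screen seven two-level factors in eight runs and rank them by main effect; the response function is a toy stand-in, not the SSTO vehicle model.

```python
import numpy as np

# Standard Taguchi L8(2^7) orthogonal array: 7 two-level factors, 8 runs.
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

def response(run):                 # stand-in for a vehicle-weight evaluation
    w = np.array([3.0, -1.5, 0.2, 0.8, -0.1, 0.05, 0.4])
    return 100 + w @ (run == 2)

y = np.array([response(r) for r in L8])
for j in range(7):                 # main effect = mean(level 2) - mean(level 1)
    effect = y[L8[:, j] == 2].mean() - y[L8[:, j] == 1].mean()
    print(f"factor {j + 1}: effect = {effect:+.2f}")
```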
Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.
Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone
2017-12-26
Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The here introduced MMP/ML method combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments" which occurs when exploring new fragments for a defined compound series and (2) "new static core and transformations" which resembles for instance the identification of a new compound series. Very good results were achieved by all employed machine learning methods especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high quality predictions on various data sets and in different compound optimization scenarios.
Mesh Denoising based on Normal Voting Tensor and Binary Optimization.
Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad
2017-08-17
This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.
NASA Astrophysics Data System (ADS)
Wu, Lifu; Qiu, Xiaojun; Burnett, Ian S.; Guo, Yecai
2015-08-01
Hybrid feedforward and feedback structures are useful for active noise control (ANC) applications where the noise can only be partially obtained with reference sensors. The traditional method uses the secondary signals of both the feedforward and feedback structures to synthesize a reference signal for the feedback structure in the hybrid structure. However, this approach introduces coupling between the feedforward and feedback structures and parameter changes in one structure affect the other during adaptation such that the feedforward and feedback structures must be optimized simultaneously in practical ANC system design. Two methods are investigated in this paper to remove such coupling effects. One is a simplified method, which uses the error signal directly as the reference signal in the feedback structure, and the second method generates the reference signal for the feedback structure by using only the secondary signal from the feedback structure and utilizes the generated reference signal as the error signal of the feedforward structure. Because the two decoupling methods can optimize the feedforward and feedback structures separately, they provide more flexibility in the design and optimization of the adaptive filters in practical ANC applications.
NASA Technical Reports Server (NTRS)
Ippolito, Corey; Nguyen, Nhan; Lohn, Jason; Dolan, John
2014-01-01
The emergence of advanced lightweight materials is resulting in a new generation of lighter, flexible, more-efficient airframes that are enabling concepts for active aeroelastic wing-shape control to achieve greater flight efficiency and increased safety margins. These elastically shaped aircraft concepts require non-traditional methods for large-scale multi-objective flight control that simultaneously seek to gain aerodynamic efficiency in terms of drag reduction while performing traditional command-tracking tasks as part of a complete guidance and navigation solution. This paper presents results from a preliminary study of a notional multi-objective control law for an aeroelastic flexible-wing aircraft controlled through distributed continuous leading and trailing edge control surface actuators. This preliminary study develops and analyzes a multi-objective control law derived from optimal linear quadratic methods on a longitudinal vehicle dynamics model with coupled aeroelastic dynamics. The controller tracks commanded angle of attack while minimizing drag and controlling wing twist and bend. This paper presents an overview of the elastic aircraft concept, outlines the coupled vehicle model, presents the preliminary control law formulation and implementation, presents results from simulation, provides analysis, and concludes by identifying possible future areas for research.
Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine
Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang
2014-01-01
Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative and experience-based subjective character, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works only study the classification problem of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion, needed for quantitative analysis, has not been reported yet. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion from the whole face of patients. The features are established with four chromaticity bases split up by luminance distribution in CIELAB color space. The chromaticity bases are constructed from the facial dominant color using two-level clustering; the optimal luminance distribution is determined through simple experimental comparisons. The features are shown to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, improved features are further developed by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
Bipolar electrode selection for a motor imagery based brain computer interface
NASA Astrophysics Data System (ADS)
Lou, Bin; Hong, Bo; Gao, Xiaorong; Gao, Shangkai
2008-09-01
A motor imagery based brain-computer interface (BCI) provides a non-muscular communication channel that enables people with paralysis to control external devices using their motor imagination. Reducing the number of electrodes is critical to improving the portability and practicability of the BCI system. A novel method is proposed to reduce the number of electrodes to a total of four by finding the optimal positions of two bipolar electrodes. Independent component analysis (ICA) is applied to find the source components of mu and alpha rhythms, and optimal electrodes are chosen by comparing the projection weights of sources on each channel. The results of eight subjects demonstrate the better classification performance of the optimal layout compared with traditional layouts, and the stability of this optimal layout over a one week interval was further verified.
Statistical Optimality in Multipartite Ranking and Ordinal Regression.
Uematsu, Kazuki; Lee, Yoonkyung
2015-05-01
Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.
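Schematically, the optimality result can be stated as follows (a simplified paraphrase with assumed notation, not the paper's exact theorem): for suitable convex losses the risk-minimizing ranking function is any monotone transform of a cost-weighted probability ratio,

\[
f^{*}(x) \;\propto\; \frac{\sum_{j \in \text{upper}} w_j \, P(Y = j \mid x)}
                          {\sum_{j \in \text{lower}} w_j \, P(Y = j \mid x)},
\]

where the weights $w_j$ come from the misranking costs; with two categories and equal costs this reduces to the familiar bipartite likelihood-ratio ranking.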
Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
2013-01-01
We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate these compromising characteristics of the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve near-optimum detection performance.
150-nm DR contact holes die-to-database inspection
NASA Astrophysics Data System (ADS)
Kuo, Shen C.; Wu, Clare; Eran, Yair; Staud, Wolfgang; Hemar, Shirley; Lindman, Ofer
2000-07-01
A failure analysis-driven yield enhancement concept, based on optimization of the mask manufacturing process and UV reticle inspection, is studied and shown to improve contact layer quality. This is achieved by relating the various manufacturing processes to finely tuned contact defect detection; in this way, an optimized manufacturing process with a fine-tuned inspection setup is selected in a controlled manner. This paper presents a study performed on a specially designed test reticle that simulates production contact layers at the 250 nm, 180 nm and 150 nm design rules, and focuses on the use of advanced UV reticle inspection techniques as part of the process optimization cycle. Current inspection equipment uses traditional and insufficient methods of small contact-hole inspection and review.
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
In order to solve the model of short-term cascaded hydroelectric system scheduling, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses the water discharge as the decision variable combined with a death penalty function. According to the principle of maximum power generation, the proposed approach makes use of the ergodicity, symmetry and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
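The standard logistic map that such chaotic schemes build on is simple to state; the abstract does not specify the "improved" variant, so the sketch below shows only the classical map, used to scatter initial PSO particles across illustrative discharge bounds.

```python
import numpy as np

def chaotic_sequence(n, x0=0.345, mu=4.0):
    """Standard logistic map x_{k+1} = mu * x_k * (1 - x_k); with mu = 4
    the iterates are ergodic on (0, 1) and can seed PSO particles."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = mu * x[k - 1] * (1 - x[k - 1])
    return x

q_min, q_max = 50.0, 300.0            # illustrative discharge bounds, m^3/s
particles = q_min + (q_max - q_min) * chaotic_sequence(30)
print(particles.round(1))
```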
Boston-Fleischhauer, Carol
2008-01-01
The design and implementation of efficient, effective, and safe processes are never-ending challenges in healthcare. Less than optimal performance levels and rising concerns about patient safety suggest that traditional process design methods are insufficient to meet design requirements. In this 2-part series, the author presents human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare. An examination of these theories, application approaches, and examples are presented.
Wen Lin; Asko Noormets; John S. King; Ge Sun; Steve McNulty; Jean-Christophe Domec; Lucas Cernusak
2017-01-01
Stable isotope ratios (δ13C and δ18O) of tree-ring α-cellulose are important tools in paleoclimatology, ecology, plant physiology and genetics. The Multiple Sample Isolation System for Solids (MSISS) was a major advance in the tree-ring α-cellulose extraction methods, offering greater throughput and reduced labor input compared to traditional alternatives. However, the...
Dimensional Precision Research of Wax Molding Rapid Prototyping based on Droplet Injection
NASA Astrophysics Data System (ADS)
Mingji, Huang; Geng, Wu; yan, Shan
2017-11-01
The traditional casting process is complex, and the mold is an essential product whose quality directly affects the quality of the final part. Rapid prototyping by 3D printing is used to produce the mold prototype; the wax model method has the advantages of high speed, low cost and the ability to form complex structures. Using orthogonal experiments as the main method, the factors affecting dimensional precision are analyzed. The purpose is to obtain the optimal process parameters and to improve the dimensional accuracy of production based on droplet injection molding.
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique in Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting in the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set consisting of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
Hernandez, Wilmar
2007-01-01
In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. A comparison between classical filters and optimal filters for automotive sensors is presented, and the current state of the art of applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is illustrated through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Zhang, Lin; Yin, Na; Fu, Xiong; Lin, Qiaomin; Wang, Ruchuan
2017-01-01
With the development of wireless sensor networks, certain network problems have become more prominent, such as limited node resources, low data transmission security, and short network life cycles. To solve these problems effectively, it is important to design an efficient and trusted secure routing algorithm for wireless sensor networks. Traditional ant-colony optimization algorithms exhibit only local convergence, without considering the residual energy of the nodes and many other problems. This paper introduces a multi-attribute pheromone ant secure routing algorithm based on reputation value (MPASR). This algorithm can reduce the energy consumption of a network and improve the reliability of the nodes’ reputations by filtering nodes with higher coincidence rates and improving the method used to update the nodes’ communication behaviors. At the same time, the node reputation value, the residual node energy and the transmission delay are combined to formulate a synthetic pheromone that is used in the formula for calculating the random proportion rule in traditional ant-colony optimization to select the optimal data transmission path. Simulation results show that the improved algorithm can increase both the security of data transmission and the quality of routing service. PMID:28282894
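A sketch of the two ingredients named above, with illustrative weights (the exact combination rule in MPASR is not given in the abstract): a synthetic pheromone assembled from reputation, residual energy and delay, and the classic random-proportional transition rule it plugs into.

```python
import random

def synthetic_pheromone(reputation, energy, delay, a=0.5, b=0.3, c=0.2):
    """Combine node reputation, residual energy and (inverse) delay into
    one pheromone value; the weights a, b, c are illustrative."""
    return a * reputation + b * energy + c * (1.0 / (1.0 + delay))

def choose_next(neighbors, alpha=1.0, beta=2.0):
    """Classic ACO random-proportional rule over candidate next hops.
    Each neighbor carries a pheromone tau and a heuristic eta."""
    weights = [(n["tau"] ** alpha) * (n["eta"] ** beta) for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

hops = [{"id": 1, "tau": synthetic_pheromone(0.9, 0.6, 2.0), "eta": 0.8},
        {"id": 2, "tau": synthetic_pheromone(0.4, 0.9, 0.5), "eta": 0.6}]
print("next hop:", choose_next(hops)["id"])
```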
Inverse planning in the age of digital LINACs: station parameter optimized radiation therapy (SPORT)
NASA Astrophysics Data System (ADS)
Xing, Lei; Li, Ruijiang
2014-03-01
The last few years have seen a number of technical and clinical advances that give rise to a need for innovations in dose optimization and delivery strategies. Technically, a new generation of digital linacs has become available, offering features such as programmable motion between station parameters and high dose-rate flattening-filter-free (FFF) beams. Current inverse planning methods are designed for traditional machines and cannot accommodate these features of new-generation linacs without compromising dose conformality and/or delivery efficiency. Furthermore, SBRT is becoming increasingly important, which elevates the need for more efficient delivery and improved dose distributions. Here we give an overview of our recent work in SPORT designed to harness digital linacs and highlight the essential components of SPORT. We summarize the pros and cons of traditional beamlet-based optimization (BBO) and direct aperture optimization (DAO) and introduce a new type of algorithm, compressed sensing (CS)-based inverse planning, that is capable of automatically removing redundant segments during optimization and providing a plan with high deliverability in the presence of a large number of station control points (potentially non-coplanar, non-isocentric, and even multi-isocenter). We show that the CS approach takes the interplay between planning and delivery into account and allows us to balance dose optimality and delivery efficiency in a controlled way, providing a viable framework to address various unmet demands of the new generation of linacs. A few specific implementation strategies of SPORT in the forms of fixed-gantry and rotational arc delivery are also presented.
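The role of compressed sensing here can be caricatured as an L1-regularized least-squares fit of station weights to a prescribed dose, where the L1 term drives redundant stations to exactly zero weight. The sketch below runs plain ISTA on a random stand-in dose matrix; it illustrates the sparsifying mechanism, not the authors' planning system.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((60, 120))             # dose per unit weight of 120 stations
d = D @ (rng.random(120) * (rng.random(120) < 0.1))  # target dose (sparse truth)

L = np.linalg.norm(D, 2) ** 2         # step size from the Lipschitz constant
lam = 0.05 * np.abs(D.T @ d).max()    # L1 strength, set relative to the data
w = np.zeros(120)
for _ in range(500):                  # ISTA: gradient step + soft threshold
    g = D.T @ (D @ w - d)
    w = w - g / L
    w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    w = np.maximum(w, 0.0)            # station weights must be nonnegative

print("active stations:", int((w > 1e-6).sum()), "of", len(w))
```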
Hu, Ting; Guo, Yan-Yun; Zhou, Qin-Fan; Zhong, Xian-Ke; Zhu, Liang; Piao, Jin-Hua; Chen, Jian; Jiang, Jian-Guo
2012-09-01
Eclipta prostrasta L. is a traditional Chinese medicine herb, which is rich in saponins and has strong antiviral and antitumor activities. An ultrasonic-assisted extraction (UAE) technique was developed for the fast extraction of saponins from E. prostrasta. The content of total saponins in E. prostrasta was determined using UV/vis spectrophotometric methods. Several influential parameters like ethanol concentration, extraction time, temperature, and liquid/solid ratio were investigated for the optimization of the extraction using single factor and Box-Behnken experimental designs. Extraction conditions were optimized for maximum yield of total saponins in E. prostrasta using response surface methodology (RSM) with 4 independent variables at 3 levels of each variable. Results showed that the optimization conditions for saponins extraction were: ethanol concentration 70%, extraction time 3 h, temperature 70 °C, and liquid/solid ratio 14:1. Corresponding saponins content was 2.096%. The mathematical model developed was found to fit well with the experimental data. Practical Application: Although there are wider applications of Eclipta prostrasta L. as a functional food or traditional medicine due to its various bioactivities, these properties are limited by its crude extracts. Total saponins are the main active ingredient of E. prostrasta. This research has optimized the extraction conditions of total saponins from E. prostrasta, which will provide useful reference information for further studies, and offer related industries with helpful guidance in practice. © 2012 Institute of Food Technologists®
Optimal allocation model of construction land based on two-level system optimization theory
NASA Astrophysics Data System (ADS)
Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong
2007-06-01
The allocation of construction land is an important task in land-use planning. Whether the implementation of planning decisions succeeds usually depends on a reasonable and scientific distribution method. Given the constitution of the land-use planning system and planning process in China, the allocation task is in essence a multi-level, multi-objective decision problem. In particular, planning quantity decomposition is a two-level system optimization problem: an optimal resource allocation decision problem between a decision-maker at the upper level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process of a two-level decision-making system, this paper develops an optimal allocation model of construction land based on two-level linear programming. In order to verify the rationality and validity of our model, Baoan district of Shenzhen City has been taken as a test case. With the assistance of the allocation model, construction land is allocated to ten townships of Baoan district. The result obtained from our model is compared to that of the traditional method, and the comparison shows that our model is reasonable and usable. Finally, the paper points out the shortcomings of the model and directions for further research.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of fault-tolerant SCAs in WSNs. In order to examine the feasibility of our algorithm, we have evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, to overcome the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and rose to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
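A minimal sketch of importance-based feature-space optimization with a random forest is shown below; the features and labels are synthetic stand-ins for the WTFER vectors and six fault classes, and scikit-learn's impurity importance is an assumed proxy for the paper's importance assessment.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 16))      # stand-in for 16 WTFER features
    y = rng.integers(0, 6, size=300)    # six fault classes, synthetic labels

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[::-1][:8]   # prune to top 8
    print("full space acc:", cross_val_score(rf, X, y, cv=3).mean())
    print("pruned space acc:",
          cross_val_score(RandomForestClassifier(n_estimators=300,
                                                 random_state=0),
                          X[:, keep], y, cv=3).mean())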
FEM-based strain analysis study for multilayer sheet forming process
NASA Astrophysics Data System (ADS)
Zhang, Rongjing; Lang, Lihui; Zafar, Rizwan
2015-12-01
Fiber metal laminates have many advantages over traditional laminates (e.g., any type of fiber and resin material can be placed anywhere between the metallic layers without risk of failure of the composite fabric sheets). Furthermore, the process requirements to strictly control the temperature and punch force in fiber metal laminates are also less stringent than those in traditional laminates. To further explore the novel method, this study conducts a finite element method-based (FEM-based) strain analysis on multilayer blanks by using the 3A method. Different forming modes such as wrinkling and fracture are discussed by using experimental and numerical studies. Hydroforming is used for multilayer forming. The Barlat 2000 yield criteria and DYNAFORM/LS-DYNA are used for the simulations. Optimal process parameters are determined on the basis of fixed die-binder gap and variable cavity pressure. The results of this study will enhance the knowledge on the mechanics of multilayer structures formed by using the 3A method and expand its commercial applications.
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is how to acquire the accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than its XYZ value. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test it on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times better efficiency than traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from serious dependence on the initial values, making it unable to converge quickly or to a stable state. In contrast, the APCP method can deal with quite complex UAS conditions during monitoring, as it represents points in space with angles, including the condition in which sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data, and shows that the new model can effectively address the bottlenecks of the classical method to a certain degree; that is, this paper provides a new idea and solution for faster and more efficient environmental monitoring.
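The angular parametrization is easy to state in code. The sketch below converts a 3D point into (elevation, azimuth, parallax) angles relative to two assumed anchor camera centers; it is a simplified reading of the parametrization, not the paper's full bundle adjustment.

    import numpy as np

    def apcp_parameters(X, C_main, C_assoc):
        """Re-express point X as (elevation, azimuth, parallax) with respect
        to a main anchor camera C_main and an associate anchor C_assoc."""
        v = X - C_main
        elevation = np.arctan2(v[2], np.hypot(v[0], v[1]))
        azimuth = np.arctan2(v[1], v[0])
        u = v / np.linalg.norm(v)
        w = (X - C_assoc) / np.linalg.norm(X - C_assoc)
        parallax = np.arccos(np.clip(u @ w, -1.0, 1.0))  # angle subtended at X
        return elevation, azimuth, parallax

    print(apcp_parameters(np.array([10.0, 5.0, 100.0]),
                          np.zeros(3), np.array([1.0, 0.0, 0.0])))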
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.
Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly affects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to either cause false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values. PMID:29145448
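The Poisson arithmetic at the core of ddPCR quantification is compact enough to show directly: the fraction of negative droplets yields the mean copies per droplet. The nominal droplet volume below is a typical value and an assumption, not a figure from this work.

    import math

    def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
        """Poisson correction: lambda = -ln(fraction of negative droplets),
        then scale by droplet volume to get copies per microliter."""
        lam = -math.log(1.0 - n_positive / n_total)   # mean copies per droplet
        return lam / (droplet_volume_nl * 1e-3)       # copies per uL

    # e.g. 4,500 positive droplets out of 15,000 accepted droplets:
    print(round(ddpcr_copies_per_ul(4500, 15000)))    # ~420 copies/uL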
NASA Astrophysics Data System (ADS)
Zhang, Chengxian; Throckmorton, Robert; Yang, Xu-Chen; Wang, Xin; Barnes, Edwin
We perform randomized benchmarking of a family of recently introduced control schemes for singlet-triplet qubits in semiconductor double quantum dots, which are optimized to have substantially shorter gate times. We study their performance under the recently introduced symmetric control scheme, in which the exchange interaction is changed by raising and lowering the barrier between the two dots (barrier control), and compare these results to those under the traditional tilt control method, in which the exchange interaction is varied by detuning. It has been suggested that the barrier control method suffers from much less charge noise. We find that in the cases where charge noise is dominant, corresponding to devices made on isotopically enriched silicon, the optimized sequences offer much longer coherence times under barrier control than under the traditional tilt control of the exchange interaction strength. This work was supported by the Research Grants Council of Hong Kong SAR (No. CityU 21300116) and the National Natural Science Foundation of China (No. 11604277), and by LPS-MPO-CMTC.
Optimized star sensors laboratory calibration method using a regularization neural network.
Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen
2018-02-10
High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping from the star vector to the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experimental results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
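A minimal sketch of the regularized-network idea follows, using a toy pinhole model to generate star-vector/centroid pairs and an L2-penalized multilayer perceptron (scikit-learn's alpha term) as a stand-in; the paper's network structure and training regularization are not reproduced.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    d = rng.normal(size=(1000, 3))
    d[:, 2] = np.abs(d[:, 2]) + 1.0                   # stars in front of sensor
    S = d / np.linalg.norm(d, axis=1, keepdims=True)  # star unit vectors
    f = 3000.0                                        # toy focal length, pixels
    xy = f * S[:, :2] / S[:, 2:3] + rng.normal(scale=0.05, size=(1000, 2))

    # alpha is scikit-learn's L2 regularization strength.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), alpha=1e-3,
                       max_iter=3000, random_state=0).fit(S, xy)
    print("training residual (px):", np.abs(net.predict(S) - xy).mean().round(3))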
2013-01-01
The monitoring of the cardiac output (CO) and other hemodynamic parameters, traditionally performed with the thermodilution method via a pulmonary artery catheter (PAC), is now increasingly done with the aid of less invasive and much easier to use devices. When used within the context of a hemodynamic optimization protocol, they can positively influence the outcome in both surgical and non-surgical patient populations. While these monitoring tools have simplified the hemodynamic calculations, they are subject to limitations and can lead to erroneous results if not used properly. In this article we will review the commercially available minimally invasive CO monitoring devices, explore their technical characteristics and describe the limitations that should be taken into consideration when clinical decisions are made. PMID:24472443
Simplified Design Method for Tension Fasteners
NASA Astrophysics Data System (ADS)
Olmstead, Jim; Barker, Paul; Vandersluis, Jonathan
2012-07-01
The design of tension-fastened joints has traditionally been an iterative tradeoff between separation and strength requirements. This paper presents equations for the maximum external load that a fastened joint can support and the optimal preload to achieve this load. The equations, based on linear joint theory, account for separation and strength safety factors and for variations in joint geometry, materials, preload, load-plane factor and thermal loading. The strength-normalized versions of the equations are applicable to any fastener and can be plotted to create a "Fastener Design Space" (FDS). Any combination of preload and tension that falls within the FDS represents a safe joint design. The equation for the FDS apex gives the optimal preload and load capacity of a set of joints. The method can be used for preliminary design or to evaluate multiple pre-existing joints.
NASA Astrophysics Data System (ADS)
Wu, Qi
2010-03-01
Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment system. For demand series with small samples, seasonality, nonlinearity, randomness and fuzziness, existing support vector kernels cannot approximate the random curve of the sales time series in the quadratic continuous integral space. In this paper, we present a hybrid intelligent system combining a wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of its application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible; a comparison between the method proposed in this paper and others is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.
Park, Jun-Sang; Ray, Atish K.; Dawson, Paul R.; ...
2016-05-02
A shrink-fit sample is manufactured with a Ti-8Al-1Mo-1V alloy to introduce a multiaxial residual stress field in the disk of the sample. A set of strain and orientation pole figures are measured at various locations across the disk using synchrotron high-energy X-ray diffraction. Two approaches, the traditional sin²Ψ method and the bi-scale optimization method, are taken to determine the stresses in the disk based on the measured strain and orientation pole figures, to explore the range of solutions that are possible for the stress field within the disk. While the stress components computed using the sin²Ψ method and the bi-scale optimization method have similar trends, their magnitudes are significantly different. Lastly, it is suspected that the local texture variation in the material is the cause of this discrepancy.
A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network
NASA Astrophysics Data System (ADS)
Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.
The location of a CDC within a supply chain network has become a matter of high concern. Existing approaches to the CDC location problem have mainly relied on manual spreadsheet calculations aimed at minimum logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of present methods, and examines the suitability of this algorithm through a case study. The algorithm suggested by this study is based on the principle of optimization over the directed graph of the SCM model, and makes use of classical techniques such as minimum spanning tree (MST) and shortest-path methods. As a result, this study helps to assess the suitability of an existing SCM network and can serve as a criterion in the decision-making process for building an optimal SCM network for future demand prospects.
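Since the algorithm builds on shortest-path machinery, a minimal Dijkstra over a hypothetical plant/CDC/port graph is sketched below; the node names and costs are invented for illustration.

    import heapq

    def dijkstra(graph, source):
        """Least-cost distances over a directed graph given as
        {node: [(neighbor, cost), ...]}."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, c in graph.get(u, []):
                if d + c < dist.get(v, float("inf")):
                    dist[v] = d + c
                    heapq.heappush(heap, (d + c, v))
        return dist

    # Hypothetical mini-network: plant -> candidate CDCs -> port.
    g = {"plant": [("cdc1", 4.0), ("cdc2", 6.0)],
         "cdc1": [("port", 7.0)], "cdc2": [("port", 3.0)]}
    print(dijkstra(g, "plant"))  # port is cheapest via cdc2: cost 9.0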
Design optimization of highly asymmetrical layouts by 2D contour metrology
NASA Astrophysics Data System (ADS)
Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.
2018-03-01
As design pitch shrinks to the resolution limit of up-to-date optical lithography technology, the Critical Dimension (CD) variation tolerance has decreased dramatically to ensure device functionality. One of the critical challenges associated with the narrower CD tolerance across the whole chip area is proximity effect control in asymmetrical layout environments. To fulfill the tight CD control of complex features, Critical Dimension Scanning Electron Microscope (CD-SEM) based measurements for qualifying the process window and establishing the Optical Proximity Correction (OPC) model have become insufficient; the 2D contour extraction technique [1-5] has therefore become an increasingly important approach for complementing the insufficiencies of the traditional CD measurement algorithm. To alleviate the long cycle time and high cost penalties of product verification, manufacturing requirements are better handled at the design stage to improve the quality and yield of ICs. In this work, an in-house 2D contour extraction platform was established for layout design optimization of a 39nm half-pitch Self-Aligned Double Patterning (SADP) process layer. Combined with the adoption of the Process Variation Band Index (PVBI), the contour extraction platform speeds up layout optimization compared to traditional methods. The capability of the 2D contour extraction platform to identify and handle lithography hotspots in complex layout environments allows process-window-aware layout optimization to meet manufacturing requirements.
NASA Astrophysics Data System (ADS)
Cao, Lu; Qiao, Dong; Xu, Jingwen
2018-02-01
Sub-Optimal Artificial Potential Function Sliding Mode Control (SOAPF-SMC) is proposed for the guidance and control of spacecraft rendezvous with obstacle avoidance; it is derived from the theories of the artificial potential function (APF), sliding mode control (SMC) and the state-dependent Riccati equation (SDRE) technique. The methodology designs a new improved APF to describe the potential field, which guarantees that the value of the potential function converges to zero at the desired state. Moreover, a nonlinear terminal sliding mode is introduced to design the sliding surface with the potential gradient of the APF, which offers a wide variety of controller design alternatives with fast, finite-time convergence. On this basis, optimal control theory (SDRE) is employed to optimize the shape parameter of the APF, in order to add a degree of optimality and reduce energy consumption. The new methodology is applied to spacecraft rendezvous with obstacle avoidance and is simulated against the traditional artificial potential function sliding mode control (APF-SMC) and SDRE to evaluate energy consumption and control precision. It is demonstrated that the presented method can avoid dynamic obstacles while satisfying the requirements of autonomous rendezvous. In addition, it saves more energy than the traditional APF-SMC and achieves better control accuracy than SDRE.
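The attractive/repulsive structure of an artificial potential field can be made concrete as below: steering along the negative gradient attracts the craft to the goal and repels it from nearby obstacles. This is the classical APF gradient only; the paper's improved potential, sliding surface and SDRE-tuned shape parameter are not reproduced.

    import numpy as np

    def apf_gradient(x, x_goal, obstacles, k_att=1.0, k_rep=1.0, rho0=5.0):
        """Gradient of U = U_att + U_rep for the classical APF; descending
        along -gradient moves toward the goal and away from obstacles."""
        g = k_att * (x - x_goal)                 # grad of 0.5*k*||x - xg||^2
        for xo in obstacles:
            rho = np.linalg.norm(x - xo)
            if rho < rho0:                       # repulsion active only nearby
                g += -k_rep * (1.0 / rho - 1.0 / rho0) * (x - xo) / rho**3
        return g

    x = np.array([10.0, 0.0, 0.0])
    step = -0.1 * apf_gradient(x, np.zeros(3), [np.array([5.0, 0.5, 0.0])])
    print("descent step:", step)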
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and with optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
Chang, Liang-Cheng; Lee, Da-Sheng
2012-01-01
Installation of a Wireless and Powerless Sensing Node (WPSN) inside a spindle enables the direct transmission of monitoring signals through a metal case of a certain thickness instead of the traditional method of using connecting cables. Thus, the node can be conveniently installed inside motors to measure various operational parameters. This study extends this earlier finding by applying the advantage to the monitoring of spindle systems. After over 2 years of system observation and optimization, the system has been verified to be superior to traditional methods. The faults diagnosed in this study include unmatched assembly dimensions of the spindle system, an unbalanced system, and bearing damage. The experimental results demonstrate that the WPSN provides a desirable signal-to-noise ratio (SNR) in all three of the simulated faults, with the SNR difference reaching a maximum of 8.6 dB. Following multiple repetitions of the three experiment types, 80% of the faults were diagnosed when the spindle revolved at 4,000 rpm, significantly higher than the 30% fault recognition rate of traditional methods. The results of monitoring the spindle production line indicated that monitoring with the WPSN encounters less noise interference than traditional methods. This study has therefore successfully developed a prototype concept into a well-developed monitoring system, which can be implemented in a spindle production line or for real-time monitoring of machine tools. PMID:22368456
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which entails a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme is developed that exploits shared memory in the GPU instead of global memory, further increasing efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
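The per-pixel operation being parallelized is simple to state. Below is a vectorized CPU stand-in (numpy/scipy) for the sharpening step; the CUDA kernel in the paper performs the same 4-neighborhood computation with one thread per pixel.

    import numpy as np
    from scipy.ndimage import convolve

    LAPLACIAN = np.array([[0, -1, 0],
                          [-1, 4, -1],
                          [0, -1, 0]], dtype=float)

    def sharpen(image, alpha=1.0):
        """sharpened = image + alpha * Laplacian(image); each output pixel
        depends only on its 4-neighborhood, which is what makes the
        operation embarrassingly parallel on a GPU."""
        lap = convolve(image.astype(float), LAPLACIAN, mode="nearest")
        return np.clip(image + alpha * lap, 0, 255).astype(np.uint8)

    img = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)
    out = sharpen(img)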
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
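A simplified stand-in for the virtual objective field is sketched below: a continuous function of the trial location built from sensor-pair hyperboloid residuals, with absolute-value terms so that a single large picking error does not dominate. This is an interpretation for illustration, not the authors' exact objective.

    import numpy as np

    def pairwise_field(x, sensors, t, v):
        """Continuous objective from sensor-pair TDOA residuals; its minimum
        approximates the common intersection of the pair hyperboloids."""
        d = np.linalg.norm(sensors - x, axis=1)
        f, n = 0.0, len(sensors)
        for i in range(n):
            for j in range(i + 1, n):
                f += abs((d[i] - d[j]) - v * (t[i] - t[j]))
        return f

    sensors = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
    src = np.array([30.0, 40.0, 20.0])
    t = np.linalg.norm(sensors - src, axis=1) / 5000.0   # v = 5000 m/s
    print(pairwise_field(src, sensors, t, 5000.0))       # ~0 at the true source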
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekar, Venkateswaran; Fiondella, Lance; Chatterjee, Samrat
Transportation networks are critical to the social and economic function of nations. Given the continuing increase in the populations of cities throughout the world, the criticality of transportation infrastructure is expected to increase. Thus, it is ever more important to mitigate congestion as well as to assess the impact disruptions would have on individuals who depend on transportation for their work and livelihood. Moreover, several government organizations are responsible for ensuring transportation networks are available despite the constant threat of natural disasters and terrorist activities. Most of the previous transportation network vulnerability research has been performed in the context of static traffic models, many of which are formulated as traditional optimization problems. However, transportation networks are dynamic because their usage varies over time. Thus, more appropriate methods to characterize the vulnerability of transportation networks should consider their dynamic properties. This paper presents a quantitative approach to assess the vulnerability of a transportation network to disruptions with methods from traffic simulation. Our approach can prioritize the critical links over time and is generalizable to the case where both link and node disruptions are of concern. We illustrate the approach through a series of examples. Our results demonstrate that the approach provides quantitative insight into the time varying criticality of links. Such an approach could be used as the objective function of less traditional optimization methods that use simulation and other techniques to evaluate the relative utility of a particular network defense to reduce vulnerability and increase resilience.
Taheri, Salman; Jalali, Fahimeh; Fattahi, Nazir; Jalili, Ronak; Bahrami, Gholamreza
2015-10-01
Dispersive liquid-liquid microextraction based on solidification of a floating organic droplet was developed for the extraction of methadone and its determination by high-performance liquid chromatography with UV detection. In this method, no microsyringe or fiber is required to support the organic microdrop, because an organic solvent with low density and an appropriate melting point is used. Furthermore, the extractant droplet can be collected easily by solidifying it at low temperature. 1-Undecanol and methanol were chosen as extraction and disperser solvents, respectively. Parameters that influence extraction efficiency, i.e. the volumes of extracting and dispersing solvents, pH, and salt effect, were optimized by using response surface methodology. Under optimal conditions, the enrichment factor for methadone was 134 in serum and 160 in urine samples. The limit of detection was 3.34 ng/mL in serum and 1.67 ng/mL in urine samples. Compared with traditional dispersive liquid-liquid microextraction, the proposed method achieved a lower limit of detection. Moreover, the solidification of the floating organic solvent facilitated the phase transfer. Most importantly, it avoided the high-density, toxic solvents of the traditional dispersive liquid-liquid microextraction method. The proposed method was successfully applied to the determination of methadone in serum and urine samples of an addicted individual under methadone therapy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu
2018-01-01
Traditional field investigation and artificial interpretation cannot satisfy the need for forest gap extraction at the regional scale. High spatial resolution remote sensing images provide the possibility for regional forest gap extraction. In this study, we used an object-oriented classification method to segment and classify forest gaps based on a QuickBird high-resolution optical remote sensing image of the Jiangle National Forestry Farm of Fujian Province. In the object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird image, and the intersection area of the reference object (RAor) and the intersection area of the segmented object (RAos) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40, where RAor was equal to RAos. The accuracy difference between the maximum and minimum across segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on the SVM classifier. Combining high-resolution remote sensing image data with an object-oriented classification method could replace traditional field investigation and artificial interpretation for identifying and classifying forest gaps at the regional scale.
Wu, Changzheng; Zhang, Feng; Li, Lijun; Jiang, Zhedong; Ni, Hui; Xiao, Anfeng
2018-01-01
High amounts of insoluble substrates exist in the traditional solid-state fermentation (SSF) system. The presence of these substrates complicates the determination of microbial biomass. Thus, enzyme activity is used as the sole index for the optimization of the traditional SSF system, and the relationship between microbial growth and enzyme synthesis is always ignored. This study was conducted to address this deficiency. All soluble nutrients from tea stalk were extracted using water. The aqueous extract was then mixed with polyurethane sponge to establish a modified SSF system, which was then used to conduct tannase production. With this system, biomass, enzyme activity, and enzyme productivity could be measured rationally and accurately. Thus, the association between biomass and enzyme activity could be easily identified, and the shortcomings of traditional SSF could be addressed. Different carbon and nitrogen sources exerted different effects on microbial growth and enzyme production. Single-factor experiments showed that glucose and yeast extract greatly improved microbial biomass accumulation and that tannin and (NH4)2SO4 efficiently promoted enzyme productivity. Then, these four factors were optimized through response surface methodology. Tannase activity reached 19.22 U/gds when the added amounts of tannin, glucose, (NH4)2SO4, and yeast extract were 7.49, 8.11, 9.26, and 2.25%, respectively. Tannase activity under the optimized process conditions was 6.36 times higher than that under the initial process conditions. The optimized parameters were directly applied to the traditional tea stalk SSF system. Tannase activity reached 245 U/gds, which is 2.9 times higher than our previously reported value. In this study, a modified SSF system was established to address the shortcomings of the traditional SSF system. Analysis revealed that enzymatic activity and microbial biomass are closely related, and different carbon and nitrogen sources have different effects on microbial growth and enzyme production. The maximal tannase activity was obtained under the optimal combination of nutrient sources that enhances cell growth and tannase accumulation. Moreover, tannase production through the traditional tea stalk SSF was markedly improved when the optimized parameters were applied. This work provides an innovative approach to bioproduction research through SSF.
Han, Sheng-Nan
2014-07-01
Chemometrics is a branch of chemistry that is widely applied to various fields of analytical chemistry. Chemometrics uses theories and methods from mathematics, statistics, computer science and other related disciplines to optimize the chemical measurement process and to extract the maximum chemical and other information on material systems from chemical measurement data. In recent years, traditional Chinese medicine has attracted widespread attention. In traditional Chinese medicine research, a key problem has been how to interpret the relationship between the various chemical components and efficacy, which seriously restricts the modernization of Chinese medicine. As chemometrics brings multivariate analysis methods into chemical research, it has been applied as an effective research tool in composition-activity relationship research on Chinese medicine. This article reviews the applications of chemometrics methods to composition-activity relationship research in recent years. The applications of multivariate statistical analysis methods (such as regression analysis, correlation analysis, principal component analysis, etc.) and artificial neural networks (such as the back-propagation artificial neural network, radial basis function neural network, support vector machine, etc.) are summarized, including their fundamental principles, research contents, and advantages and disadvantages. Finally, the main existing problems and prospects for future research are discussed.
Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.
Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C
2002-06-01
IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
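The simultaneous optimization loop can be illustrated with a generic Metropolis scheme. The sketch below anneals only the aperture weights against a toy quadratic dose objective; actual DAO also mutates leaf positions under MLC constraints, which is omitted here.

    import numpy as np

    rng = np.random.default_rng(0)

    def anneal_weights(D, d_rx, w0, n_iter=20000, t0=1.0):
        """Metropolis-style optimization of aperture weights w >= 0 for a
        quadratic dose objective ||D w - d_rx||^2; DAO additionally mutates
        the leaf positions that define each aperture's column of D."""
        w = w0.copy()
        cost = np.sum((D @ w - d_rx) ** 2)
        for k in range(n_iter):
            temp = t0 * (1.0 - k / n_iter) + 1e-9       # linear cooling
            cand = np.maximum(w + rng.normal(scale=0.05, size=w.shape), 0.0)
            c = np.sum((D @ cand - d_rx) ** 2)
            if c < cost or rng.random() < np.exp(-(c - cost) / temp):
                w, cost = cand, c                       # accept move
        return w, cost

    D = rng.random((100, 15))          # toy aperture dose-influence matrix
    w_true = rng.random(15)
    w, cost = anneal_weights(D, D @ w_true, np.ones(15))
    print("final cost:", round(float(cost), 4))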
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
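The Fisher-information comparison underlying these criteria is easy to demonstrate. The sketch below builds the FIM from model sensitivities for a toy exponential-decay model (not the paper's logistic, oscillator or glucose models) and compares two sampling grids by the D-optimality determinant.

    import numpy as np

    def fisher_information(times, theta, sens, sigma=1.0):
        """F = S^T S / sigma^2 with S_ij = d f(t_i; theta) / d theta_j."""
        S = np.array([sens(t, theta) for t in times])
        return S.T @ S / sigma**2

    # Toy model f(t; a, b) = a * exp(-b t) with analytic sensitivities.
    def sens(t, theta):
        a, b = theta
        return np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])

    theta = (2.0, 0.5)
    uniform = np.linspace(0.0, 10.0, 10)
    early = np.linspace(0.0, 4.0, 10)
    for name, grid in [("uniform", uniform), ("early", early)]:
        F = fisher_information(grid, theta, sens)
        # D-optimal design maximizes det F; E-optimal uses the min eigenvalue.
        print(name, "det F =", round(float(np.linalg.det(F)), 3))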
Method of electric powertrain matching for battery-powered electric cars
NASA Astrophysics Data System (ADS)
Ning, Guobao; Xiong, Lu; Zhang, Lijun; Yu, Zhuoping
2013-05-01
The current matching method for electric powertrains still relies on longitudinal dynamics, which can neither realize the maximum capacity of the on-board energy storage unit nor reach the lowest equivalent fuel consumption. Another matching method focuses on improving the available space through a reasonable vehicle layout to enlarge the rated energy capacity of the on-board energy storage unit; this keeps longitudinal performance almost unchanged but still cannot reach the lowest fuel consumption. Considering the characteristics of the driving motor, a method of electric powertrain matching utilizing conventional longitudinal dynamics for the driving system and a cut-and-try method for the energy storage system is proposed for passenger cars converted from traditional ones. By combining the utilization of vehicle space (which contributes to the on-board energy amount), vehicle longitudinal performance requirements, vehicle equivalent fuel consumption level, passive safety requirements and the maximum driving range requirement, a comprehensive optimal matching method of the electric powertrain for battery-powered electric vehicles is developed. In simulation, the vehicle model and matching method are built in MATLAB/Simulink, and the Environmental Protection Agency (EPA) Urban Dynamometer Driving Schedule (UDDS) is chosen as the test condition. The simulation results show that regenerative energy increases by 2.62% and energy storage efficiency by 2% relative to the traditional method. The research conclusions provide theoretical and practical solutions for electric powertrain matching for modern battery-powered electric vehicles, especially those converted from traditional ones, and further enhance the dynamics of electric vehicles.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs the costate gradient formula and a fast approximate scheme to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are used as illustrations. The results show that the proposed fast approach saves at least 90% of the computation time in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M; Lloyd, S AM; Townson, R
2014-08-15
This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining the conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure. Included is the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), that serves as a framework for optimization and provides streamlined access to underlying particle transport packages. A comparison of the toolkit's dose calculations to those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is illustrated between the Eclipse TPS and AMMO for a 6MV photon field. The results of a simple optimization show the functioning of the optimization framework, while significant research remains to characterize appropriate constraints.
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computational cost and the generation of solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique can dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly owing to the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness, but an isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI by a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of all shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
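A skeleton of the mini-batch step is shown below: each iteration sums gradients over a random subset of shots and then smooths the result. The Gaussian smoothing along an assumed horizontal structural direction is a stand-in; the paper's structure-oriented filter is not reproduced.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(0)

    def minibatch_fwi_step(model, shots, grad_for_shot, batch=8, lr=1e-2):
        """One descent step on a random subset of shots; the per-iteration
        cost scales with `batch` instead of the full survey size."""
        idx = rng.choice(len(shots), size=batch, replace=False)
        g = sum(grad_for_shot(model, shots[i]) for i in idx) / batch
        # Stand-in for structure-oriented smoothing: smooth along an assumed
        # horizontal structural direction.
        g = gaussian_filter1d(g, sigma=2, axis=1)
        return model - lr * g

    model = np.zeros((50, 100))                       # (depth, lateral) grid
    shots = list(range(64))
    toy_grad = lambda m, s: rng.normal(size=m.shape)  # placeholder physics
    model = minibatch_fwi_step(model, shots, toy_grad)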
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround (clock) time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
A novel adaptive Cuckoo search for optimal query plan generation.
Gomathi, Ramalingam; Sharmila, Dhandapani
2014-01-01
The day-by-day emergence of new web pages has led to the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The efficiency of the algorithm is tested and the results are documented.
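For reference, a minimal continuous-domain cuckoo search with Mantegna Levy flights is sketched below; the adaptation to discrete RDF query plans, and whatever makes the paper's variant adaptive, are not reproduced.

    import numpy as np
    from math import gamma, sin, pi

    rng = np.random.default_rng(0)

    def levy_step(dim, beta=1.5):
        """Mantegna's algorithm for Levy-distributed flight lengths."""
        num = gamma(1 + beta) * sin(pi * beta / 2)
        den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma = (num / den) ** (1 / beta)
        u = rng.normal(0, sigma, dim)
        v = rng.normal(0, 1, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, n_iter=2000):
        nests = rng.uniform(-5, 5, (n_nests, dim))
        fit = np.apply_along_axis(f, 1, nests)
        for _ in range(n_iter):
            best = nests[np.argmin(fit)]
            for i in range(n_nests):      # Levy flight toward the best nest
                cand = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)
                cf = f(cand)
                if cf < fit[i]:
                    nests[i], fit[i] = cand, cf
            for i in np.argsort(fit)[-int(pa * n_nests):]:  # abandon worst
                nests[i] = rng.uniform(-5, 5, dim)
                fit[i] = f(nests[i])
        return nests[np.argmin(fit)], fit.min()

    x, fx = cuckoo_search(lambda x: float(np.sum(x ** 2)))
    print("best objective:", round(fx, 6))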
NASA Astrophysics Data System (ADS)
Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi
2017-07-01
The current manufacturing environment has changed from the traditional single-plant setting to multi-site supply chains where multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization.
Bauer, Markus; Klau, Gunnar W; Reinert, Knut
2007-07-27
The discovery of functional non-coding RNA sequences has led to an increasing interest in algorithms related to RNA analysis. Traditional sequence alignment algorithms, however, fail at computing reliable alignments of low-homology RNA sequences. The spatial conformation of RNA sequences largely determines their function, and therefore RNA alignment algorithms have to take structural information into account. We present a graph-based representation for sequence-structure alignments, which we model as an integer linear program (ILP). We sketch how we compute an optimal or near-optimal solution to the ILP using methods from combinatorial optimization, and present results on a recently published benchmark set for RNA alignments. The implementation of our algorithm yields better alignments in terms of two published scores than the other programs that we tested: This is especially the case with an increasing number of input sequences. Our program LARA is freely available for academic purposes from http://www.planet-lisa.net.
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of subset selection problems in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to the other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
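A simplified sequential allocation in the spirit of this idea is sketched below: extra replications go to designs whose sample means sit near the top-m boundary, weighted by their variance. This heuristic is an assumption for illustration, not the asymptotically optimal rule derived in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def select_top_m(sample, k, m, n0=10, budget=2000, batch=20):
        """Spend extra replications on designs close to the top-m boundary,
        scaled by their variance (illustrative rule, not the paper's)."""
        data = [list(sample(i, n0)) for i in range(k)]
        spent = k * n0
        while spent < budget:
            means = np.array([np.mean(d) for d in data])
            stds = np.array([np.std(d, ddof=1) for d in data]) + 1e-12
            order = np.argsort(means)[::-1]                   # larger is better
            c = (means[order[m - 1]] + means[order[m]]) / 2   # boundary value
            score = (stds / (np.abs(means - c) + 1e-12)) ** 2
            for i in np.argsort(score)[::-1][:batch]:
                data[i].extend(sample(i, 1))
                spent += 1
        means = [np.mean(d) for d in data]
        return sorted(np.argsort(means)[::-1][:m])

    # Ten designs with true means 0..9; identify the top 3.
    sim = lambda i, n: rng.normal(loc=i, scale=3.0, size=n).tolist()
    print(select_top_m(sim, k=10, m=3))    # expect [7, 8, 9]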
Optimal Analyses for 3×n AB Games in the Worst Case
NASA Astrophysics Data System (ADS)
Huang, Li-Te; Lin, Shun-Shii
The past decades have witnessed growing interest in research on deductive games such as Mastermind and the AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. Traditional tree-search approaches are not appropriate for this problem, since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. Therefore, intensive analyses of optimal worst-case play of 3×n AB games are conducted, and a sophisticated method, called structural reduction, which aims at characterizing the worst situation in this game, is developed in this study. Furthermore, a formula for calculating the optimal number of guesses required for arbitrary values of n is derived and proven.
Universal field matching in craniospinal irradiation by a background-dose gradient-optimized method.
Traneus, Erik; Bizzocchi, Nicola; Fellin, Francesco; Rombi, Barbara; Farace, Paolo
2018-01-01
Gradient-optimized methods are overtaking the traditional feathering methods for planning field junctions in craniospinal irradiation. In this note, a new gradient-optimized technique based on the use of a background dose is described. Treatment planning was performed with RayStation (RaySearch Laboratories, Stockholm, Sweden) on the CT scans of a pediatric patient. Both proton (by pencil beam scanning) and photon (by volumetric modulated arc therapy) treatments were planned with three isocenters. An 'in silico' ideal background dose was created first to cover the upper-spinal target and to produce a perfect dose gradient along the upper and lower junction regions. Using it as background, the cranial and lower-spinal beams were planned by inverse optimization to obtain dose coverage of their relevant targets and of the junction volumes. Finally, the upper-spinal beam was inversely planned after removal of the background dose and with the previously optimized beams switched on. In both proton and photon plans, the optimized cranial and lower-spinal beams produced a perfect linear gradient in the junction regions, complementary to that produced by the optimized upper-spinal beam. The final dose distributions showed homogeneous coverage of the targets. Our simple technique allowed us to obtain high-quality gradients in the junction region. The technique works universally for photons as well as protons and could be applicable to TPSs that allow a background dose to be managed. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
An Improved Heuristic Method for Subgraph Isomorphism Problem
NASA Astrophysics Data System (ADS)
Xiang, Yingzhuo; Han, Jiesi; Xu, Haijiang; Guo, Xin
2017-09-01
This paper focuses on the subgraph isomorphism (SI) problem. We present an improved genetic algorithm, a heuristic method for searching for the optimal solution. The contribution of this paper is a dedicated crossover algorithm and a new fitness function to measure the evolution process. Experiments show that our improved genetic algorithm performs better than other heuristic methods. For a large graph, such as a subgraph of 40 nodes, our algorithm outperforms traditional tree-search algorithms. We find that the performance of our improved genetic algorithm does not decrease as the number of nodes in the prototype graphs increases.
Alipieva, Kalina; Petreska, Jasmina; Gil-Izquierdo, Angel; Stefova, Marina; Evstatieva, Ljuba; Bankova, Vassya
2010-01-01
The influence of the extraction method on the yield and composition of extracts of Sideritis (Pirin mountain tea) has been studied. Maceration, ultrasound-assisted extraction (USAE) and microwave-assisted extraction (MAE) were applied. Total phenolics and total flavonoids were quantified spectrophotometrically, and individual compounds were analyzed by HPLC-DAD-MS(n). This preliminary study reveals that the traditional way of preparing tea from Sideritis is the most appropriate for extracting the maximum of total flavonoids and total phenolics. For methanol extraction, the optimal method is USAE.
Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft
NASA Astrophysics Data System (ADS)
Carfang, Anthony
This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios, and assessed analytically and through Monte Carlo simulation, showing better throughput performance in the same order of magnitude of computation time in comparison to other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of results on trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving RF models, the cascaded planner can continually improve the ferrying system's overall performance.
NASA Astrophysics Data System (ADS)
Kenway, Gaetan K. W.
This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparison against the complex-step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight (TOGW) is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.
Du, Xi; He, Xin; Huang, Yu-Hong; Li, Zi-Qiang
2016-12-01
The cocktail probe substrate approach is a fast, sensitive and high-throughput method for determining cytochrome P450 (CYP450) enzyme activity. It has been widely used in early drug development screening, in analyzing drug metabolism types and confirming metabolic pathways, in studying drug-drug interactions, in optimizing clinical regimens, in evaluating post-marketing drugs, and in liver/kidney pathology studies. This article reviews the characteristics of cocktail probe substrates and focuses on their application to traditional Chinese medicine and the CYP450 system, covering: the metabolic pathways of active ingredients of Chinese herbs; the effects of processing and compatibility of medicinal herbs on CYP450; the metabolic characteristics of Chinese patent medicines; pharmaceutical studies of ethnic-minority medicine; the liver-protective effects of traditional Chinese medicine; and the evaluation of traditional Chinese medicine syndromes in animal models. The article summarizes existing research results and compares the application of the cocktail probe substrate approach in Western medicine and Chinese medicine. Copyright© by the Chinese Pharmaceutical Association.
13C metabolic flux analysis: optimal design of isotopic labeling experiments.
Antoniewicz, Maciek R
2013-12-01
Measuring fluxes by 13C metabolic flux analysis (13C-MFA) has become a key activity in chemical and pharmaceutical biotechnology. Optimal design of isotopic labeling experiments is of central importance to 13C-MFA as it determines the precision with which fluxes can be estimated. Traditional methods for selecting isotopic tracers and labeling measurements did not fully utilize the power of 13C-MFA. Recently, new approaches were developed for optimal design of isotopic labeling experiments based on parallel labeling experiments and algorithms for rational selection of tracers. In addition, advanced isotopic labeling measurements were developed based on tandem mass spectrometry. Combined, these approaches can dramatically improve the quality of 13C-MFA results with important applications in metabolic engineering and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method has two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set (Fig. 1) shows that most of the observed flows lie within the constructed interval, and the interval therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is taken as the forecast, peak flows are predicted with improved accuracy compared to traditional single-point-forecast ANNs.
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
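For concreteness, the IAE cost named above is simply the time integral of the absolute speed error after the wind step. A minimal sketch follows, with a made-up error trajectory standing in for the simulated HAWCStab2 response:

```python
import numpy as np

# Minimal IAE sketch; the rotor-speed error below is a hypothetical
# stand-in for the simulated response to a step in wind speed.
t = np.linspace(0.0, 60.0, 601)                          # time, s
omega_err = 0.2 * np.exp(-t / 8.0) * np.cos(0.5 * t)     # rad/s (made up)
iae = np.sum(np.abs(omega_err)) * (t[1] - t[0])          # IAE = integral |e(t)| dt
```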
Baygin, Mehmet; Karakose, Mehmet
2013-01-01
Nowadays, the increasing use of group elevator control systems owing to increasing building heights makes the development of high-performance algorithms necessary in terms of time and energy saving. Although there are many studies on this topic in the literature, they are still not effective enough because they are not able to evaluate all features of the system. In this paper, a new immune-system-based optimal estimation approach for the dynamic control of group elevator systems is studied. The method is mainly based on estimating the optimal route by optimizing all calls with genetic, immune system and DNA computing algorithms, and it is evaluated with a fuzzy system. The system is dynamic in terms of the status of calls and the choice of the most appropriate algorithm, and it also works adaptively with respect to parameters such as the number of floors and cabins. This new approach, which provides both time and energy savings, was carried out in real time. The experimental results comparatively demonstrate the effects of the method. With the dynamic and adaptive control approach in this study, significant progress on group elevator control systems has been achieved in terms of time and energy efficiency compared with traditional methods. PMID:23935433
NASA Astrophysics Data System (ADS)
Chen, Enguo; Liu, Peng; Yu, Feihong
2012-10-01
A novel synchronized optimization method for multiple freeform surfaces is proposed and applied to the double-lens illumination system design of CF-LCoS pico-projectors. Based on Snell's law and the law of energy conservation, a series of first-order partial differential equations is derived for the multiple freeform surfaces of the initial system. By assigning a light deflection angle to each freeform surface, multiple surfaces can be obtained simultaneously by solving the corresponding equations, while the restricted angle on the CF-LCoS is guaranteed. To improve the spatial uniformity, the multiple surfaces are synchronously optimized using the simplex algorithm for an extended LED source. A design example shows that the double-lens illumination system, which employs a single 2 mm×2 mm LED chip and a CF-LCoS panel with a diagonal of 0.59 inches, satisfies the needs of a pico-projector. Moreover, the analytical results indicate that the design method represents a substantial and practical improvement over traditional CF-LCoS projection systems, offering outstanding performance with both portability and low cost. The synchronized optimization design method not only realizes collimated and uniform illumination, but can also be applied to other specific lighting conditions.
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping
2018-03-01
The existence of path-dependent dynamic singularities limits the volume of the available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. To overcome this demerit, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while the joint motion laws are delineated by applying the concept of the reaction null-space. Bézier curves, in conjunction with the null-space column vectors, are applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization issue in which the control points constructing the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature-handling strategy is implemented to find the optimal values of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null-space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7 degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Vasilkin, Andrey
2018-03-01
The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely the finally adopted version will be efficient and economical. However, in modern market conditions, and given the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze and compare any significant number of options. To solve this problem, it is expedient to use the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the labor intensity of designing. Methods of structural and parametric optimization were adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes illustrating the introduction of structural optimization into the traditional design of steel frames are constructed.
NASA Astrophysics Data System (ADS)
Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit
2018-03-01
Traditional supply chain inventory models with trade credit usually assume only that the upstream supplier offers the downstream retailer a fixed credit period. In practice, however, the retailer will also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under a two-level trade credit policy with default risk taken into consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and an easy method for finding the optimal inventory policies of the considered problem is also shown. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.
Optimally managing water resources in large river basins for an uncertain future
Roehl, Edwin A.; Conrads, Paul
2014-01-01
One of the challenges of basin management is the optimization of water use through ongoing regional economic development, droughts, and climate change. This paper describes a model of the Savannah River Basin designed to continuously optimize regulated flow to meet prioritized objectives set by resource managers and stakeholders. The model was developed from historical data by using machine learning, making it more accurate and adaptable to changing conditions than traditional models. The model is coupled to an optimization routine that computes the daily flow needed to most efficiently meet the water-resource management objectives. The model and optimization routine are packaged in a decision support system that makes it easy for managers and stakeholders to use. Simulation results show that flow can be regulated to substantially reduce salinity intrusions in the Savannah National Wildlife Refuge while conserving more water in the reservoirs. A method for using the model to assess the effectiveness of the flow-alteration features after the deepening also is demonstrated.
New reversing design method for LED uniform illumination.
Wang, Kai; Wu, Dan; Qin, Zong; Chen, Fei; Luo, Xiaobing; Liu, Sheng
2011-07-04
In light-emitting diode (LED) applications, how to optimize the light intensity distribution curve (LIDC) and design the corresponding optical component to achieve uniform illumination for a given distance-height ratio (DHR) is becoming a major issue. A new reversing design method is proposed to solve this problem, including the design and optimization of the LIDC to achieve highly uniform illumination and a new freeform-lens algorithm to generate the required LIDC from the LED light source. Using this method, two new LED modules integrated with freeform lenses were successfully designed for slim direct-lit LED backlighting with a thickness of 10 mm; the illuminance uniformity increases from 0.446 to 0.915 and from 0.155 to 0.887 for DHRs of 2 and 3, respectively. Moreover, the number of new LED modules decreases dramatically, to 1/9 of the number of traditional LED modules, while achieving similar illumination uniformity in backlighting. Therefore, this new method provides a practical and simple way to design LED optics for uniform illumination when the DHR is much larger than 1.
Predicting cancerlectins by the optimal g-gap dipeptides
NASA Astrophysics Data System (ADS)
Lin, Hao; Liu, Wei-Xin; He, Jiao; Liu, Xin-Hui; Ding, Hui; Chen, Wei
2015-12-01
The cancerlectin plays a key role in the process of tumor cell differentiation. Thus, to fully understand the function of cancerlectin is significant because it sheds light on the future direction for the cancer therapy. However, the traditional wet-experimental methods were money- and time-consuming. It is highly desirable to develop an effective and efficient computational tool to identify cancerlectins. In this study, we developed a sequence-based method to discriminate between cancerlectins and non-cancerlectins. The analysis of variance (ANOVA) was used to choose the optimal feature set derived from the g-gap dipeptide composition. The jackknife cross-validated results showed that the proposed method achieved the accuracy of 75.19%, which is superior to other published methods. For the convenience of other researchers, an online web-server CaLecPred was established and can be freely accessed from the website http://lin.uestc.edu.cn/server/CalecPred. We believe that the CaLecPred is a powerful tool to study cancerlectins and to guide the related experimental validations.
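A minimal sketch of the g-gap dipeptide composition feature (the input to the ANOVA selection step described above) is shown below; the example sequence and gap value are illustrative.

```python
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def g_gap_dipeptide_composition(seq, g):
    """400-dim frequency vector of residue pairs separated by a gap of g.
    For g = 0 this reduces to the ordinary dipeptide composition; for
    g > 0 the two residues of each 'dipeptide' are g positions apart."""
    pairs = [seq[i] + seq[i + g + 1] for i in range(len(seq) - g - 1)]
    counts = Counter(pairs)
    total = max(len(pairs), 1)
    return [counts[a + b] / total for a in AA for b in AA]

# toy usage on a short (made-up) protein fragment with a 2-residue gap
features = g_gap_dipeptide_composition("MKVLAAGICLLAS", g=2)
```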
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical method for the sensitivity analysis of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite-difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level in particular can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on a binocular vision system depends heavily on the accurate calibration of the two rigidly fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both the left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
Wang, Hong-Hua
2014-01-01
A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and involves multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to extract the parameters of the PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated for various PV module parameters under different environmental conditions, and the results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method achieves higher parameter-identification precision. PMID:25243233
Alarcón, J A; Immink, M D; Méndez, L F
1989-12-01
The present study was conducted as part of an evaluation of the economic and nutritional effects of a crop diversification program for small-scale farmers in the Western highlands of Guatemala. Linear programming models are employed in order to obtain optimal combinations of traditional and non-traditional food crops under different ecological conditions that: a) provide minimum-cost diets for auto-consumption, and b) maximize net income and market availability of dietary energy. Data used were generated by means of an agroeconomic survey conducted in 1983 among 726 farming households. Food prices were obtained from the Institute of Agrarian Marketing; data on production costs, from the National Bank of Agricultural Development in Guatemala. The gestation periods for each crop were obtained from three different sources and then averaged. The results indicated that the optimal cropping pattern for the minimum-cost diets for auto-consumption includes traditional foods (corn, beans, broad bean, wheat, potato), non-traditional foods (carrots, broccoli, beets) and foods of animal origin (milk, eggs). A significant number of farmers included in the sample did not have sufficient land availability to produce all foods included in the minimum-cost diet. Cropping patterns which maximize net incomes include only non-traditional foods: onions, carrots, broccoli and beets for farmers in the low highland areas, and radish, broccoli, cauliflower and carrots for farmers in the higher parts. Optimal cropping patterns which maximize market availability of dietary energy include traditional and non-traditional foods; for farmers in the lower areas: wheat, corn, beets, carrots and onions; for farmers in the higher areas: potato, wheat, radish, carrots and cabbage.
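A minimal sketch of the minimum-cost diet formulation as a linear program is given below, using scipy; the crops, prices, nutrient contents and requirements are hypothetical stand-ins for the 1983 survey data, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: corn, beans, carrots, broccoli (kg). All numbers are made up.
cost = np.array([0.20, 0.45, 0.30, 0.50])          # cost per kg
energy = np.array([3600.0, 3400.0, 410.0, 340.0])  # kcal per kg
protein = np.array([95.0, 215.0, 9.0, 28.0])       # g protein per kg

# Minimize cost subject to minimum energy/protein requirements.
# linprog uses A_ub @ x <= b_ub, so ">= requirement" is negated.
res = linprog(
    c=cost,
    A_ub=-np.vstack([energy, protein]),
    b_ub=-np.array([2200.0, 60.0]),  # hypothetical daily requirements
    bounds=[(0, None)] * 4,
)
print(res.x, res.fun)  # optimal quantities and minimum diet cost
```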
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent to those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
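As a hedged illustration of the optimal-α idea for the simplest case of a one-sided z-test with a prespecified, biologically relevant effect size (an assumption of this sketch; the paper covers more general settings and weighting choices):

```python
import numpy as np
from scipy.stats import norm

def optimal_alpha(effect, n, w_type1=1.0, w_type2=1.0):
    """Return the alpha minimizing the weighted sum of Type I and Type II
    error probabilities for a one-sided z-test. `effect` is the
    standardized effect size deemed biologically relevant."""
    alphas = np.linspace(1e-4, 0.5, 5000)
    # Type II error probability at each candidate alpha
    beta = norm.cdf(norm.ppf(1 - alphas) - effect * np.sqrt(n))
    total = w_type1 * alphas + w_type2 * beta
    return alphas[np.argmin(total)], total.min()

# toy usage: optimal alpha for a medium effect (d = 0.5) and n = 30
alpha_star, min_error = optimal_alpha(effect=0.5, n=30)
```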
2015 Summer Design Challenge: Team A&E (2241) Additively Manufactured Discriminator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Sarah E.; Moore, Brandon James
Current discriminator designs are based on historical designs and traditional manufacturing methods. The goal of this project was to form non-traditional groups to create novel discriminator designs by taking advantage of additive manufacturing. These designs would expand current discriminator designs and provide insight on the applicability of additive manufacturing for future projects. Our design stretched the current abilities of additive manufacturing and noted desired improvements for the future. Through collaboration with NSC, we noted several additional technologies which work well with additive manufacturing, such as topology optimization and CT scanning, and determined how these technologies could be improved to better combine with additive manufacturing.
Estimation of reflectance from camera responses by the regularized local linear model.
Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye
2011-10-01
Because of the limited approximation capability of using fixed basis functions, the performance of reflectance estimation obtained by traditional linear models will not be optimal. We propose an approach based on the regularized local linear model. Our approach performs efficiently and knowledge of the spectral power distribution of the illuminant and the spectral sensitivities of the camera is not needed. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
Remote sensing of coal mine pollution in the upper Potomac River basin
NASA Technical Reports Server (NTRS)
1974-01-01
A survey of remote sensing data pertinent to locating and monitoring sources of pollution resulting from surface and shaft mining operations was conducted in order to determine the various methods by which ERTS and aircraft remote sensing data can be used as a replacement for, or a supplement to traditional methods of monitoring coal mine pollution of the upper Potomac Basin. The gathering and analysis of representative samples of the raw and processed data obtained during the survey are described, along with plans to demonstrate and optimize the data collection processes.
Research on large equipment maintenance system in life cycle
NASA Astrophysics Data System (ADS)
Xu, Xiaowei; Wang, Hongxia; Liu, Zhenxing; Zhang, Nan
2017-06-01
In order to remedy the disadvantages of the traditional concept of large equipment maintenance, this article applies prognostics and health management (PHM) techniques to optimize the equipment maintenance strategy and develop a large equipment maintenance system. Combined with the maintenance procedures of the various phases of the life cycle, it describes methods for formulating maintenance programs and implementation plans for maintenance work. It also takes the dredger power system of the Waterway Bureau as an example to establish an auxiliary platform for a ship maintenance system over the life cycle.
NASA Astrophysics Data System (ADS)
Kohyama, Tetsu; Kaneko, Fumiya; Ly, Saksatha; Hamzik, James; Jaber, Jad; Yamada, Yoshiaki
2017-03-01
Weak-polar solvents like PGMEA (Propylene Glycol Monomethyl Ether Acetate) or CHN (Cyclohexanone) are used to dissolve hydrophobic photoresist polymers, which are challenging for traditional cleaning methods such as distillation, ion-exchange resins or water-washing processes. This paper investigated two novel surface modifications to assess their effectiveness at metal removal and to understand the mechanism. The experiments yielded effective purification methods for metal reduction, focusing on solvent polarities based on HSP (Hansen Solubility Parameters), and developing optimal purification strategies.
Numerical realization of the variational method for generating self-trapped beams.
Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A
2018-03-19
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2008-04-01
physical measurements of impulse response analysis, modulation transfer function (MTF) and noise power spectrum (NPS) (Months 5-12). This task has... and 2 impulse-added: projection images with simulated impulse and the 1/r2 shading difference. Other system blur and noise issues are not... blur, and suppressed high-frequency noise. Point-by-point BP rather than traditional SAA should be considered as the basis of further deblurring
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background: Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results: Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion: This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K. L.
2016-01-01
Unmanned aerial vehicles (UAVs) have been widely used in many industries. In the medical environment, especially in emergency situations, UAVs play an important role, such as supplying medicines and blood with speed and efficiency. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes maintaining the supplied blood's temperature during transportation, UAV scheduling and route planning when multiple sites request blood, and limited carrying capacity. Most importantly, we need to study the blood's temperature change due to the external environment, the heating agent (or refrigerant) and the time factor during transportation, and we propose an optimal method for calculating the mixing proportion of blood and appendage under different circumstances and delivery conditions. Then, by introducing the idea of a transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), a new problem is formulated according to the factors of distance and weight. Algorithmically, we use a combination of a decomposition-based multi-objective evolutionary algorithm and a local search method to perform a series of experiments on a public CVRP dataset. Compared with traditional techniques, our algorithm obtains better optimization results and time performance. PMID:27163361
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
Design of Clinical Support Systems Using Integrated Genetic Algorithm and Support Vector Machine
NASA Astrophysics Data System (ADS)
Chen, Yung-Fu; Huang, Yung-Fa; Jiang, Xiaoyi; Hsu, Yuan-Nian; Lin, Hsuan-Hung
Clinical decision support systems (CDSS) provide knowledge and specific information for clinicians to enhance diagnostic efficiency and improve healthcare quality. An appropriate CDSS can greatly elevate patient safety, improve healthcare quality, and increase cost-effectiveness. Support vector machines (SVM) are believed to be superior to traditional statistical and neural network classifiers. However, it is critical to determine a suitable combination of SVM parameters with regard to classification performance. Genetic algorithms (GA) can find the optimal solution within an acceptable time and are faster than greedy algorithms with an exhaustive searching strategy. By taking advantage of the GA in quickly selecting salient features and adjusting SVM parameters, a method using integrated GA and SVM (IGS), which differs from the traditional method of using the GA for feature selection and the SVM for classification, was used to design CDSSs for the prediction of successful ventilation weaning, the diagnosis of patients with severe obstructive sleep apnea, and the discrimination of different cell types from Pap smears. The results show that IGS is better than methods using the SVM alone or a linear discriminator.
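A minimal sketch of the integrated encoding, in which a single chromosome carries both the feature mask and the SVM hyperparameters so that selection and tuning evolve jointly, is given below; the dataset, population size, and mutation rates are illustrative stand-ins, not the paper's clinical data or settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)  # stand-in for clinical data
n_feat = X.shape[1]

def fitness(chrom):
    # chromosome = feature mask + log2(C) + log2(gamma), evaluated jointly
    mask = chrom[:n_feat].astype(bool)
    if not mask.any():
        return 0.0
    C, gamma = 2.0 ** chrom[n_feat], 2.0 ** chrom[n_feat + 1]
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

# tiny GA: tournament selection, uniform crossover, bit-flip/jitter mutation
pop = [np.concatenate([rng.integers(0, 2, n_feat), rng.uniform(-5, 5, 2)])
       for _ in range(20)]
for gen in range(15):
    scores = [fitness(c) for c in pop]
    new = []
    for _ in range(len(pop)):
        a, b = rng.choice(len(pop), 2, replace=False)
        p1 = pop[a] if scores[a] > scores[b] else pop[b]
        a, b = rng.choice(len(pop), 2, replace=False)
        p2 = pop[a] if scores[a] > scores[b] else pop[b]
        child = np.where(rng.random(n_feat + 2) < 0.5, p1, p2)
        flip = rng.random(n_feat) < 0.05           # mutate feature bits
        child[:n_feat] = np.abs(child[:n_feat] - flip)
        child[n_feat:] += rng.normal(0, 0.3, 2)    # jitter SVM parameters
        new.append(child)
    pop = new
best = max(pop, key=fitness)  # best joint feature-mask/parameter chromosome
```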
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.
2017-01-01
During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design fluctuate rapidly during this phase, there is nonetheless a need for accurate data from which to down-select designs, as these decisions will have a large impact on program life-cycle cost. Enabling the conceptual designer to produce accurate data in a timely manner is therefore essential to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry-standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal, requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like, which can result in sub-optimal performance being recorded. Thus, in any case but the ideal, either time or accuracy is sacrificed. In the authors' previous work, a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. However, without the instincts of a human in the loop, this method relied upon Monte Carlo simulation to find successful trajectories. Overall the method has mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize, whose value is represented as optval; a set of dependent constraints to meet, with associated forms and tolerances, whose value is represented as p2; and a set of independent variables, known as the u-vector, to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region, which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within it, has traditionally required an expert analyst.
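The "artificial p2" idea can be sketched as a surrogate objective returned when a candidate u-vector fails to connect, so that an external optimizer can still rank failed runs by how close they came to connecting. The fields on `result` below are hypothetical; the real POST output format differs.

```python
def artificial_p2(result, target_events, scale=1e3):
    """Hedged sketch of an 'artificial p2' for non-connecting POST runs.
    `result` is assumed to expose which trajectory events were reached
    and a residual toward the next event's trigger condition
    (hypothetical fields, for illustration only)."""
    if result.connected:
        # the run connected: the internal optimizer's own p2 applies
        return result.p2
    # penalize missing events heavily, then by distance to the next
    # event's trigger so 'almost connecting' runs score better
    missed = len(target_events) - result.events_reached
    return scale * missed + abs(result.next_event_residual)
```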
Real-Time GNSS-Based Attitude Determination in the Measurement Domain.
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-02-05
A multi-antenna GNSS receiver is capable of providing a high-precision, drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. Traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, and the redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution is increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with traditional attitude determination methods, the static experimental results show that the proposed method can improve accuracy by at least 0.03° and enhance continuity by up to 18%. The kinematic results show that the proposed method achieves an optimal balance between accuracy and reliability.
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional water quality sampling is time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing offers good temporal resolution, broad spatial coverage and rich spectral information, and thus has good potential for water quality supervision. Via a semi-analytical method, the remote sensing signal can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model of the water column is established to analyze the features of the water. Using the stochastic optimization algorithm Threshold Accepting, a global optimization of the unknown model parameters can be performed to obtain the distributions of chlorophyll, dissolved organic matter and suspended particles in the water. By improving the search step of the optimization algorithm, the processing time is markedly reduced, creating room to increase the number of parameters. With refined definitions of the optimization steps and acceptance criterion, the whole inversion process becomes more targeted, thus improving the accuracy of the inversion. Based on application results for simulated data provided by IOCCG and field data provided by NASA, the model was continuously improved and enhanced. Finally, a low-cost, effective model for retrieving water quality from hyperspectral remote sensing can be achieved.
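A minimal Threshold Accepting sketch follows; in the application described above, `cost` would wrap the semi-analytical forward model's misfit to the observed hyperspectral reflectance and `x0` the inherent optical properties (chlorophyll, dissolved organics, suspended particles), all illustrated here by a toy quadratic.

```python
import numpy as np

def threshold_accepting(cost, x0, step, thresholds, iters_per_level=200, seed=0):
    """Minimal Threshold Accepting loop: accept any candidate whose cost
    increase stays below the current threshold; thresholds decrease to 0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    best, fbest = x.copy(), fx
    for T in thresholds:                      # decreasing threshold schedule
        for _ in range(iters_per_level):
            cand = x + rng.normal(0.0, step, size=x.shape)
            fc = cost(cand)
            if fc - fx < T:                   # accept mild deteriorations too
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x.copy(), fx
    return best, fbest

# toy usage on a quadratic bowl standing in for the forward-model misfit
best, f = threshold_accepting(lambda v: float(np.sum(v ** 2)),
                              x0=[2.0, -1.5], step=0.2,
                              thresholds=[1.0, 0.3, 0.1, 0.0])
```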
Carroll, Sean Michael; Chubiz, Lon M.; Agashe, Deepa; Marx, Christopher J.
2015-01-01
Bioengineering holds great promise to provide fast and efficient biocatalysts for methanol-based biotechnology, but necessitates proven methods to optimize physiology in engineered strains. Here, we highlight experimental evolution as an effective means for optimizing an engineered Methylobacterium extorquens AM1. Replacement of the native formaldehyde oxidation pathway with a functional analog substantially decreased growth in an engineered Methylobacterium, but growth rapidly recovered after six hundred generations of evolution on methanol. We used whole-genome sequencing to identify the basis of adaptation in eight replicate evolved strains, and examined genomic changes in light of other growth and physiological data. We observed great variety in the numbers and types of mutations that occurred, including instances of parallel mutations at targets that may have been “rationalized” by the bioengineer, plus other “illogical” mutations that demonstrate the ability of evolution to expose unforeseen optimization solutions. Notably, we investigated mutations to RNA polymerase, which provided a massive growth benefit but are linked to highly aberrant transcriptional profiles. Overall, we highlight the power of experimental evolution to present genetic and physiological solutions for strain optimization, particularly in systems where the challenges of engineering are too many or too difficult to overcome via traditional engineering methods. PMID:27682084
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil-beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of these deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam over a range of working distances for ultra-high-accuracy metrology. If an f-theta system is not well designed, its aberrations will introduce systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppresses the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
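The angle-to-position criterion itself is easy to state: an ideal f-theta system obeys y = f·θ, so fitting f by least squares and inspecting the residuals exposes the conversion error. A toy sketch, with a cubic aberration term standing in for ZEMAX ray-trace data:

```python
import numpy as np

# Hypothetical ray-traced image positions: linear term plus a cubic
# aberration standing in for real ZEMAX output.
theta = np.linspace(-0.05, 0.05, 41)            # incidence angle, rad
y_traced = 200.0 * theta + 5.0 * theta ** 3     # image position, mm

f_fit = np.sum(theta * y_traced) / np.sum(theta ** 2)  # least-squares focal length
conversion_error = y_traced - f_fit * theta            # residual = aberration
print(f_fit, np.max(np.abs(conversion_error)))
```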
Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue
Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan
2015-01-01
Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, and these images usually have a low signal-to-noise ratio. This makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector more efficiently with almost equal estimation quality compared to the traditional IFSA method. PMID:25873987
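A hedged sketch of firefly-based block matching follows: each firefly is a candidate displacement, brightness is the (negated) sum of absolute differences between blocks, and dimmer fireflies move toward brighter ones. Block size, search range, and the beta0/gamma/alpha constants are illustrative, not the paper's settings.

```python
import numpy as np

def sad(ref, tgt, x, y, u, v, B=16):
    """Sum of absolute differences between the reference block at (x, y)
    and the target block displaced by (u, v); out-of-frame moves cost inf."""
    u, v = int(round(u)), int(round(v))
    ok = (0 <= y + v and y + v + B <= tgt.shape[0]
          and 0 <= x + u and x + u + B <= tgt.shape[1])
    if not ok:
        return np.inf
    a = ref[y:y + B, x:x + B].astype(float)
    b = tgt[y + v:y + v + B, x + u:x + u + B].astype(float)
    return float(np.abs(a - b).sum())

def firefly_motion(ref, tgt, x, y, n=10, iters=20, beta0=1.0, gamma=0.1,
                   alpha=1.0, seed=0):
    """Firefly search over displacement space: brighter fireflies
    (lower SAD) attract dimmer ones; a damped random walk adds exploration."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-7.0, 7.0, size=(n, 2))       # candidate (u, v) vectors
    for _ in range(iters):
        cost = np.array([sad(ref, tgt, x, y, *p) for p in pos])
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:               # firefly j outshines i
                    r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                    pos[i] += (beta0 * np.exp(-gamma * r2) * (pos[j] - pos[i])
                               + alpha * rng.uniform(-0.5, 0.5, 2))
        alpha *= 0.9                                # damp the random walk
    cost = np.array([sad(ref, tgt, x, y, *p) for p in pos])
    return pos[int(np.argmin(cost))]                # best (u, v) found
```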
[Investigation on Spray Drying Technology of Auricularia auricula Extract].
Zhou, Rong; Chen, Hui; Xie, Yuan; Chen, Peng; Wang, Luo-lin
2015-07-01
The aim was to investigate the feasibility of spray-drying technology for Auricularia auricula extract and its optimal process parameters. On the basis of single-factor tests, with the yield of dry extract and the content of polysaccharide as indexes, an orthogonal test was used to optimize the spray-drying process with respect to inlet air temperature, injection speed and crude drug content. Using ultraviolet spectrophotometry, thin-layer chromatography (TLC) and pharmacodynamics as indicators, extracts prepared by the traditional alcohol-precipitation drying process and the spray-drying process were compared. Compared with the traditional preparation method, the spray-dried extract differed little in polysaccharide content, TLC profile, and TG- and TC-lowering activity, and the optimal conditions were as follows: inlet air temperature 180 °C, injection speed 10 mL/min and crude drug content 0.4 g/mL. The spray-drying technology for Auricularia auricula extract is stable and feasible, with high economic benefit.
Research on Rigid Body Motion Tracing in Space based on NX MCD
NASA Astrophysics Data System (ADS)
Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang
2018-03-01
In MCD (Mechatronics Concept Designer), a module of the SIEMENS industrial design software UG (Unigraphics NX), the user can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, the user may wish to see the path of certain points on the moving object intuitively. In response to this requirement, this paper computes the pose from the transformation matrices available from the solver engine, and then fits the sampled points with a B-spline curve. Meanwhile, combined with the actual constraints on the rigid bodies, the traditional equal-interval sampling strategy is optimized. The results show that this method satisfies the demand and makes up for the deficiencies of the traditional sampling method. The user can still edit and model on the resulting 3D curve. The expected result has been achieved.
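A small sketch of the two steps described above, mapping a body-fixed point through sampled homogeneous transforms and fitting a parametric B-spline through the traced positions (scipy's splprep/splev), is given below; the transform list is a hypothetical stand-in for the matrices read from the MCD solver engine.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def trace_point(transforms, local_point):
    """Map a point fixed on the rigid body through each sampled 4x4 pose.
    `transforms` stands in for the homogeneous matrices obtained from
    the solver engine at the sampling instants."""
    p = np.append(np.asarray(local_point, dtype=float), 1.0)  # homogeneous
    return np.array([T @ p for T in transforms])[:, :3]

def fit_path(points, n_out=200, smooth=0.0):
    """Fit a cubic parametric B-spline through the traced positions and
    resample it densely (needs at least four sampled poses)."""
    tck, _ = splprep(points.T, s=smooth)        # parametric spline fit
    u = np.linspace(0.0, 1.0, n_out)
    return np.array(splev(u, tck)).T            # (n_out, 3) curve points
```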
An effective parameter optimization with radiation balance constraints in the CAM5
NASA Astrophysics Data System (ADS)
Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.
2017-12-01
Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so the simulation results with the optimal parameters may not meet conditions that the model must maintain. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take synthesized metrics based on global means of radiation, precipitation, relative humidity, and temperature as the optimization objective, while treating the conditions that FLUT and FSNTOA must satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m-2 in CAM5. Experimental results show that the synthesized metric is 13.6% better than the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied and is clearly better than the annual-mean FLUT obtained with the default parameters. The FSNTOA deviates slightly from the observed value, but the relative error is less than 7.7‰.
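In the spirit of the Lagrange multiplier treatment, the tuning problem can be prototyped as a constrained minimization; SLSQP solves the same stationarity conditions internally. Everything below is a toy surrogate (a quadratic skill metric and a linear FLUT response): a real evaluation would be a CAM5 run, and only the 240 W m-2 target comes from the text.

```python
import numpy as np
from scipy.optimize import minimize

def skill(p):
    # Toy synthesized metric to minimize (lower = better fit to observations).
    return (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2

def flut(p):
    # Toy outgoing-longwave-flux surrogate, W m-2; a stand-in for the GCM.
    return 235.0 + 4.0 * p[0] + 2.0 * p[1]

cons = [{"type": "eq", "fun": lambda p: flut(p) - 240.0}]  # radiation balance
res = minimize(skill, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x, flut(res.x))  # tuned parameters satisfy FLUT ~= 240 W m-2
```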
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the possibility of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the assurance of reaching a globally optimal solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
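The enabling primitive, independent of the ripple-spreading search itself, is the dominance filter applied to the union of the k best single-objective solutions. A sketch for minimization objectives, where each entry pairs an objective tuple with its payload (e.g. a route):

```python
def pareto_front(solutions):
    """Exact non-dominated filter for minimization objectives.

    `solutions` is a list of (objective_tuple, payload) pairs, e.g. the
    union of the k best routes found for each single objective."""
    front = []
    for f, item in solutions:
        dominated = any(all(g[i] <= f[i] for i in range(len(f))) and g != f
                        for g, _ in solutions)
        if not dominated:
            front.append((f, item))
    return front

# Example: (cost, time) pairs; only (1, 9), (3, 4) and (5, 2) survive.
print(pareto_front([((1, 9), "A"), ((3, 4), "B"), ((5, 2), "C"), ((6, 5), "D")]))
```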
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model combines the grey prediction model with a Markov chain and shows clear advantages for data sequences that are non-stationary and volatile. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of each observation belonging to each state, reflecting preference degrees over the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
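For illustration, a central-point triangular whitenization weight peaks at a state's central point and decays linearly to the neighbouring centers, so a residual near a state boundary receives graded possibilities in two states rather than a hard assignment. The state centers below are hypothetical, not the paper's values.

```python
def triangular_weight(x, left, center, right):
    """Central-point triangular whitenization weight: 1 at the state's
    central point, falling linearly to 0 at the neighbouring centers."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

centers = [-0.09, -0.03, 0.03, 0.09]   # hypothetical state centers
x = -0.05                              # observed relative residual
for i in range(1, len(centers) - 1):   # interior states; end states need
    w = triangular_weight(x, centers[i-1], centers[i], centers[i+1])
    print(f"state {i}: possibility {w:.2f}")  # half-open handling in practice
```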
Andrews, J R
1981-01-01
Two methods dominate cancer treatment--one, the traditional best-practice, individualized treatment method and two, the a priori determined decision method of the interinstitutional, cooperative clinical trial. In the first, choices are infinite and can be made at the time of treatment; in the second, choices are finite and are made in advance of treatment on a random basis. Neither method systematically selects, identifies, or formalizes the optimum level of effect in the treatment chosen. Of the two, it can be argued that the first, other things being equal, is more likely to select the optimum treatment. The determination of the level of effect for the optimization of cancer treatment requires the generation of dose-response relationships for both benefit and risk and the introduction of benefit and risk considerations and judgements. The clinical trial, as presently constituted, does not yield this kind of information, being, generally, of the binary yes-or-no, better-or-worse type. The best-practice, individualized treatment method can yield, when adequately documented, both a range of dose-response relationships and a variety of benefit and risk considerations. The presentation will be limited to a consideration of a single modality of cancer treatment, radiation therapy, but an analogy with other modalities of cancer treatment will be inferred. Criteria for optimization will be developed, and graphic means for its identification and formalization will be demonstrated with examples taken from the radiotherapy literature. The general problem of optimization theory and practice will be discussed; the necessity for its exploration in relation to the increasing complexity of cancer treatment will be developed; and recommendations for clinical research will be made, including a proposal for the support of clinics as an alternative to the support of programs.
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. The methodology is based on finding the power spectral density (PSD) of a given modal coordinate and then dividing the modal PSD into separate regions, the left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm, and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectrum of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm, and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios of a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and were found to be more accurate and more reliable, even for modes whose PSDs were distorted or altered by driving frequencies.
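The key idea can be imitated with ordinary least squares in place of the paper's pattern-search-plus-clustering stages: fit a single-mode PSD shape to only one side of the peak, so a traffic driving frequency sitting on the other side cannot bias the damping estimate. The SDOF shape, initial guess, and bounds below are generic assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def sdof_psd(w, a, wn, zeta):
    """Single-mode PSD shape: white-noise input through an SDOF oscillator."""
    return a / ((wn**2 - w**2)**2 + (2.0 * zeta * wn * w)**2)

def fit_left_side(w, psd, wn0):
    """Fit only the left-side spectrum of the modal peak at ~wn0 (rad/s),
    so distortion on the right side does not bias the damping estimate."""
    side = w <= wn0
    p, _ = curve_fit(sdof_psd, w[side], psd[side],
                     p0=[psd.max() * 0.01 * wn0**4, wn0, 0.02],
                     bounds=([0, 0.5 * wn0, 1e-4], [np.inf, 1.5 * wn0, 0.2]))
    return p[2]  # damping ratio estimate
```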
NASA Astrophysics Data System (ADS)
Li, Leihong
A modular structural design methodology for composite blades is developed. This design method can be used to design composite rotor blades with sophisticated geometric cross-sections. The method hierarchically decomposes the highly coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted based on multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used to obtain the equivalent one-dimensional beam properties. Compared with the traditional design methodology, the proposed method is more efficient and accurate. The proposed method is then used to study three design problems that have not been investigated before. The first is adding manufacturing constraints to the design optimization. The introduction of manufacturing constraints complicates the optimization process; however, a design that satisfies manufacturing constraints benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines fatigue analysis with the optimization process. The durability or fatigue analysis employs a strength-based model, and the design is subject to stiffness, frequency, and durability constraints. Finally, the impacts of manufacturing uncertainty on rotor blade aeroelastic behavior are investigated, and a probabilistic design method is proposed to control the impacts of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, thus gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert the constrained problem into an unconstrained one by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
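As a concrete illustration, the most common conversion is a static exterior penalty that leaves feasible designs untouched and adds a violation term for infeasible ones; this is only one of the families such studies compare, and the coefficients below are arbitrary.

```python
def penalized_fitness(objective, constraints, r=1e3, beta=2.0):
    """Static exterior penalty: feasible designs keep their objective value,
    infeasible ones pay for their summed constraint violations.

    `constraints` are functions g_i with feasibility meaning g_i(x) <= 0;
    r and beta control the penalty severity."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) ** beta for g in constraints)
        return objective(x) + r * violation
    return fitness

# Example: minimize a weight-like objective subject to x0 + x1 >= 1.
f = penalized_fitness(lambda x: x[0]**2 + x[1]**2,
                      [lambda x: 1.0 - (x[0] + x[1])])
print(f([0.5, 0.5]), f([0.1, 0.1]))  # feasible vs. heavily penalized
```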
Robotic Online Path Planning on Point Cloud.
Liu, Ming
2016-05-01
This paper deals with the path-planning problem for mobile wheeled or tracked robots that drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show by experiment that the geodesic in the 3-D tensor space leads to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance.
Residential roof condition assessment system using deep learning
NASA Astrophysics Data System (ADS)
Wang, Fan; Kerekes, John P.; Xu, Zhuoyi; Wang, Yandong
2018-01-01
The emergence of high resolution (HR) and ultra high resolution (UHR) airborne remote sensing imagery is enabling humans to move beyond traditional land cover analysis applications to the detailed characterization of surface objects. A residential roof condition assessment method using techniques from deep learning is presented. The proposed method operates on individual roofs and divides the task into two stages: (1) roof segmentation, followed by (2) condition classification of the segmented roof regions. As the first step in this process, a self-tuning method is proposed to segment the images into small homogeneous areas. The segmentation is initialized with simple linear iterative clustering followed by deep learned feature extraction and region merging, with the optimal result selected by an unsupervised index, Q. After the segmentation, a pretrained residual network is fine-tuned on the augmented roof segments using a proposed k-pixel extension technique for classification. The effectiveness of the proposed algorithm was demonstrated on both HR and UHR imagery collected by EagleView over different study sites. The proposed algorithm has yielded promising results and has outperformed traditional machine learning methods using hand-crafted features.
Lin, Cheng Yu; Kikuchi, Noboru; Hollister, Scott J
2004-05-01
An often-proposed tissue engineering design hypothesis is that the scaffold should provide a biomimetic mechanical environment for initial function and appropriate remodeling of regenerating tissue while concurrently providing sufficient porosity for cell migration and cell/gene delivery. To provide a systematic study of this hypothesis, the ability to precisely design and manufacture biomaterial scaffolds is needed. Traditional methods for scaffold design and fabrication cannot provide the control over scaffold architecture design to achieve specified properties within fixed limits on porosity. The purpose of this paper was to develop a general design optimization scheme for 3D internal scaffold architecture to match desired elastic properties and porosity simultaneously, by introducing the homogenization-based topology optimization algorithm (also known as general layout optimization). With an initial target for bone tissue engineering, we demonstrate that the method can produce highly porous structures that match human trabecular bone anisotropic stiffness using accepted biomaterials. In addition, we show that anisotropic bone stiffness may be matched with scaffolds of widely different porosity. Finally, we also demonstrate that prototypes of the designed structures can be fabricated using solid free-form fabrication (SFF) techniques.
Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2006-01-01
Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design, where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding the global optima of multi-modal functions and of searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources, since multiple function evaluations (flow simulations in aerodynamic design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture focuses on an evolutionary method that is a relatively new member of the general class of evolutionary methods, called differential evolution (DE). This method is easy to use and program, and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will, of course, yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single- and multiple-objective optimization are included in the presentation and lecture notes. A method for aerodynamic design optimization based on neural networks is also included as part of this lecture. The method offers advantages over traditional optimization methods. It is more flexible than other methods in dealing with design in the context of both steady and unsteady flows, partial and complete data sets, combined experimental and numerical data, inclusion of various constraints and rules of thumb, and other issues that characterize the aerodynamic design process. Neural networks provide a natural framework within which a succession of numerical solutions of increasing fidelity, incorporating more realistic flow physics, can be represented and utilized for optimization. Neural networks also offer an excellent framework for multiple-objective and multidisciplinary design optimization. Simulation tools from various disciplines can be integrated within this framework, and rapid trade-off studies involving one or many disciplines can be performed. The prospect of combining neural-network-based optimization methods and evolutionary algorithms to obtain a hybrid method with the best properties of both is included in this presentation. Achieving solution diversity and accurate convergence to the exact Pareto front in multiple-objective optimization usually requires a significant computational effort with evolutionary algorithms. In this lecture we also explore the possibility of using neural networks to obtain estimates of the Pareto optimal front, using non-dominated solutions generated by DE as training data. Neural network estimators have the potential advantage of reducing the number of function evaluations required to obtain solution accuracy and diversity, thus reducing design cost.
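SciPy ships a ready-made implementation, which makes the "few user-specified constants" point concrete: essentially a differential weight, a crossover probability, and a population size. A usage sketch on a standard test function follows (an aerodynamic application would substitute a flow-solver objective):

```python
from scipy.optimize import differential_evolution, rosen

# Rosenbrock in 4-D: multimodal enough to exercise DE's global search.
result = differential_evolution(
    rosen,
    bounds=[(-5.0, 5.0)] * 4,
    mutation=(0.5, 1.0),   # F: differential weight, dithered per generation
    recombination=0.7,     # CR: crossover probability
    popsize=15,            # 15 * n_params candidate vectors per generation
    tol=1e-8,
    seed=1,
    workers=-1,            # candidate evaluations run in parallel
)
print(result.x, result.fun)  # expect x ~= [1, 1, 1, 1], f ~= 0
```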
Life cycle assessment and economic analysis of a low concentrating photovoltaic system.
De Feo, G; Forni, M; Petito, F; Renno, C
2016-10-01
Many new photovoltaic (PV) applications, such as concentrating PV (CPV) systems, are appearing on the market. The main characteristic of CPV systems is to concentrate sunlight on a receiver by means of optical devices and thus decrease the solar cell area required. A low CPV (LCPV) system allows the PV effect to be optimized, with a large increase in generated electric power as well as a decrease in active surface area. In this paper, an economic analysis and a life cycle assessment (LCA) study of a particular LCPV scheme are presented, and its environmental impacts are compared with those of a traditional PV system. The LCA study was performed with the software tool SimaPro 8.0.2, using the ecoinvent 3.1 database. A functional unit of 1 kWh of electricity produced was chosen. Carbon Footprint, Ecological Footprint and ReCiPe 2008 were the methods used to assess the environmental impacts of the LCPV plant compared with a corresponding traditional system. All the methods demonstrated the environmental advantage of the LCPV system. The innovative system saved 16.9% of CO2 equivalent in comparison with the traditional PV plant. The saving in environmental impacts was 17% in terms of Ecological Footprint and, finally, 15.8% with the ReCiPe method.
The pre-image problem in kernel methods.
Kwok, James Tin-yau; Tsang, Ivor Wai-hung
2004-11-01
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on kernel PCA and kernel clustering on the USPS data set show much improved performance.
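The core of the distance-constraint idea can be sketched for a Gaussian RBF kernel, where feature-space similarities invert in closed form to input-space distances, and the pre-image is then located against a few neighbours by a standard multilateration least-squares step. This omits the paper's feature-space centering details and is an assumption-laden sketch, not the authors' exact algorithm.

```python
import numpy as np

def preimage_from_distances(X_nb, d2):
    """Locate a point from squared input-space distances d2 to neighbour
    rows X_nb via the classic multilateration least-squares construction:
    subtracting the first distance equation from the others is linear in x."""
    x0 = X_nb[0]
    A = 2.0 * (X_nb[1:] - x0)
    b = (d2[0] - d2[1:]) + ((X_nb[1:] ** 2).sum(1) - (x0 ** 2).sum())
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def gaussian_preimage(X_nb, k_vals, sigma2):
    """Pre-image for k(x, y) = exp(-||x - y||^2 / (2 * sigma2)): the
    similarities k_vals to the neighbours invert in closed form to
    input-space squared distances, so no iterative optimization (and
    hence no local-minimum issue) is involved."""
    d2 = -2.0 * sigma2 * np.log(np.clip(k_vals, 1e-12, None))
    return preimage_from_distances(X_nb, d2)
```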
Mean-Reverting Portfolio With Budget Constraint
NASA Astrophysics Data System (ADS)
Zhao, Ziping; Palomar, Daniel P.
2018-05-01
This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
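As one concrete design in this family (a substitution for illustration, not the authors' formulation), a Box-Tiao-style criterion picks the portfolio whose spread is least predictable under a VAR(1) fit, which reduces to a generalized eigenproblem; the budget constraint is imposed here by a naive rescaling to w'1 = 1.

```python
import numpy as np
from scipy.linalg import eigh

def mean_reverting_weights(Y):
    """Box-Tiao-style proxy: fit VAR(1) y_t = A y_{t-1} + e_t and pick the
    portfolio w minimizing the predictability w'(A S A')w / w'S w, i.e.
    the least predictable (most mean-reverting) spread. Requires the
    sample covariance S to be positive definite."""
    Y0, Y1 = Y[:-1], Y[1:]
    A = np.linalg.lstsq(Y0, Y1, rcond=None)[0].T   # VAR(1) coefficient matrix
    S = np.cov(Y, rowvar=False)                    # asset covariance
    M = A @ S @ A.T
    vals, vecs = eigh(M, S)                        # generalized eigenproblem
    w = vecs[:, 0]                                 # smallest predictability
    return w / w.sum()                             # crude unit-budget rescale
```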
Adaptive control for solar energy based DC microgrid system development
NASA Astrophysics Data System (ADS)
Zhang, Qinhao
During the upgrading of the current electric power grid, it is expected that smarter, more robust and more reliable power systems integrated with distributed generation will be developed. To realize these objectives, traditional control techniques are no longer effective in either stabilizing systems or delivering optimal and robust performance. Therefore, the development of advanced control methods has received increasing attention in power engineering. This work addresses two specific problems in the control of solar-panel-based microgrid systems. First, a new control scheme is proposed for microgrid systems to achieve an optimal energy conversion ratio in the solar panels. The control system optimizes the efficiency of the maximum power point tracking (MPPT) algorithm by implementing two layers of adaptive control. Such a hierarchical control architecture greatly improves system performance, which is validated through both mathematical analysis and computer simulation. Second, in the development of the microgrid transmission system, issues related to telecommunication delay and the negative incremental impedance of constant power loads (CPLs) are investigated. A reference-model-based method is proposed for pole and zero placement that addresses the challenges of time delay and CPLs in closed-loop control. The effectiveness of the proposed modeling and control design methods is demonstrated in a simulation testbed. Practical aspects of the proposed methods for general microgrid systems are also discussed.
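For orientation, the tracker underneath such a scheme is often a perturb-and-observe hill climb; the adaptive layers described above would then tune quantities like the perturbation step. The sketch below is only the generic P&O core with a hypothetical fixed step, not the thesis's hierarchical controller.

```python
def perturb_and_observe(v_meas, p_meas, state, step=0.5):
    """One iteration of the classic perturb-and-observe MPPT hill climb:
    keep perturbing the operating voltage in the direction that increased
    panel power, reverse otherwise. `state` carries (v_prev, p_prev, dir)."""
    v_prev, p_prev, direction = state
    if p_meas < p_prev:            # last perturbation lowered power: reverse
        direction = -direction
    v_ref = v_meas + direction * step
    return v_ref, (v_meas, p_meas, direction)

# Usage inside a control loop (hypothetical sensor reads):
state = (0.0, 0.0, +1)
# v_ref, state = perturb_and_observe(read_voltage(), read_power(), state)
```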
Modeling and optimization of dough recipe for breadsticks
NASA Astrophysics Data System (ADS)
Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.
2018-05-01
In this work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks, applying mathematical methods of experiment planning. The main factors chosen were the dosages of flaxseed flour and grape seed oil. The output parameters were the swelling factor of the products and their strength. Optimization of the formulation composition of the dough for breadsticks was carried out by experimental-statistical methods. As a result of the experiment, mathematical models were constructed in the form of regression equations adequately describing the process under study. Statistical processing of the experimental data was carried out using the Student, Cochran and Fisher criteria (with a confidence probability of 0.95). A mathematical interpretation of the regression equations was given. Optimization of the dough formulation for breadsticks was carried out by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: a flaxseed flour dosage of 14.22% and a grape seed oil dosage of 7.8%, ensuring products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a production method for the breadsticks "Idea" were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency,[Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e. ε →0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
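In its uncorrelated form the Lancaster procedure is a weighted generalization of Fisher's method: each gene-level p-value passes through an inverse chi-square with its own degrees of freedom before summation. A minimal sketch (the paper's correlated version additionally adjusts the null distribution):

```python
import numpy as np
from scipy.stats import chi2

def lancaster(pvals, weights):
    """Lancaster combination: transform each p-value through an inverse
    chi-square with its own degrees of freedom (the weight), sum, and
    refer the total to a chi-square with the summed d.f. Fisher's method
    is the special case with all weights equal to 2. Independence of the
    p-values is assumed here."""
    pvals, weights = np.asarray(pvals), np.asarray(weights)
    t = chi2.isf(pvals, df=weights).sum()
    return chi2.sf(t, df=weights.sum())

# Example: three gene-level SKAT p-values weighted by gene size.
print(lancaster([0.01, 0.20, 0.03], [4, 2, 6]))
```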
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
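A bare-bones PSO loop shows the mechanics being exploited (self-learning via personal and global bests). In the paper each likelihood evaluation is a full SAG run against the observational constraints; the inertia and acceleration constants here are common textbook defaults, not the calibrated ones.

```python
import numpy as np

def pso(loglike, bounds, n_part=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm maximizing a likelihood over box bounds.
    Each particle is a candidate parameter set for the model."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_part, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([loglike(p) for p in x])
    g = pbest[pval.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loglike(p) for p in x])
        better = f > pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmax()]
    return g, pval.max()
```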
Design and optimization of a modal-independent linear ultrasonic motor.
Zhou, Shengli; Yao, Zhiyuan
2014-03-01
To simplify the design of the linear ultrasonic motor (LUSM) and improve its output performance, a method of modal decoupling for LUSMs is proposed in this paper. The specific embodiment of this method is the decoupling of the traditional LUSM stator's complex vibration into two simple vibrations, with each vibration implemented by one vibrator. Because the two vibrators are designed independently, their frequencies can be tuned independently and frequency consistency is easy to achieve. Thus, the method can simplify the design of the LUSM. Based on this method, a prototype modal-independent LUSM is designed and fabricated. The motor reaches its maximum thrust force of 47 N, maximum unloaded speed of 0.43 m/s, and maximum power of 7.85 W at an applied voltage of 200 Vpp. The motor's structure is then optimized by controlling the difference between the two vibrators' resonance frequencies to reach larger output speed, thrust, and power. The optimized results show that when the frequency difference is 73 Hz, the output force, speed, and power reach their maximum values. At an input voltage of 200 Vpp, the motor reaches its maximum thrust force of 64.2 N, maximum unloaded speed of 0.76 m/s, maximum power of 17.4 W, maximum thrust-weight ratio of 23.7, and maximum efficiency of 39.6%.
Applications of thin-film sandwich crystallization platforms.
Axford, Danny; Aller, Pierre; Sanchez-Weatherby, Juan; Sandy, James
2016-04-01
Examples are shown of protein crystallization in, and data collection from, solutions sandwiched between thin polymer films using vapour-diffusion and batch methods. The crystallization platform is optimal for both visualization and in situ data collection, with the need for traditional harvesting being eliminated. In wells constructed from the thinnest plastic and with a minimum of aqueous liquid, flash-cooling to 100 K is possible without significant ice formation and without any degradation in crystal quality. The approach is simple; it utilizes low-cost consumables but yields high-quality data with minimal sample intervention and, with the very low levels of background X-ray scatter that are observed, is optimal for microcrystals.
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
Efficient computation of optimal actions.
Todorov, Emanuel
2009-07-14
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
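The "problem becomes linear" claim can be made concrete in the discrete average-cost setting: with passive dynamics P and state cost q, the exponentiated value function (desirability) is the principal eigenvector of a linear operator, and the optimal action law follows by reweighting P. A sketch under those assumptions (irreducible dynamics, average-cost formulation):

```python
import numpy as np

def solve_lmdp(P, q, iters=200):
    """Linearly-solvable MDP: the desirability z = exp(-v) satisfies the
    *linear* relation z = exp(-q) * (P @ z), solved here by power
    iteration on the operator diag(exp(-q)) @ P. The optimal controlled
    transition is u*(s'|s) proportional to P[s, s'] * z[s'], so no
    exhaustive search over actions is needed."""
    z = np.ones(len(q))
    G = np.diag(np.exp(-q))
    for _ in range(iters):
        z = G @ (P @ z)
        z /= z.max()                 # normalize each sweep
    policy = P * z                   # unnormalized u*(s'|s)
    return z, policy / policy.sum(axis=1, keepdims=True)
```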
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, fitted to the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu's method, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
Smart LED lighting for major reductions in power and energy use for plant lighting in space
NASA Astrophysics Data System (ADS)
Poulet, Lucie
Launching or resupplying food, oxygen, and water into space for long-duration, crewed missions to distant destinations, such as Mars, is currently impossible. Bioregenerative life-support systems under development worldwide involving photoautotrophic organisms offer a solution to the food dilemma. However, using traditional Earth-based lighting methods, growth of food crops consumes copious energy, and since sunlight will not always be available at different space destinations, efficient electric lighting solutions are badly needed to reduce the Equivalent System Mass (ESM) of life-support infrastructure to be launched and transported to future space destinations with sustainable human habitats. The scope of the present study was to demonstrate that using LEDs coupled to plant detection, and optimizing spectral and irradiance parameters of LED light, the model crop lettuce (
Energy minimization on manifolds for docking flexible molecules
Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima
2015-01-01
In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
Zhang, Ding-Kun; Han, Xue; Tan, Peng; Li, Rui-Yu; Niu, Ming; Zhang, Cong-En; Wang, Jia-Bo; Yang, Ming; Xiao, Xiao-He
2017-01-01
Aconite is a valuable drug but also a toxic material that can be used only after detoxification processing. Although traditional processing methods can achieve the desired detoxification effect, they have obvious drawbacks, including a significant loss of alkaloids and poor quality consistency. It is thus necessary to develop a new detoxification approach. In the present study, we designed a novel one-step detoxification approach by quickly drying fresh-cut aconite particles. In order to evaluate the technical advantages, the contents of mesaconitine, aconitine, hypaconitine, benzoylmesaconine, benzoylaconine, benzoylhypaconine, neoline, fuziline, songorine, and talatisamine were determined using HPLC and UHPLC/Q-TOF-MS. Multivariate analysis methods, such as cluster analysis and principal component analysis, were applied to determine the quality differences between samples. Our results showed that traditional processes could reduce toxicity as desired, but also led to more than 85.2% alkaloid loss. In contrast, our novel one-step method was capable of achieving virtually the same detoxification effect with only an approximately 30% alkaloid loss. Cluster analysis and principal component analysis suggested that Shengfupian and the novel products were significantly different from the various traditional products. Acute toxicity testing showed that the novel products achieved a good detoxification effect, their maximum tolerated dose being equivalent to 20 times the adult dosage. Cardiac effect testing also showed that the activity of the novel products was stronger than that of the traditional products. Moreover, the particle specification greatly improved the quality consistency of the novel products, which was immensely superior to that of the traditional products. These results should help guide the rational optimization of aconite processing technologies, providing better drugs for clinical treatment. Copyright © 2017 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min
2018-04-01
Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.
Research on UAV Intelligent Obstacle Avoidance Technology During Inspection of Transmission Line
NASA Astrophysics Data System (ADS)
Wei, Chuanhu; Zhang, Fei; Yin, Chaoyuan; Liu, Yue; Liu, Liang; Li, Zongyu; Wang, Wanguo
Autonomous obstacle avoidance of unmanned aerial vehicles (hereinafter referred to as UAVs), as a main component of an intelligent UAV inspection system for transmission lines, is of great significance for the safety and economy of power line inspection. In this paper, the principles of UAV obstacle avoidance for transmission line inspection are introduced. After common obstacle avoidance technologies are reviewed, a UAV inspection obstacle avoidance technique based on a particle swarm global optimization algorithm is proposed. A simulation comparison is carried out against the traditional UAV obstacle avoidance technique based on the artificial potential field method. The results show that the particle swarm optimization inspection strategy adopted in this paper is markedly better than the artificial potential field strategy in terms of obstacle avoidance performance and the ability to return to the preset inspection track after passing an obstacle. This provides an effective method for UAV obstacle avoidance in transmission line inspection.
Rational Methods for the Selection of Diverse Screening Compounds
Huggins, David J.; Venkitaraman, Ashok R.; Spring, David R.
2016-01-01
Traditionally a pursuit of large pharmaceutical companies, high-throughput screening assays are becoming increasingly common within academic and government laboratories. This shift has been instrumental in enabling projects that have not been commercially viable, such as chemical probe discovery and screening against high risk targets. Once an assay has been prepared and validated, it must be fed with screening compounds. Crafting a successful collection of small molecules for screening poses a significant challenge. An optimized collection will minimize false positives whilst maximizing hit rates of compounds that are amenable to lead generation and optimization. Without due consideration of the relevant protein targets and the downstream screening assays, compound filtering and selection can fail to explore the great extent of chemical diversity and eschew valuable novelty. Herein, we discuss the different factors to be considered and methods that may be employed when assembling a structurally diverse compound screening collection. Rational methods for selecting diverse chemical libraries are essential for their effective use in high-throughput screens. PMID:21261294
H2/H∞ control for grid-feeding converter considering system uncertainty
NASA Astrophysics Data System (ADS)
Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang
2017-05-01
Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional-integral (PI) and proportional-resonant based control strategies are applied to control the output power or current of a GFC, but these control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy that can mitigate these limitations. Uncertainty and disturbance are included in the formulation of the GFC state-space model, making it reflect practical system conditions more accurately. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, particle swarm optimisation is used to search for an H2/H∞ optimal controller. Several case studies, implemented in both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
Meta-heuristic algorithms as tools for hydrological science
NASA Astrophysics Data System (ADS)
Yoo, Do Guen; Kim, Joong Hoon
2014-12-01
In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly in hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been developed that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions with limited computation time and memory use, without requiring complex derivatives. Simulation-based meta-heuristic methods such as genetic algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can overcome several drawbacks of traditional mathematical methods. For example, the HS algorithm is conceptualized from the musical performance process of seeking better harmony; such optimization algorithms pursue a near-global optimum determined by the value of an objective function, providing a more robust determination than typical aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented, and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
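For readers unfamiliar with HS, the musical analogy maps onto three mechanics: memory consideration, pitch adjustment, and random improvisation. A bare-bones sketch with the usual parameter names (harmony memory size, HMCR, PAR, bandwidth); the values are illustrative defaults.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Bare-bones Harmony Search minimizing f over box bounds: improvise a
    new solution note-by-note from the harmony memory (rate hmcr),
    occasionally pitch-adjust it (rate par, bandwidth bw), and replace the
    worst memory member whenever the improvisation is better."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    hm = rng.uniform(lo, hi, (hms, d))
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        pick = hm[rng.integers(hms, size=d), np.arange(d)]  # per-note memory pick
        new = np.where(rng.random(d) < hmcr, pick, rng.uniform(lo, hi))
        adjust = rng.random(d) < par
        new = np.clip(new + adjust * bw * (hi - lo) * rng.uniform(-1, 1, d),
                      lo, hi)
        c, worst = f(new), cost.argmax()
        if c < cost[worst]:
            hm[worst], cost[worst] = new, c
    return hm[cost.argmin()], cost.min()
```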
Coquet, Julia Becaria; Tumas, Natalia; Osella, Alberto Ruben; Tanzi, Matteo; Franco, Isabella; Diaz, Maria Del Pilar
2016-01-01
A number of studies have evidenced the effect of modifiable lifestyle factors such as diet, breastfeeding and nutritional status on breast cancer risk. However, none have addressed the missing data problem in nutritional epidemiologic research in South America. Missing data is a frequent problem in breast cancer studies and epidemiological settings in general. Estimates of effect obtained from these studies may be biased, if no appropriate method for handling missing data is applied. We performed Multiple Imputation for missing values on covariates in a breast cancer case-control study of Córdoba (Argentina) to optimize risk estimates. Data was obtained from a breast cancer case control study from 2008 to 2015 (318 cases, 526 controls). Complete case analysis and multiple imputation using chained equations were the methods applied to estimate the effects of a Traditional dietary pattern and other recognized factors associated with breast cancer. Physical activity and socioeconomic status were imputed. Logistic regression models were performed. When complete case analysis was performed only 31% of women were considered. Although a positive association of Traditional dietary pattern and breast cancer was observed from both approaches (complete case analysis OR=1.3, 95%CI=1.0-1.7; multiple imputation OR=1.4, 95%CI=1.2-1.7), effects of other covariates, like BMI and breastfeeding, were only identified when multiple imputation was considered. A Traditional dietary pattern, BMI and breastfeeding are associated with the occurrence of breast cancer in this Argentinean population when multiple imputation is appropriately performed. Multiple Imputation is suggested in Latin America’s epidemiologic studies to optimize effect estimates in the future. PMID:27892664
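A minimal sketch of the analysis pattern with scikit-learn's chained-equations imputer (a software assumption; the study does not name its tooling): fit the logistic model on each of m imputed datasets and pool the estimates. Full Rubin's-rules variance pooling is omitted for brevity.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

def mi_logistic_or(X, y, col, m=20):
    """Multiple imputation by chained equations, sketched: X may hold
    np.nan in any covariate (e.g. physical activity, socioeconomic
    status); fit the logistic model on m differently imputed copies and
    pool the log-odds for covariate `col` by averaging."""
    betas = []
    for i in range(m):
        Xi = IterativeImputer(sample_posterior=True,
                              random_state=i).fit_transform(X)
        model = LogisticRegression(max_iter=1000).fit(Xi, y)
        betas.append(model.coef_[0][col])
    return np.exp(np.mean(betas))  # pooled odds ratio

# e.g. or_diet = mi_logistic_or(X, case_status, col=0)  # hypothetical usage
```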
Resource Costs Give Optimization the Edge
C.M. Eddins
1996-01-01
To optimize or not to optimize - that is the question practically every sawmill has considered at some time or another. Edger and trimmer optimization is a particularly hot topic, as these are among the most wasteful areas of the sawmill because trimmer and edger operators traditionally tend to over edge or trim. By its very definition, optimizing equipment seeks to...
Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar
2017-09-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
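The FIM-based region the authors compare against is the linearized ellipsoid below; for strongly nonlinear models it can differ materially from the likelihood region, which is the deviation the paper reports. The sketch assumes a least-squares fit with residual variance sigma2 and a Jacobian of model outputs with respect to the parameters.

```python
import numpy as np
from scipy.stats import chi2

def fim_confidence_ellipsoid(jac, sigma2, alpha=0.95):
    """Linearized (FIM-based) confidence region: with F = J'J / sigma^2,
    the region is (theta - theta_hat)' F (theta - theta_hat) <= chi2_p(alpha)
    for p parameters. Returns the parameter covariance and the squared
    ellipsoid radius."""
    F = jac.T @ jac / sigma2               # Fisher Information Matrix
    cov = np.linalg.inv(F)                 # parameter covariance (Cramer-Rao)
    radius2 = chi2.ppf(alpha, df=jac.shape[1])
    return cov, radius2
```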
4D modeling in high-rise construction
NASA Astrophysics Data System (ADS)
Balakina, Anastasiya; Simankina, Tatyana; Lukinov, Vitaly
2018-03-01
High-rise construction is a complex construction process requiring more refined and sophisticated tools for design, planning and construction management. The use of BIM technologies makes it possible to minimize the risks associated with design errors and errors that occur during construction. This article discusses a visual planning method using the 4D model, which allows the project team to create an accurate and complete construction plan that is much more difficult to achieve with traditional planning methods. The use of the 4D model in the construction of a 70-story building made it possible to detect spatial and temporal errors before the start of construction work. In addition to identifying design errors, 4D modeling optimized the construction process as follows: it optimized the operation of cranes and the placement of building structures and materials at various stages of construction, optimized the organization of work, and supported monitoring of construction site preparation activities for compliance with labor protection and safety requirements, which resulted in savings of money and time.
The future of human DNA vaccines
Li, Lei; Saade, Fadi; Petrovsky, Nikolai
2012-01-01
DNA vaccines have evolved greatly over the last 20 years since their invention, but have yet to become a competitive alternative to conventional protein- or carbohydrate-based human vaccines. Whilst safety concerns were an initial barrier, the Achilles heel of DNA vaccines remains their poor immunogenicity when compared to protein vaccines. A wide variety of strategies have been developed to optimize DNA vaccine immunogenicity, including codon optimization, genetic adjuvants, electroporation and sophisticated prime-boost regimens, with each of these methods having its advantages and limitations. Whilst each of these methods has contributed to incremental improvements in DNA vaccine efficacy, more is still needed if human DNA vaccines are to succeed commercially. This review foresees that a final breakthrough in human DNA vaccines will come from application of the latest cutting-edge technologies, including “epigenetics” and “omics” approaches, alongside traditional techniques to improve immunogenicity such as adjuvants and electroporation, thereby overcoming the current limitations of DNA vaccines in humans. PMID:22981627
On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification
NASA Astrophysics Data System (ADS)
Aygün, Eser; Oommen, B. John; Cataltepe, Zehra
Syntactic methods in pattern recognition have been used extensively in bioinformatics, and in particular, in the analysis of gene and protein expressions, and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.
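The distance-based baseline that the OIT model replaces can be made concrete with a small sketch. The snippet below does not reproduce the OIT probabilistic model; it uses the classical Levenshtein edit distance to build a precomputed similarity kernel for an SVM, in the style of the traditional approach the paper contrasts with. The peptide strings, labels, and the RBF transform of the distance matrix are illustrative assumptions, and such distance-derived kernels are not guaranteed to be positive semi-definite in general.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the traditional baseline: edit-distance similarity fed to an SVM
# as a precomputed kernel. Peptides and labels are toy data, not from the paper.

def edit_distance(a: str, b: str) -> int:
    """Classical Levenshtein distance with a rolling one-row DP table."""
    d = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            # deletion, insertion, substitution (or match)
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return int(d[-1])

peptides = ["ACDKR", "ACEKR", "GHILM", "GHLLM"]   # toy peptide strings
labels = [0, 0, 1, 1]

# Turn distances into a similarity (Gram) matrix via an RBF-style transform.
D = np.array([[edit_distance(p, q) for q in peptides] for p in peptides])
K = np.exp(-0.5 * D ** 2)

clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))   # sanity check on the training Gram matrix
```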
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson-Heine, Magnus W. D., E-mail: magnus.hansonheine@nottingham.ac.uk
Carefully choosing a set of optimized coordinates for performing vibrational frequency calculations can significantly reduce the anharmonic correlation energy from the self-consistent field treatment of molecular vibrations. However, moving away from normal coordinates also introduces an additional source of correlation energy arising from mode-coupling at the harmonic level. The impact of this new component of the vibrational energy is examined for a range of molecules, and a method is proposed for correcting the resulting self-consistent field frequencies by adding the full coupling energy from connected pairs of harmonic and pseudoharmonic modes, termed vibrational self-consistent field (harmonic correlation). This approach is found to lift the vibrational degeneracies arising from coordinate optimization and provides better agreement with experimental and benchmark frequencies than uncorrected vibrational self-consistent field theory without relying on traditional correlated methods.
Okut, Dilara; Devseren, Esra; Koç, Mehmet; Ocak, Özgül Özdestan; Karataş, Haluk; Kaymak-Ertekin, Figen
2018-01-01
The purpose of this study was to develop prototype cooking equipment that can work at reduced pressure and to evaluate its performance for the production of strawberry jam. The effect of vacuum cooking conditions on color, soluble solid content, reducing sugars, total sugars, HMF, and sensory properties was investigated. The vacuum cooking conditions for strawberry jam were also optimized using a central composite rotatable design. The optimum cooking temperature and time were determined targeting maximum soluble solid content and sensory attributes (consistency) and minimum Hue value and HMF content. The optimum vacuum cooking conditions were determined to be a temperature of 74.4 °C and a time of 19.8. The soluble solid content of strawberry jam made by the vacuum process was similar to that of jam prepared by the traditional method. HMF contents of jams produced by the vacuum cooking method were well within the limits of standards.
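The response-surface step behind such an optimization can be sketched briefly: fit a full quadratic model in temperature and time to the design points and locate its optimum on a grid. The design points and scores below are hypothetical placeholders, not the study's data, and a single desirability score stands in for the four optimization targets.

```python
import numpy as np

# Minimal sketch of the response-surface step: fit
# y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t and locate its maximum.
# All design points and scores are hypothetical, not the study's data.

T = np.array([65, 65, 85, 85, 75, 75, 75, 61, 89])             # temperature (°C)
t = np.array([10, 30, 10, 30, 20, 20, 20, 20, 20])             # cooking time
y = np.array([5.1, 5.9, 5.6, 5.0, 6.8, 6.7, 6.9, 5.4, 5.2])    # desirability score

X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T*t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                   # quadratic RSM fit

Tg, tg = np.meshgrid(np.linspace(61, 89, 200), np.linspace(10, 30, 200))
Xg = np.column_stack([np.ones(Tg.size), Tg.ravel(), tg.ravel(),
                      Tg.ravel()**2, tg.ravel()**2, (Tg * tg).ravel()])
k = np.argmax(Xg @ beta)                                       # grid optimum
print(f"predicted optimum: T={Tg.ravel()[k]:.1f} °C, t={tg.ravel()[k]:.1f}")
```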
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2003-01-01
A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.
Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat
2017-01-01
Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give a good approximation in a reasonable amount of time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach is linked with path representation, which is the most natural way to represent a legal tour. Computational results on some benchmark TSPLIB instances are reported for the new cycle crossover operator alongside traditional path-representation operators such as partially mapped and order crossover, and improvements are found.
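As a concrete illustration of the path representation and the cycle crossover that the proposed operator builds on, here is a minimal sketch. It implements the classical cycle crossover (CX), not the modified operator introduced in the article, inside a bare-bones GA with elitist selection; the toy distance matrix and parameter settings are placeholders, and mutation is omitted for brevity.

```python
import random

# Minimal sketch: classical cycle crossover (CX) on path-represented tours.
# The article's *modified* operator differs in detail; this is the base CX.

def cycle_crossover(p1, p2):
    n = len(p1)
    child = [None] * n
    i = 0
    # Copy the cycle starting at position 0 from parent 1.
    while child[i] is None:
        child[i] = p1[i]
        i = p1.index(p2[i])   # follow the cycle: position of p2's city in p1
    # Fill the remaining positions from parent 2.
    return [c if c is not None else p2[j] for j, c in enumerate(child)]

def tour_length(tour, dist):
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

random.seed(1)
n = 8
dist = [[abs(a - b) + 1 for b in range(n)] for a in range(n)]   # toy distances

pop = [random.sample(range(n), n) for _ in range(30)]
for _ in range(100):
    pop.sort(key=lambda tr: tour_length(tr, dist))
    parents = pop[:10]                                  # elitist selection
    children = [cycle_crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]                     # mutation omitted for brevity
    pop = parents + children

best = min(pop, key=lambda tr: tour_length(tr, dist))
print("best length:", tour_length(best, dist))
```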
Optimization of removal function in computer controlled optical surfacing
NASA Astrophysics Data System (ADS)
Chen, Xi; Guo, Peiji; Ren, Jianfeng
2010-10-01
The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion and translation-rotation modes: a removal function far from Gaussian type and slow convergence of the removal-function error. Time-sharing synthesis using six removal functions is discussed in detail. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to the time duration of the six cycles, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of the time-sharing synthesized removal functions are obtained. The evaluation function used in the optimization is an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, the removal function obtained by time-sharing synthesis is found to be closer to the ideal Gaussian-type removal function than those obtained by the traditional methods. The time-sharing synthesis method for the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces and to reduce intermediate- and high-frequency errors.
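The time-sharing formula itself (overall removal equals the total removal over the six cycles divided by the total cycle time) is easy to sketch. The Gaussian-like unit removal map, the six center positions, the dwell times, and the tool radius used for the approaching factor below are all hypothetical placeholders, not the optimized values from the paper.

```python
import numpy as np

# Sketch of time-sharing synthesis: overall removal = total removal over six
# individually centred cycles / total dwell time. All values are placeholders.

x = y = np.linspace(-3, 3, 201)
X, Y = np.meshgrid(x, y)

def removal(cx, cy, sigma=1.0):
    """Removal-rate map of the tool when centred at (cx, cy)."""
    return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))

angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
centres = [(0.6 * np.cos(a), 0.6 * np.sin(a)) for a in angles]   # six centres
dwell = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 0.8])                 # cycle durations

total_removal = sum(t * removal(cx, cy) for (cx, cy), t in zip(centres, dwell))
overall = total_removal / dwell.sum()        # time-shared overall removal function

# Approaching factor: removal within half the (assumed) tool radius over the
# removal within the full tool coverage area.
r = np.hypot(X, Y)
print("approaching factor:", overall[r < 1.0].sum() / overall[r < 2.0].sum())
```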
System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO
NASA Technical Reports Server (NTRS)
Olds, John R.
1994-01-01
This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.
Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.
Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing
2009-08-21
Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is essentially susceptible to noise in the measured displacement data. In the traditional procedure of Fourier transform traction cytometry (FTTC), noise is amplified during force reconstruction, and small tractions cannot be recovered from displacement fields with a low signal-to-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived according to the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and with experimental data on the adhesion of single cardiac myocytes to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces and reveal tiny tractions in cell-substrate interaction.
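A heavily simplified sketch of Wiener-regularized recovery in two-dimensional Fourier space is given below. The real FTTC inverse problem uses the tensorial Boussinesq solution and the paper derives four MMSE-optimal filter parameters; here a scalar stand-in transfer function and a constant noise-to-signal ratio replace both, and the traction patch and noise level are toy assumptions.

```python
import numpy as np

# Heavily simplified sketch of Wiener-regularized deconvolution in 2D Fourier
# space. A scalar transfer function H stands in for the Boussinesq tensor and
# a constant noise-to-signal ratio stands in for the paper's four parameters.

rng = np.random.default_rng(0)
n = 64
kx = np.fft.fftfreq(n)
KX, KY = np.meshgrid(kx, kx)
k = np.hypot(KX, KY)
H = 1.0 / (1.0 + (k / 0.1) ** 2)             # stand-in substrate transfer function

force = np.zeros((n, n))
force[20:24, 30:34] = 1.0                    # toy traction patch
disp = np.fft.ifft2(H * np.fft.fft2(force)).real
disp += 0.01 * rng.standard_normal(disp.shape)   # measurement noise

U = np.fft.fft2(disp)
nsr = 1e-2                                   # assumed noise-to-signal ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + nsr) # Wiener deconvolution filter
recovered = np.fft.ifft2(wiener * U).real
print("peak recovered traction:", recovered.max())
```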
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
A Novel Hybrid Firefly Algorithm for Global Optimization
Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao
2016-01-01
Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869
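The parallel-subpopulation idea can be sketched compactly. The snippet below runs a simplified firefly update and a DE/rand/1/bin update on two subpopulations and swaps their current best solutions each generation; the attractiveness model, all parameter values, and the sphere test function are illustrative assumptions rather than the settings tuned in the paper, and brightness is evaluated once per generation for brevity.

```python
import numpy as np

# Minimal sketch of the hybrid FA+DE idea: two subpopulations evolve in
# parallel and exchange their current best each generation. All parameters
# are illustrative, not the paper's tuned settings.

rng = np.random.default_rng(42)
dim, npop, iters = 5, 20, 200
lo, hi = -5.0, 5.0
sphere = lambda x: np.sum(x ** 2, axis=-1)        # unimodal test function

fa = rng.uniform(lo, hi, (npop, dim))             # firefly subpopulation
de = rng.uniform(lo, hi, (npop, dim))             # DE subpopulation

for _ in range(iters):
    # Firefly move: each firefly drifts toward every brighter one
    # (brightness evaluated once per generation for brevity).
    f = sphere(fa)
    for i in range(npop):
        for j in range(npop):
            if f[j] < f[i]:
                r2 = np.sum((fa[i] - fa[j]) ** 2)
                beta = np.exp(-r2)                # attractiveness decays with distance
                fa[i] += beta * (fa[j] - fa[i]) + 0.05 * rng.uniform(-1, 1, dim)
    # DE/rand/1/bin on the second subpopulation.
    fd = sphere(de)
    for i in range(npop):
        a, b, c = de[rng.choice(npop, 3, replace=False)]
        trial = np.where(rng.random(dim) < 0.9, a + 0.5 * (b - c), de[i])
        if sphere(trial) < fd[i]:
            de[i] = trial
    # Information sharing: each subpopulation's worst is replaced by the
    # other's current best.
    fa[np.argmax(sphere(fa))] = de[np.argmin(sphere(de))]
    de[np.argmax(sphere(de))] = fa[np.argmin(sphere(fa))]

print("best objective:", min(sphere(fa).min(), sphere(de).min()))
```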
Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao
2013-06-01
In this work, a feedforward control strategy based on the concept of quality by design was established for the manufacturing process of traditional Chinese medicine to reduce the impact of the quality variation of raw materials on drug quality. In the research, the ethanol precipitation process of Danhong injection was taken as an application case of the method established. A Box-Behnken design of experiments was conducted, and mathematical models relating the attributes of the concentrate, the process parameters, and the quality of the supernatants produced were established. An optimization model for calculating the best process parameters based on the attributes of the concentrate was then built. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters was compared. The results showed that using the feedforward control strategy for process parameter optimization can control the quality of the supernatants effectively. The proposed feedforward control strategy can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.
NASA Astrophysics Data System (ADS)
Sudhakar, N.; Rajasekar, N.; Akhil, Saya; Jyotheeswara Reddy, K.
2017-11-01
The boost converter is the most desirable DC-DC power converter for renewable energy applications owing to its favorable continuous input-current characteristics. On the other hand, these DC-DC converters, being practical nonlinear systems, are prone to several types of nonlinear phenomena, including bifurcation, quasiperiodicity, intermittency, and chaos. These undesirable effects have to be controlled to maintain the normal periodic operation of the converter and to ensure stability. This paper presents an effective solution to control chaos in a solar-fed DC-DC boost converter, since the converter experiences a wide range of input power variation, which leads to chaotic phenomena. Chaos control is achieved using optimal circuit parameters obtained through a Nelder-Mead enhanced bacterial foraging optimization algorithm. The optimization yields suitable parameters in minimal computational time. The results are compared with those of traditional methods, and the obtained results ensure operation of the converter within the controllable region.
Meshes optimized for discrete exterior calculus (DEC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mousley, Sarah C.; Deakin, Michael; Knupp, Patrick
We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.
Metamodeling and the Critic-based approach to multi-level optimization.
Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J
2012-08-01
Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress in both hardware (multiprocessor systems and specialized processors) and software (Gurobi), we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide a further increase in the performance necessary for daily operation. In this paper, we present the theoretical basis and related experiments for solving multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows for multiple queries on the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results compared to the iterative MIP approach. Copyright © 2012 Elsevier Ltd. All rights reserved.
Meshless methods in shape optimization of linear elastic and thermoelastic solids
NASA Astrophysics Data System (ADS)
Bobaru, Florin
This dissertation proposes a meshless approach to problems in the shape optimization of elastic and thermoelastic solids, using the Element-Free Galerkin (EFG) method. The ability of the EFG method to avoid the remeshing that is normally needed in a finite element approach to correct highly distorted meshes is clearly demonstrated by several examples. The shape optimization example of a thermal cooling fin shows a dramatic improvement in the objective compared with a previous FEM analysis. More importantly, the new solution, displaying large shape changes relative to the initial design, was completely missed by the FEM analysis. The EFG formulation given here for shape optimization "uncovers" new solutions that are apparently unobtainable via a FEM approach; this is one of the main achievements of this work. The variational formulations for the analysis problem and for the sensitivity problems are obtained with a penalty method for imposing the displacement boundary conditions. The continuum formulation is general, so the 2D and 3D cases differ only minimally from one another. Transient thermoelastic problems can also use the present development at each time step to solve shape optimization problems for time-dependent thermal problems. For the elasticity framework, displacement sensitivity is obtained in the EFG context, and excellent agreement with analytical solutions for some test problems is obtained. The shape optimization of a fillet is carried out in great detail, and the results show significant improvement of the EFG solution over the FEM and Boundary Element Method solutions. In our approach we avoid differentiating the complicated EFG shape functions with respect to the shape design parameters by using a particular discretization for the sensitivity calculations. Displacement and temperature sensitivities are formulated for the shape optimization of a linear thermoelastic solid. Two important examples considered in this work, the optimization of a thermal fin and of a uniformly loaded thermoelastic beam, reveal new characteristics of the EFG method in shape optimization applications. Among the advantages of the EFG method over traditional FEM treatments of shape optimization problems, the most important are shown to be: the elimination of post-processing for stress and strain recovery, which directly gives more accurate results in critical locations (near the boundaries, for example); and the flexibility of node movement, which permits new, better shapes (previously missed by an FEM analysis) to be discovered. Several new research directions that need further consideration are outlined.
Classification Influence of Features on Given Emotions and Its Application in Feature Selection
NASA Astrophysics Data System (ADS)
Xing, Yin; Chen, Chuang; Liu, Li-Long
2018-04-01
In order to solve the problem that there is a large amount of redundant data in high-dimensional speech emotion features, we analyze the extracted speech emotion features in depth and select the better features. Firstly, a given emotion is classified using each feature individually. Secondly, the recognition rates are ranked in descending order. Then, the optimal feature threshold is determined by a recognition-rate criterion. Finally, the better features are obtained. When applied to the Berlin and Chinese emotional data sets, the experimental results show that the feature selection method outperforms the other traditional methods.
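The selection scheme described, scoring each feature by the recognition rate it achieves alone, ranking in descending order, and keeping the features above a threshold, can be sketched as follows. The stand-in features, labels, classifier, and threshold value are all hypothetical; the paper's rate criterion for choosing the threshold is not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Sketch of single-feature recognition-rate ranking for feature selection.
# Data, classifier and threshold are placeholders, not the paper's setup.

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))        # stand-in emotion features
y = rng.integers(0, 4, 200)               # stand-in emotion labels (4 classes)
X[:, 3] += y                              # plant two informative features
X[:, 7] += 0.5 * y

# Score each feature by the cross-validated accuracy it achieves alone.
rates = np.array([
    cross_val_score(KNeighborsClassifier(), X[:, [j]], y, cv=5).mean()
    for j in range(X.shape[1])
])
order = np.argsort(rates)[::-1]           # rank by single-feature accuracy
threshold = 0.3                           # placeholder for the rate criterion
selected = order[rates[order] > threshold]
print("selected features:", selected)
```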
Yang, Bin; Hu, Fu-chao; Chen, Gong-xi; Jiang, Dao-song
2009-12-01
The experiment extracted flavonoids from the rhizome of Drynaria fortunei by microwave extraction and determined the extraction rate by colorimetry. Through a single-factor experiment and an orthogonal method, the optimum extraction conditions were determined as follows: ethanol concentration 40%, solid-liquid ratio 1:20 (g/mL), microwave power 325 W, and extraction time 40 s. Under these conditions, the extraction rate reached 1.73%. Among all conditions, microwave power had the most significant effect on the extraction rate. Microwave extraction has obvious advantages in comparison with the traditional solvent refluxing method.
Optimization of lattice surgery is NP-hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon J.
2017-09-01
The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Optimization of topological quantum algorithms using Lattice Surgery is hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon
The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or ''defects'' within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the Lattice Surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic and achieves universality without using defects to encode information. In both braided and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e., the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set, so a unique determination of the most successful particle is not possible; only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, the solution density is expected to be maximal in the region that best compromises all objectives, i.e., the region of highest curvature.
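The maximin fitness used here for leader selection admits a very small implementation. For minimized objectives, the maximin fitness of a particle is the maximum, over all other particles, of the minimum pairwise objective difference; values below zero indicate non-dominated particles, and the smallest value can serve to pick the current leader. The objective values below are toy numbers, not inversion results.

```python
import numpy as np

# Maximin fitness for leader selection in multi-objective PSO.
# F: rows are particles, columns are (minimized) objectives.
# maximin_i = max over j != i of min over k of (F[i, k] - F[j, k]);
# fitness < 0 marks a non-dominated particle.

def maximin(F):
    n = F.shape[0]
    fit = np.empty(n)
    for i in range(n):
        diff = F[i] - np.delete(F, i, axis=0)   # compare against all others
        fit[i] = diff.min(axis=1).max()
    return fit

F = np.array([[1.0, 4.0],   # toy objective vectors of five particles
              [2.0, 2.0],
              [4.0, 1.0],
              [3.0, 3.0],   # dominated by (2, 2): fitness > 0
              [5.0, 5.0]])  # dominated: fitness > 0
fit = maximin(F)
print("maximin fitness:", fit)
print("leader particle:", np.argmin(fit))
```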
Design of the smart home system based on the optimal routing algorithm and ZigBee network.
Jiang, Dengying; Yu, Ling; Wang, Fei; Xie, Xiaoxia; Yu, Yongsheng
2017-01-01
To improve the traditional smart home system, we study its electric wiring, networking technology, information transmission, and facility control. First, ZigBee is used to replace the traditional electric wiring. Second, a network is built to connect many wireless sensors and facilities, thanks to the capability of the ZigBee self-organized network and the Genetic Algorithm-Particle Swarm Optimization Algorithm (GA-PSOA) to search for the optimal route. Finally, when the smart home system is connected to the internet based on remote server technology, the home environment and facilities can be remotely controlled in real time. The experiments show that the GA-PSOA reduces the system delay and decreases the energy consumption of the wireless system.
NASA Technical Reports Server (NTRS)
Gern, Frank; Vicroy, Dan D.; Mulani, Sameer B.; Chhabra, Rupanshi; Kapania, Rakesh K.; Schetz, Joseph A.; Brown, Derrell; Princen, Norman H.
2014-01-01
Traditional methods of control allocation optimization have shown difficulties in exploiting the full potential of controlling large arrays of control devices on innovative air vehicles. Artificial neural networks are inspired by biological nervous systems, and neurocomputing has successfully been applied to a variety of complex optimization problems. This project investigates the potential of applying neurocomputing to the control allocation optimization problem of Hybrid Wing Body (HWB) aircraft concepts to minimize control power, hinge moments, and actuator forces, while keeping system weights within acceptable limits. The main objective of this project is to develop a proof-of-concept process suitable to demonstrate the potential of using neurocomputing for optimizing actuation power for aircraft featuring multiple independently actuated control surfaces. A Nastran aeroservoelastic finite element model is used to generate a learning database of hinge moment and actuation power characteristics for an array of flight conditions and control surface deflections. An artificial neural network incorporating a genetic algorithm then uses this training data to perform control allocation optimization for the investigated aircraft configuration. The phase I project showed that optimization results for the sum of required hinge moments are improved by more than 12% over the best Nastran solution by using the neural network optimization process.
Serra J. Hoagland
2017-01-01
Traditional ecological knowledge (TEK) has been recognized within indigenous communities for millennia; however, traditional ecological knowledge has received growing attention within the western science (WS) paradigm over the past twenty-five years. Federal agencies, national organizations, and university programs dedicated to natural resource management are beginning...
Manoharan, Prabu; Ghoshal, Nanda
2018-05-01
Traditional structure-based virtual screening methods to identify drug-like small molecules for BACE1 have so far been unsuccessful. The location of BACE1, poor blood-brain barrier permeability, and P-glycoprotein (Pgp) susceptibility of the inhibitors make it even more difficult. Fragment-based drug design methods are suitable for efficient optimization of initial hit molecules for a target like BACE1. We have developed a fragment-based virtual screening approach to identify and optimize fragment molecules as a starting point. This method combines the shape, electrostatic, and pharmacophoric features of known fragment molecules bound in protein conjugate crystal structures, and aims to identify both chemically and energetically feasible small fragment ligands that bind to the BACE1 active site. The two top-ranked fragment hits were subjected to a 53 ns MD simulation. Principal component analysis and free energy landscape analysis reveal that the new ligands show the characteristic features of established BACE1 inhibitors. The method employed in this study may serve for the development of potential lead molecules for BACE1-directed Alzheimer's disease therapeutics.
Optimized Setup and Protocol for Magnetic Domain Imaging with In Situ Hysteresis Measurement.
Liu, Jun; Wilson, John; Davis, Claire; Peyton, Anthony
2017-11-07
This paper elaborates the sample preparation protocols required to obtain optimal domain patterns using the Bitter method, focusing on the extra steps compared to standard metallographic sample preparation procedures. The paper proposes a novel bespoke rig for dynamic domain imaging with in situ BH (magnetic hysteresis) measurements and elaborates the protocols for the sensor preparation and the use of the rig to ensure accurate BH measurement. The protocols for static and ordinary dynamic domain imaging (without in situ BH measurements) are also presented. The reported method takes advantage of the convenience and high sensitivity of the traditional Bitter method and enables in situ BH measurement without interrupting or interfering with the domain wall movement processes. This facilitates establishing a direct and quantitative link between the domain wall movement processes-microstructural feature interactions in ferritic steels with their BH loops. This method is anticipated to become a useful tool for the fundamental study of microstructure-magnetic property relationships in steels and to help interpret the electromagnetic sensor signals for non-destructive evaluation of steel microstructures.
Nonprincipal plane scattering of flat plates and pattern control of horn antennas
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Polka, Lesley A.; Liu, Kefeng
1989-01-01
Using the geometrical theory of diffraction, the traditional method of high frequency scattering analysis, the prediction of the radar cross section of a perfectly conducting, flat, rectangular plate is limited to principal planes. Part A of this report predicts the radar cross section in nonprincipal planes using the method of equivalent currents. This technique is based on an asymptotic end-point reduction of the surface radiation integrals for an infinite wedge and enables nonprincipal plane prediction. The predicted radar cross sections for both horizontal and vertical polarizations are compared to moment method results and experimental data from Arizona State University's anechoic chamber. In part B, a variational calculus approach to the pattern control of the horn antenna is outlined. The approach starts with the optimization of the aperture field distribution so that the control of the radiation pattern in a range of directions can be realized. A control functional is thus formulated. Next, a spectral analysis method is introduced to solve for the eigenfunctions from the extremal condition of the formulated functional. Solutions to the optimized aperture field distribution are then obtained.
Duan, Li; Guo, Long; Liu, Ke; Liu, E-Hu; Li, Ping
2014-04-25
Citrus herbs have been widely used in traditional medicine and cuisine in China and other countries since ancient times. However, the authentication and quality control of Citrus herbs have always been a challenging task due to their similar morphological characteristics and the diversity of the multiple components present in a complicated matrix. In the present investigation, we developed a novel strategy to characterize and classify seven Citrus herbs based on chromatographic analysis and chemometric methods. Firstly, the chemical constituents in the seven Citrus herbs were globally characterized by liquid chromatography combined with quadrupole time-of-flight mass spectrometry (LC-QTOF-MS). Based on their retention times, UV spectra, and MS fragmentation behavior, a total of 75 compounds were identified or tentatively characterized in these herbal medicines. Secondly, a segmental monitoring method based on LC with variable-wavelength detection was developed for simultaneous quantification of ten marker compounds in these Citrus herbs. Thirdly, based on the contents of the ten analytes, genetic-algorithm-optimized support vector machines (GA-SVM) were employed to differentiate and classify the 64 samples covering these seven herbs. The obtained classifier showed good prediction performance, and the overall prediction accuracy reached 96.88%. The proposed strategy is expected to provide new insight for the authentication and quality control of traditional herbs. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer
2018-01-01
The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity on the cloud, and large-scale data processing, analysis, and storage platforms such as cloud computing are increasingly common. Today, the major challenge is how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms and Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during the fine-grained monitoring procedure and generates the highest efficacy and cost-saving fault repair through three control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
Kang, Jing; Xue, Chao; Chou, Adriana; Scholp, Austin; Gong, Ting; Zhang, Yi; Chen, Zhen; Jiang, Jack J
2018-02-05
The aim of this study was to quantify the effects of traditional and physiological warm-up exercises and to determine the optimal duration of these methods using acoustic and aerodynamic metrics. Twenty-six subjects were recruited to participate in both straw phonation exercises (physiological vocal warm-up) and traditional singing exercises (traditional vocal warm-up) for 20 minutes each, 24 hours apart. Phonation threshold pressure (PTP), fundamental frequency, jitter, shimmer, and noise-to-harmonics ratio were measured before the intervention (m0), as well as after 5 minutes (m5), 10 minutes (m10), 15 minutes (m15), and 20 minutes (m20) of intervention. PTP decreased significantly after straw phonation and reached a minimum value at 10 minutes (P < 0.001) and remained stable in traditional singing exercises. There were significant differences in fundamental frequency and shimmer from m0 to m15 and m20 in the traditional singing group (P = 0.001, P = 0.001, P = 0.001, and P = 0.002, respectively). No significant changes in acoustic parameters were observed after straw phonation. Both straw phonation exercises and traditional singing exercises are effective for voice warm-up. Straw phonation improves the subjects' fatigue resistance and vocal economy, resulting in a reduced PTP, whereas traditional singing exercises focus on technical singing skills, leading to an improvement of acoustic variables. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Sun, Qing-hua; Yu, De-shuang; Zhang, Pei-yu; Lin, Xue-zheng; Li, Jin
2016-02-15
A heterotrophic nitrification-aerobic denitrification strain named y5 was isolated from a marine environment by a traditional microbial isolation method using seawater as the medium. It was identified as Klebsiella sp. based on morphological, physiological, and 16S rRNA sequence analysis. The experimental results showed that the optimal carbon source was sodium citrate, the optimal pH was 7.0, and the optimal C/N was 17. The strain could use NH4Cl, NaNO2, and KNO3 as the sole nitrogen source, and the removal efficiencies were 77.07%, 64.14%, and 100% after 36 hours, respectively. The removal efficiency reached 100% after 36 hours in the coexistence of NH4Cl, NaNO2, and KNO3. The results showed that strain y5 had independent and efficient heterotrophic nitrification and aerobic denitrification activities in high-salt wastewater.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Luning; Neuscamman, Eric
We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators' optical band gaps.
Turbopump Performance Improved by Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2002-01-01
The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations often consume the most CPU time, such as computational fluid dynamics. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.
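The master-slave function-evaluation pattern mentioned above is easy to sketch. In the snippet below, the master process runs a toy evolutionary loop while a pool of worker processes evaluates the objective in parallel; the quadratic objective is a placeholder for an expensive CFD evaluation, and the population size, selection scheme, and mutation step are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

# Sketch of the master-slave concept: the master runs the evolutionary loop
# while a pool of workers evaluates the (expensive) objective in parallel.

def expensive_objective(x):
    return float(np.sum((x - 0.5) ** 2))   # placeholder for a CFD solve

def evolve(pop, scores, rng):
    """Keep the better half, add mutated copies of it."""
    order = np.argsort(scores)
    parents = pop[order[: len(pop) // 2]]
    children = parents + 0.1 * rng.standard_normal(parents.shape)
    return np.vstack([parents, children])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(0, 1, (16, 4))
    with Pool(4) as workers:                                 # the "slaves"
        for _ in range(20):
            scores = workers.map(expensive_objective, pop)   # parallel evaluations
            pop = evolve(pop, np.array(scores), rng)
        scores = workers.map(expensive_objective, pop)
    print("best objective:", min(scores))
```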
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
NASA Astrophysics Data System (ADS)
Wu, Dongjun
Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment, and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. Genetic algorithms (GAs, methods based on the idea of natural evolution) have been evaluated as a primary means of solving complicated network problems, with respect to pricing as well as to investment and other operating decisions. New constraint-handling techniques in GAs have been studied and tested. The application of such constraint-handling techniques to practical non-linear optimization problems has been tested on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions found by the proposed GA approach for small problems can only be verified by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints.
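One common family of GA constraint-handling techniques (in the spirit, though not necessarily the detail, of those studied in the dissertation) is the penalty method: infeasibility is added to the objective in proportion to the squared constraint violation. The toy two-variable problem, penalty weight, and GA operators below are illustrative assumptions, not the network design problems discussed.

```python
import numpy as np

# Sketch of penalty-based constraint handling in a GA. The problem, penalty
# weight and operators are toy stand-ins, not the dissertation's networks.

rng = np.random.default_rng(3)

def objective(x):                 # minimize distance to (2, 1)
    return (x[:, 0] - 2) ** 2 + (x[:, 1] - 1) ** 2

def violation(x):                 # constraint: x0 + x1 <= 2
    return np.maximum(0.0, x[:, 0] + x[:, 1] - 2.0)

def penalized_fitness(x, rho=100.0):
    return objective(x) + rho * violation(x) ** 2   # quadratic penalty

pop = rng.uniform(-3, 3, (40, 2))
for _ in range(200):
    fit = penalized_fitness(pop)
    parents = pop[np.argsort(fit)[:20]]                      # truncation selection
    children = parents + 0.1 * rng.standard_normal(parents.shape)  # mutation
    mates = parents[rng.permutation(20)]
    crossed = 0.5 * (parents + mates)                        # blend "crossover"
    pop = np.vstack([parents, children, crossed])[:40]

best = pop[np.argmin(penalized_fitness(pop))]
print("best point:", best, "violation:", violation(best[None])[0])
```

The constrained optimum of this toy problem lies on the boundary x0 + x1 = 2, near (1.5, 0.5), which the penalized GA approaches as the penalty weight grows.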
Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan
2018-01-01
The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimal extraction conditions are found to be ammonia concentration 0.595%, ethanol concentration 58.45%, circumfluence time 2.5 h, and liquid-solid ratio 11.065:1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of genetic algorithms and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra.
Real-Time GNSS-Based Attitude Determination in the Measurement Domain
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-01-01
A multi-antenna GNSS receiver is capable of providing a high-precision, drift-free attitude solution, but carrier phase measurements need to be utilized to achieve high precision. Traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, so the redundant measurements from multiple baselines are not fully utilized to enhance the reliability of attitude determination. A multi-baseline attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution is increased, so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, to further improve reliability, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic results show that the proposed method obtains an optimal balance between accuracy and reliability. PMID:28165434
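The joint attitude/ambiguity estimation proposed here is not reproduced below; as background, this is a minimal sketch of the classical least-squares attitude solve (Wahba's problem via SVD) from multi-antenna baseline vectors, with hypothetical baselines.

# Classical least-squares attitude from baseline vectors (Wahba's problem,
# SVD solution); the paper's joint ambiguity estimation is omitted here.
import numpy as np

def attitude_from_baselines(b_body, b_local):
    """b_body, b_local: (n, 3) baseline coordinates in body and local frames."""
    B = b_local.T @ b_body                     # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))         # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt     # rotation: body -> local frame

# Two non-collinear antenna baselines (hypothetical, metres)
b_body = np.array([[1.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
b_local = b_body @ R_true.T                    # simulated local-frame baselines
R = attitude_from_baselines(b_body, b_local)
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print("recovered yaw (deg):", round(yaw, 2))   # 90.0 for this toy rotation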
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components, such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also draws on the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or another suitable technique) to create a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems of up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
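A minimal sketch of the integer-adapted exploratory move in the spirit of IESIP, assuming unit steps, a rounded continuous starting point, and linear constraints A x <= b; the pattern moves and greedy refinements of the full method are omitted.

# Hooke-Jeeves-style exploratory move adapted to integers: try +/-1 unit
# steps per coordinate, keeping feasible improving moves (minimize c.x).
import numpy as np

def feasible(x, A, b):
    return np.all(A @ x <= b)                  # linear constraints A x <= b

def exploratory_move(x, c, A, b):
    x = x.copy()
    for i in range(len(x)):
        for step in (+1, -1):
            trial = x.copy()
            trial[i] += step
            if feasible(trial, A, b) and c @ trial < c @ x:
                x = trial
                break                          # keep the improving unit move
    return x

# Hypothetical 2-variable pure integer LP: min -3x0 - 2x1, x0 + x1 <= 4, x >= 0
c = np.array([-3.0, -2.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])
x = np.array([4, 0])          # rounded continuous (LP) solution as start point
while True:
    nxt = exploratory_move(x, c, A, b)
    if np.array_equal(nxt, x):
        break
    x = nxt
print("integer solution:", x, "objective:", c @ x)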
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua
2014-03-15
This paper introduces a novel hybrid optimization algorithm for establishing the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
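A minimal sketch of the hybrid: cuckoo-style Levy-flight proposals with a simulated-annealing acceptance rule and a cooling schedule. The objective, population size, and cooling rate are assumptions, not the paper's settings.

# Cuckoo search step with a simulated-annealing acceptance rule (sketch).
import math
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                    # stand-in for the parameter-estimation error
    return np.sum((x - 3.14) ** 2)

def levy_step(beta=1.5, size=2):     # Mantegna's algorithm for Levy flights
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

nests = rng.uniform(-10, 10, size=(15, 2))
temperature = 1.0
for it in range(2000):
    i = rng.integers(len(nests))
    trial = nests[i] + 0.01 * levy_step()
    delta = objective(trial) - objective(nests[i])
    # SA acceptance: always keep improvements, sometimes accept worse moves
    if delta < 0 or rng.random() < math.exp(-delta / temperature):
        nests[i] = trial
    temperature *= 0.995             # cooling schedule (assumed)
best = min(nests, key=objective)
print("estimated parameters:", best)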
Chan, Emory M; Xu, Chenxu; Mao, Alvin W; Han, Gang; Owen, Jonathan S; Cohen, Bruce E; Milliron, Delia J
2010-05-12
While colloidal nanocrystals hold tremendous potential for both enhancing fundamental understanding of materials scaling and enabling advanced technologies, progress in both realms can be inhibited by the limited reproducibility of traditional synthetic methods and by the difficulty of optimizing syntheses over a large number of synthetic parameters. Here, we describe an automated platform for the reproducible synthesis of colloidal nanocrystals and for the high-throughput optimization of physical properties relevant to emerging applications of nanomaterials. This robotic platform enables precise control over reaction conditions while performing workflows analogous to those of traditional flask syntheses. We demonstrate control over the size, size distribution, kinetics, and concentration of reactions by synthesizing CdSe nanocrystals with 0.2% coefficient of variation in the mean diameters across an array of batch reactors and over multiple runs. Leveraging this precise control along with high-throughput optical and diffraction characterization, we effectively map multidimensional parameter space to tune the size and polydispersity of CdSe nanocrystals, to maximize the photoluminescence efficiency of CdTe nanocrystals, and to control the crystal phase and maximize the upconverted luminescence of lanthanide-doped NaYF₄ nanocrystals. On the basis of these demonstrative examples, we conclude that this automated synthesis approach will be of great utility for the development of diverse colloidal nanomaterials for electronic assemblies, luminescent biological labels, electroluminescent devices, and other emerging applications.
Cai, Yefeng; Wu, Ming; Yang, Jun
2014-02-01
This paper describes a method for focusing the reproduced sound in the bright zone without disturbing listeners in the dark zone in personal audio systems. The proposed method combines the least-squares and acoustic contrast criteria, introducing a constraint parameter to tune the balance between two performance indices, the acoustic contrast and the spatial average error. An efficient implementation of this method using convex optimization is presented. Offline simulations and real-time experiments using a linear loudspeaker array are conducted to evaluate the performance of the method. Results show that, compared with the traditional acoustic contrast control method, the proposed method can improve the flatness of the response in the bright zone at the cost of some acoustic contrast.
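A minimal sketch of the combined criterion, assuming the standard closed-form trade-off between bright-zone pressure matching and dark-zone energy; the transfer matrices are random stand-ins rather than measured responses, and the paper's exact convex-optimization formulation is not reproduced.

# Combined least-squares / acoustic-contrast solve for loudspeaker weights:
# minimize bright-zone matching error plus kappa times dark-zone energy.
import numpy as np

rng = np.random.default_rng(2)
L, Mb, Md = 8, 12, 12                 # loudspeakers, bright/dark control points
G_b = rng.normal(size=(Mb, L)) + 1j * rng.normal(size=(Mb, L))  # bright zone
G_d = rng.normal(size=(Md, L)) + 1j * rng.normal(size=(Md, L))  # dark zone
p_t = np.ones(Mb, dtype=complex)      # target pressure in the bright zone

kappa, lam = 1.0, 1e-3                # contrast/error trade-off, regularization
A = G_b.conj().T @ G_b + kappa * G_d.conj().T @ G_d + lam * np.eye(L)
w = np.linalg.solve(A, G_b.conj().T @ p_t)

bright = np.mean(np.abs(G_b @ w) ** 2)
dark = np.mean(np.abs(G_d @ w) ** 2)
print("acoustic contrast (dB):", 10 * np.log10(bright / dark))

Sweeping kappa traces out the trade-off the abstract describes: small values favor a flat bright-zone response, large values favor contrast.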
Data-driven optimal binning for respiratory motion management in PET.
Kesner, Adam L; Meier, Joseph G; Burckhardt, Darrell D; Schwartz, Jazmin; Lynch, David A
2018-01-01
Respiratory gating has been used in PET imaging to reduce the image blurring caused by patient motion. Optimal binning is an approach for using the motion-characterized data by binning it into a single, easy-to-interpret optimal bin. To date, optimal binning protocols have utilized externally driven motion-characterization strategies tuned with population-derived assumptions and parameters. In this work, we propose a new strategy to characterize motion directly from a patient's gated scan and use that signal to create a patient/instance-specific optimal-bin image. Two hundred nineteen phase-gated FDG PET scans, acquired using data-driven gating as described previously, were used as the input for this study. For each scan, a phase-amplitude motion characterization was generated and normalized using principal component analysis. A patient-specific "optimal bin" window was derived from this characterization, via methods that mirror traditional optimal-window binning strategies. The resulting optimal-bin images were validated by correlating quantitative and qualitative measurements across the population of PET scans. In 53% (n = 115) of the image population, the optimal bin was determined to include 100% of the image statistics. In the remaining images, the optimal binning windows averaged 60% of the statistics and ranged between 20% and 90%. Tuning the algorithm through a single acceptance-window parameter adjusted its performance in the population toward conservation of motion or reduced noise, enabling users to incorporate their own definition of optimal. In the population of images deemed appropriate for segregation, average lesion SUVmax was 7.9, 8.5, and 9.0 for nongated, optimal-bin, and gated images, respectively. The Pearson correlation of FWHM measurements between optimal-bin images and gated images was better than with nongated images, 0.89 versus 0.85. Generally, optimal-bin images had better resolution than the nongated images and better noise characteristics than the gated images. We extended the concept of optimal binning to a data-driven form, updating a traditionally one-size-fits-all approach to a conformal one that supports adaptive imaging. This automated strategy was implemented easily within a large population and encapsulates motion information in an easy-to-use 3D image. Its simplicity and practicality may make this or similar approaches ideal for use in clinical settings. © 2017 American Association of Physicists in Medicine.
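A toy sketch of the optimal-bin selection idea: pick the contiguous gate window that retains the most counts while its internal displacement stays within an acceptance threshold. The counts, displacements, and threshold are invented for illustration.

# Choose the gate window maximizing retained counts under a motion budget.
import numpy as np

counts = np.array([12, 18, 25, 30, 28, 20, 10, 7])     # events per gate bin
position = np.array([0.0, 2, 5, 9, 12, 14, 15, 15.5])  # mean displacement (mm)
max_motion = 5.0                                        # acceptance window (mm)

best = (0, 0, 0)                                        # (counts, start, stop)
for i in range(len(counts)):
    for j in range(i, len(counts)):
        if np.ptp(position[i:j + 1]) <= max_motion:     # intra-window motion
            c = counts[i:j + 1].sum()
            if c > best[0]:
                best = (c, i, j)
print("optimal bin: gates %d..%d, %.0f%% of counts kept"
      % (best[1], best[2], 100 * best[0] / counts.sum()))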
Role of direct bioautographic method for detection of antistaphylococcal activity of essential oils.
Horváth, Györgyi; Jámbor, Noémi; Kocsis, Erika; Böszörményi, Andrea; Lemberkovics, Eva; Héthelyi, Eva; Kovács, Krisztina; Kocsis, Béla
2011-09-01
The aim of the present study was the chemical characterization of some traditionally used and therapeutically relevant essential oils (thyme, eucalyptus, cinnamon bark, clove, and tea tree) and an optimized microbiological investigation of the effect of these oils on clinical isolates of methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-susceptible S. aureus (MSSA). The chemical composition of the oils was analyzed by TLC and verified by gas chromatography (GC) and gas chromatography/mass spectrometry (GC/MS). The antibacterial effect was investigated using a TLC-bioautographic method. Antibacterial activity of thyme, clove, and cinnamon oils, as well as of their main components (thymol, carvacrol, eugenol, and cinnamic aldehyde), was observed against all the bacterial strains used in this study. The essential oils of eucalyptus and tea tree showed weak activity in the bioautographic system. On the whole, the antibacterial activity of the essential oils could be related to their most abundant components, though the effect of the minor components should also be taken into consideration. Direct bioautography is more cost-effective than, and compares favorably with, traditional microbiological laboratory methods (e.g., disc-diffusion and agar-plate techniques).
NASA Astrophysics Data System (ADS)
Jiang, Hao; Lu, Jiangang
2018-05-01
Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination of starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of the correlation coefficient method (CC), partial least squares regression (PLSR), and a radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and the coefficient of determination (Rp²) in the prediction set were used for evaluation. As a result, the proposed model presented the best predictive performance, with the smallest RMSEP (0.0497%) and the highest Rp² (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful for determining starch content in corn.
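A sketch of the CC-PLSR-RBFNN chain under stated assumptions: correlation-coefficient wavelength selection, PLS compression, then a nonlinear RBF model on the PLS scores. Kernel ridge regression with an RBF kernel stands in for the RBF neural network, and the file names, correlation threshold, and hyperparameters are assumptions.

# CC wavelength selection -> PLSR compression -> RBF regression (sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.kernel_ridge import KernelRidge

X = np.loadtxt("corn_nir_spectra.csv", delimiter=",")   # (80, n_wavelengths)
y = np.loadtxt("corn_starch.csv", delimiter=",")        # starch content (%)

# 1) CC step: keep wavelengths most correlated with starch content
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.abs(r) > 0.5                                   # assumed threshold
X_sel = X[:, keep]

# 2) PLSR step: compress the selected wavelengths to a few latent variables
pls = PLSRegression(n_components=8).fit(X_sel, y)
T = pls.transform(X_sel)

# 3) RBF step: nonlinear regression on the PLS scores (stand-in for RBFNN)
rbf = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1).fit(T, y)
pred = rbf.predict(T)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print("calibration RMSE: %.4f%%" % rmse)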
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves system-wide conditions as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulations and experiments, two kinds of discrete PSO (S-PSO and binary PSO (BPSO)) and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results meeting the optimization objectives of the CSP.
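The set-based S-PSO with particle views is not reproduced here; as a point of reference, this is a minimal sketch of the binary PSO (BPSO) baseline the paper compares against, applied to a toy selection problem.

# Binary PSO with the standard sigmoid update rule, on a toy knapsack-like
# selection problem (values, costs, and parameters are invented).
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_bits = 20, 12
value = rng.uniform(0, 1, n_bits)
cost = rng.uniform(0, 1, n_bits)
budget = 3.0

def fitness(bits):
    total_cost = np.sum(cost * bits)
    return np.sum(value * bits) if total_cost <= budget else -total_cost

X = rng.integers(0, 2, size=(n_particles, n_bits))
V = np.zeros((n_particles, n_bits))
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
gbest = pbest[np.argmax(pbest_f)].copy()

for it in range(200):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = (rng.random(V.shape) < 1 / (1 + np.exp(-V))).astype(int)  # sigmoid rule
    f = np.array([fitness(x) for x in X])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmax(pbest_f)].copy()
print("best selection:", gbest, "value:", fitness(gbest))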
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and optimized training samples. Building extraction plays an important role in urban construction and planning, but several negative effects reduce the accuracy of extraction, such as limited resolution, poor correction, and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A strategy for setting the SSAE network structure is given, along with a scheme for choosing the number and proportion of training samples for better training of the SSAE. With the optical data and DSM combined as input to the optimized SSAE, and after training on the optimized samples, the network can extract buildings with high accuracy and good robustness.
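A minimal sketch of an SSAE pipeline of the kind described: unsupervised pretraining of sparse encoder layers on combined optical + DSM patches, then a supervised classifier head. Layer sizes, the sparsity weight, and the patch layout are assumptions, not the paper's settings.

# Stacked sparse autoencoder sketch with Keras; data arrays are stand-ins.
import numpy as np
from tensorflow import keras

n_features = 4 * 9 * 9      # e.g., RGB + DSM patches of 9x9 pixels (assumed)
sparsity = keras.regularizers.l1(1e-4)   # activity penalty enforcing sparsity

encoder = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(256, activation="sigmoid", activity_regularizer=sparsity),
    keras.layers.Dense(64, activation="sigmoid", activity_regularizer=sparsity),
])
decoder = keras.Sequential([
    keras.layers.Dense(256, activation="sigmoid"),
    keras.layers.Dense(n_features, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, n_features)     # stand-in training patches
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)  # unsupervised pretrain

# Supervised fine-tuning: encoder + softmax head for building / non-building
classifier = keras.Sequential([encoder, keras.layers.Dense(2, activation="softmax")])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
y = np.random.randint(0, 2, 1000)        # stand-in labels
classifier.fit(X, y, epochs=5, batch_size=32, verbose=0)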
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for a specified reaction, together with a single distribution of flux values for all the reactions present that achieves this maximum value. However, it is well known that uncertainty in reaction networks due to branches, cycles, and experimental errors results in a large number of combinations of internal reaction fluxes that can achieve the same optimal flux value. In this work, we have modified the linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set) that represents the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, the new method is shown to obtain higher coverage under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions, all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has thus been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective. This methodology can achieve high coverage of the possible flux space and can be used with and without the linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that, in order to achieve maximal succinic acid production, CO₂ must be taken into the system; solutions involving release of CO₂ all give sub-optimal succinic acid production.
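A toy sketch of repeated poling-based FBA: after each solve, an inverse-distance penalty pushes the next flux distribution away from previous ones while the target flux is still rewarded. The two-metabolite network and penalty weight are assumptions; the branch makes the optimum degenerate, which is exactly what poling is meant to expose.

# Repeated FBA with a poling penalty on a toy branched network (sketch).
import numpy as np
from scipy.optimize import minimize

S = np.array([[1.0, -1.0, -1.0, 0.0],        # toy stoichiometry: one branch
              [0.0,  1.0,  1.0, -1.0]])      # and one merge metabolite
bounds = [(0, 10)] * 4
c = np.array([0.0, 0.0, 0.0, 1.0])           # maximize the output flux v3

solutions = []
def objective(v, weight=0.5, eps=1e-3):
    # inverse-distance poling term pushes v away from earlier samples
    poling = sum(1.0 / (np.sum((v - s) ** 2) + eps) for s in solutions)
    return -(c @ v) + weight * poling

cons = {"type": "eq", "fun": lambda v: S @ v}    # steady-state mass balances
for k in range(5):
    v0 = np.random.default_rng(k).uniform(0, 10, 4)
    res = minimize(objective, v0, bounds=bounds, constraints=[cons], method="SLSQP")
    solutions.append(res.x)
    print("sample %d: fluxes %s, target flux %.2f" % (k, np.round(res.x, 2), res.x[3]))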
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework for the optimal image fusion problem. A spatial-domain fusion algorithm is given, with a tissue membrane system of multiple cells as its computing framework. Based on the multicellular structure and the inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied in comparison with several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial-domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be used efficiently for image fusion.