Sample records for optimal linear combination

  1. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated by MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. The method results from an appropriate combination of two well-known optimization methods: the MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring together with nonlinear parameters in the same optimization problem. The GA-NR optimizer combines the GA method with the Newton-Raphson (NR) method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
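
    A minimal sketch of the separable least-squares idea behind GA-MLR, assuming a biexponential decay model; the toy genetic algorithm, the synthetic data, and all parameter values below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic biexponential decay: y = a1*exp(-t/tau1) + a2*exp(-t/tau2) + noise
t = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 4.0) + rng.normal(0, 0.01, t.size)

def residual_ss(taus):
    # Linear step (the "MLR" part): amplitudes are solved exactly by least
    # squares, so only the nonlinear lifetimes are searched by the GA.
    basis = np.exp(-t[:, None] / taus[None, :])          # shape (200, 2)
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.sum((y - basis @ amps) ** 2)

# Toy GA over the nonlinear parameters (tau1, tau2)
pop = rng.uniform(0.1, 10.0, size=(40, 2))
for _ in range(100):
    fitness = np.array([residual_ss(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]              # truncation selection
    children = parents[rng.integers(0, 10, 30)] * rng.normal(1.0, 0.1, (30, 2))
    pop = np.vstack([parents, np.clip(children, 0.05, 20.0)])

best = pop[np.argmin([residual_ss(p) for p in pop])]
print("recovered lifetimes:", np.sort(best))             # ~ [0.8, 4.0]
```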

  2. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications in refinery production planning and batch process scheduling problems are presented. PMID:21935263
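
    As a hedged illustration of the simplest case above, the sketch below forms the robust counterpart of one uncertain constraint under an interval (box) uncertainty set; the coefficients and bounds are hypothetical, and scipy.optimize.linprog stands in for whatever solver one prefers.

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  s.t.  a1*x1 + a2*x2 <= 10  for ALL a in the box
# a1 in [2 - 0.5, 2 + 0.5], a2 in [1 - 0.3, 1 + 0.3], with x >= 0
a_nom = np.array([2.0, 1.0])
delta = np.array([0.5, 0.3])

# Interval-set robust counterpart: a'x + delta'|x| <= b; since x >= 0,
# the worst case is simply the coefficient vector a_nom + delta.
res = linprog(c=[-3.0, -2.0],                    # linprog minimizes
              A_ub=[(a_nom + delta).tolist()],
              b_ub=[10.0],
              bounds=[(0, None), (0, None)])
print("robust solution:", res.x, "objective:", -res.fun)
```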

  3. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference-of-convex-functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data are generated from a semiparametric generalized linear model, just as the smoothed AUC method (SAUC) does. Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations, and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
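
    The following sketch contrasts the two quantities the abstract discusses: the empirical AUC of a marker score and one plausible form of a ramp surrogate for it. The data are synthetic, and this is not the RAUC optimization algorithm itself, only the loss it approximates.

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    # Fraction of (case, control) pairs ranked correctly (ties count 1/2).
    diff = scores_pos[:, None] - scores_neg[None, :]
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

def ramp_auc_loss(scores_pos, scores_neg, s=1.0):
    # Ramp surrogate: a hinge on the pairwise score difference, clipped at 1,
    # so each mis-ranked pair contributes at most 1 -- the loss stays bounded.
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.clip(1.0 - diff / s, 0.0, 1.0).mean()

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 50)   # hypothetical marker scores, cases
neg = rng.normal(0.0, 1.0, 50)   # hypothetical marker scores, controls
print("AUC:", empirical_auc(pos, neg), "ramp loss:", ramp_auc_loss(pos, neg))
```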

  4. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.

  5. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that helps people carry out scientific management. GAMS is an advanced simulation and optimization modeling language that combines large, complex mathematical programs, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of the power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
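
    A hedged, GAMS-free sketch of the kind of generation-investment LP the paper describes, using scipy.optimize.linprog; the plant types, costs, and demand figures are invented for illustration.

```python
from scipy.optimize import linprog

# Choose installed capacity (MW) of two hypothetical plant types to meet
# peak demand at minimum total cost. All numbers are illustrative only.
cost = [120.0, 95.0]            # $k per MW: [gas, coal]
A_ub = [[-1.0, -1.0],           # total capacity must cover 500 MW peak demand
        [0.0, 1.0]]             # coal capacity capped at 300 MW (siting limit)
b_ub = [-500.0, 300.0]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print("installed MW:", res.x, "total cost ($k):", res.fun)
```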

  6. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).

  7. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  8. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). Simulations show that PSOGA can improve on the solutions obtained by PSO alone.
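
    A minimal sketch of the PSOGA idea on a toy transportation problem: a basic PSO with a GA-style mutation step, with constraint violations handled by a penalty term. The swarm parameters, unit costs, and supplies/demands are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = np.array([[4., 6., 9.], [5., 3., 8.]])   # unit costs, 2 supplies x 3 demands
supply = np.array([40., 60.])
demand = np.array([30., 40., 30.])

def penalized_cost(x):
    # x: (2, 3) shipment matrix; penalize supply/demand imbalance.
    pen = (np.abs(x.sum(axis=1) - supply).sum()
           + np.abs(x.sum(axis=0) - demand).sum())
    return (cost * x).sum() + 1e3 * pen

n_particles, iters = 40, 300
pos = rng.uniform(0, 40, size=(n_particles, 2, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([penalized_cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, None)
    # GA-style mutation: perturb a few particles to escape local optima.
    mut = rng.random(n_particles) < 0.1
    pos[mut] += rng.normal(0, 2.0, size=pos[mut].shape)
    pos = np.clip(pos, 0, None)
    vals = np.array([penalized_cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best penalized cost:", penalized_cost(gbest))
```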

  9. Simultaneous Optimization of Decisions Using a Linear Utility Function.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1990-01-01

    An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)

  10. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.

  11. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
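
    A small numerical sketch of the core linear-combination step: given kernels tabulated at reference temperatures, the L2-optimal coefficients for an intermediate temperature follow from a single least-squares solve. The Gaussian "kernel" and all temperatures below are stand-ins, not the actual Doppler broadening kernel.

```python
import numpy as np

# Doppler-like toy kernel: a Gaussian whose width grows with sqrt(T).
x = np.linspace(-5.0, 5.0, 400)
def kernel(T):
    w = 0.3 * np.sqrt(T / 300.0)
    return np.exp(-0.5 * (x / w) ** 2) / (w * np.sqrt(2 * np.pi))

T_refs = [300.0, 600.0, 1200.0, 2400.0]             # assumed reference temps
A = np.stack([kernel(T) for T in T_refs], axis=1)   # (400, 4) basis of kernels

# L2-optimal combination coefficients for an intermediate temperature:
T_target = 900.0
coef, *_ = np.linalg.lstsq(A, kernel(T_target), rcond=None)
approx = A @ coef
print("coefficients:", coef)
print("max abs reconstruction error:", np.abs(approx - kernel(T_target)).max())
```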

  12. Efficient Transition State Optimization of Periodic Structures through Automated Relaxed Potential Energy Surface Scans.

    PubMed

    Plessow, Philipp N

    2018-02-13

    This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach in molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions rely almost exclusively on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths in Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.

  13. A Nonlinear Physics-Based Optimal Control Method for Magnetostrictive Actuators

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.

    1998-01-01

    This paper addresses the development of a nonlinear optimal control methodology for magnetostrictive actuators. At moderate to high drive levels, the output from these actuators is highly nonlinear and contains significant magnetic and magnetomechanical hysteresis. These dynamics must be accommodated by models and control laws to utilize the full capabilities of the actuators. A characterization based upon ferromagnetic mean field theory provides a model which accurately quantifies both transient and steady state actuator dynamics under a variety of operating conditions. The control method consists of a linear perturbation feedback law used in combination with an optimal open loop nonlinear control. The nonlinear control incorporates the hysteresis and nonlinearities inherent to the transducer and can be computed offline. The feedback control is constructed through linearization of the perturbed system about the optimal system and is efficient for online implementation. As demonstrated through numerical examples, the combined hybrid control is robust and can be readily implemented in linear PDE-based structural models.

  14. Biomarker selection for medical diagnosis using the partial area under the ROC curve

    PubMed Central

    2014-01-01

    Background A biomarker is usually used as a diagnostic or assessment tool in medical research. Finding an ideal biomarker is not easy and combining multiple biomarkers provides a promising alternative. Moreover, some biomarkers based on the optimal linear combination do not have enough discriminatory power. As a result, the aim of this study was to find the significant biomarkers based on the optimal linear combination maximizing the pAUC for assessment of the biomarkers. Methods Under the binormality assumption we obtain the optimal linear combination of biomarkers maximizing the partial area under the receiver operating characteristic curve (pAUC). Related statistical tests are developed for the assessment of a biomarker set and of an individual biomarker. Stepwise biomarker selections are introduced to identify those biomarkers of statistical significance. Results Results from a simulation study and three real examples (Duchenne muscular dystrophy, heart disease, and breast tissue) show that our methods are well suited to biomarker selection for data sets with a moderate number of biomarkers. Conclusions Our proposed biomarker selection approaches can be used to find the significant biomarkers based on hypothesis testing. PMID:24410929

  15. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicate that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  16. On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites

    ERIC Educational Resources Information Center

    Penev, Spiridon; Raykov, Tenko

    2006-01-01

    A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…

  17. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Dall'Anese, Emiliano

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus voltages, line currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally affordable optimization and control applications -- from advanced distribution management system settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.

  18. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T -- namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.

  19. Optimal second order sliding mode control for linear uncertain systems.

    PubMed

    Das, Madhulika; Mahanta, Chitralekha

    2014-11-01

    In this paper an optimal second order sliding mode controller (OSOSMC) is proposed to track a linear uncertain system. The optimal controller, based on the linear quadratic regulator method, is designed for the nominal system. An integral sliding mode controller is combined with the optimal controller to ensure robustness of the linear system, which is affected by parametric uncertainties and external disturbances. To achieve finite-time convergence of the sliding mode, a nonsingular terminal sliding surface is added to the integral sliding surface, giving rise to a second order sliding mode controller. The main advantage of the proposed OSOSMC is that the control input is substantially reduced and becomes chattering free. Simulation results confirm the superiority of the proposed OSOSMC over some existing methods. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
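
    A minimal sketch of the LQR building block used for the nominal design, via SciPy's continuous-time algebraic Riccati solver; the double-integrator plant and the weights are illustrative, and the sliding-mode layers of the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR gain for a nominal double-integrator plant (illustrative numbers).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[0.1]])        # control weighting

P = solve_continuous_are(A, B, Q, R)        # solves the algebraic Riccati eqn.
K = np.linalg.solve(R, B.T @ P)             # optimal feedback u = -K x
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```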

  20. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both when (i) nonlinear distortion alone and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on perceived audio quality.

  1. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  3. Codification of scan path parameters and development of perimeter scan strategies for 3D bowl-shaped laser forming

    NASA Astrophysics Data System (ADS)

    Tavakoli, A.; Naeini, H. Moslemi; Roohi, Amir H.; Gollo, M. Hoseinpour; Shahabad, Sh. Imani

    2018-01-01

    In the 3D laser forming process, developing an appropriate laser scan pattern for producing specimens with high quality and uniformity is critical. This study presents certain principles for developing scan paths. Seven scan path parameters are considered: (1) combined linear or curved path; (2) type of combined linear path; (3) order of scan sequences; (4) position of the start point in each scan; (5) continuous or discontinuous scan path; (6) direction of scan path; and (7) angular arrangement of combined linear scan paths. Based on these path parameters, ten combined linear scan patterns are presented. Numerical simulations show that the continuous hexagonal scan pattern, scanning from the outer to the inner path, is the optimal one. It is also observed that the position of the start point and the angular arrangement of scan paths are the most influential path parameters. Further experiments show that four scan sequences, owing to the symmetric conditions they create, enhance the height and uniformity of the bowl-shaped products. Finally, the optimized hexagonal pattern was compared with a similar circular one. For the hexagonal scan path, the distortion and the standard deviation relative to the edge height of the formed specimen are very low, and the edge height increases significantly compared to the circular scan path despite the shorter scan length. As a result, the four-sequence hexagonal scan pattern is proposed as the optimized perimeter scan path for producing bowl-shaped products.

  4. Interactive optimization approach for optimal impulsive rendezvous using primer vector and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin

    2010-08-01

    In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The optimization approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. In this approach, adding a midcourse impulse is determined by an interactive method, i.e. observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. The interactive approach is evaluated in three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous, and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design, and that it can guarantee solutions satisfying Lawden's necessary optimality conditions.

  5. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator

    NASA Astrophysics Data System (ADS)

    Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of the magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.

  6. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; and (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to a wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the receiver operating characteristic curve. A data set on Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
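
    A hedged sketch of leave-one-pair-out cross-validation for a combination rule: each (case, control) pair is held out, the combiner is refit on the rest, and the held-out pair is scored; the fraction of correctly ranked pairs estimates the AUC. Logistic regression stands in here for the combination methods reviewed, and the two-marker data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical two-marker data: 20 diseased (y=1), 20 healthy (y=0).
X = np.vstack([rng.normal(1.0, 1.0, (20, 2)), rng.normal(0.0, 1.0, (20, 2))])
y = np.array([1] * 20 + [0] * 20)
pos_idx, neg_idx = np.where(y == 1)[0], np.where(y == 0)[0]

correct = 0
for i in pos_idx:                      # leave one (case, control) pair out
    for j in neg_idx:
        keep = np.setdiff1d(np.arange(len(y)), [i, j])
        model = LogisticRegression().fit(X[keep], y[keep])
        s = model.decision_function(X[[i, j]])
        correct += s[0] > s[1]         # is the held-out pair ranked correctly?

print("LOPO cross-validated AUC:", correct / (len(pos_idx) * len(neg_idx)))
```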

  7. [Study on the early detection of Sclerotinia of Brassica napus based on combinational-stimulated bands].

    PubMed

    Liu, Fei; Feng, Lei; Lou, Bing-gan; Sun, Guang-ming; Wang, Lian-ping; He, Yong

    2010-07-01

    Combinational-stimulated bands were used to develop linear and nonlinear calibrations for the early detection of Sclerotinia of oilseed rape (Brassica napus L.). Eighty healthy and 100 Sclerotinia leaf samples were scanned, and different preprocessing methods combined with the successive projections algorithm (SPA) were applied to develop partial least squares (PLS) discriminant models, multiple linear regression (MLR) models, and least squares-support vector machine (LS-SVM) models. The results indicated that the optimal full-spectrum PLS models were achieved by direct orthogonal signal correction (DOSC), followed by De-trending and raw spectra, with correct recognition ratios of 100%, 95.7% and 95.7%, respectively. When using combinational-stimulated bands, the optimal linear models were SPA-MLR (DOSC) and SPA-PLS (DOSC), with correct recognition ratios of 100%. All SPA-LSSVM models using DOSC, De-trending, and raw spectra achieved perfect recognition of 100%. The overall results demonstrated that it is feasible to use combinational-stimulated bands for the early detection of Sclerotinia of oilseed rape, and that DOSC-SPA is a powerful method for informative wavelength selection. This approach offers a new route toward early detection and portable monitoring instruments for Sclerotinia.

  8. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Householder transformations and optimal linear combinations

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.; Smiley, W., III

    1974-01-01

    Several theorems related to the Householder transformation and separability criteria are proven. Orthogonal transformations, topology, divergence, mathematical matrices, and group theory are discussed.

  10. Combined control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.

    1989-01-01

    An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.

  11. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model-based design optimization leads to robust solutions only if the statistical deviations of design, load, and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables, for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps, so that Monte Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.
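
    A compact sketch of the response-surface-plus-Monte-Carlo step: fit a quadratic surrogate to a few evaluations of an "expensive" model, then propagate parameter scatter through the cheap surrogate. The actuator function, tolerances, and threshold below are invented placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def actuator_force(gap, current):
    # Stand-in for an expensive FE evaluation (illustrative closed form).
    return current**2 / (0.5 + gap)**2

# Fit a quadratic response surface to a small design-of-experiments sample.
g = rng.uniform(0.1, 1.0, 30)
c = rng.uniform(0.5, 2.0, 30)
F = actuator_force(g, c)
X = np.column_stack([np.ones(30), g, c, g*c, g**2, c**2])
beta, *_ = np.linalg.lstsq(X, F, rcond=None)

# Monte Carlo on the cheap surrogate: propagate manufacturing tolerances.
gs = rng.normal(0.5, 0.05, 100_000)           # gap tolerance (assumed)
cs = rng.normal(1.2, 0.10, 100_000)           # coil-current spread (assumed)
Xs = np.column_stack([np.ones(gs.size), gs, cs, gs*cs, gs**2, cs**2])
Fs = Xs @ beta
print("P(force >= 1.5):", (Fs >= 1.5).mean()) # reliability-style estimate
```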

  12. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  13. Simultaneous structural and control optimization via linear quadratic regulator eigenstructure assignment

    NASA Technical Reports Server (NTRS)

    Becus, G. A.; Lui, C. Y.; Venkayya, V. B.; Tischler, V. A.

    1987-01-01

    A method for simultaneous structural and control design of large flexible space structures (LFSS) to reduce vibration generated by disturbances is presented. Desired natural frequencies and damping ratios for the closed-loop system are achieved by using a combination of linear quadratic regulator (LQR) synthesis and numerical optimization techniques. The state and control weighting matrices (Q and R) are expressed in terms of structural parameters such as mass and stiffness. The design parameters are selected by numerical optimization so as to minimize the weight of the structure and to achieve the desired closed-loop eigenvalues. An illustrative example of the design of a two-bar truss is presented.

  14. Design, Optimization, and Evaluation of Integrally-Stiffened Al-2139 Panel with Curved Stiffeners

    NASA Technical Reports Server (NTRS)

    Havens, David; Shiyekar, Sandeep; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2011-01-01

    A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel is representative of a large wing engine pylon rib and was optimized for minimum mass subjected to three combined load cases. The optimization included constraints on web buckling, material yielding, crippling or local stiffener failure, and damage tolerance using a new analysis tool named EBF3PanelOpt. Testing was performed for the critical combined compression-shear loading configuration. The panel was loaded beyond initial buckling, and strains and out-of-plane displacements were extracted from a total of 20 strain gages and 6 linear variable displacement transducers. The VIC-3D system was utilized to obtain full field displacements/strains in the stiffened side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis. The experimental data were also compared with linear elastic finite element results of the panel/test-fixture assembly. Overall, the panel buckled very near to the predicted load in the web regions.

  15. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by means of the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized, besides expectation and risk. The interactive approach is illustrated with a practical example.

  16. Optimization benefits analysis in production process of fabrication components

    NASA Astrophysics Data System (ADS)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem for the part and service department at PT. United Tractors Pandu Engineering (abbreviated PT. UTPE) is the optimization of the combination of fabrication component products (known as liner plates), which influences the profit the company will obtain. The liner plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimal production of these fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using the QM for Windows software to obtain the optimal combination of fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total combined production of 71 units per product variant per month.

  17. A Test of a Linear Programming Model as an Optimal Solution to the Problem of Combining Methods of Reading Instruction

    ERIC Educational Resources Information Center

    Mills, James W.; And Others

    1973-01-01

    The study reported here tested an application of the linear programming model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)

  18. Determination of optimum values for maximizing the profit in bread production: Daily bakery Sdn Bhd

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Sim, Raymond

    2015-02-01

    An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear. ILP has many applications in industrial production, including job-shop modelling. A typical objective is to maximize total production without exceeding the available resources; in some cases this can be expressed as a linear program, but with variables constrained to be integers. ILP is concerned with the optimization of a linear function subject to a set of linear equality and inequality constraints and integer restrictions, and it has been used to solve optimization problems in many industries, such as banking, nutrition, agriculture, and bakeries. The main purpose of this study is to formulate the best combination of ingredients for producing different types of bread at Daily Bakery in order to obtain maximum profit. The study also focuses on sensitivity analysis with respect to changes in the profit and the cost of each ingredient. The optimum result obtained from the QM software is RM 65,377.29 per day. This study will benefit Daily Bakery and other similar businesses: by formulating the ingredient make-up of each product, they can easily determine their total daily profit from bread production.
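
    A small product-mix ILP in the spirit of the study, sketched with scipy.optimize.milp (available in SciPy 1.9+); the bread types, ingredient usages, stocks, and profits are made-up numbers, not the bakery's data.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize profit over two hypothetical bread types subject to ingredient
# stocks (all numbers illustrative, not from the study).
profit = np.array([1.5, 2.2])           # RM per loaf: [white, wholemeal]
A = np.array([[0.4, 0.5],               # kg flour per loaf
              [0.05, 0.08]])            # kg butter per loaf
stock = np.array([200.0, 30.0])         # kg available per day

res = milp(c=-profit,                                   # milp minimizes
           constraints=LinearConstraint(A, ub=stock),
           integrality=np.ones(2),                      # whole loaves only
           bounds=Bounds(0, np.inf))
print("loaves per day:", res.x, "profit (RM):", -res.fun)
```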

  19. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.

  20. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  1. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
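
    A toy version of the hybrid idea for an RBF network: the linear output weights are obtained exactly by a least-squares (SVD-based) solve inside each iteration, so gradient descent only has to handle the nonlinear basis parameter. The model, data, and step sizes are illustrative, not the paper's integrated routine.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-3.0, 3.0, 120)
y = np.sin(2 * x) + 0.05 * rng.normal(size=x.size)    # toy regression target

centers = np.linspace(-3.0, 3.0, 8)                   # fixed RBF centers

def design(log_w):
    # RBF design matrix for a shared width w = exp(log_w).
    w = np.exp(log_w)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * w**2))

def sse(log_w):
    # Linear weights solved exactly by least squares (SVD under the hood),
    # so the training error is a function of the nonlinear width alone.
    Phi = design(log_w)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sum((y - Phi @ a) ** 2)

log_w, lr, eps = np.log(0.5), 0.01, 1e-4
for _ in range(200):
    # Gradient step on the single nonlinear parameter (finite differences).
    grad = (sse(log_w + eps) - sse(log_w - eps)) / (2 * eps)
    log_w -= lr * grad

print("trained width:", np.exp(log_w), "SSE:", sse(log_w))
```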

  2. Design, Optimization and Evaluation of Integrally Stiffened Al 7050 Panel with Curved Stiffeners

    NASA Technical Reports Server (NTRS)

    Slemp, Wesley C. H.; Bird, R. Keith; Kapania, Rakesh K.; Havens, David; Norris, Ashley; Olliffe, Robert

    2011-01-01

    A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel was optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis tool named EBF3PanelOpt. The panel was designed for a combined compression-shear loading configuration that is a realistic load case for a typical aircraft wing panel. The panel was loaded beyond buckling and strains and out-of-plane displacements were measured. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis and linear elastic finite element analysis of the panel/test-fixture assembly. The numerical results indicated that the panel buckled at the linearly elastic buckling eigenvalue predicted for the panel/test-fixture assembly. The experimental strains prior to buckling compared well with both the linear and nonlinear finite element model.

  3. An approach of traffic signal control based on NLRSQP algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the weighted total queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are presented to study how to set the initial solution of the algorithm so as to reach a better local optimal solution more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution is, the better the local optimal solution that can be obtained.

  4. Robust stability of bidirectional associative memory neural networks with time delays

    NASA Astrophysics Data System (ADS)

    Park, Ju H.

    2006-01-01

    Based on Lyapunov-Krasovskii functionals combined with the linear matrix inequality (LMI) approach, a novel stability criterion is proposed for the asymptotic stability of bidirectional associative memory neural networks with time delays. The delay-dependent stability criterion is given in terms of linear matrix inequalities, which can be solved easily by various optimization algorithms.
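
    For a feel of the LMI machinery, here is a hedged sketch using cvxpy (an assumed tool, not named in the record) on the simplest delay-free Lyapunov inequality; the paper's delay-dependent criterion is a more involved LMI of the same kind.

```python
import numpy as np
import cvxpy as cp

# Certify asymptotic stability of dx/dt = A x by finding P > 0 with
# A^T P + P A < 0 -- the basic LMI that delay-dependent criteria generalize.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]     # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility
prob.solve()
print("LMI feasible:", prob.status == "optimal")
print("P =\n", P.value)
```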

  5. Aerospace applications of integer and combinatorial optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in solving combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on a large space structure and is expressed as a mixed/integer linear programming problem with more than 1500 design variables.

  6. Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System

    NASA Astrophysics Data System (ADS)

    Agarwal, Ruchi; Singh, Sanjeev

    2017-12-01

    The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high-voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The design and modeling of the complete system are discussed in detail and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions, such as load perturbation, non-linear load, voltage sag, and a tapped load fault under a one-phase-open condition at both points of common coupling.

  7. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    PubMed

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is known to be efficient for nonlinear optimization problems with high-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear and nonlinear conjugate gradient methods through a restart strategy, in order to exploit the advantages of each and compensate for their respective disadvantages. A quadratic penalty method is adopted to impose a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast, and that it performs better than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.
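
    A minimal sketch of the two ingredients named here, a restarted nonlinear conjugate gradient and a quadratic penalty for nonnegativity, applied to a toy least-squares problem. This is not the paper's FMT-specific implementation; A, b, the penalty weight, and the restart period are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 20))
    x_true = np.abs(rng.normal(size=20))
    b = A @ x_true
    mu = 10.0  # penalty weight on negative entries (assumed)

    def f(x):
        return np.sum((A @ x - b) ** 2) + mu * np.sum(np.minimum(x, 0) ** 2)

    def grad(x):
        # gradient of the data term plus the quadratic nonnegativity penalty
        return 2 * A.T @ (A @ x - b) + 2 * mu * np.minimum(x, 0.0)

    def line_search(x, d, g, t=1.0, beta=0.5, c=1e-4):
        f0 = f(x)
        while True:                      # simple Armijo backtracking
            xn = x + t * d
            if f(xn) <= f0 + c * t * (g @ d) or t < 1e-12:
                return xn
            t *= beta

    x = np.zeros(20)
    g = grad(x); d = -g
    for k in range(200):
        x_new = line_search(x, d, g)
        g_new = grad(x_new)
        if k % 20 == 19:                 # periodic restart to steepest descent
            beta_pr = 0.0
        else:                            # Polak-Ribiere coefficient
            beta_pr = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta_pr * d
        x, g = x_new, g_new
    print(np.linalg.norm(x - x_true))
    ```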

  8. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can handle both classification and regression, in linear form or through nonlinear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data; to classify data that are not linearly separable, it uses the kernel trick to map the data into a higher-dimensional feature space in which they become linearly separable. The kernel trick can employ various kernel functions, such as linear, polynomial, radial basis function (RBF), and sigmoid kernels, and each function has parameters that affect the accuracy of SVM classification. To address this, genetic algorithms are applied as a search method for the optimal parameter values, thereby increasing the classification accuracy of SVM. The data are taken from the UCI Machine Learning Repository (Australian Credit Approval). The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy: genetic algorithms systematically find optimal kernel parameters for SVM, instead of relying on randomly selected ones. The best accuracies improved from the baselines of 85.12% (linear kernel), 81.76% (polynomial), 77.22% (RBF), and 78.70% (sigmoid). For larger data sets, however, the method is not practical because it takes a long time.
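
    A compact sketch of the GA-over-(C, gamma) idea using scikit-learn. The dataset is swapped for a bundled one (the paper uses the UCI Australian Credit Approval data), and the population size, mutation scale, and elitism rule are arbitrary choices, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer      # stand-in dataset
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(42)

    def fitness(ind):
        C, gamma = 10.0 ** ind           # individuals live in log10 space
        clf = SVC(C=C, gamma=gamma, kernel="rbf")
        return cross_val_score(clf, X, y, cv=3).mean()

    pop = rng.uniform([-2, -6], [3, 0], size=(12, 2))    # log10 C, log10 gamma
    for gen in range(10):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-4:]]             # keep the best 4
        children = elite[rng.integers(0, 4, size=8)] + rng.normal(0, 0.3, (8, 2))
        pop = np.vstack([elite, children])               # elitism + mutation
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("log10 C, log10 gamma:", best)
    ```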

  9. Intelligent Distributed Systems

    DTIC Science & Technology

    2015-10-23

    periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as

  10. Aerospace Applications of Integer and Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.

  11. Aerospace applications on integer and combinatorial optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.

  12. Solid phase microextraction of diclofenac using molecularly imprinted polymer sorbent in hollow fiber combined with fiber optic-linear array spectrophotometry.

    PubMed

    Pebdani, Arezou Amiri; Shabani, Ali Mohammad Haji; Dadfarnia, Shayessteh; Khodadoust, Saeid

    2015-08-05

    A simple solid phase microextraction method based on a molecularly imprinted polymer sorbent in a hollow fiber (MIP-HF-SPME), combined with a fiber optic-linear array spectrophotometer, has been applied for the extraction and determination of diclofenac in environmental and biological samples. The effects of different parameters, such as pH, extraction time, type and volume of the organic solvent, stirring rate, and donor phase volume, on the extraction efficiency of diclofenac were investigated and optimized. Under the optimal conditions, the calibration graph was linear (r² = 0.998) in the range of 3.0-85.0 μg L⁻¹, with a detection limit of 0.7 μg L⁻¹ for preconcentration of 25.0 mL of sample and a relative standard deviation (n = 6) of less than 5%. The method was applied successfully to the extraction and determination of diclofenac in different matrices (water, urine, and plasma), and its accuracy was examined through recovery experiments.
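
    For readers unfamiliar with where figures such as r² = 0.998 and a 0.7 μg L⁻¹ detection limit come from, a sketch of the standard calibration arithmetic on invented standards, taking the detection limit as 3.3 times the residual standard deviation over the slope (one common convention, not necessarily the paper's):

    ```python
    import numpy as np

    # Fit a straight calibration line to invented standards and derive r^2
    # and an estimated limit of detection (LOD).
    conc = np.array([3.0, 10.0, 25.0, 45.0, 65.0, 85.0])   # ug/L standards
    signal = 0.012 * conc + 0.05 + np.random.default_rng(2).normal(0, 0.005, 6)

    slope, intercept = np.polyfit(conc, signal, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((signal - pred) ** 2)
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    sd_res = np.sqrt(ss_res / (len(conc) - 2))   # residual SD, 2 fit params
    lod = 3.3 * sd_res / slope
    print(f"r^2 = {r2:.4f}, LOD ~ {lod:.2f} ug/L")
    ```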

  13. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations, and sequential linear programming.

  14. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used to model the complex samples. The results showed that MCUVE can both extract informative variables effectively and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieve more accurate results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
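
    A basic form of the successive projections algorithm (SPA) named here is short enough to sketch. The spectra matrix is random stand-in data; real MCUVE/SPA pipelines add stopping criteria and validation that this omits.

    ```python
    import numpy as np

    def spa(X, n_select, start=0):
        """Successive projections algorithm (basic form): greedily pick the
        column with the largest norm in the orthogonal complement of the
        columns already chosen, which limits collinearity among the
        selected wavelengths."""
        X = X.astype(float).copy()
        selected = [start]
        for _ in range(n_select - 1):
            v = X[:, selected[-1]]
            # project every column onto the complement of the last pick
            proj = X - np.outer(v, v @ X) / (v @ v)
            proj[:, selected] = 0.0
            selected.append(int(np.argmax(np.linalg.norm(proj, axis=0))))
            X = proj
        return selected

    spectra = np.random.default_rng(1).normal(size=(50, 200))  # toy NIR matrix
    print(spa(spectra, n_select=5))
    ```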

  15. A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles

    NASA Technical Reports Server (NTRS)

    Eldred, C. H.; Gordon, S. V.

    1976-01-01

    A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.

  16. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  17. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  18. Time-optimal Aircraft Pursuit-evasion with a Weapon Envelope Constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including a realistic weapon envelope, is analyzed using differential game theory. Sixth-order nonlinear point-mass vehicle models are employed, and an arbitrary weapon envelope geometry is allowed. The performance index is a linear combination of flight time and the square of the vehicle acceleration. A closed-form solution to this high-order differential game is then obtained using feedback linearization. The solution takes the form of a feedback guidance law together with a quartic polynomial for time-to-go. Due to its modest computational requirements, this nonlinear guidance law is useful for on-board real-time implementation.
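
    A plausible written-out form of such a performance index (the weights and notation below are assumptions for illustration, not taken from the paper):

    ```latex
    J = \int_{0}^{t_f} \left( c_1 + c_2\, \lVert \mathbf{a}(t) \rVert^{2} \right) dt
    ```

    with t_f the final time, a(t) the vehicle acceleration, and c_1, c_2 >= 0 trading flight time against control effort.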

  19. Using linear programming to minimize the cost of nurse personnel.

    PubMed

    Matthews, Charles H

    2005-01-01

    Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered at the lowest possible cost to the department. Linear programming can be implemented in standard spreadsheet software: the operator establishes the variables to be optimized and then enters a series of constraints, each of which has an impact on the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. A sensitivity analysis can then be performed to assess how sensitive the outcome is to adding a nurse to, or deleting a nurse from, the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNA), three licensed practical nurses (LPN), and five registered nurses (RN). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs, with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Force each level of nurse to be optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
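
    A toy version of this staffing LP with scipy.optimize.linprog. All hour and cost figures below are invented (the report's actual data differ); the upper bounds reflect the 5/3/5 staff pool described in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([1800.0, 1200.0, 900.0])   # weekly cost per RN, LPN, CNA

    # Hours each nurse type contributes per week to three task categories
    hours = np.array([[20, 10,  0],    # assessments: RN-heavy
                      [10, 20, 10],    # treatments
                      [ 5, 10, 25]])   # basic care: CNA-heavy
    demand = np.array([110, 160, 150])  # required weekly hours per category

    # linprog minimizes c @ x s.t. A_ub @ x <= b_ub, so negate the coverage
    res = linprog(c=cost, A_ub=-hours, b_ub=-demand,
                  bounds=[(0, 5), (0, 3), (0, 5)])
    print(res.x, res.fun)  # fractional staff; round up or use a MILP in practice
    ```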

  20. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

    A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth, optimized for speed of control system response, to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.

  1. The feasibility of manual parameter tuning for deformable breast MR image registration from a multi-objective optimization perspective.

    PubMed

    Pirpinia, Kleopatra; Bosman, Peter A N; Loo, Claudette E; Winter-Warnars, Gonneke; Janssen, Natasja N Y; Scholten, Astrid N; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2017-06-23

    Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
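
    A sketch of the sampling experiment described here: draw random weight vectors for two registration objectives, pretend each weighting produces an outcome (a synthetic 2-D objective vector stands in for running the registration), and extract the Pareto-nondominated subset. The outcome model is entirely invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    w = rng.dirichlet([1.0, 1.0], size=200)       # random linear combinations
    outcomes = np.column_stack([                  # toy (similarity, smoothness)
        1.0 / (w[:, 0] + 0.1) + 0.05 * rng.normal(size=200),
        1.0 / (w[:, 1] + 0.1) + 0.05 * rng.normal(size=200),
    ])

    def pareto_mask(F):
        """True for points not dominated by any other point (minimization)."""
        mask = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if dominated.any():
                mask[i] = False
        return mask

    front = outcomes[pareto_mask(outcomes)]
    print(f"{len(front)} of {len(outcomes)} sampled weightings are Pareto-optimal")
    ```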

  2. The feasibility of manual parameter tuning for deformable breast MR image registration from a multi-objective optimization perspective

    NASA Astrophysics Data System (ADS)

    Pirpinia, Kleopatra; Bosman, Peter A. N.; Loo, Claudette E.; Winter-Warnars, Gonneke; Janssen, Natasja N. Y.; Scholten, Astrid N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2017-07-01

    Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.

  3. Magnetofluorescent nanocomposites and quantum dots used for optimal application in magnetic fluorescence-linked immunoassay.

    PubMed

    Tsai, H Y; Li, S Y; Fuh, C Bor

    2018-03-01

    Magnetofluorescent nanocomposites with optimal magnetic and fluorescent properties were prepared and characterized by combining magnetic nanoparticles (iron oxide@polymethyl methacrylate) with fluorescent nanoparticles (rhodamine 6G@mSiO₂). Experimental parameters were optimized to produce nanocomposites with high magnetic susceptibility and fluorescence intensity. The detection of a model biomarker (alpha-fetoprotein, AFP) was used to demonstrate the feasibility of applying the magnetofluorescent nanocomposites combined with quantum dots in a magnetic fluorescence-linked immunoassay. The magnetofluorescent nanocomposites enable efficient mixing, fast re-concentration, and nanoparticle quantization for optimal reactions. Biofunctional quantum dots were used to confirm the AFP content in the sandwich immunoassay after mixing and washing. The analysis time was only one third of that required by ELISA. The detection limit was 0.2 pg mL⁻¹, and the linear range was 0.68 pg mL⁻¹ to 6.8 ng mL⁻¹; this detection limit is lower, and the linear range wider, than those of ELISA and other methods. The measurements made using the proposed method differed by less than 13% from those obtained using ELISA for four AFP concentrations (0.03, 0.15, 0.75, and 3.75 ng mL⁻¹). The proposed method has considerable potential for biomarker detection in various analytical and biomedical applications. Graphical abstract: Magnetofluorescent nanocomposites combined with fluorescent quantum dots were used in magnetic fluorescence-linked immunoassay.

  4. On the stability and instantaneous velocity of grasped frictionless objects

    NASA Technical Reports Server (NTRS)

    Trinkle, Jeffrey C.

    1992-01-01

    A quantitative test for form closure valid for any number of contact points is formulated as a linear program, the optimal objective value of which provides a measure of how far a grasp is from losing form closure. Another contribution of the study is the formulation of a linear program whose solution yields the same information as the classical approach. The benefit of the formulation is that explicit testing of all possible combinations of contact interactions can be avoided by the algorithm used to solve the linear program.

  5. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is too optimistic. To adjust for this upward bias, several methods are proposed. Among them, the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, the proposed methods can be applied for variable selection, to select important diagnostic tests. The methods are examined through simulation studies and applications to three real examples.
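
    A coarse stand-in for the idea: combine two diagnostic tests linearly and search for the coefficient maximizing the empirical (Mann-Whitney) AUC. The data and the grid search over an angle are illustrative simplifications of the paper's nonparametric estimation.

    ```python
    import numpy as np

    def auc(scores, labels):
        """Empirical AUC via the Mann-Whitney statistic."""
        pos, neg = scores[labels == 1], scores[labels == 0]
        diff = pos[:, None] - neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    rng = np.random.default_rng(3)
    n = 200
    labels = rng.integers(0, 2, n)
    x1 = rng.normal(loc=labels * 0.8)   # both tests weakly informative
    x2 = rng.normal(loc=labels * 0.6)

    # Combine as cos(t)*x1 + sin(t)*x2 and grid-search the angle t
    angles = np.linspace(0, np.pi, 181)
    aucs = [auc(np.cos(t) * x1 + np.sin(t) * x2, labels) for t in angles]
    best = angles[int(np.argmax(aucs))]
    print(f"best angle {best:.2f} rad, apparent AUC {max(aucs):.3f}")
    # Note: this apparent AUC is optimistically biased; the paper's point is
    # to correct that bias, e.g. via cross-validation.
    ```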

  6. Intra-Operative Dosimetry in Prostate Brachytherapy

    DTIC Science & Technology

    2007-11-01

    of the focal spot. 2.1. Model for Reconstruction Space Transformation As illustrated in Figure 8, let A & B (with reference frames FA & FB) be the two...simplex optimization method in MATLAB 7.0 with the search space being defined by the distortion modes from PCA. A linear combination of the modes would...arm is tracked with an X-ray fiducial system called FTRAC that is composed of optimally selected polynomial

  7. Time-optimal aircraft pursuit-evasion with a weapon envelope constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Duke, E. L.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including nonlinear point-mass vehicle models and a realistic weapon envelope, is analyzed. Using a linear combination of flight time and the square of the vehicle acceleration as the performance index, a closed-form solution is obtained in nonlinear feedback form. Due to its modest computational requirements, this guidance law can be used for onboard real-time implementation.

  8. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843

  9. Multi-Window Controllers for Autonomous Space Systems

    NASA Technical Reports Server (NTRS)

    Lurie, B. J.; Hadaegh, F. Y.

    1997-01-01

    Multi-window controllers select between elementary linear controllers using nonlinear windows based on the amplitude and frequency content of the feedback error. The controllers are relatively simple to implement and perform much better than linear controllers. The commanders for such controllers only order the destination point and are freed from generating the command time-profiles. Robotic missions rely heavily on the tasks of acquisition and tracking. For autonomous and optimal control of the spacecraft, the control bandwidth must be larger while the feedback can (and, therefore, must) be reduced. Combining linear compensators via a multi-window nonlinear summer guarantees the minimum-phase character of the combined transfer function. It is shown that the solution may require using several parallel branches and windows. Several examples of multi-window nonlinear controller applications are presented.

  10. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background: Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
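
    A brute-force version of the allocation search for the three-stage nested design described here: n subjects, m occasions per subject, k measurements per occasion, with the precision of the mean given by the usual nested-variance formula and total cost following power functions of the number of units at each stage. Variance components, unit costs, exponents, and budget are all assumed values.

    ```python
    import itertools
    import numpy as np

    var_s, var_o, var_e = 1.0, 0.5, 0.8   # between-subject, occasion, error
    c = np.array([100.0, 20.0, 5.0])      # unit costs per stage (assumed)
    p = np.array([1.0, 0.8, 1.2])         # cost-function exponents (assumed)
    budget = 3000.0

    def precision_var(n, m, k):
        # variance of the estimated exposure mean in the nested model
        return var_s / n + var_o / (n * m) + var_e / (n * m * k)

    def total_cost(n, m, k):
        units = np.array([n, n * m, n * m * k], dtype=float)
        return float(np.sum(c * units ** p))

    best = min(
        (nmk for nmk in itertools.product(range(1, 40), range(1, 8), range(1, 8))
         if total_cost(*nmk) <= budget),
        key=lambda nmk: precision_var(*nmk),
    )
    print(best, precision_var(*best), total_cost(*best))
    ```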

  11. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.

  12. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks, and optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.

  13. An intuitionistic fuzzy multi-objective non-linear programming model for sustainable irrigation water allocation under the combination of dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao

    2017-12-01

    Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.

  14. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. The consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms into the design process. The approach uses quantification theory type I (QTTI), grey prediction (a linear modeling technique), and neural networks (a nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and the product form elements of personal digital assistants (PDAs). The performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although PDA form design is used as the case study, the approach is applicable to other consumer products with various design elements and product images, and it provides an effective mechanism for facilitating the consumer-oriented product design process.

  15. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. The consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms into the design process. The approach uses quantification theory type I (QTTI), grey prediction (a linear modeling technique), and neural networks (a nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and the product form elements of personal digital assistants (PDAs). The performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although PDA form design is used as the case study, the approach is applicable to other consumer products with various design elements and product images, and it provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  16. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in the data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method presented here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subsets selection with variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables.
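
    A sketch of the combinatorial core of transformation-plus-subset selection: expand each predictor with a few candidate transforms, then exhaustively pick the subset of expanded columns minimizing validation MSE. The paper's estimation technique is more elaborate; the data, transform set, and subset sizes below are assumptions.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(5)
    n = 120
    X = rng.uniform(0.1, 3.0, size=(n, 3))
    y = 2.0 * np.log(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)

    # Candidate transformations of each original predictor
    transforms = {"id": lambda v: v, "log": np.log, "sq": lambda v: v ** 2}
    cols, names = [], []
    for j in range(X.shape[1]):
        for tname, fn in transforms.items():
            cols.append(fn(X[:, j])); names.append(f"x{j}:{tname}")
    Z = np.column_stack(cols)

    train, val = np.arange(80), np.arange(80, n)
    best = (np.inf, None)
    for size in (1, 2, 3):
        for subset in itertools.combinations(range(Z.shape[1]), size):
            A = np.column_stack([np.ones(len(train)), Z[train][:, list(subset)]])
            coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
            Av = np.column_stack([np.ones(len(val)), Z[val][:, list(subset)]])
            mse = np.mean((Av @ coef - y[val]) ** 2)
            if mse < best[0]:
                best = (mse, [names[i] for i in subset])
    print(best)   # should recover x0:log and x1:sq on this toy data
    ```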

  17. Guidance and Control strategies for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Hibey, J. L.; Naidu, D. S.; Charalambous, C. D.

    1989-01-01

    A neighboring optimal guidance scheme was devised for a nonlinear dynamic system with stochastic inputs and perfect measurements, as applicable to fuel-optimal control of an aeroassisted orbital transfer vehicle. For the deterministic nonlinear dynamic system describing the atmospheric maneuver, a nominal trajectory was determined. Then, a neighboring optimal guidance scheme was obtained for open-loop and closed-loop control configurations. Taking modelling uncertainties into account, a linear, stochastic, neighboring optimal guidance scheme was devised. Finally, the optimal trajectory was approximated as the sum of the deterministic nominal trajectory and the stochastic neighboring optimal solution. Numerical results are presented for a typical vehicle. A fuel-optimal control problem in aeroassisted noncoplanar orbital transfer is also addressed. The equations of motion for the atmospheric maneuver are nonlinear, and the optimal (nominal) trajectory and control are obtained. In order to follow the nominal trajectory under actual conditions, a neighboring optimum guidance scheme is designed using linear quadratic regulator theory for onboard real-time implementation. One of the state variables, rather than time, is used as the independent variable. The weighting matrices in the performance index are chosen by a combination of a heuristic method and an optimal modal approach. The necessary feedback control law is obtained in order to minimize deviations from the nominal conditions.

  18. Aquifer Reclamation Design: The Use of Contaminant Transport Simulation Combined With Nonlinear Programing

    NASA Astrophysics Data System (ADS)

    Gorelick, Steven M.; Voss, Clifford I.; Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Wright, Margaret H.

    1984-04-01

    A simulation-management methodology is demonstrated for the rehabilitation of aquifers that have been subjected to chemical contamination. Finite element groundwater flow and contaminant transport simulation are combined with nonlinear optimization. The model is capable of determining well locations plus pumping and injection rates for groundwater quality control. Examples demonstrate linear or nonlinear objective functions subject to linear and nonlinear simulation and water management constraints. Restrictions can be placed on hydraulic heads, stresses, and gradients, in addition to contaminant concentrations and fluxes. These restrictions can be distributed over space and time. Three design strategies are demonstrated for an aquifer that is polluted by a constant contaminant source: they are pumping for contaminant removal, water injection for in-ground dilution, and a pumping, treatment, and injection cycle. A transient model designs either contaminant plume interception or in-ground dilution so that water quality standards are met. The method is not limited to these cases. It is generally applicable to the optimization of many types of distributed parameter systems.

  19. Autonomous Guidance of Agile Small-scale Rotorcraft

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Feron, Eric

    2004-01-01

    This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic: the former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle's capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. Trajectory generation was formulated as an optimization problem using mixed-integer linear programming, solved in a receding-horizon fashion. Several techniques to improve computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.

  20. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, and Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation, and this stochastic property makes them robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model that can be applied to a multiple-period situation. The equilibrium condition, using discrete-time optimal control, is then developed for fuel resource constraints. The research also discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network, and an advantage of the proposed model for merchant transmission planning. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to represent the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial-life techniques such as the Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), looking for a proper method to emulate Generation Companies' (GENCOs') bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise-staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming; the proposed algorithm is able to handle actual piecewise-staircase energy offer curves. The method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.

  1. A Field-expedient Method for Detection of Leptospirosis Causative Agents in Rodents

    DTIC Science & Technology

    2012-01-01

    carboxytetramethylrhodamine (TAMRA)) (Roche Molecular Diagnostics, Pleasanton, California).24,25 Polymerase Chain Reaction. Wet reagent LPS PCR assay...City, Utah). Primers and probe were optimized with R.A.P.I.D. wet reagents and the optimum condition was 5 mmol/L MgCl2, 400 nmol/L primers, 100 nmol...for 20 seconds of combined annealing and primer extension. Linearity and Limit of Detection. The linearity of the LPS freeze-dried assay was

  2. Optimal helicopter trajectory planning for terrain following flight

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1990-01-01

    Helicopters operating in high-threat areas have to fly close to the earth's surface to minimize the risk of being detected by adversaries. Techniques are presented for low-altitude helicopter trajectory planning. These methods are based on optimal control theory and appear to be implementable onboard in real time. Second-order necessary conditions are obtained to provide a criterion for finding the optimal trajectory when more than one extremal passes through a given point. A second trajectory planning method incorporating a quadratic performance index is also discussed. The trajectory planning problem is then formulated as a differential game, in which the objective is to synthesize optimal trajectories in the presence of an actively maneuvering adversary. Numerical methods for obtaining solutions to these problems are outlined. As an alternative to numerical methods, feedback linearizing transformations are combined with linear quadratic game results to synthesize explicit nonlinear feedback strategies for helicopter pursuit-evasion. Some of the trajectories generated from this research are evaluated on a six-degree-of-freedom helicopter simulation incorporating an advanced autopilot. The optimal trajectory planning methods presented are also useful for autonomous land vehicle guidance.

  3. Optimizing parameters of GTU cycle and design values of air-gas channel in a gas turbine with cooled nozzle and rotor blades

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Zakharov, Yu. B.

    2012-09-01

    The authors have formulated the problem of jointly optimizing the pressure and temperature of combustion products before the gas turbine, the profiles of the nozzle and rotor blades, and the cooling air flow rates through the nozzle and rotor blades. The article offers an original approach to the optimization of gas turbine blade profiles, in which the optimized profiles are represented as linear combinations of preliminarily formed basic profiles. The given examples concern optimization of the gas turbine unit with respect to power efficiency, with and without preliminary heat removal from the air flows supplied for air-gas channel cooling.
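
    A sketch of the profile parameterization described here: a candidate blade profile is a linear combination of fixed basis profiles, and the optimizer adjusts only the weights. The basis shapes and the surrogate objective below are invented stand-ins for the actual cycle/flow evaluation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    s = np.linspace(0.0, 1.0, 50)                 # chordwise coordinate
    basis = np.stack([np.sin(np.pi * s),          # preliminarily formed
                      np.sin(2 * np.pi * s),      # basic profiles (assumed)
                      s * (1 - s)])

    target = 0.8 * basis[0] + 0.1 * basis[2]      # pretend-optimal shape

    def objective(w):
        profile = w @ basis                       # linear combination
        return np.sum((profile - target) ** 2)    # surrogate for efficiency

    res = minimize(objective, x0=np.array([0.3, 0.3, 0.3]), method="BFGS")
    print(res.x)   # recovers roughly [0.8, 0.0, 0.1]
    ```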

  4. Optimization of formulation of soy-cakes baked in infrared-microwave combination oven by response surface methodology.

    PubMed

    Şakıyan, Özge

    2015-05-01

    The aim of the present work is to optimize the formulation of a functional cake (soy cake) baked in an infrared-microwave combination oven, using response surface methodology, and to optimize the processing conditions of combination baking. The independent variables were baking time (8, 9, 10 min), soy flour concentration (30, 40, 50 %), and DATEM (diacetyltartaric acid esters of monoglycerides) concentration (0.4, 0.6, 0.8 %). The quality parameters examined were specific volume, weight loss, total color change, and firmness of the cake samples. The results were analyzed by multiple regression, and the significant linear, quadratic, and interaction terms were used in a second-order mathematical model. The optimum baking time, soy flour concentration, and DATEM concentration were found to be 9.5 min, 30 %, and 0.72 %, respectively. The corresponding responses at the optimum point were comparable with those of conventionally baked soy cakes, so high-quality soy cakes can be produced in a very short time using an infrared-microwave combination oven.

  5. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H²-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
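
    For a stable closed loop dx/dt = A x + B w, a quadratic (H²-like) cost can be evaluated without time-domain simulation via a Lyapunov equation, J = trace(BᵀPB) with AᵀP + PA + Q = 0. The sketch below uses SciPy and deliberately picks a defective A (repeated eigenvalue, one eigenvector), the case where diagonalization-based evaluation breaks down; the matrices are illustrative, not from the report.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-1.0, 1.0],
                  [ 0.0, -1.0]])   # defective: Jordan block at -1
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)

    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
    # so passing (A.T, -Q) yields A^T P + P A = -Q
    P = solve_continuous_lyapunov(A.T, -Q)
    J = float(np.trace(B.T @ P @ B))
    print(J)
    ```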

  6. Optimization of a constrained linear monochromator design for neutral atom beams.

    PubMed

    Kaltenbacher, Thomas

    2016-04-01

    A focused ground-state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well-established microscopy technique. To date, even for favorable beam source conditions, a minimal focus spot size of slightly below 1 μm has been reached. This limitation is essentially given by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction-based zone plate. It is therefore important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up - a Fresnel zone plate in combination with a pinhole aperture - in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven useful, but has not been applied to neutral atom beams. The main result of this work is a set of optimal design parameters, based on models of the linear monochromator set-up followed by a second zone plate for focusing. The optimization simultaneously minimizes the focal spot size and maximizes the centre-line intensity at the detector position. The results presented in this work are for, but not limited to, a neutral helium atom beam.

  7. Modeling of thermal storage systems in MILP distributed energy resource models

    DOE PAGES

    Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...

    2014-08-04

    Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times, with loss calculations based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility to invest in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
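
    The storage dispatch structure described above (linear operating constraints plus binary investment decisions) is what keeps such models inside MILP territory. The sketch below is a deliberately tiny stand-in for a TES block of this kind, with invented prices, loads, and tank parameters; it is not DER-CAM, only an illustration of how an energy-balance and state-of-charge formulation stays linear.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy 4-hour horizon: decide whether to build a tank (binary b) and how to
    # charge/discharge it to shift purchases from expensive to cheap hours.
    price = np.array([0.10, 0.30, 0.10, 0.30])  # $/kWh (hypothetical)
    load = np.array([2.0, 2.0, 2.0, 2.0])       # kWh demand per hour
    cap, invest, eta = 4.0, 0.50, 0.95          # size, levelized $/day, efficiency

    # Variables: g[0:4] grid buy, c[4:8] charge, d[8:12] discharge, b = x[12]
    n = 13
    cost = np.concatenate([price, np.zeros(8), [invest]])
    A, lb, ub = [], [], []
    for t in range(4):
        row = np.zeros(n); row[t] = 1; row[8 + t] = 1; row[4 + t] = -1
        A.append(row); lb.append(load[t]); ub.append(load[t])   # energy balance
        soc = np.zeros(n)                      # state of charge after hour t
        soc[4:5 + t] = eta; soc[8:9 + t] = -1.0; soc[12] = -cap
        A.append(soc); lb.append(-np.inf); ub.append(0.0)       # soc <= cap * b
        soc2 = soc.copy(); soc2[12] = 0.0
        A.append(soc2); lb.append(0.0); ub.append(np.inf)       # soc >= 0

    res = milp(cost, constraints=LinearConstraint(np.array(A), lb, ub),
               integrality=np.r_[np.zeros(12), 1],              # b is binary
               bounds=Bounds(0, np.r_[np.full(12, np.inf), 1]))
    print(np.round(res.x, 2), "daily cost:", round(res.fun, 3))
    ```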

  8. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination is a key preventive measure against epidemics in a population. The host-vector model is modified to include a vaccination factor to prevent the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, obtained by minimizing the cost function and thereby reducing the size of the epidemic, is analyzed. Numerical simulations for some specific vaccination strategies are shown.
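
    A minimal way to combine a genetic algorithm with fourth-order Runge-Kutta integration, in the spirit described above, is to let the GA search a vaccination rate whose fitness is a cost computed by integrating the epidemic model with RK4. The sketch below uses a toy SIR-with-vaccination model with invented parameters and cost weights; it is not the authors' host-vector model.

    ```python
    import numpy as np

    def sir_rhs(y, u, beta=0.4, gamma=0.1):     # toy SIR with vaccination rate u
        S, I, R = y
        return np.array([-beta*S*I - u*S, beta*S*I - gamma*I, gamma*I + u*S])

    def rk4_cost(u, y0=(0.99, 0.01, 0.0), T=60.0, n=600, w=(1.0, 0.2)):
        h, y, infected, doses = T / n, np.array(y0), 0.0, 0.0
        for _ in range(n):                      # classical fourth-order RK step
            k1 = sir_rhs(y, u); k2 = sir_rhs(y + h/2*k1, u)
            k3 = sir_rhs(y + h/2*k2, u); k4 = sir_rhs(y + h*k3, u)
            y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
            infected += h * y[1]; doses += h * u * y[0]
        return w[0]*infected + w[1]*doses       # epidemic burden + vaccination cost

    rng = np.random.default_rng(0)
    pop = rng.uniform(0, 1, 20)                 # GA population of constant rates
    for gen in range(30):                       # truncation selection + mutation
        pop = pop[np.argsort([rk4_cost(u) for u in pop])][:10]
        pop = np.concatenate([pop, np.clip(pop + rng.normal(0, 0.05, 10), 0, 1)])
    best = min(pop, key=rk4_cost)
    print(f"best constant vaccination rate: {best:.3f}")
    ```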

  9. Practical synchronization on complex dynamical networks via optimal pinning control

    NASA Astrophysics Data System (ADS)

    Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu

    2015-07-01

    We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control with traditional pinning control, we propose an optimal pinning control strategy that depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.

  10. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  11. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  12. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  13. Variations of archived static-weight data and WIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; Gillmann, R.; Kent, P.M.

    1998-12-01

    Using seven-card archived static-weight and weigh-in-motion (WIM) truck data received by FHWA for 1966--1992, the authors examine the fluctuations of four fiducial weight measures reported at weight sites in the 50 states. The reduced 172 MB Class 9 (332000) database was prepared and ordered from two CD-ROMs with duplicate records removed. Front-axle weight and gross-vehicle weight (GVW) are combined conceptually by determining the front-axle weight in four-quartile GVW categories. The four categories of front-axle weight from the four GVW categories are combined in four ways. Three are linear combinations with fixed-coefficient fiducials, and one is the optimal linear combination producing the smallest standard-deviation-to-mean ratio. The best combination gives coefficients of variation of 2--3% for samples of 100 trucks, below the expected accuracy of single-event WIM measurements. Time tracking of the data shows that some high-variation sites have seasonal variations, or linear variations over the time-ordered samples. Modeling of these effects is very site specific but provides a way to reduce high variations. Some automatic calibration schemes would erroneously remove such seasonal or linear variations by treating them as static effects.

  14. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems, and the ability to identify corrective measures in response to the estimated parameter deviations, has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes and the operation of large-scale public works projects, together with the volume of the published literature on this topic, clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operations personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the outputs of the SVM (i.e. the parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we developed a problem formulation that avoids the linear increase in the number of constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers in our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as the International Linear Collider (ILC), as well as for free electron lasers, such as the Linac Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters.
    We strongly believe that beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 altered the small business status of Pavilion so that it no longer qualifies for Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model-based fault estimation and correction system for particle accelerators and industrial plants feasible.
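
    The combined model structure in item (1) can be caricatured as a physics law whose parameter is supplied by a machine-learned map. The sketch below is a naive two-step surrogate of that idea (the project trains the SVM and the first-principles model jointly, under constraints), using an invented exponential beam-lifetime law and fully synthetic data.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # First-principles law N(t) = N0 * exp(-t / tau), with tau predicted by an
    # SVM from operating conditions. Everything below is synthetic/illustrative.
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, (200, 3))                       # operating conditions
    tau_true = 5.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] ** 2   # hidden parameter map
    t = rng.uniform(1, 10, 200)
    N = 100.0 * np.exp(-t / tau_true) * (1 + 0.01 * rng.normal(size=200))

    tau_obs = -t / np.log(N / 100.0)       # invert the physics for noisy targets
    svm = SVR(C=10.0, epsilon=0.05).fit(X, tau_obs)   # learn the parameter map

    def lifetime_model(x, t_query, N0=100.0):         # combined SVM + physics
        tau = svm.predict(np.atleast_2d(x))[0]
        return N0 * np.exp(-t_query / tau)

    print(lifetime_model(np.array([0.5, 0.5, 0.5]), t_query=3.0))
    ```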

  15. Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization

    NASA Astrophysics Data System (ADS)

    Civit Sabate, Carles

    In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The optimization is based on the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks make it possible to decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied using simulation with historical data.

  16. Predictors of burnout among correctional mental health professionals.

    PubMed

    Gallavan, Deanna B; Newman, Jody L

    2013-02-01

    This study focused on the experience of burnout among a sample of correctional mental health professionals. We examined the relationship of a linear combination of optimism, work-family conflict, and attitudes toward prisoners with two dimensions derived from the Maslach Burnout Inventory and the Professional Quality of Life Scale. Initially, three subscales from the Maslach Burnout Inventory and two subscales from the Professional Quality of Life Scale were subjected to principal components analysis with oblimin rotation in order to identify underlying dimensions among the subscales. This procedure resulted in two components accounting for approximately 75% of the variance (r = -.27). The first component was labeled Negative Experience of Work because it seemed to tap the experience of being emotionally spent, detached, and socially avoidant. The second component was labeled Positive Experience of Work and seemed to tap a sense of competence, success, and satisfaction in one's work. Two multiple regression analyses were subsequently conducted, in which Negative Experience of Work and Positive Experience of Work, respectively, were predicted from a linear combination of optimism, work-family conflict, and attitudes toward prisoners. In the first analysis, 44% of the variance in Negative Experience of Work was accounted for, with work-family conflict and optimism accounting for the most variance. In the second analysis, 24% of the variance in Positive Experience of Work was accounted for, with optimism and attitudes toward prisoners accounting for the most variance.

  17. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real-arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity of the linear optimization approach to initial conditions is also demonstrated.

  18. A Comparison Study of Item Exposure Control Strategies in MCAT

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao

    2016-01-01

    Four item selection indices with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four item selection indices are D-optimality, posterior expectation Kullback-Leibler information (KLP), the minimized error variance of the linear combination score with equal weight (V1), and the…

  19. Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.

    PubMed

    Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat

    2013-05-21

    Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique for optimizing the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercially available SOSLC protocol to date utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column from different stationary phases. The present work is aimed at developing and extending the gradient SOSLC approach towards coupling conventional columns. Generic tubing was used to connect short, commercially available LC columns. Fast baseline separation of a mixture of 12 compounds containing phenones, benzoic acids and hydroxybenzoates under both isocratic and linear gradient conditions was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the deviation of predictions is also discussed.

  20. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
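
    The poling idea is simple to state mathematically: keep the stoichiometric constraints of FBA, but subtract a repulsion term around each previously found flux vector from the objective, so that each new solve is pushed elsewhere in the (near-)optimal space. A toy sketch under invented stoichiometry follows; a real application would use a genome-scale network and the paper's specific penalty form.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy network: v1 splits into branches v2, v3 that rejoin into export v4,
    # so many flux splits achieve the same maximal export (degenerate optimum).
    S = np.array([[1, -1, -1, 0],
                  [0,  1,  1, -1]])
    c = np.array([0, 0, 0, 1.0])            # maximize export flux v4
    bnds = [(0, 10)] * 4

    solutions = []
    for k in range(3):
        def neg_obj(v):                     # linear objective + poling penalty
            pole = sum(np.exp(-np.sum((v - s) ** 2) / 2.0) for s in solutions)
            return -(c @ v) + 5.0 * pole
        res = minimize(neg_obj, np.full(4, 1.0), bounds=bnds,
                       constraints={"type": "eq", "fun": lambda v: S @ v})
        solutions.append(res.x)
        print("fluxes:", np.round(res.x, 2))
    ```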

  1. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes-optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10 times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
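
    A simple joint filter of the general kind PIAF improves upon can be written by augmenting the state with the unknown forward-model parameter and running one Kalman filter over both. The scalar sketch below illustrates that construction only; it does not reproduce PIAF's Bayes-optimal handling of model uncertainty, which is the point of the paper.

    ```python
    import numpy as np

    # Joint filtering of state x and unknown forward-model gain a for the system
    # x_{k+1} = x_k + a*u_k + noise, y_k = x_k + noise, via augmented state [x, a].
    rng = np.random.default_rng(2)
    a_true, q, r = 0.8, 0.01, 0.1
    z, P, x = np.zeros(2), np.diag([1.0, 1.0]), 0.0
    for k in range(200):
        u = np.sin(0.1 * k)                             # known motor command
        x = x + a_true * u + rng.normal(0, np.sqrt(q))  # real world
        y = x + rng.normal(0, np.sqrt(r))               # sensory input
        F = np.array([[1.0, u], [0.0, 1.0]])            # prediction Jacobian
        z = np.array([z[0] + z[1] * u, z[1]])           # predict [x, a]
        P = F @ P @ F.T + np.diag([q, 0.0])
        H = np.array([[1.0, 0.0]])                      # we only observe x
        K = P @ H.T / (H @ P @ H.T + r)                 # Kalman gain
        z = z + (K * (y - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    print(f"learned forward-model gain: {z[1]:.3f} (true {a_true})")
    ```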

  2. All-in-one model for designing optimal water distribution pipe networks

    NASA Astrophysics Data System (ADS)

    Aklog, Dagnachew; Hosoi, Yoshihiko

    2017-05-01

    This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose which optimizer to use and compare the results of different optimizers to gain confidence in the models' performance. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with reasonable computational effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.

  3. Aether: leveraging linear programming for optimal cloud computing in genomics.

    PubMed

    Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J; Kostic, Aleksandar D

    2018-05-01

    Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users' existing HPC pipelines. Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at https://github.com/kosticlab/aether. Examples, documentation and a tutorial are available at http://aether.kosticlab.org. Contact: chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary data are available at Bioinformatics online.
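
    The core allocation step can be posed as a small linear program: choose how much of each machine type to rent so that resource requirements are met at minimum cost. The sketch below uses invented prices and specs and relaxes instance counts to continuous values for brevity (an integer version would use a MILP solver); it illustrates the LP idea only, not Aether's actual formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    price = np.array([0.30, 0.25, 0.45])   # $/h for three instance types (made up)
    cpus = np.array([4, 2, 8])             # vCPUs per instance
    ram = np.array([16, 16, 32])           # GB per instance
    need_cpu, need_ram = 64, 256           # pipeline requirements

    res = linprog(c=price,
                  A_ub=-np.array([cpus, ram]),      # resources >= requirements
                  b_ub=-np.array([need_cpu, need_ram]),
                  bounds=[(0, None)] * 3, method="highs")
    print("instances:", np.round(res.x, 2), "cost/h:", round(res.fun, 2))
    ```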

  4. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired by using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were made to test the proposed method.

  5. Designing optimal food intake patterns to achieve nutritional goals for Japanese adults through the use of linear programming optimization models.

    PubMed

    Okubo, Hitomi; Sasaki, Satoshi; Murakami, Kentaro; Yokoyama, Tetsuji; Hirota, Naoko; Notsu, Akiko; Fukui, Mitsuru; Date, Chigusa

    2015-06-06

    Simultaneous dietary achievement of a full set of nutritional recommendations is difficult. A diet optimization model using linear programming is a useful mathematical means of translating nutrient-based recommendations into realistic, nutritionally-optimal food combinations incorporating local and culture-specific foods. We used this approach to explore optimal food intake patterns that meet the nutrient recommendations of the Dietary Reference Intakes (DRIs) while incorporating typical Japanese food selections. As observed intake values, we used the food and nutrient intake data of 92 women aged 31-69 years and 82 men aged 32-69 years living in three regions of Japan. Dietary data were collected with a semi-weighed dietary record on four non-consecutive days in each season of the year (16 days total). The linear programming models were constructed to minimize the differences between observed and optimized food intake patterns while also meeting the DRIs for a set of 28 nutrients, setting energy equal to estimated requirements, and not exceeding typical quantities of each food consumed by each age (30-49 or 50-69 years) and gender group. We successfully developed mathematically optimized food intake patterns that met the DRIs for all 28 nutrients studied in each sex and age group. Achieving nutritional goals required minor modifications of existing diets in older groups, particularly women, while major modifications were required to increase intake of fruit and vegetables in younger groups of both sexes. Across all sex and age groups, optimized food intake patterns demanded greatly increased intake of whole grains and reduced-fat dairy products in place of intake of refined grains and full-fat dairy products. Salt intake goals were the most difficult to achieve, requiring marked reduction of salt-containing seasoning (65-80%) in all sex and age groups. Using a linear programming model, we identified optimal food intake patterns providing practical food choices and meeting nutritional recommendations for Japanese populations. Dietary modifications from current eating habits required to fulfil nutritional goals differed by age: more marked increases in food volume were required in younger groups.
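
    The optimization described above has the classic diet-problem structure: minimize deviation from observed intakes subject to linear nutrient constraints, which stays a pure LP once absolute deviations are split into positive and negative parts. The sketch below uses three invented foods and two invented nutrients, not the study's 28-nutrient data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    observed = np.array([150.0, 80.0, 200.0])   # g/day of 3 foods (synthetic)
    nutrient = np.array([[0.02, 0.10, 0.01],    # nutrient 1 content per g
                         [0.001, 0.0, 0.004]])  # nutrient 2 content per g
    req = np.array([10.0, 1.2])                 # daily minimum requirements

    # Variables: intakes x, then d+ and d- with x - observed = d+ - d-,
    # so minimizing sum(d+ + d-) minimizes total absolute deviation.
    n = 3
    c = np.concatenate([np.zeros(n), np.ones(2 * n)])
    res = linprog(c,
                  A_ub=np.hstack([-nutrient, np.zeros((2, 2 * n))]),
                  b_ub=-req,                           # nutrients >= requirements
                  A_eq=np.hstack([np.eye(n), -np.eye(n), np.eye(n)]),
                  b_eq=observed,
                  bounds=[(0, 400)] * n + [(0, None)] * (2 * n),
                  method="highs")
    print("optimized intakes (g/day):", np.round(res.x[:n], 1))
    ```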

  6. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
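
    For reference, the LQG baseline against which the RFC is compared combines two Riccati solutions: an LQR feedback gain and a Kalman filter gain, joined by the separation principle. A minimal discrete-time sketch on an invented two-state system follows; the aquifer problem itself is far larger, which is exactly where the quadratic cost of LQG bites.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[0.95, 0.05], [0.0, 0.90]])    # toy 2-state dynamics
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.array([[0.1]])          # state / control weights
    W, V = 0.01 * np.eye(2), np.array([[0.1]])   # process / measurement noise

    P = solve_discrete_are(A, B, Q, R)                    # control Riccati
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # LQR gain: u = -K xhat
    S = solve_discrete_are(A.T, C.T, W, V)                # filter Riccati (dual)
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)          # Kalman gain

    rng = np.random.default_rng(3)
    x, xhat = np.array([1.0, -0.5]), np.zeros(2)
    for _ in range(50):                          # separation principle in action
        u = -K @ xhat
        x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
        y = C @ x + rng.normal(0, np.sqrt(V[0, 0]))
        pred = A @ xhat + B @ u
        xhat = pred + L @ (y - C @ pred)
    print("final state:", np.round(x, 3))
    ```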

  7. Study on static and dynamic characteristics of moving magnet linear compressors

    NASA Astrophysics Data System (ADS)

    Chen, N.; Tang, Y. J.; Wu, Y. N.; Chen, X.; Xu, L.

    2007-09-01

    With the development of high-strength NdFeB magnetic material, moving magnet linear compressors have been gradually introduced in the fields of refrigeration and cryogenic engineering, especially in Stirling and pulse tube cryocoolers. This paper presents simulation and experimental investigations on the static and dynamic characteristics of a moving magnet linear motor and a moving magnet linear compressor. Both equivalent magnetic circuits and finite element approaches have been used to model the moving magnet linear motor. Subsequently, the force and equilibrium characteristics of the linear motor have been predicted and verified by detailed static experimental analyses. In combination with a harmonic analysis, experimental investigations were conducted on a prototype of a moving magnet linear compressor. A voltage-stroke relationship, the effect of charging pressure on the performance and dynamic frequency response characteristics are investigated. Finally, the method to identify optimal points of the linear compressor has been described, which is indispensable to the design and operation of moving magnet linear compressors.

  8. Linear quadratic optimization for positive LTI system

    NASA Astrophysics Data System (ADS)

    Muhafzan, Yenti, Syafrida Wirma; Zulakmal

    2017-05-01

    Nowadays, linear quadratic optimization subject to a positive linear time-invariant (LTI) system constitutes an interesting area of study, since it can serve as a mathematical model for a variety of real problems whose variables must be nonnegative and whose trajectories must remain nonnegative. In this paper we propose a method to generate the optimal control of a linear quadratic problem subject to a positive LTI system. A sufficient condition that guarantees the existence of such an optimal control is discussed.

  9. Optimization of GRIN lenses coupling system for twin-core fiber interconnection with single core fibers

    NASA Astrophysics Data System (ADS)

    Chen, Gongdai; Deng, Hongchang; Yuan, Libo

    2018-07-01

    Aiming at a more compact, flexible, and simpler core-to-fiber coupling approach, we demonstrate optimal combinations of two graded refractive index (GRIN) lenses for the interconnection between a twin-core single-mode fiber and two single-core single-mode fibers. The optimal two-lens combinations achieve efficient core-to-fiber separating coupling and allow the fibers and lenses to be assembled coaxially. Finally, axial deviations and transverse displacements of the components are discussed; the latter increase the coupling loss more significantly. The gap length between the two lenses is designed to be fine-tuned to compensate for the transverse displacement, and the good linear compensation relationship facilitates device manufacturing. This approach has potential applications in low coupling loss and low crosstalk devices without sophisticated alignment and adjustment, and enables channel separation for multicore fibers.

  10. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239

  11. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  12. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    PubMed

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets.

  13. Some comments on Anderson and Pospahala's correction of bias in line transect sampling

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Chain, B.R.

    1980-01-01

    ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.
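
    The optimality notion used here has a compact closed form worth recalling: among linear combinations w'θ̂ of unbiased estimators with covariance Σ, unbiasedness forces the weights to sum to one, and the minimum-variance choice is w = Σ⁻¹1 / (1'Σ⁻¹1). A small numerical illustration with an invented covariance follows; it shows the general BLUE principle, not the Anderson-Pospahala estimator itself.

    ```python
    import numpy as np

    Sigma = np.array([[4.0, 1.0, 0.0],     # invented covariance of three
                      [1.0, 2.0, 0.5],     # unbiased estimates of one quantity
                      [0.0, 0.5, 1.0]])
    estimates = np.array([102.0, 98.0, 101.0])

    ones = np.ones(3)
    w = np.linalg.solve(Sigma, ones)
    w /= ones @ w                          # enforce sum(w) = 1 (unbiasedness)
    print("weights:", np.round(w, 3))
    print("BLUE:", round(w @ estimates, 2),
          "variance:", round(w @ Sigma @ w, 3))
    ```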

  14. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46 ≤ α ≤ 150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  15. Investigation of a tubular dual-stator flux-switching permanent-magnet linear generator for free-piston energy converter

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo

    2015-05-01

    This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for a free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Taking into account the motion characteristics of a free-piston Stirling engine, a tubular dual-stator PM linear generator is designed by the finite element method. Some major structural parameters, such as the outer and inner radii of the mover, PM thickness, mover tooth width, and tooth widths of the outer and inner stators, are optimized to improve machine performance metrics such as thrust capability and power density. In comparison with conventional single-stator PM machines, such as the moving-magnet linear machine and the flux-switching linear machine, the proposed dual-stator flux-switching PM machine shows advantages in higher mass power density, higher volume power density, and a lighter mover.

  16. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types, with both location and sensor type treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.

  17. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this great deal of information. Especially the usage of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study multiple streamflow forecast information will be aggregated based on several different predictive distributions and quantile forecasts. For this combination the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland with about 5 years of forecast data will be compared and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.

  18. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
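
    The closed-form combination step has a standard derivation: minimizing the windowed squared error ||y - Pw||² subject to 1'w = 1 is an equality-constrained least-squares problem solved by a single KKT system. A synthetic-data sketch follows (three sub-model prediction streams with different noise levels; all values invented).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, M = 50, 3
    y = np.sin(np.linspace(0, 4, N))                 # recent data window
    P = np.column_stack([y + rng.normal(0, s, N)     # M sub-model predictions
                         for s in (0.05, 0.2, 0.5)])

    # KKT system for  min ||y - P w||^2  s.t.  sum(w) = 1:
    # [2 P'P  1; 1' 0] [w; lambda] = [2 P'y; 1]
    A = np.block([[2 * P.T @ P, np.ones((M, 1))],
                  [np.ones((1, M)), np.zeros((1, 1))]])
    b = np.concatenate([2 * P.T @ y, [1.0]])
    w = np.linalg.solve(A, b)[:M]
    print("combination weights:", np.round(w, 3))    # least-noisy model dominates
    ```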

  19. Utilizing population controls in rare-variant case-parent association tests.

    PubMed

    Jiang, Yu; Satten, Glen A; Han, Yujun; Epstein, Michael P; Heinzen, Erin L; Goldstein, David B; Allen, Andrew S

    2014-06-05

    There is great interest in detecting associations between human traits and rare genetic variation. To address the low power implicit in single-locus tests of rare genetic variants, many rare-variant association approaches attempt to accumulate information across a gene, often by taking linear combinations of single-locus contributions to a statistic. Using the right linear combination is key: an optimal test will up-weight true causal variants, down-weight neutral variants, and correctly assign the direction of effect for causal variants. Here, we propose a procedure that exploits data from population controls to estimate the linear combination to be used in a case-parent trio rare-variant association test. Specifically, we estimate the linear combination by comparing population control allele frequencies with allele frequencies in the parents of affected offspring. These estimates are then used to construct a rare-variant transmission disequilibrium test (rvTDT) in the case-parent data. Because the rvTDT is conditional on the parents' data, using parental data in estimating the linear combination does not affect the validity or asymptotic distribution of the rvTDT. By using simulation, we show that our new population-control-based rvTDT can dramatically improve power over rvTDTs that do not use population control information across a wide variety of genetic architectures. It also remains valid under population stratification. We apply the approach to a cohort of epileptic encephalopathy (EE) trios and find that dominant (or additive) inherited rare variants are unlikely to play a substantial role within EE genes previously identified through de novo mutation studies.

  20. Aether: leveraging linear programming for optimal cloud computing in genomics

    PubMed Central

    Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J

    2018-01-01

    Motivation: Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Results: Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users' existing HPC pipelines. Availability and implementation: Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at https://github.com/kosticlab/aether. Examples, documentation and a tutorial are available at http://aether.kosticlab.org. Contact: chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29228186

  1. Investigation, development, and application of optimal output feedback theory. Volume 3: The relationship between dynamic compensators and observers and Kalman filters

    NASA Technical Reports Server (NTRS)

    Broussard, John R.

    1987-01-01

    Relationships between observers, Kalman filters, and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of a discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.

  2. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  3. Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.

    PubMed

    Rowe, Michael H; Neiman, Alexander B

    2012-01-24

    We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first order statistics; 2) a regular discharge, as measured by normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal to noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate and fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems.

  4. Bayesian integration and non-linear feedback control in a full-body motor task.

    PubMed

    Stevenson, Ian H; Fernandes, Hugo L; Vilares, Iris; Wei, Kunlin; Körding, Konrad P

    2009-12-01

    A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well-described by optimal control models where a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a Linear-Quadratic-Regulator or Bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.

  5. Portfolio optimization by using linear programing models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. The investment portfolio optimization problem is formulated as a linear programming model, and the optimum solution is then determined by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, the genetic algorithm approach is shown to produce a more efficient optimal portfolio than the linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly when using linear programming models.

  6. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. However, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including comparison studies with a closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.

  7. Multi-Objective Optimization of Moving-magnet Linear Oscillatory Motor Using Response Surface Methodology with Quantum-Behaved PSO Operator

    NASA Astrophysics Data System (ADS)

    Lei, Meizhen; Wang, Liqiang

    2018-01-01

    To reduce manufacturing difficulty and increase the magnetic thrust density, a moving-magnet linear oscillatory motor (MMLOM) without inner stators is proposed. To obtain a design that maximizes electromagnetic thrust with minimal permanent magnet (PM) material, a 3D finite element analysis (FEA) model of the MMLOM was first built and verified by comparison with prototype experimental results, and the influence of the PM design parameters on the electromagnetic thrust was systematically analyzed with the 3D FEA. Second, response surface methodology (RSM) was employed to build a response surface model of the new MMLOM, yielding an analytical model of the PM volume and thrust. A multi-objective optimization method for the PM design parameters, using RSM with a quantum-behaved particle swarm optimization (QPSO) operator, was then proposed, together with a procedure for choosing the best design from the multi-objective solution set. Finally, the 3D FEA results of the optimal design candidates were compared. The comparison shows that the proposed method obtains the combination of geometric parameters that best reduces the PM volume while increasing the thrust.
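
    A compact sketch of the quantum-behaved PSO operator used in the optimization is given below; the quadratic test function stands in for the RSM surrogate of thrust and PM volume, and all parameter choices are illustrative.

        import numpy as np

        def qpso(f, dim=2, n=30, iters=200, lb=-5.0, ub=5.0, seed=0):
            """Quantum-behaved PSO: particles collapse toward attractors p_i with a
            contraction-expansion coefficient beta (no velocity term)."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(lb, ub, (n, dim))
            pbest = x.copy()
            pval = np.array([f(p) for p in pbest])
            for t in range(iters):
                gbest = pbest[np.argmin(pval)]
                mbest = pbest.mean(axis=0)                   # mean best position
                beta = 1.0 - 0.5 * t / iters                 # anneal 1.0 -> 0.5
                phi = rng.random((n, dim))
                p = phi * pbest + (1 - phi) * gbest          # local attractors
                u = rng.random((n, dim))
                sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
                x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lb, ub)
                val = np.array([f(xi) for xi in x])
                improved = val < pval
                pbest[improved], pval[improved] = x[improved], val[improved]
            return pbest[np.argmin(pval)], pval.min()

        # Stand-in objective, e.g. weighted PM volume minus thrust from an RSM surrogate
        best, fbest = qpso(lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2)
        print(best, fbest)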

  8. Optimal energy growth in a stably stratified shear flow

    NASA Astrophysics Data System (ADS)

    Jose, Sharath; Roy, Anubhab; Bale, Rahul; Iyer, Krithika; Govindarajan, Rama

    2018-02-01

    Transient growth of perturbations by a linear non-modal evolution is studied here in a stably stratified bounded Couette flow. The density stratification is linear. Classical inviscid stability theory states that a parallel shear flow is stable to exponentially growing disturbances if the Richardson number (Ri) is greater than 1/4 everywhere in the flow. Experiments and numerical simulations at higher Ri show however that algebraically growing disturbances can lead to transient amplification. The complexity of a stably stratified shear flow stems from its ability to combine this transient amplification with propagating internal gravity waves (IGWs). The optimal perturbations associated with maximum energy amplification are numerically obtained at intermediate Reynolds numbers. It is shown that in this wall-bounded flow, the three-dimensional optimal perturbations are oblique, unlike in unstratified flow. A partitioning of energy into kinetic and potential helps in understanding the exchange of energies and how it modifies the transient growth. We show that the apportionment between potential and kinetic energy depends, in an interesting manner, on the Richardson number, and on time, as the transient growth proceeds from an optimal perturbation. The oft-quoted stabilizing role of stratification is also probed in the non-diffusive limit in the context of disturbance energy amplification.
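
    For a linear non-modal evolution dx/dt = Ax, the maximum energy amplification at time t is the squared largest singular value of exp(At), and the optimal perturbation is the corresponding right singular vector. The sketch below demonstrates this on a toy non-normal operator standing in for the linearized stratified-Couette operator.

        import numpy as np
        from scipy.linalg import expm, svdvals

        # Toy non-normal operator: both eigenvalues are stable, yet transient growth occurs
        A = np.array([[-0.05, 1.0],
                      [ 0.00, -0.10]])

        times = np.linspace(0.0, 60.0, 200)
        G = [svdvals(expm(A * t))[0] ** 2 for t in times]   # G(t) = sigma_max(e^{At})^2

        t_opt = times[int(np.argmax(G))]
        print(f"max energy growth G = {max(G):.1f} at t = {t_opt:.1f}")
        # The optimal initial perturbation is the leading right singular vector of e^{At_opt}
        U, s, Vt = np.linalg.svd(expm(A * t_opt))
        print("optimal perturbation:", Vt[0])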

  9. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods that are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending existing schemes with extrapolation methods to obtain a higher order of accuracy preserves their qualitative properties with respect to dissipation, dispersion and stability. It is illustrated that combining various optimized schemes with Richardson extrapolation is not optimal with respect to dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation attain fourth and fifth order of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
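
    The mechanics of Richardson extrapolation can be illustrated generically: combining one step of size h with two steps of size h/2 of a p-th order method cancels the leading error term and raises the order to p+1. The sketch below applies this to a classical third-order Runge-Kutta step on a single linear wave mode; it illustrates the order-raising construction only, not the paper's optimized low-dissipation schemes.

        import numpy as np

        def rk3_step(f, t, y, h):
            """One step of the classical third-order Kutta scheme."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h, y + h * (2 * k2 - k1))
            return y + h / 6 * (k1 + 4 * k2 + k3)

        def richardson_step(f, t, y, h, p=3):
            """Combine one h-step with two h/2-steps to cancel the O(h^{p+1})
            local error, raising the order from p to p+1."""
            coarse = rk3_step(f, t, y, h)
            half = rk3_step(f, t, y, h / 2)
            fine = rk3_step(f, t + h / 2, half, h / 2)
            return (2 ** p * fine - coarse) / (2 ** p - 1)

        # Linear wave test mode y' = i*omega*y: exact amplitude and phase are known
        omega = 2.0
        f = lambda t, y: 1j * omega * y
        y, t, h = 1.0 + 0j, 0.0, 0.05
        for _ in range(200):
            y = richardson_step(f, t, y, h)
            t += h
        print("amplitude error:", abs(abs(y) - 1.0))
        print("phase error:", abs(np.angle(y * np.exp(-1j * omega * t))))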

  10. Identification of optimal feedback control rules from micro-quadrotor and insect flight trajectories.

    PubMed

    Faruque, Imraan A; Muijres, Florian T; Macfarlane, Kenneth M; Kehlenbeck, Andrew; Humbert, J Sean

    2018-06-01

    This paper presents "optimal identification," a framework for using experimental data to identify the optimality conditions associated with the feedback control law implemented in the measurements. The technique compares closed loop trajectory measurements against a reduced order model of the open loop dynamics, and uses linear matrix inequalities to solve an inverse optimal control problem as a convex optimization that estimates the controller optimality conditions. In this study, the optimal identification technique is applied to two examples, that of a millimeter-scale micro-quadrotor with an engineered controller on board, and the example of a population of freely flying Drosophila hydei maneuvering about forward flight. The micro-quadrotor results show that the performance indices used to design an optimal flight control law for a micro-quadrotor may be recovered from the closed loop simulated flight trajectories, and the Drosophila results indicate that the combined effect of the insect longitudinal flight control sensing and feedback acts principally to regulate pitch rate.

  11. Calorimetry at the International Linear Collider

    NASA Astrophysics Data System (ADS)

    Repond, José

    2007-03-01

    The physics potential of the International Linear Collider depends critically on the jet energy resolution of its detector. Detector concepts are being developed which optimize the jet energy resolution, with the aim of achieving σ_jet = 30%/√E_jet. Under the assumption that Particle Flow Algorithms (PFAs), which combine tracking and calorimeter information to reconstruct the energy of hadronic jets, can provide this unprecedented jet energy resolution, calorimeters with very fine granularity are being developed. After a brief introduction outlining the principles of PFAs, the current status of various calorimeter prototype construction projects and their plans for the next few years will be reviewed.

  12. Optimal design of focused experiments and surveys

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew

    1999-10-01

    Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ = λ + 2μ/3.

  13. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unlike most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept; these two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early (for example, before the model calibration period is completely simulated) when intermediate results indicate that the candidate solution is so poor that it will have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers in the model-independent calibration software package Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate a vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
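
    The serial core of DDS is simple enough to sketch: each candidate perturbs the current best solution in a randomly chosen subset of dimensions whose selection probability shrinks as the evaluation budget is spent. The sketch below is a plain serial version with an invented toy objective; the paper's contributions (asynchronous parallelism and model pre-emption) sit around this loop.

        import numpy as np

        def dds(f, lb, ub, max_evals=1000, r=0.2, seed=0):
            """Dynamically Dimensioned Search (serial sketch): the probability of
            perturbing each dimension decays as evaluations are spent."""
            rng = np.random.default_rng(seed)
            lb, ub = np.asarray(lb, float), np.asarray(ub, float)
            x_best = rng.uniform(lb, ub)
            f_best = f(x_best)
            for i in range(1, max_evals):
                p = 1.0 - np.log(i) / np.log(max_evals)     # dimension-selection probability
                mask = rng.random(lb.size) < p
                if not mask.any():
                    mask[rng.integers(lb.size)] = True      # always perturb at least one
                x = x_best.copy()
                x[mask] += r * (ub - lb)[mask] * rng.normal(size=mask.sum())
                x = np.clip(x, lb, ub)                      # keep within the bounds
                fx = f(x)                                   # model run (pre-emptable in Ostrich)
                if fx < f_best:
                    x_best, f_best = x, fx
            return x_best, f_best

        # Toy calibration surrogate in place of a watershed model run
        obj = lambda z: np.sum((z - 0.3) ** 2)
        print(dds(obj, lb=np.zeros(5), ub=np.ones(5)))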

  14. Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin

    NASA Astrophysics Data System (ADS)

    Otiefy, R. A. H.; Negm, H. M.

    2010-12-01

    The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general-frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and the piezoelectric actuators are modeled using the equivalent plate method, which is based on first-order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three different control strategies, a linear quadratic Gaussian (LQG) controller, which combines the linear quadratic regulator (LQR) with a Kalman filter estimator (KFE), an optimal static output feedback (SOF) controller, and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and the norm of Kalman filter estimator gains (NKFEG), respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.

  15. Optimization of natural frequencies of a slender beam shaped in a linear combination of its mode shapes

    NASA Astrophysics Data System (ADS)

    Silva, Guilherme Augusto Lopes da; Nicoletti, Rodrigo

    2017-06-01

    This work focuses on the placement of the natural frequencies of beams in desired frequency regions. More specifically, we investigate the effects of combining mode shapes to shape a beam so as to change its natural frequencies, both numerically and experimentally. First, we present a parametric analysis of a shaped beam and analyze the resulting effects for different boundary conditions and mode shapes. Second, we present an optimization procedure to find the optimum shape of the beam for desired natural frequencies. In this case, we adopt the Nelder-Mead simplex search method, which allows a broad search for the optimum shape in the solution domain. Finally, the obtained results are verified experimentally for a clamped-clamped beam in three different optimization runs. Results show that the method is effective in placing natural frequencies at desired values (experimental results lie within 10% of the expected theoretical values). However, the beam must be axially constrained for its natural frequencies to change.
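
    The optimization loop can be sketched as follows: the design variables are the mode-shape coefficients, the objective penalizes the distance of the computed natural frequencies from their targets, and Nelder-Mead searches without gradients. The frequency function below is a made-up stand-in for the authors' beam model.

        import numpy as np
        from scipy.optimize import minimize

        # Stand-in for the beam model: maps mode-shape coefficients to the first two
        # natural frequencies (a real implementation would solve the beam eigenproblem)
        def natural_frequencies(c):
            return np.array([100.0 + 40.0 * c[0] - 5.0 * c[1] ** 2,
                             270.0 - 15.0 * c[0] + 25.0 * c[1]])

        targets = np.array([120.0, 300.0])   # desired frequency placement, Hz

        def objective(c):
            return np.sum((natural_frequencies(c) - targets) ** 2)

        res = minimize(objective, x0=np.zeros(2), method="Nelder-Mead",
                       options={"xatol": 1e-6, "fatol": 1e-6})
        print("optimal mode-shape coefficients:", res.x)
        print("achieved frequencies:", natural_frequencies(res.x))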

  16. OptShrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex ℓ1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
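
    A sketch of the low-rank-plus-sparse splitting is given below, alternating singular-value shrinkage for the low-rank part with elementwise soft-thresholding for the sparse part; for brevity the paper's non-convex OptShrink shrinkage is replaced here by plain soft singular-value thresholding, and the data are synthetic.

        import numpy as np

        def soft(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def lr_plus_s(Y, lam_L=1.0, lam_S=0.05, iters=100):
            """Decompose Y into low-rank L + sparse S by alternating shrinkage.
            (The paper shrinks singular values with the non-convex OptShrink rule;
            plain soft-thresholding is used here for brevity.)"""
            L = np.zeros_like(Y)
            S = np.zeros_like(Y)
            for _ in range(iters):
                U, sv, Vt = np.linalg.svd(Y - S, full_matrices=False)
                L = (U * soft(sv, lam_L)) @ Vt          # singular-value shrinkage
                S = soft(Y - L, lam_S)                  # elementwise l1 shrinkage
            return L, S

        # Synthetic "fMRI" matrix: space x time, low-rank background + sparse events
        rng = np.random.default_rng(0)
        Y = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 120))   # rank-3 background
        Y[rng.random(Y.shape) < 0.02] += 5.0                       # sparse component
        L, S = lr_plus_s(Y)
        print("recovered rank:", np.sum(np.linalg.svd(L, compute_uv=False) > 1e-6))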

  17. Eutrophic water purification efficiency using a combination of hydrodynamic cavitation and ozonation on a pilot scale.

    PubMed

    Li, Wei-Xin; Tang, Chuan-Dong; Wu, Zhi-Lin; Wang, Wei-Min; Zhang, Yu-Feng; Zhao, Yi; Cravotto, Giancarlo

    2015-04-01

    This paper presents the purification of eutrophic water using a combination of hydrodynamic cavitation (HC) and ozonation (O3) at a continuous flow of 0.8 m³ h⁻¹ on a pilot scale. The maximum removal rate of chlorophyll a using O3 alone and the HC/O3 combination was 62.3 and 78.8%, respectively, under optimal conditions, where the ozone utilization efficiency was 64.5 and 94.8% and total energy consumption was 8.89 and 8.25 kWh m⁻³, respectively. Thus, the removal rate of chlorophyll a and the ozone utilization efficiency were improved by 26.5% and 46.9%, respectively, by the combined technique, while total energy consumption was reduced by 7.2%. Turbidity decreased linearly with the chlorophyll a removal rate, but no linear relationship was found between the removal of COD or UV254 and that of chlorophyll a. As expected, suction-cavitation-assisted O3 exhibited higher energy efficiency than extrusion-cavitation-assisted O3 and O3 alone.

  18. Combinatorial therapy discovery using mixed integer linear programming.

    PubMed

    Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong

    2014-05-15

    Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods for identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need for new algorithms for drug combination prediction. Here we propose to formulate the optimal combinatorial therapy problem as two complementary mathematical problems, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes coverage of the disease genes while minimizing off-target hits; MOTSC seeks full coverage of the disease gene set while minimizing the off-target set. In simulation, both BTSC and MOTSC ran much faster than exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool that allows users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/.
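
    The flavor of the MOTSC formulation can be sketched as a small mixed integer linear program: binary variables select drugs, every disease gene must be covered, and the objective counts off-target hits. The drug-target matrices below are made up for illustration.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        # Toy drug-target data: rows = drugs, columns = disease genes (made up)
        on_target = np.array([[1, 1, 0, 0],      # drug 0 hits disease genes 0, 1
                              [0, 1, 1, 0],      # drug 1 hits disease genes 1, 2
                              [0, 0, 1, 1]])     # drug 2 hits disease genes 2, 3
        off_target = np.array([3, 1, 2])         # off-target hits per drug

        # MOTSC-style MILP: minimize total off-target hits subject to covering
        # every disease gene (each column of on_target covered at least once)
        cover = LinearConstraint(on_target.T, lb=1, ub=np.inf)
        res = milp(c=off_target,
                   integrality=np.ones(3),       # binary selection variables
                   bounds=Bounds(0, 1),
                   constraints=[cover])
        print("selected drugs:", np.flatnonzero(res.x > 0.5),
              "off-target cost:", res.fun)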

  19. Optimization of composite box-beam structures including effects of subcomponent interactions

    NASA Technical Reports Server (NTRS)

    Ragon, Scott A.; Guerdal, Zafer; Starnes, James H., Jr.

    1995-01-01

    Minimum mass designs are obtained for a simple box beam structure subject to bending, torque, and combined bending/torque load cases. These designs are obtained subject to point strain and linear buckling constraints. The present work differs from previous efforts in that special attention is paid to including the effects of subcomponent panel interaction in the optimal design process. Two different approaches are used to impose the buckling constraints. When the global approach is used, buckling constraints are imposed on the global structure via a linear eigenvalue analysis; this approach allows the subcomponent panels to interact in a realistic manner. The results obtained using this approach are compared to results obtained using a traditional, less expensive approach, called the local approach, in which in-plane loads are extracted from the global model and used to impose buckling constraints on each subcomponent panel individually. In the global cases, it is found that there can be significant interaction between skin, spar, and rib design variables. This coupling is weak or nonexistent in the local designs. It is determined that weight savings of up to 7% may be obtained by using the global approach instead of the local approach to design these structures. Several of the designs obtained using the linear buckling analysis are subjected to a geometrically nonlinear analysis. For the designs subjected to bending loads, the innermost rib panel begins to collapse at less than half the intended design load and in a mode different from that predicted by linear analysis. The discrepancy between the predicted linear and nonlinear responses is attributed to the effects of the nonlinear rib crushing load, and the parameter which controls this rib collapse failure mode is shown to be the rib thickness. The rib collapse failure mode may be avoided by increasing the rib thickness above the value obtained from the (linear analysis based) optimizer. It is concluded that geometric nonlinearities would need to be included in the design optimization process if the true optimum in this case were to be found.

  20. Robust Neighboring Optimal Guidance for the Advanced Launch System

    NASA Technical Reports Server (NTRS)

    Hull, David G.

    1993-01-01

    In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.

  1. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics, which necessitates time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to underlying phenomena. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate it as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
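
    The reformulation can be sketched on a static snapshot: the BMFLC model is a dictionary of sines and cosines on a dense band-limited frequency grid, and l1-regularized least squares selects the few active components. The frequencies, band and regularization weight below are illustrative.

        import numpy as np
        from sklearn.linear_model import Lasso

        fs, T = 100.0, 4.0                       # sampling rate (Hz), duration (s)
        t = np.arange(0.0, T, 1.0 / fs)
        signal = np.sin(2 * np.pi * 7.3 * t) + 0.5 * np.sin(2 * np.pi * 9.1 * t)

        # BMFLC dictionary: sines and cosines on a dense band-limited frequency grid
        freqs = np.arange(5.0, 12.0, 0.1)        # assumed band of interest
        X = np.hstack([np.sin(2 * np.pi * freqs * t[:, None]),
                       np.cos(2 * np.pi * freqs * t[:, None])])

        # Sparse linear regression picks out the few active frequency components
        model = Lasso(alpha=0.01, max_iter=50_000).fit(X, signal)
        w = model.coef_
        amps = np.hypot(w[:freqs.size], w[freqs.size:])
        print("dominant frequencies:", freqs[np.argsort(amps)[-2:]])
        err = np.linalg.norm(X @ w + model.intercept_ - signal) / np.linalg.norm(signal)
        print("relative reconstruction error:", err)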

  2. Estimation of Thalamocortical and Intracortical Network Models from Joint Thalamic Single-Electrode and Cortical Laminar-Electrode Recordings in the Rat Barrel System

    PubMed Central

    Blomquist, Patrick; Devor, Anna; Indahl, Ulf G.; Ulbert, Istvan; Einevoll, Gaute T.; Dale, Anders M.

    2009-01-01

    A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation function are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different. While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population. PMID:19325875

  3. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, that otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.

  4. Susceptibility-weighted imaging using inter-echo-variance channel combination for improved contrast at 7 tesla.

    PubMed

    Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria

    2017-04-01

    To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data, and to compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric, qualitative assessment of artifact and vessel conspicuity was performed, and the processing time of the pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. An optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high-contrast images with an optimal balance between contrast and background noise removal, presenting evidence of the importance of the order in which postprocessing techniques are applied for multi-channel SWI generation.

  5. [Optimization of cultivation conditions in se-enriched Spirulina platensis].

    PubMed

    Huang, Zhi; Zheng, Wen-Jie; Guo, Bao-Jiang

    2002-05-01

    Orthogonal combination design was adopted to examine Spirulina platensis (S. platensis) yield and the influence of four factors (Se content, Se-adding method, S content and NaHCO3 content) on algal growth. The results showed that Se content, Se-adding method and NaHCO3 content were the key factors in the cultivation of Se-enriched S. platensis, with the optimal combination being Se at 300 mg/L, the Se addition divided equally over three doses, and NaHCO3 at 16.8 g/L. Algal yield had a remarkable correlation with OD560 and floating rate by linear regression analysis, and the effects of the four factors on algal yield corresponded to their effects on OD560 and floating rate. In conclusion, OD560 and floating rate can serve as indicators of yield.

  6. Optimal signal constellation design for ultra-high-speed optical transport in the presence of nonlinear phase noise.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2014-12-29

    In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise, and then modify this algorithm for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC-coded modulation scheme is proposed for use in combination with the signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our signal constellation design algorithm, significantly outperform corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.

  7. Field-design optimization with triangular heliostat pods

    NASA Astrophysics Data System (ADS)

    Domínguez-Bravo, Carmen-Ana; Bode, Sebastian-James; Heiming, Gregor; Richter, Pascal; Carrizosa, Emilio; Fernández-Cara, Enrique; Frank, Martin; Gauché, Paul

    2016-05-01

    In this paper the optimization of a heliostat field with triangular heliostat pods is addressed. The use of structures that combine several heliostats into a common pod system aims to reduce the high costs associated with the heliostat field, thereby reducing the Levelized Cost of Electricity. A pattern-based algorithm and two pattern-free algorithms are adapted to handle the field layout problem with triangular heliostat pods. Under the Helio100 project in South Africa, a new small-scale Solar Power Tower plant has recently been constructed; the Helio100 plant has 20 triangular pods (each with 6 heliostats) whose positions follow a linear pattern. The field layouts obtained after optimization are compared against the Helio100 reference field.

  8. Online stochastic optimization of radiotherapy patient scheduling.

    PubMed

    Legrain, Antoine; Fortin, Marie-Andrée; Lahrichi, Nadia; Rousseau, Louis-Martin

    2015-06-01

    The effective management of a cancer treatment facility for radiation therapy depends mainly on optimizing the use of the linear accelerators. In this project, we schedule patients on these machines taking into account their priority for treatment, the maximum waiting time before the first treatment, and the treatment duration. We collaborate with the Centre Intégré de Cancérologie de Laval to determine the best scheduling policy. Furthermore, we integrate the uncertainty related to the arrival of patients at the center. We develop a hybrid method combining stochastic optimization and online optimization to better meet the needs of central planning. We use information on the future arrivals of patients to provide an accurate picture of the expected utilization of resources. Results based on real data show that our method outperforms the policies typically used in treatment centers.

  9. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  11. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on an active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; pheromone updating is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge linearly to the global optimal solution by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. A comparison of results for the segmentation of left-ventricle images shows that ACO-based segmentation is more effective than the GA approach, and the new pheromone updating strategy exhibits good time performance in the optimization process.

  12. Application of Metaheuristic and Deterministic Algorithms for Aircraft Reference Trajectory Optimization

    NASA Astrophysics Data System (ADS)

    Murrieta Mendoza, Alejandro

    Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption, and thus the pollution released to the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, because the aeronautical industry is responsible for 2% of the CO2 released to the atmosphere, and second, because it reduces flight costs. The aircraft fuel model was obtained from a numerical performance database, which was created and validated by our industrial partner from experimental flight test data. A new methodology using the numerical database is proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account, as they have an important effect on fuel burn. The open weather forecast model was provided by Weather Canada, and a combination of linear and bi-linear interpolations was used to find the required weather data. The search space was modelled using different graphs: one graph mapped the flight phases (climb, cruise and descent), and another mapped the physical space in which the aircraft performs its flight. The vertical reference trajectory was optimized using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search space reduction technique. The trajectory was also optimized simultaneously for the vertical and lateral reference navigation plans, while fulfilling a Required Time of Arrival constraint, using metaheuristic algorithms including the artificial bee colony and ant colony optimization. Results were validated using the commercial Flight Management System software FlightSIM, an exhaustive search algorithm, and as-flown flights obtained from FlightAware. All algorithms were able to reduce the fuel burn and the flight costs.

  13. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861

  14. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.

  15. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  16. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in near-circular orbit with great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization for elliptical-orbit rendezvous of arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (the T-H equations), primer vector theory is used to treat time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary conditions for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulations confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a multiplier penalty function combined with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulations show that initializing with the primer vector solution allows the local optimization algorithm to improve the rendezvous accuracy effectively with fast convergence, because the optimal results obtained from primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.

  17. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, the functional forms of reservoir operating rules are usually determined subjectively. As a result, the uncertainty in the choice of form and/or model for reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using a Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any of the operating rules, provides the samples from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; and (3) BMA outperforms any individual operating-rule model based on the optimal trajectories. The proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
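
    A minimal BMA sketch for point forecasts is given below: the predictive density is a mixture of Gaussians centered on the member predictions, and the weights and member variances are fitted by expectation-maximization. The three synthetic members stand in for the three operating-rule models.

        import numpy as np

        def bma_weights(preds, obs, iters=200):
            """EM for Bayesian model averaging of point forecasts: the predictive
            density is sum_k w_k N(obs | preds_k, sigma_k^2)."""
            n, K = preds.shape
            w = np.full(K, 1.0 / K)
            sig2 = np.full(K, np.var(obs - preds.mean(axis=1)))
            for _ in range(iters):
                # E-step: responsibility of member k for observation i
                dens = (np.exp(-0.5 * (obs[:, None] - preds) ** 2 / sig2)
                        / np.sqrt(2 * np.pi * sig2))
                z = w * dens
                z /= z.sum(axis=1, keepdims=True) + 1e-300
                # M-step: update weights and member variances
                w = z.mean(axis=0)
                sig2 = (z * (obs[:, None] - preds) ** 2).sum(axis=0) / z.sum(axis=0)
            return w, sig2

        # Three member "operating rules" predicting releases (synthetic illustration)
        rng = np.random.default_rng(2)
        truth = rng.normal(size=300)
        preds = np.column_stack([truth + rng.normal(0, s, 300) for s in (0.3, 0.6, 1.0)])
        w, sig2 = bma_weights(preds, truth)
        print("BMA weights:", np.round(w, 3))    # the best member gets the most weight
        bma_mean = preds @ w                     # combined decision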

  18. Selectivity optimization in green chromatography by gradient stationary phase optimized selectivity liquid chromatography.

    PubMed

    Chen, Kai; Lynen, Frédéric; De Beer, Maarten; Hitzel, Laure; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-11-12

    Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique for optimizing the selectivity of a given separation by using a combination of different stationary phases. Previous work has shown that SOSLC offers excellent possibilities for method development, especially after the recent modification towards linear gradient SOSLC. The present work is aimed at developing and extending the SOSLC approach towards selectivity optimization and method development for green chromatography. Contrary to current LC practice, a green mobile phase (water/ethanol/formic acid) is preselected and the composition of the stationary phase is optimized under a given gradient profile to obtain baseline resolution of all target solutes in the shortest possible analysis time. With the algorithm adapted to the high viscosity of ethanol, the principle is illustrated with a fast, full baseline resolution of a randomly selected mixture composed of sulphonamides, xanthine alkaloids and steroids.

  19. Data mining-based coefficient of influence factors optimization of test paper reliability

    NASA Astrophysics Data System (ADS)

    Xu, Peiyao; Jiang, Huiping; Wei, Jieyao

    2018-05-01

    Tests are a significant part of the teaching process: they demonstrate the outcome of school teaching through teachers' teaching level and students' scores. Test paper analysis is a complex task characterized by non-linear relationships among the length of the paper, the time duration and the degree of difficulty. It is therefore difficult with general methods to optimize the coefficients of the influence factors under different conditions so as to obtain test papers with clearly higher reliability [1]. With data mining techniques such as Support Vector Regression (SVR) and Genetic Algorithms (GA), we can model the test paper analysis and optimize the coefficients of the impact factors for higher reliability. The test results show that the combination of SVR and GA achieves an effective improvement in reliability. The optimized coefficients of the influence factors are practical in real applications, and the whole optimization procedure can offer a model basis for test paper analysis.

  20. Fecal Markers of Intestinal Inflammation and Permeability Associated with the Subsequent Acquisition of Linear Growth Deficits in Infants

    PubMed Central

    Kosek, Margaret; Haque, Rashidul; Lima, Aldo; Babji, Sudhir; Shrestha, Sanjaya; Qureshi, Shahida; Amidou, Samie; Mduma, Estomih; Lee, Gwenyth; Yori, Pablo Peñataro; Guerrant, Richard L.; Bhutta, Zulfiqar; Mason, Carl; Kang, Gagandeep; Kabir, Mamun; Amour, Caroline; Bessong, Pascal; Turab, Ali; Seidman, Jessica; Olortegui, Maribel Paredes; Quetz, Josiane; Lang, Dennis; Gratz, Jean; Miller, Mark; Gottlieb, Michael

    2013-01-01

    Enteric infections are associated with linear growth failure in children. To quantify the association between intestinal inflammation and linear growth failure, three commercially available enzyme-linked immunosorbent assays (neopterin [NEO], alpha-1 antitrypsin [AAT], and myeloperoxidase [MPO]) were performed in a structured sampling of asymptomatic stool from children under longitudinal surveillance for diarrheal illness in eight countries. Samples from 537 children contributed 1,169 AAT, 916 MPO, and 954 NEO test results that were significantly associated with linear growth. When combined to form a disease activity score, children with the highest score grew 1.08 cm less than children with the lowest score over the 6-month period following the tests, after controlling for the incidence of diarrheal disease. This set of affordable non-invasive tests delineates those at risk of linear growth failure and may be used for improved assessment of interventions to optimize growth during a critical period of early childhood. PMID:23185075

  1. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of three-dimensional multiple-actuated-wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input and single-output controller and a multi-input and multi-output controller are designed based on the unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  2. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold where the high-dimensional data lie. SR then codes the projected testing pixels as sparse linear combinations of all the training samples and classifies the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between the high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
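
    The SR classification step can be sketched as follows: a test pixel is coded as a sparse linear combination of all training samples, and the class whose atoms give the smallest reconstruction residual wins. The sketch below uses an l1 coder and synthetic stand-ins for the LPP-projected data.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(D, labels, y, alpha=0.01):
            """Sparse-representation classification: code y over the training
            dictionary D (columns = samples), then pick the class whose atoms
            give the smallest reconstruction residual."""
            coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000).fit(D, y)
            x = coder.coef_
            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)      # keep only class-c coefficients
                residuals[c] = np.linalg.norm(y - D @ xc)
            return min(residuals, key=residuals.get)

        # Synthetic stand-in for LPP-projected hyperspectral pixels (2 classes)
        rng = np.random.default_rng(3)
        c0 = rng.normal(0.0, 0.1, (20, 30)) + np.linspace(0, 1, 30)
        c1 = rng.normal(0.0, 0.1, (20, 30)) + np.linspace(1, 0, 30)
        D = np.vstack([c0, c1]).T                       # dictionary: 30-dim atoms as columns
        labels = np.array([0] * 20 + [1] * 20)
        test = rng.normal(0.0, 0.1, 30) + np.linspace(0, 1, 30)
        print("predicted class:", src_classify(D, labels, test))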

  3. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables; 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.

  4. Otoacoustic emissions in the general adult population of Nord-Trøndelag, Norway: III. Relationships with pure-tone hearing thresholds.

    PubMed

    Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J

    2005-01-01

    This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N = 6415), to determine the efficiency with which transient-evoked OAEs (TEOAEs) and distortion-product OAEs (DPOAEs) can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operating characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs, with optimal values between 25 and 45 dB HL, depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.

  5. Optimizing an ELF/VLF Phased Array at HAARP

    NASA Astrophysics Data System (ADS)

    Fujimaru, S.; Moore, R. C.

    2013-12-01

    The goal of this study is to maximize the amplitude of 1-5 kHz ELF/VLF waves generated by ionospheric HF heating and measured at a ground-based ELF/VLF receiver. The optimization makes use of experimental observations performed during ELF/VLF wave generation experiments at the High-frequency Active Auroral Research Program (HAARP) Observatory in Gakona, Alaska. During these experiments, the amplitude, phase, and propagation delay of the ELF/VLF waves were carefully measured. The HF beam was aimed at 15 degrees zenith angle in 8 different azimuthal directions, equally spaced in a circle, while broadcasting a 3.25 MHz (X-mode) signal that was amplitude modulated (square wave) with a linear frequency-time chirp between 1 and 5 kHz. The experimental observations are used to provide reference amplitudes, phases, and propagation delays for ELF/VLF waves generated at these specific locations. The presented optimization accounts for the trade-off between duty cycle, heated area, and the distributed nature of the source region in order to construct a "most efficient" phased array. The amplitudes and phases generated by modulated heating at each location are combined in post-processing to find an optimal combination of duty cycle, heating location, and heating order.

  6. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    The flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays so as to obtain optimized antenna positions that achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples illustrate the use of FPA for linear antenna array optimization, and the results are validated by benchmarking against state-of-the-art nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases FPA outperforms the other evolutionary algorithms, and at times it yields similar performance.

  7. Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate

    NASA Astrophysics Data System (ADS)

    Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna

    2018-02-01

    Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
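
    The weighting idea admits a closed form: if A is the error covariance of the bias-corrected participating products against the site data, the weights that sum to one and minimize the mean squared error of the combination are w = A⁻¹1 / (1ᵀA⁻¹1). The sketch below uses synthetic products and "tower" data in place of the real ET datasets.

        import numpy as np

        def optimal_weights(errs):
            """Weights summing to one that minimize the MSE of the combination:
            w = A^{-1} 1 / (1' A^{-1} 1), where A is the covariance of the
            (bias-corrected) member errors at the training sites."""
            A = errs.T @ errs / errs.shape[0]
            a = np.linalg.solve(A, np.ones(A.shape[1]))
            return a / a.sum()

        # Synthetic stand-ins for flux-tower ET and gridded products at the towers
        rng = np.random.default_rng(4)
        et_tower = 2.0 + rng.normal(0.0, 0.5, 500)                 # "observed" ET, mm/day
        members = np.column_stack([et_tower + rng.normal(b, s, 500)
                                   for b, s in [(0.2, 0.3), (-0.1, 0.5), (0.0, 0.8)]])

        bias = (members - et_tower[:, None]).mean(axis=0)          # per-product mean bias
        members_bc = members - bias
        w = optimal_weights(members_bc - et_tower[:, None])
        combined = members_bc @ w
        print("weights:", np.round(w, 3))                          # best product weighted most
        print("RMSE of combination:", np.sqrt(np.mean((combined - et_tower) ** 2)))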

  8. Reduced-Drift Virtual Gyro from an Array of Low-Cost Gyros.

    PubMed

    Vaccaro, Richard J; Zaki, Ahmed S

    2017-02-11

    A Kalman filter approach for combining the outputs of an array of high-drift gyros to obtain a virtual lower-drift gyro has been known in the literature for more than a decade. The success of this approach depends on the correlations of the random drift components of the individual gyros. However, no method of estimating these correlations has appeared in the literature. This paper presents an algorithm for obtaining the statistical model for an array of gyros, including the cross-correlations of the individual random drift components. In order to obtain this model, a new statistic, called the "Allan covariance" between two gyros, is introduced. The gyro array model can be used to obtain the Kalman filter-based (KFB) virtual gyro. Instead, we consider a virtual gyro obtained by taking a linear combination of individual gyro outputs. The gyro array model is used to calculate the optimal coefficients, as well as to derive a formula for the drift of the resulting virtual gyro. The drift formula for the optimal linear combination (OLC) virtual gyro is identical to that previously derived for the KFB virtual gyro. Thus, a Kalman filter is not necessary to obtain a minimum drift virtual gyro. The theoretical results of this paper are demonstrated using simulated as well as experimental data. In experimental results with a 28-gyro array, the OLC virtual gyro has a drift spectral density 40 times smaller than that obtained by taking the average of the gyro signals.
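    The minimum-drift weights for such a linear combination follow from minimising w'Cw subject to the weights summing to one, where C collects the (Allan-)covariances of the individual drift components; the resulting drift variance is 1/(1'C⁻¹1). A toy sketch, with a simplified stand-in for the paper's Allan-covariance statistic:

        import numpy as np

        def allan_cov(x, y, m):
            # "Allan covariance" of two rate signals at cluster size m samples:
            # the Allan-variance formula applied to a pair of gyros (a simplified
            # stand-in for the statistic introduced in the paper).
            ax = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
            ay = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(ax) * np.diff(ay))

        def olc_weights(C):
            # Minimise w' C w subject to sum(w) = 1:
            # w = C^{-1} 1 / (1' C^{-1} 1), drift variance 1 / (1' C^{-1} 1).
            one = np.ones(C.shape[0])
            v = np.linalg.solve(C, one)
            return v / (one @ v)

        # Toy array: 4 gyros sharing a correlated random-drift component.
        rng = np.random.default_rng(2)
        common = np.cumsum(rng.normal(0, 0.01, 10000))
        gyros = [common * rng.uniform(0.5, 1.0) + rng.normal(0, 1.0, 10000)
                 for _ in range(4)]
        C = np.array([[allan_cov(a, b, 100) for b in gyros] for a in gyros])
        w = olc_weights(C)
        print(w, w @ C @ w)   # drift of the virtual gyro at this cluster size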

  9. Serenity: A subsystem quantum chemistry program.

    PubMed

    Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes

    2018-05-15

    We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory. © 2018 Wiley Periodicals, Inc.

  10. Multidimensional indexing structure for use with linear optimization queries

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Castelli, Vittorio (Inventor); Chang, Yuan-Chi (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor)

    2002-01-01

    Linear optimization queries, which usually arise in various decision support and resource planning applications, are queries that retrieve the top N data records (where N is an integer greater than zero) which satisfy a specific optimization criterion. The optimization criterion is to either maximize or minimize a linear equation whose coefficients are given at query time. Methods and apparatus are disclosed for constructing, maintaining and utilizing a multidimensional indexing structure of database records to improve the execution speed of linear optimization queries. Database records with numerical attributes are organized into a number of layers, and each layer represents a geometric structure called a convex hull. Linear optimization queries are processed by searching from the outermost layer of this multi-layer indexing structure inwards. At least one record per layer will satisfy the query criterion, and the number of layers that need to be searched depends on the spatial distribution of records, the linear coefficients issued with the query, and N, the number of records to be returned. When N is small compared to the total size of the database, answering the query typically requires searching only a small fraction of all relevant records, resulting in a tremendous speedup compared to linearly scanning the entire dataset.
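    A compact sketch of the indexing idea: peel the records into convex-hull layers offline ("onion peeling"), then answer a top-N query by scanning only the outermost N layers, since each layer contains at least one of the top N for any linear objective. Function names and data are illustrative:

        import numpy as np
        from scipy.spatial import ConvexHull

        def build_layers(points):
            # Index construction: peel the point set into nested convex-hull
            # layers, outermost first (assumes points in general position).
            idx = np.arange(len(points))
            layers = []
            while len(idx) > 3:
                hull = ConvexHull(points[idx])
                layers.append(idx[hull.vertices])
                idx = np.delete(idx, hull.vertices)
            if len(idx):
                layers.append(idx)
            return layers

        def top_n(points, layers, c, n):
            # Query step: the top-n answers for linear objective c'x always lie
            # within the outermost n layers, so only those layers are scanned.
            cand = np.concatenate(layers[:n])
            scores = points[cand] @ c
            return cand[np.argsort(scores)[::-1][:n]]

        pts = np.random.default_rng(3).random((500, 2))
        layers = build_layers(pts)
        print(top_n(pts, layers, np.array([0.7, 0.3]), n=3))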

  11. Optimizing complex phenotypes through model-guided multiplex genome engineering

    DOE PAGES

    Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.; ...

    2017-05-25

    Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.

  12. Optimizing complex phenotypes through model-guided multiplex genome engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.

    Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.
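    A toy sketch of the regression step described above: with a clone-by-allele design matrix and measured doubling times, a regularised linear fit attributes a per-allele effect even when edits co-occur. RidgeCV stands in for whichever regularised estimator the authors used; all data are synthetic:

        import numpy as np
        from sklearn.linear_model import RidgeCV

        # Hypothetical data: rows are sequenced clones, columns indicate which
        # candidate single-nucleotide edits each clone carries (0/1), and the
        # response is the measured doubling time.
        rng = np.random.default_rng(4)
        n_clones, n_alleles = 200, 30
        G = rng.integers(0, 2, (n_clones, n_alleles)).astype(float)
        true_effects = np.zeros(n_alleles)
        true_effects[:6] = [-0.8, -0.5, -0.4, -0.3, -0.2, -0.2]  # 6 useful edits
        y = 2.5 + G @ true_effects + rng.normal(0, 0.1, n_clones)

        # Regularised multivariate linear regression: the penalty stabilises the
        # per-allele effect estimates against correlated (hitchhiking) mutations.
        model = RidgeCV(alphas=np.logspace(-3, 2, 30)).fit(G, y)
        ranked = np.argsort(model.coef_)     # most growth-improving edits first
        print(ranked[:6], model.coef_[ranked[:6]])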

  13. A More Compact AES

    NASA Astrophysics Data System (ADS)

    Canright, David; Osvik, Dag Arne

    We explore ways to reduce the number of bit operations required to implement AES. One way involves optimizing the composite field approach for entire rounds of AES. Another way is integrating the Galois multiplications of MixColumns with the linear transformations of the S-box. Combined with careful optimizations, these reduce the number of bit operations to encrypt one block by 9.0%, compared to earlier work that used the composite field only in the S-box. For decryption, the improvement is 13.5%. This work may be useful both as a starting point for a bit-sliced software implementation, where reducing operations increases speed, and also for hardware with limited resources.

  14. On stochastic control and optimal measurement strategies. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kramer, L. C.

    1971-01-01

    The control of stochastic dynamic systems is studied with particular emphasis on those which influence the quality or nature of the measurements which are made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which it is possible to apply deterministic methods, specifically the minimum principle, to the study of stochastic problems; (3) the application of the described methods to linear systems with Gaussian disturbances, to study the structure of the resulting control system; and (4) several applications.

  15. Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.

    PubMed

    Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W

    2005-01-01

    Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Squares (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from the MOE package, in combination with E-state or ISIS keys, were used. The best model was obtained using linear PLS for a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model, validated on a test set of 177 compounds not included in the training set, has an r2 of 0.911 and an RMSE of 0.475 log S(w). The descriptors were ranked according to their importance, and the 22 MOE descriptors were found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was done it is expected to be a valuable prediction tool alongside the PLS model. The statistics obtained using nonlinear methods did not surpass those obtained with linear ones. The good statistics obtained for linear PLS and CR recommend these models for prediction when it is difficult or impossible to make experimental measurements, and for virtual screening, combinatorial library design, and efficient lead optimization.

  16. Optimal Hedging Rule for Reservoir Refill Operation

    NASA Astrophysics Data System (ADS)

    Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.

    2015-12-01

    This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that optimal refill operation makes the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of the refill period less than storage capacity) as nearly equal as possible to the expected marginal flood damage from levee overtopping downstream, while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. An RHR curve is drawn analogously to water supply and flood hedging rules, showing the trade-off between the two objectives. The release decision has a linear relationship with the current water availability, implying the linearity of the RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of several factors: larger downstream flood conveyance capacity and empty reservoir capacity allow a smaller current release, so more water can be conserved. The economic indicators of conservation benefit and flood damage compete with each other over release: the greater the economic importance of flood damage, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but an opposing effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling-horizon decision approach can lead to gradual dynamic refilling, indicating its potential for practical use.

  17. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; doing so, however, would help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints, and to exemplify its usability with the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified by formulating a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was infeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients; the high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
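    A minimal sketch of a consistency-constrained formulation of this kind with scipy.optimize.linprog; the ingredient table, requirements, and swelling coefficients are invented for illustration and are not the paper's data:

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical ingredient table (per 100 g): cost and three nutrients
        # (energy kcal, protein g, zinc mg), plus a starch-swelling coefficient
        # used in the consistency constraint. All numbers are illustrative only.
        ingredients = ["maize flour", "groundnut", "sugar", "zinc/calcium premix"]
        cost  = np.array([0.05, 0.20, 0.08, 1.50])       # currency units / 100 g
        nutr  = np.array([[360.0,  9.0,  1.7],
                          [570.0, 25.0,  3.3],
                          [390.0,  0.0,  0.0],
                          [  0.0,  0.0, 80.0]])
        need  = np.array([400.0, 12.0, 4.0])             # requirement / 100 g mix
        swell = np.array([1.0, 0.2, 0.0, 0.0])           # swelling-starch fraction

        # Minimise cost'x over mass fractions x of the dry mix, subject to
        # nutrient adequacy (nutr'x >= need), a linear consistency bound limiting
        # the swelling starch so the porridge stays spoonable, and sum(x) = 1.
        res = linprog(c=cost,
                      A_ub=np.vstack([-nutr.T, swell]),
                      b_ub=np.concatenate([-need, [0.6]]),
                      A_eq=np.ones((1, 4)), b_eq=[1.0],
                      bounds=[(0, None)] * 4)
        print(dict(zip(ingredients, res.x.round(3))), round(res.fun, 3))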

  18. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced: Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm into an MKL-SVM-PSO algorithm so as to realize rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as a constant inertia weight, a linear inertia weight, and a nonlinear inertia weight, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of results averaged over 20 runs with different inertia weights shows that the dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic schemes, the nonlinear inertia weight gives a shorter parameter optimization time and an average fitness value after convergence much closer to the optimal fitness value, outperforming the linear inertia weight. In addition, a better nonlinear inertia weight is verified. PMID:29853983

  19. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced: Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm into an MKL-SVM-PSO algorithm so as to realize rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as a constant inertia weight, a linear inertia weight, and a nonlinear inertia weight, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of results averaged over 20 runs with different inertia weights shows that the dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic schemes, the nonlinear inertia weight gives a shorter parameter optimization time and an average fitness value after convergence much closer to the optimal fitness value, outperforming the linear inertia weight. In addition, a better nonlinear inertia weight is verified.
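    A generic sketch of the PSO loop with the linear and nonlinear inertia-weight schedules being compared; the objective is a toy function standing in for cross-validated MKL-SVM accuracy, and all hyperparameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(5)

        def pso(f, dim, bounds, pop=30, iters=100, c1=2.0, c2=2.0,
                w_start=0.9, w_end=0.4, schedule="linear"):
            # Minimal PSO with a time-varying inertia weight. "linear" decreases
            # w from w_start to w_end; "nonlinear" is one of many concave decays
            # (here quadratic) of the kind compared in the paper.
            lo, hi = bounds
            X = rng.uniform(lo, hi, (pop, dim))
            V = np.zeros((pop, dim))
            P, pbest = X.copy(), np.apply_along_axis(f, 1, X)
            g = P[pbest.argmin()].copy()
            for t in range(iters):
                frac = t / (iters - 1)
                if schedule == "linear":
                    w = w_start - (w_start - w_end) * frac
                else:                               # nonlinear (quadratic) decay
                    w = w_end + (w_start - w_end) * (1 - frac) ** 2
                r1, r2 = rng.random((2, pop, dim))
                V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
                X = np.clip(X + V, lo, hi)
                fx = np.apply_along_axis(f, 1, X)
                better = fx < pbest
                P[better], pbest[better] = X[better], fx[better]
                g = P[pbest.argmin()].copy()
            return g, pbest.min()

        # Toy use: tune a 2-D (C, gamma)-style hyperparameter vector; in
        # MKL-SVM-PSO the same loop would wrap cross-validated accuracy.
        print(pso(lambda x: ((x - 1.5) ** 2).sum(), dim=2, bounds=(0.0, 10.0)))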

  20. Analysis of polycyclic aromatic hydrocarbons in water and beverages using membrane-assisted solvent extraction in combination with large volume injection-gas chromatography-mass spectrometric detection.

    PubMed

    Rodil, Rosario; Schellin, Manuela; Popp, Peter

    2007-09-07

    Membrane-assisted solvent extraction (MASE) in combination with large volume injection-gas chromatography-mass spectrometry (LVI-GC-MS) was applied for the determination of 16 polycyclic aromatic hydrocarbons (PAHs) in aqueous samples. The MASE conditions were optimized for achieving high enrichment of the analytes from aqueous samples, in terms of the extraction conditions (shaking speed, extraction temperature and time), the extraction solvent, and the sample composition (ionic strength, sample pH and presence of organic solvent). Parameters such as the linearity and reproducibility of the procedure were determined. The extraction efficiency was above 65% for all the analytes, and the relative standard deviation (RSD) for five consecutive extractions ranged from 6 to 18%. Under optimized conditions, detection limits at the ng/L level were achieved. The effectiveness of the method was tested by analyzing real samples, such as river water, apple juice, red wine and milk.

  1. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  2. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; in particular, the mean and covariance matrix of the forecast errors are updated online and leveraged to enforce voltage regulation with a predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
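    The Chebyshev-based tightening admits a compact illustration: a one-sided (Cantelli) bound converts a chance constraint on, say, a voltage limit into a deterministic margin built from the online-updated error mean and standard deviation. Numbers below are illustrative:

        import numpy as np

        def chebyshev_margin(eps):
            # One-sided Chebyshev (Cantelli) multiplier: if the error e has mean
            # mu and std sigma, then P(e > mu + k*sigma) <= 1/(1+k^2). Enforcing
            # v_hat + mu + k*sigma <= v_max with k = sqrt((1-eps)/eps) guarantees
            # P(v <= v_max) >= 1 - eps for any error distribution.
            return np.sqrt((1 - eps) / eps)

        # Online-updated forecast-error statistics (illustrative numbers):
        mu, sigma = 0.002, 0.004       # p.u. voltage error mean and std
        v_max, eps = 1.05, 0.05
        # Deterministic limit handed to the convex OPF instead of v <= v_max:
        v_limit = v_max - mu - chebyshev_margin(eps) * sigma
        print(v_limit)   # ~1.0306 p.u.: the tightened, distributionally robust bound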

  3. Short-Term Planning of Hybrid Power System

    NASA Astrophysics Data System (ADS)

    Knežević, Goran; Baus, Zoran; Nikolovski, Srete

    2016-07-01

    In this paper, a short-term planning algorithm is presented for a hybrid power system consisting of different types of cascaded hydropower plants (run-of-the-river, pumped-storage, conventional), thermal power plants (coal-fired plants, combined-cycle gas-fired plants) and wind farms. The optimization process provides a joint bid of the hybrid system, and thus determines the operation schedule of the hydro and thermal power plants and the operating condition of the pumped-storage hydropower plants, with the aim of maximizing profit on the day-ahead market, according to the expected hourly electricity prices, the expected local water inflow at certain hydropower plants, and the expected production of electrical energy from the wind farm, taking into account previously contracted bilateral agreements for electricity generation. The optimization process is formulated as an hourly-discretized mixed integer linear optimization problem. The optimization model is applied to a case study in order to show the general features of the developed model.

  4. ADS: A FORTRAN program for automated design synthesis: Version 1.10

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1985-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.

  5. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.

  6. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.

  7. EEG-based mild depressive detection using feature selection methods and classifiers.

    PubMed

    Li, Xiaowei; Hu, Bin; Sun, Shuting; Cai, Hanshu

    2016-11-01

    Depression has become a major health burden worldwide, and effective detection of this disorder is a great challenge that requires the latest technological tools, such as electroencephalography (EEG). This EEG-based research seeks to find the frequency band and brain regions most related to mild depression, as well as an optimal combination of classification algorithms and feature selection methods that can be used in future mild depression detection. An experiment based on a facial expression viewing task (Emo_block and Neu_block) was conducted, and EEG data from 37 university students were collected using a 128-channel HydroCel Geodesic Sensor Net (HCGSN). For discriminating mildly depressed patients from normal controls, BayesNet (BN), Support Vector Machine (SVM), Logistic Regression (LR), k-nearest neighbor (KNN) and RandomForest (RF) classifiers were used, and BestFirst (BF), GreedyStepwise (GSW), GeneticSearch (GS), LinearForwardSelection (LFS) and RankSearch (RS) based on Correlation-based Feature Selection (CFS) were applied for linear and non-linear EEG feature selection. An independent-samples t-test with Bonferroni correction was used to find the significantly discriminant electrodes and features. The data mining results indicate that optimal performance is achieved using the combination of the feature selection method GSW based on CFS and the classifier KNN for the beta frequency band: accuracies reached 92.00% and 98.00%, and AUC reached 0.957 and 0.997, for the Emo_block and Neu_block beta-band data, respectively. The t-test results validate the effectiveness of the features selected by the search method GSW. A simplified EEG system with only the FP1, FP2, F3, O2 and T3 electrodes was also explored with linear features, which yielded accuracies of 91.70% and 96.00% and AUC of 0.952 and 0.972 for Emo_block and Neu_block, respectively. The classification results obtained by GSW + KNN are encouraging and better than previously published results. In the spatial distribution of features, we find that the left parietotemporal lobe in the beta EEG frequency band has a greater effect on mild depression detection, and that fewer EEG channels (FP1, FP2, F3, O2 and T3) combined with linear features may be good candidates for use in portable systems for mild depression detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support for the integration of parallel computation in items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations, thereby demonstrating the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations were carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of complex aircraft configurations.

  9. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint allowing performance within a specified tolerance of the optimal objective function value, and identified a few maximally different alternatives from that region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point, and repeat until the desired number of alternatives is generated. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within the bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
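    A bare-bones sketch of the sampler described above, with a toy nonconvex feasible set and a rejection step standing in for the slice-sampling move along the hit line; tolerances, bounds and the objective are all illustrative:

        import numpy as np

        rng = np.random.default_rng(6)

        def objective(x):
            return (x - 1.0) @ (x - 1.0)         # toy stand-in objective

        def near_optimal(x, f_opt, tol=0.10):
            # Near-optimal region: original (nonlinear, nonconvex) constraints
            # plus an objective within `tol` of optimal (the MGA-style bound).
            feasible = np.all(x >= 0) and np.all(x <= 10) and (x[0] * x[1] >= 4.0)
            return feasible and objective(x) <= f_opt * (1 + tol)

        def hit_and_run(x0, f_opt, n_samples=200, r_max=5.0, tries=50):
            # From the current hit point, draw a random direction, then a random
            # feasible run length along that line (crude rejection stand-in for
            # the slice-sampling step within the nonlinear bounds).
            x, out = x0.copy(), []
            while len(out) < n_samples:
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)
                for _ in range(tries):
                    t = rng.uniform(-r_max, r_max)
                    if near_optimal(x + t * d, f_opt):
                        x = x + t * d
                        out.append(x.copy())
                        break
            return np.array(out)

        x0 = np.array([2.0, 2.0])                # optimum of the toy problem
        samples = hit_and_run(x0, objective(x0))
        print(len(samples), samples.min(axis=0), samples.max(axis=0))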

  10. How quantitative measures unravel design principles in multi-stage phosphorylation cascades.

    PubMed

    Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf

    2008-09-07

    We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.

  11. Simulations of nanocrystals under pressure: combining electronic enthalpy and linear-scaling density-functional theory.

    PubMed

    Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  12. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    NASA Astrophysics Data System (ADS)

    Corsini, Niccolò R. C.; Greco, Andrea; Hine, Nicholas D. M.; Molteni, Carla; Haynes, Peter D.

    2013-08-01

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  13. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; the optimal combined data assimilation-initialization method for linear models is a modified version of the KB filter.

  14. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  15. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    DOE PAGES

    Radak, Brian K.; Roux, Benoît

    2016-10-07

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.

  16. Advanced complex trait analysis.

    PubMed

    Gray, A; Stewart, I; Tenesa, A

    2012-12-01

    The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library, combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries; for more information see http://www.epcc.ed.ac.uk/software-products/acta. Contact: a.gray@ed.ac.uk. Supplementary data are available at Bioinformatics online.
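    The GRM computation that dominates the first phase reduces to a single dense matrix product once the genotypes are standardised, which is why handing it to an optimised BLAS (dgemm/dsyrk) pays off. A sketch with synthetic genotypes, assuming the standard GCTA-style standardisation:

        import numpy as np

        def grm(genotypes, freqs):
            # Genetic relationship matrix: standardise each SNP by its allele
            # frequency, then A = Z Z' / m. Writing it as one matrix product is
            # what lets an optimised BLAS do the heavy lifting.
            Z = (genotypes - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))
            return Z @ Z.T / genotypes.shape[1]

        rng = np.random.default_rng(8)
        p = rng.uniform(0.1, 0.9, 1000)                    # SNP allele frequencies
        X = rng.binomial(2, p, (500, 1000)).astype(float)  # 500 individuals
        A = grm(X, p)
        print(A.shape, A.diagonal().mean())                # ~1.0 on the diagonal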

  17. Optimized Spectral Editing of 13C MAS NMR Spectra of Rigid Solids Using Cross-Polarization Methods

    NASA Astrophysics Data System (ADS)

    Sangill, R.; Rastrupandersen, N.; Bildsoe, H.; Jakobsen, H. J.; Nielsen, N. C.

    Combinations of 13C magic-angle spinning (MAS) NMR experiments employing cross polarization (CP), cross polarization-depolarization (CPD), and cross polarization-depolarization-repolarization are analyzed quantitatively to derive simple and general procedures for optimized spectral editing of 13C CP/MAS NMR spectra of rigid solids by separation of the 13C resonances into CHn subspectra (n = 0, 1, 2, and 3). Special attention is devoted to a differentiation by CPD/MAS of CH and CH2 resonances, since these groups behave quite similarly during spin lock under Hartmann-Hahn match and are therefore generally difficult to distinguish unambiguously. A general procedure for the design of subexperiments and linear combinations of their spectra to provide optimized signal-to-noise ratios for the edited subspectra is described. The technique is illustrated by a series of edited 13C CP/MAS spectra for a number of rigid solids ranging from simple organic compounds (sucrose and l-menthol) to complex pharmaceutical products (calcipotriol monohydrate and vitamin D3) and polymers (polypropylene, polyvinyl alcohol, polyvinyl chloride, and polystyrene).

  18. Dynamic modeling and optimization for space logistics using time-expanded networks

    NASA Astrophysics Data System (ADS)

    Ho, Koki; de Weck, Olivier L.; Hoffman, Jeffrey A.; Shishko, Robert

    2014-12-01

    This research develops a dynamic logistics network formulation for lifecycle optimization of mission sequences as a system-level integrated method to find an optimal combination of technologies to be used at each stage of a campaign. This formulation can find the optimal transportation architecture while considering its technology trades over time. The proposed methodologies are inspired by ground logistics analysis techniques based on linear-programming network optimization. In particular, the time-expanded network and its extension are developed for dynamic space logistics network optimization, trading the quality of the solution against the computational load. In this paper, the methodologies are applied to a human Mars exploration architecture design problem. The results reveal multiple dynamic system-level trades over time and give recommendations for the optimal strategy for the human Mars exploration architecture. The considered trades include those between In-Situ Resource Utilization (ISRU) and propulsion technologies, as well as the orbit and depot location selections over time. This research serves as a precursor for eventual permanent settlement and colonization of other planets by humans and for our becoming a multi-planet species.

  19. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
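    The transformation of the linear-fractional yield problem into a linear program is the classical Charnes-Cooper substitution. A small sketch on a toy three-reaction network; the network, bounds and scipy-based solver choice are illustrative, not the paper's implementation:

        import numpy as np
        from scipy.optimize import linprog

        def max_yield(S, c, d, lb, ub):
            # Maximise the yield (c'v)/(d'v) over {S v = 0, lb <= v <= ub} via
            # the Charnes-Cooper transformation: with y = t*v and t = 1/(d'v) > 0,
            # the problem becomes the LP
            #   max c'y  s.t.  S y = 0,  d'y = 1,  lb*t <= y <= ub*t,  t >= 0,
            # over z = [y, t]; the flux solution is recovered as v = y/t.
            m, n = S.shape
            c_lp = np.concatenate([-c, [0.0]])             # linprog minimises
            A_eq = np.block([[S, np.zeros((m, 1))],
                             [d.reshape(1, -1), np.zeros((1, 1))]])
            b_eq = np.concatenate([np.zeros(m), [1.0]])
            # Encode y - ub*t <= 0 and lb*t - y <= 0:
            A_ub = np.block([[np.eye(n), -ub.reshape(-1, 1)],
                             [-np.eye(n), lb.reshape(-1, 1)]])
            res = linprog(c_lp, A_ub=A_ub, b_ub=np.zeros(2 * n),
                          A_eq=A_eq, b_eq=b_eq,
                          bounds=[(None, None)] * n + [(1e-9, None)])
            y, t = res.x[:n], res.x[n]
            return y / t, -res.fun                         # optimal fluxes, yield

        # Toy network: substrate uptake v0 splits into product v1 + byproduct v2.
        S  = np.array([[1.0, -1.0, -1.0]])                 # one internal metabolite
        c  = np.array([0.0, 1.0, 0.0])                     # product synthesis flux
        d  = np.array([1.0, 0.0, 0.0])                     # substrate uptake flux
        lb = np.array([0.0, 0.0, 0.2])                     # forced byproduct drain
        ub = np.array([10.0, 10.0, 10.0])
        v, yld = max_yield(S, c, d, lb, ub)
        print(v, yld)                                      # yield < 1 since v2 >= 0.2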

  20. Flexible, phase-matched, linear receive arrays for high-field MRI in monkeys.

    PubMed

    Goense, Jozien; Logothetis, Nikos K; Merkle, Hellmut

    2010-10-01

    High signal-to-noise ratios (SNR) are essential for high-resolution anatomical and functional MRI. Phased arrays are advantageous for this but have the drawback that they often have inflexible and bulky configurations. Particularly in experiments where functional MRI is combined with simultaneous electrophysiology, space constraints can be prohibitive. To this end we developed a highly flexible multiple receive element phased array for use on anesthetized monkeys. The elements are interchangeable and different sizes and combinations of coil elements can be used, for instance, combinations of single and overlapped elements. The preamplifiers including control electronics are detachable and can serve a variety of prefabricated and phase matched arrays of different configurations, allowing the elements to always be placed in close proximity to the area of interest. Optimizing performance of the individual elements ensured high SNR at the cortical surface as well as in deeper laying structures. Performance of a variety of arrangements of gapped linear arrays was evaluated at 4.7 and 7T in high-resolution anatomical and functional MRI. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of the global QoS constraints into local constraints; second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.

  2. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which shows closer-to-optimal results with much faster solving times than those obtained from a conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  3. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with selected linear contrasts. To correct for potential heterogeneity of the variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization techniques and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
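    The existing allocation rule quoted above is easy to state concretely; a short computation (with assumed standard deviations and unit costs) shows the kind of ratio it produces, which the paper's optimization and screening search then improve upon:

        import math

        # Classical heuristic: the ratio of group sizes equals the ratio of the
        # standard deviations over the square root of the cost ratio,
        #   n1/n2 = (sigma1/sigma2) * sqrt(c2/c1).
        sigma1, sigma2 = 12.0, 6.0      # assumed group standard deviations
        c1, c2 = 4.0, 1.0               # assumed unit sampling costs
        ratio = (sigma1 / sigma2) * math.sqrt(c2 / c1)
        print(ratio)                    # 2 * sqrt(1/4) = 1.0 -> equal group sizes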

  4. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    PubMed

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and the chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive- and negative-polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.

  5. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
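    For reference, the conventional linear baseline that the trained nonlinear filter is compared against is standard Wiener deconvolution; a self-contained sketch with a synthetic image and Gaussian PSF (the EDF point-spread-function engineering itself is not modelled here):

        import numpy as np

        def wiener_deconvolve(image, psf, nsr=0.01):
            # Classical linear Wiener deconvolution in the Fourier domain:
            #   X_hat = Y * conj(H) / (|H|^2 + NSR).
            # The constant noise-to-signal ratio `nsr` is what limits the
            # high-frequency noise amplification the paper aims to reduce.
            H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
            Y = np.fft.fft2(image)
            X = Y * np.conj(H) / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(X))

        # Toy EDF-style blur: image convolved with a wide Gaussian PSF plus noise.
        rng = np.random.default_rng(7)
        img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
        yy, xx = np.mgrid[-32:32, -32:32]
        psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2)); psf /= psf.sum()
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                       np.fft.fft2(np.fft.ifftshift(psf))))
        noisy = blurred + rng.normal(0, 0.01, img.shape)
        restored = wiener_deconvolve(noisy, psf)
        print(np.abs(restored - img).mean())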

  6. Multi-segment detector array for hybrid reflection-mode ultrasound and optoacoustic tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Merčep, Elena; Burton, Neal C.; Deán-Ben, Xosé Luís.; Razansky, Daniel

    2017-02-01

    The complementary contrast of the optoacoustic (OA) and pulse-echo ultrasound (US) modalities makes the combined usage of these imaging technologies highly advantageous. Due to the different physical contrast mechanisms, the development of a detector array optimally suited for both modalities is one of the challenges to the efficient implementation of a single OA-US imaging device. We demonstrate the imaging performance of the first hybrid detector array whose novel design, incorporating array segments of linear and concave geometry, optimally supports image acquisition in both reflection-mode ultrasonography and optoacoustic tomography modes. The hybrid detector array has a total of 256 elements in three segments of different geometry and pitch: a central 128-element linear segment with a pitch of 0.25 mm, ideally suited for pulse-echo US imaging, and two external 64-element segments with concave geometry and 0.6 mm pitch, optimized for OA image acquisition. Interleaved OA and US image acquisition at up to 25 fps is facilitated through a custom-made multiplexer unit. The spatial resolution of the transducer was characterized in numerical simulations and validated in phantom experiments; it is 230 and 300 μm in the OA and US imaging modes, respectively. The imaging performance of the multi-segment detector array was experimentally shown in a series of imaging sessions with healthy volunteers. Employing mixed array geometries allows simultaneously achieving excellent OA contrast with a large field of view and US contrast for complementary structural features with reduced side lobes and improved resolution. The newly designed hybrid detector array, comprising segments of linear and concave geometries, optimally fulfills the requirements for efficient US and OA imaging and may expand the applicability of the developed hybrid OPUS imaging technology and accelerate its clinical translation.

  7. Search for a new economic optimum in the management of household waste in Tiaret city (western Algeria).

    PubMed

    Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A

    2016-11-01

    In household waste matters, the objective is always to conceive an optimal integrated system of management, where the terms 'optimal' and 'integrated' refer generally to a combination between the waste and the techniques of treatment, valorization and elimination, which often aims at the lowest possible cost. The management optimization of household waste using operational methodologies has not yet been applied in any Algerian district. We propose an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations with 28 decision variables and aims to optimally assign the seven components of household waste (i.e. plastic, cardboard/paper, glass, metals, textiles, organic matter and others) among four treatment centres [i.e. waste-to-energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. The analysis of the obtained results shows that the variation of total cost is mainly due to the assignment of waste among the treatment centres and that certain treatments cannot be applied to household waste in Tiaret city. On the other hand, certain techniques of valorization were favoured by the optimization. In this work, four scenarios were proposed to optimize the system cost; the modelling shows that the mixed scenario (the three treatment centres CM, ANB, LF) provides the best combination of waste treatment technologies, with an optimal solution for the system (cost and profit). © The Author(s) 2016.
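
    A linear simplification of the assignment problem described above can be written as a transportation-style LP: x[i, j] tonnes of component i sent to treatment j, minimizing total cost subject to full assignment. All tonnages and unit costs below are illustrative; the published model is non-linear and richer than this sketch.

```python
import numpy as np
from scipy.optimize import linprog

components = ["plastic", "cardboard", "glass", "metal", "textile", "organic", "other"]
tonnes = np.array([120, 80, 40, 25, 30, 400, 60], float)
# unit cost (per tonne) of WTE, CM, ANB, LF for each component (hypothetical)
cost = np.random.default_rng(0).uniform(10, 60, size=(7, 4))
cost[2, 0] = 1e6   # e.g. forbid incinerating glass via a prohibitive cost

n, m = cost.shape
A_eq = np.zeros((n, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0          # all of component i is assigned
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=tonnes, bounds=(0, None))
print(res.fun, res.x.reshape(n, m).round(1))  # total cost, assignment matrix
```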

  8. Optimal non-linear health insurance.

    PubMed

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  9. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and by combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers, the multilayer perceptron (MLP) and the support vector classifier (SVC), are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between the two classifiers, the MLP and SVC results are combined using the logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We show that the logical minimum reveals gas chimneys that exhibit both the softness of the MLP and the resolution of the SVC classifier.
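
    The final combination step is straightforward to sketch: given each classifier's posterior probability of "chimney" per voxel, apply the element-wise logical rules. The stand-in posterior maps below are random placeholders.

```python
import numpy as np

def combine_posteriors(p_mlp, p_svc, rule="min"):
    """Combine two classifiers' posterior maps with the logical rules
    described above (element-wise over the seismic volume)."""
    rules = {"min": np.minimum, "max": np.maximum,
             "mean": lambda a, b: 0.5 * (a + b)}
    return rules[rule](p_mlp, p_svc)

rng = np.random.default_rng(1)
p_mlp, p_svc = rng.random((64, 64)), rng.random((64, 64))  # stand-in posteriors
# the logical minimum flags a voxel only when *both* classifiers agree
chimney_mask = combine_posteriors(p_mlp, p_svc, "min") > 0.5
```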

  10. Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour

    PubMed Central

    Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad

    2013-01-01

    A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on the sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of the current excitation weights and the uniform interelement spacing. Compared to conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyperbeam of the same array can achieve a reduction in sidelobe level (SLL) and the same or smaller first null beamwidth (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of SLL and FNBW are achieved by the proposed collective animal behaviour (CAB) algorithm. CAB finds a near-global optimal solution, unlike RGA, PSO, and DE, in the present problem. The comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843
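
    One common formulation of the hyperbeam for a symmetric 2N-element linear array combines the sum (S) and difference (D) patterns as (|S|^u - |D|^u)^(1/u), with u the hyperbeam exponent; the sketch below assumes this form and a uniform half-wavelength array, and is illustrative rather than the paper's CAB-optimized design.

```python
import numpy as np

def hyperbeam(weights, d_lambda, theta, u=0.5):
    """Hyperbeam pattern of a 2N-element symmetric linear array (one common
    formulation; weights holds the N per-side excitation amplitudes)."""
    n = np.arange(1, len(weights) + 1)
    psi = 2 * np.pi * d_lambda * np.sin(theta)          # inter-element phase
    S = 2 * np.sum(weights[:, None] * np.cos((2 * n[:, None] - 1) * psi / 2), axis=0)
    D = 2 * np.sum(weights[:, None] * np.sin((2 * n[:, None] - 1) * psi / 2), axis=0)
    return np.abs(np.abs(S) ** u - np.abs(D) ** u) ** (1.0 / u)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
pattern = hyperbeam(np.ones(5), 0.5, theta, u=0.5)      # 10-element uniform array
sll_db = 20 * np.log10(pattern / pattern.max())         # normalized pattern in dB
```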

  11. UV Spectrophotometric Simultaneous Determination of Paracetamol and Ibuprofen in Combined Tablets by Derivative and Wavelet Transforms

    PubMed Central

    Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Minh Thi Nguyen, Hue

    2014-01-01

    The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12–32 mg/L) and paracetamol (20–40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy. PMID:24949492

  12. UV spectrophotometric simultaneous determination of paracetamol and ibuprofen in combined tablets by derivative and wavelet transforms.

    PubMed

    Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Nguyen, Hue Minh Thi

    2014-01-01

    The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12-32 mg/L) and paracetamol (20-40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy.
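
    The transforms named in both records are available off the shelf; below is a minimal sketch with NumPy and PyWavelets covering the first-order derivative, DWT (sym6 and haar), and CWT (mexh); the FWT step is omitted. The two-peak spectrum is a synthetic stand-in for measured UV absorbances.

```python
import numpy as np
import pywt

wavelength = np.linspace(200, 350, 512)
spectrum = (np.exp(-((wavelength - 243) / 12) ** 2)
            + 0.6 * np.exp(-((wavelength - 264) / 9) ** 2))  # synthetic mixture

d1 = np.gradient(spectrum, wavelength)                # first-order derivative spectrum
dwt_sym6 = pywt.wavedec(spectrum, "sym6", level=4)    # DWT (sym6), multilevel
dwt_haar = pywt.wavedec(spectrum, "haar", level=4)    # DWT (haar)
cwt_mexh, _ = pywt.cwt(spectrum, scales=np.arange(1, 31), wavelet="mexh")  # CWT (mexh)
# calibration would then regress transformed-signal amplitudes on concentration
```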

  13. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the linear features among the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  14. Multilayer perceptron architecture optimization using parallel computing techniques.

    PubMed

    Castro, Wilson; Oblitas, Jimy; Santa-Cruz, Roberto; Avila-George, Himer

    2017-01-01

    The objective of this research was to develop a methodology for optimizing multilayer-perceptron-type neural networks by evaluating the effects of three neural architecture parameters, namely, number of hidden layers (HL), neurons per hidden layer (NHL), and activation function type (AF), on the sum of squares error (SSE). The data for the study were obtained from quality parameters (physicochemical and microbiological) of milk samples. Architectures or combinations were organized in groups (G1, G2, and G3) generated upon interspersing one, two, and three layers. Within each group, the networks had three neurons in the input layer, six neurons in the output layer, three to twenty-seven NHL, and three AF types (tan-sig, log-sig, and linear). The number of architectures was determined using three factorial-type experimental designs, which reached 63, 2,187, and 50,049 combinations for G1, G2, and G3, respectively. Using MATLAB 2015a, a logical sequence was designed and implemented for constructing, training, and evaluating multilayer-perceptron-type neural networks using parallel computing techniques. The results show that HL and NHL have a statistically relevant effect on SSE and that, from two hidden layers onward, AF also has a significant effect; thus, both AF and NHL can be evaluated to determine the optimal combination per group. Moreover, in the three study groups, an inverse relationship between the number of processors and the total optimization time is observed.

  15. Multilayer perceptron architecture optimization using parallel computing techniques

    PubMed Central

    Castro, Wilson; Oblitas, Jimy; Santa-Cruz, Roberto; Avila-George, Himer

    2017-01-01

    The objective of this research was to develop a methodology for optimizing multilayer-perceptron-type neural networks by evaluating the effects of three neural architecture parameters, namely, number of hidden layers (HL), neurons per hidden layer (NHL), and activation function type (AF), on the sum of squares error (SSE). The data for the study were obtained from quality parameters (physicochemical and microbiological) of milk samples. Architectures or combinations were organized in groups (G1, G2, and G3) generated upon interspersing one, two, and three layers. Within each group, the networks had three neurons in the input layer, six neurons in the output layer, three to twenty-seven NHL, and three AF types (tan-sig, log-sig, and linear). The number of architectures was determined using three factorial-type experimental designs, which reached 63, 2,187, and 50,049 combinations for G1, G2, and G3, respectively. Using MATLAB 2015a, a logical sequence was designed and implemented for constructing, training, and evaluating multilayer-perceptron-type neural networks using parallel computing techniques. The results show that HL and NHL have a statistically relevant effect on SSE and that, from two hidden layers onward, AF also has a significant effect; thus, both AF and NHL can be evaluated to determine the optimal combination per group. Moreover, in the three study groups, an inverse relationship between the number of processors and the total optimization time is observed. PMID:29236744
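
    The factorial enumeration both records describe can be sketched compactly with scikit-learn's MLPRegressor standing in for the MATLAB networks ('tanh' for tan-sig, 'logistic' for log-sig, 'identity' for linear); the data and the single-hidden-layer grid below are illustrative and do not reproduce the paper's exact 63 G1 architectures. Parallelizing the loop over architectures (e.g., with joblib) would mirror the paper's use of multiple processors.

```python
from itertools import product
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X, Y = rng.random((200, 3)), rng.random((200, 6))   # 3 inputs, 6 outputs (stand-ins)

results = []
for nhl, af in product(range(3, 28), ["tanh", "logistic", "identity"]):
    net = MLPRegressor(hidden_layer_sizes=(nhl,), activation=af,
                       max_iter=500, random_state=0).fit(X, Y)
    sse = float(np.sum((net.predict(X) - Y) ** 2))  # sum of squares error
    results.append((sse, nhl, af))
print(min(results))                                 # best (SSE, NHL, AF) found
```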

  16. Optimization techniques for integrating spatial data

    USGS Publications Warehouse

    Herzfeld, U.C.; Merriam, D.F.

    1995-01-01

    Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map-comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automation, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure appears to select one variable from each data type (structural, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and the partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
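
    The first approach reduces to tuning the weights of a map combination with the Nelder-Mead simplex, starting from a geologically informed first guess; a minimal sketch with synthetic gridded variables follows.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
maps = rng.random((3, 500))       # three gridded predictor variables (stand-ins)
target = 0.5 * maps[0] + 0.3 * maps[1] + 0.2 * maps[2] \
         + 0.05 * rng.standard_normal(500)

def misfit(w):
    w = np.abs(w) / np.abs(w).sum()          # keep weights positive, normalized
    return np.sum((w @ maps - target) ** 2)  # map-comparison objective

first_guess = np.array([0.4, 0.4, 0.2])      # the "geologic knowledge" start
res = minimize(misfit, first_guess, method="Nelder-Mead")
print(np.abs(res.x) / np.abs(res.x).sum())   # recovered weights
```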

  17. Multivariate optimization of a synergistic blend of oleoresin sage (Salvia officinalis L.) and ascorbyl palmitate to stabilize sunflower oil.

    PubMed

    Upadhyay, Rohit; Mishra, Hari Niwas

    2016-04-01

    The simultaneous optimization of a synergistic blend of oleoresin sage (SAG) and ascorbyl palmitate (AP) in sunflower oil (SO) was performed using a central composite rotatable design coupled with principal component analysis (PCA) and response surface methodology (RSM). The physicochemical parameters, viz. peroxide value, anisidine value, free fatty acids, induction period, total polar matter, antioxidant capacity and conjugated diene value, were considered as response variables. PCA reduced the original set of correlated responses to a few uncorrelated principal components (PC). PC1 (eigenvalue, 5.78; data variance explained, 82.53%) was selected for optimization using RSM. The quadratic model adequately described the data (R² = 0.91, p < 0.05) and lack of fit was insignificant (p > 0.05). The contour plot of the PC1 score indicated an optimal synergistic combination of 1289.19 and 218.06 ppm for SAG and AP, respectively. This combination of SAG and AP resulted in a shelf life of 320 days at 25 °C, estimated using a linear shelf life prediction model. In conclusion, the versatility of the PCA-RSM approach enables easy interpretation in multiple-response optimization. This approach can serve as a useful guide to develop new oil blends stabilized with food additives from natural sources.

  18. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. ORACLS: A system for linear-quadratic-Gaussian control law design

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
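
    The central LQG computation that ORACLS packages, solving the algebraic Riccati equation for the optimal regulator gain, can be sketched with SciPy (an illustrative plant, not an ORACLS interface). By duality, the Kalman-Bucy filter gain follows from the same solver with (A, B) replaced by (A', C').

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # illustrative plant x' = Ax + Bu
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])          # state and control weights

P = solve_continuous_are(A, B, Q, R)         # A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)              # optimal feedback u = -Kx
print(K, np.linalg.eigvals(A - B @ K))       # closed-loop poles in left half-plane
```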

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; in particular, the mean and covariance matrix of the forecast errors are updated online and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
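
    The Chebyshev-based tightening can be made concrete with the one-sided (Cantelli) bound: P(v > v_max) <= eps is guaranteed, regardless of the error distribution, if the forecast mean keeps a margin of sqrt((1-eps)/eps)*sigma below v_max. The sketch below uses stand-in forecast-error samples and an assumed 1.05 p.u. upper voltage limit.

```python
import numpy as np

eps = 0.05                                   # allowed violation probability
# stand-in forecast-error samples (per unit); updated online in the paper
errors = np.random.default_rng(0).normal(0, 0.004, size=500)
sigma = errors.std(ddof=1)

k = np.sqrt((1 - eps) / eps)                 # Cantelli multiplier (~4.36 at 5%)
v_max = 1.05                                 # assumed upper voltage limit (p.u.)
tightened_limit = v_max - k * sigma          # enforce mean voltage <= this value
print(k, tightened_limit)
```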

  1. An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Watts, Stephen R.; Garg, Sanjay

    1995-01-01

    This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed-loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H∞-based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
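
    For orientation, here is a minimal sketch of one classical memoryless anti-windup scheme, conditional integration for a single PI actuation loop; it illustrates the general idea only and is not the paper's optimized synthesis.

```python
def pi_step(err, integ, dt, kp=1.0, ki=0.5, u_min=-1.0, u_max=1.0):
    """One PI control step with conditional-integration windup protection:
    the integrator is held whenever the actuator command saturates."""
    u_unsat = kp * err + ki * (integ + err * dt)
    u = min(max(u_unsat, u_min), u_max)
    if u == u_unsat:                 # not saturated: integrate normally
        integ += err * dt
    # saturated: integrator frozen, so no windup memory accumulates
    return u, integ

u, integ = pi_step(err=0.8, integ=0.0, dt=0.01)
```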

  2. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to deliver convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered complementary to structure-based modeling approaches. As such, it helps to achieve a better understanding of the binding mode of an inhibitor. PMID:21439031

  3. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
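
    A generic forward block Gauss-Seidel sweep over a block tridiagonal system (diagonal blocks D[i], sub-diagonal blocks L[i], super-diagonal blocks U[i]) looks as follows; used on its own it is the stationary iteration discussed above, and applied once per Krylov iteration it acts as the preconditioner. The block structure is generic, not the paper's specific DTOC system.

```python
import numpy as np

def block_gs_sweep(D, L, U, b, x):
    """One forward block Gauss-Seidel sweep: D[i] are the (invertible)
    diagonal blocks, L[i] couples block i+1 to i, U[i] couples i to i+1."""
    N = len(D)
    for i in range(N):
        r = b[i].copy()
        if i > 0:
            r -= L[i - 1] @ x[i - 1]     # uses the already-updated block
        if i < N - 1:
            r -= U[i] @ x[i + 1]         # uses the previous-iterate block
        x[i] = np.linalg.solve(D[i], r)  # local solve on one time-subinterval
    return x
```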

  4. Linearization methods for optimizing the low thrust spacecraft trajectory: Theoretical aspects

    NASA Astrophysics Data System (ADS)

    Kazmerchuk, P. V.

    2016-12-01

    The theoretical aspects of the modified linearization method, which makes it possible to solve a wide class of nonlinear problems in optimizing low-thrust spacecraft trajectories (V. V. Efanov et al., 2009; V. V. Khartov et al., 2010), are examined. The main modifications of the linearization method concern its refinement for optimizing the main dynamic systems and design parameters of the spacecraft.

  5. Topology optimized and 3D printed polymer-bonded permanent magnets for a predefined external field

    NASA Astrophysics Data System (ADS)

    Huber, C.; Abert, C.; Bruckner, F.; Pfaff, C.; Kriwet, J.; Groenefeld, M.; Teliban, I.; Vogler, C.; Suess, D.

    2017-08-01

    Topology optimization offers great opportunities to design permanent magnetic systems that have specific external field characteristics. Additive manufacturing of polymer-bonded magnets with an end-user 3D printer can be used to manufacture permanent magnets with structures that had been difficult or impossible to manufacture previously. This work combines these two powerful methods to design and manufacture permanent magnetic systems with specific properties. The topology optimization framework is simple, fast, and accurate. It can also be used for the reverse engineering of permanent magnets in order to find the topology from field measurements. Furthermore, a magnetic system that generates a linear external field above the magnet is presented. With a volume constraint, the amount of magnetic material can be minimized without losing performance. Simulations and measurements of the printed systems show very good agreement.

  6. Stress optimization of leaf-spring crossed flexure pivots for an active Gurney flap mechanism

    NASA Astrophysics Data System (ADS)

    Freire Gómez, Jon; Booker, Julian D.; Mellor, Phil H.

    2015-04-01

    The EU's Green Rotorcraft programme is pursuing the development of a functional and airworthy Active Gurney Flap (AGF) for a full-scale helicopter rotor blade. Interest in the development of this 'smart adaptive rotor blade' technology lies in its potential to provide a number of aerodynamic benefits, which would in turn translate into a reduction in fuel consumption and noise levels. The AGF mechanism selected employs leaf-spring crossed flexure pivots. These provide important advantages over bearings, as they are not susceptible to seizing and do not require maintenance (i.e. lubrication or cleaning). A baseline design of this mechanism was successfully tested both in a fatigue rig and in a 2D wind tunnel environment at flight-representative deployment schedules. For full validation, a flight test would also be required. However, the severity of the in-flight loading conditions would likely compromise the mechanical integrity of the pivots' leaf-springs in their current form. This paper investigates the scope for stress reduction through three-dimensional shape optimization of the leaf-springs of a generic crossed flexure pivot. To this end, a procedure combining a linear strain energy formulation, a parametric leaf-spring profile definition and a series of optimization algorithms is employed. The resulting optimized leaf-springs are proven to be not only independent of the angular rotation at which the pivot operates, but also linearly scalable to leaf-springs of any length, minimum thickness and width. Validated using non-linear finite element analysis, the results show very significant stress reductions relative to pivots with constant cross-section leaf-springs, of up to 30% for the specific pivot configuration employed in the AGF mechanism. It is concluded that shape optimization offers great potential for reducing stress in crossed flexure pivots and, consequently, for extending their fatigue life and/or rotational range.

  7. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis.

    PubMed

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600 nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80, cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. To our knowledge, this is the first study to optimize and compare statistical and artificial intelligence techniques for a continuous bead milling process. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. ANNs, being summation functions of multiple layers, are capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of the bead milling parameters. Since GA can optimize even discontinuous functions, the present study provides an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, which is the case for common industrial processes involving biological moieties.

  8. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis

    PubMed Central

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A.; Soni, Nipunjot; Mandal, Raju K.; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y.; Govender, Thavendran; Kruger, Hendrik G.; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600 nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80, cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. To our knowledge, this is the first study to optimize and compare statistical and artificial intelligence techniques for a continuous bead milling process. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. ANNs, being summation functions of multiple layers, are capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of the bead milling parameters. Since GA can optimize even discontinuous functions, the present study provides an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, which is the case for common industrial processes involving biological moieties. PMID:27920762

  9. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
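
    For intuition, one standard (penalty-based) reduction of a linearly constrained binary quadratic program to QUBO is sketched below; the paper's constructive mapping is more refined, notably in how it eliminates the continuous intermediate variables. Using x_i^2 = x_i for binary variables, the penalty rho*||Ax - b||^2 folds into the QUBO matrix (the constant rho*b'b is dropped).

```python
import numpy as np

def qubo_with_linear_constraints(Q, A, b, rho=10.0):
    """Fold the penalty rho*||Ax - b||^2 into the QUBO matrix for binary x."""
    Qp = Q + rho * (A.T @ A)
    Qp -= 2 * rho * np.diag(b @ A)   # linear term moved onto the diagonal
    return Qp

Q = np.array([[1.0, -2.0], [-2.0, 3.0]])
A, b = np.array([[1.0, 1.0]]), np.array([1.0])   # constraint x0 + x1 = 1
Qp = qubo_with_linear_constraints(Q, A, b)
xs = [np.array(v) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(min(xs, key=lambda x: x @ Qp @ x))         # feasible minimizer (1, 0)
```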

  10. p-Aminophenol degradation by ozonation combined with sonolysis: operating conditions influence and mechanism.

    PubMed

    He, Zhiqiao; Song, Shuang; Ying, Haiping; Xu, Lejin; Chen, Jianmeng

    2007-07-01

    The degradation of p-aminophenol (PAP) in aqueous solution by sonolysis, by ozonation, and by a combination of both was investigated in laboratory-scale experiments. Operating parameters such as pH, temperature, ultrasonic energy density and ozone dose were optimized with regard to the efficiency of PAP removal. The concentration of PAP during the reaction was monitored by high-pressure liquid chromatography. The concentrations of ammonium and nitrate ions were monitored during the degradation. Intermediate products such as 4-iminocyclohexa-2,5-dien-1-one, phenol, but-2-enedioic acid, and acetic acid were detected by gas chromatography coupled with mass spectrometry. The degradation rate of PAP was higher in the combined system than in the linear combination of the separate experiments. The degradation efficiency decreased rapidly when n-butanol was added to the combined reaction system, indicating that radical reactions contribute to the degradation.

  11. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
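
    The Kreisselmeier-Steinhauser aggregate referred to above replaces many constraints g_i(x) <= 0 with one smooth, conservative envelope; a minimal implementation follows (the shifted form is used for numerical stability).

```python
import numpy as np

def ks(g, rho=50.0):
    """KS(g) = g_max + (1/rho) * ln(sum(exp(rho * (g_i - g_max)))).
    Overestimates max(g) by at most ln(m)/rho for m constraints,
    approaching the true maximum as rho grows."""
    g = np.asarray(g, float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

print(ks([-0.2, -0.05, -0.3]))   # one constraint value replacing three
```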

  12. Multiobjective Optimization Combining BMP Technology and Land Preservation for Watershed-based Stormwater Management

    NASA Astrophysics Data System (ADS)

    McGarity, A. E.

    2009-12-01

    Recent progress has been made developing decision-support models for optimal deployment of best management practices (BMPs) in an urban watershed to achieve water quality goals. One example is the high-level screening model StormWISE, developed by the author (McGarity, 2006), which uses linear and nonlinear programming to narrow the search for optimal solutions to certain land use categories and drainage zones. Another example is the model SUSTAIN, developed by USEPA and Tetra Tech (Lai et al., 2006) and building on the work of Yu et al. (2002), which uses a detailed, computationally intensive simulation model driven by a genetic solver to select optimal BMP sites. However, a model that deals only with best management practice (BMP) site selection may fail to consider solutions that avoid future nonpoint pollutant loadings by preserving undeveloped land. This paper presents results of a recently completed research project in which water resource engineers partnered with experienced professionals at a land conservation trust to develop a multiobjective model for watershed management. The result is a revised version of StormWISE that can be used to identify optimal, cost-effective combinations of easements and similar land preservation tools for undeveloped sites along with low impact development (LID) and BMP technologies for developed sites. The goal is to achieve the watershed-wide limits on runoff volume and pollutant loads that are necessary to meet water quality goals as well as ecological benefits associated with habitat preservation and enhancement. A nonlinear programming formulation is presented for the extended StormWISE model that achieves desired levels of environmental benefits at minimum cost. Tradeoffs between different environmental benefits are generated by multiple runs of the model while varying the levels of each environmental benefit obtained. The model is solved using piecewise linearization of the environmental benefit functions, where each linear segment represents a different option for reducing stormwater runoff volumes and pollutant loadings. The solution space comprises optimal levels of expenditure for categories of BMPs by land use category and optimal land preservation expenditures by drainage zone. To demonstrate the usefulness of the model, results from its application to the Little Crum Creek watershed in suburban Philadelphia are presented. The model has been used to assist a watershed association and four municipalities in developing an action plan for restoration of water quality on this impaired stream. References: Lai, F., J. Zhen, J. Riverson, and L. Shoemaker (2006). "SUSTAIN - An Evaluation and Cost-Optimization Tool for Placement of BMPs," ASCE World Environmental and Water Resources Congress 2006. McGarity, A.E. (2006). "A Cost Minimization Model to Prioritize Urban Catchments for Stormwater BMP Implementation Projects," American Water Resources Association National Meeting, Baltimore, MD, November 2006. Yu, S., J.X. Zhen, and S.Y. Zhai (2002). "Development of Stormwater Best Management Practice Placement Strategy for the Virginia Department of Transportation," Final Contract Report, VTRC 04-CR9, Virginia Transportation Research Council.

  13. Drag reduction of a car model by linear genetic programming control

    NASA Astrophysics Data System (ADS)

    Li, Ruiying; Noack, Bernd R.; Cordier, Laurent; Borée, Jacques; Harambat, Fabien

    2017-08-01

    We investigate open- and closed-loop active control for aerodynamic drag reduction of a car model. Turbulent flow around a blunt-edged Ahmed body is examined at Re_H ≈ 3 × 10^5 based on body height. The actuation is performed with pulsed jets at all trailing edges (multiple inputs) combined with a Coanda deflection surface. The flow is monitored with 16 pressure sensors distributed at the rear side (multiple outputs). We apply a recently developed model-free control strategy building on genetic programming in Dracopoulos and Kent (Neural Comput Appl 6:214-228, 1997) and Gautier et al. (J Fluid Mech 770:424-441, 2015). The optimized control laws comprise periodic forcing, multi-frequency forcing, sensor-based feedback including time-history information, and combinations thereof. A key enabler is linear genetic programming (LGP) as a powerful regression technique for optimizing the multiple-input multiple-output control laws. The proposed LGP control can select the best open- or closed-loop control in an unsupervised manner. Approximately 33% base pressure recovery associated with 22% drag reduction is achieved in all considered classes of control laws. Intriguingly, the feedback actuation emulates periodic high-frequency forcing. In addition, the control automatically identified the only sensor which listens to high-frequency flow components with a good signal-to-noise ratio. Our control strategy is, in principle, applicable to all experiments with multiple actuators and sensors.

  14. Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach

    NASA Astrophysics Data System (ADS)

    Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew

    2017-05-01

    This paper develops the Clusterwise Linear Regression (CLR) technique for the prediction of monthly rainfall. CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem, and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia, using rainfall data with five input meteorological variables over the period 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with CLR using the maximum likelihood framework via the expectation-maximization algorithm, multiple linear regression, artificial neural networks and support vector machines for regression. The results demonstrate that the proposed algorithm outperforms the other methods in most locations.
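
    A bare-bones CLR iteration, illustrative rather than the paper's incremental algorithm, alternates between refitting one linear model per cluster and reassigning each observation to the model that fits it best.

```python
import numpy as np

def clusterwise_lr(X, y, k=2, iters=20, seed=0):
    """Naive CLR: alternate cluster assignment and per-cluster least squares."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([np.ones(len(X)), X])       # add intercept column
    labels = rng.integers(k, size=len(X))
    betas = [np.zeros(Xb.shape[1]) for _ in range(k)]
    for _ in range(iters):
        for j in range(k):
            if np.any(labels == j):                  # skip empty clusters
                betas[j] = np.linalg.lstsq(Xb[labels == j],
                                           y[labels == j], rcond=None)[0]
        resid = np.stack([(y - Xb @ b) ** 2 for b in betas])
        labels = resid.argmin(axis=0)                # reassign to best-fit line
    return betas, labels
```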

  15. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application of this paper is centered on the design of controls for nominally linear systems where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.

  16. Dc microgrid stabilization through fuzzy control of interleaved, heterogeneous storage elements

    NASA Astrophysics Data System (ADS)

    Smith, Robert David

    As microgrid power systems gain prevalence and renewable energy comprises greater and greater portions of distributed generation, energy storage becomes important to offset the higher variance of renewable energy sources and maximize their usefulness. One of the emerging techniques is to utilize a combination of lead-acid batteries and ultracapacitors to provide both short and long-term stabilization to microgrid systems. The different energy and power characteristics of batteries and ultracapacitors imply that they ought to be utilized in different ways. Traditional linear controls can use these energy storage systems to stabilize a power grid, but cannot effect more complex interactions. This research explores a fuzzy logic approach to microgrid stabilization. The ability of a fuzzy logic controller to regulate a dc bus in the presence of source and load fluctuations, in a manner comparable to traditional linear control systems, is explored and demonstrated. Furthermore, the expanded capabilities (such as storage balancing, self-protection, and battery optimization) of a fuzzy logic system over a traditional linear control system are shown. System simulation results are presented and validated through hardware-based experiments. These experiments confirm the capabilities of the fuzzy logic control system to regulate bus voltage, balance storage elements, optimize battery usage, and effect self-protection.

  17. A liquid-phase microextraction method, combining a dual gauge microsyringe with a hollow fiber membrane, for the determination of organochlorine pesticides in aqueous solution by gas chromatography/ion trap mass spectrometry.

    PubMed

    Yan, Chih-Hao; Wu, Hui-Fen

    2004-01-01

    A liquid-phase microextraction (LPME) method has been demonstrated for the extraction and determination of organochlorine pesticides (OCPs) in aqueous solution. The method combines a dual gauge microsyringe with a hollow fiber membrane (LPME/DGM-HF), followed by detection with gas chromatography/ion trap mass spectrometry (GC/ITMS). The advantages include speed, low solvent and sample consumption, simplicity and ease of use. The extraction time, solvent selection, salt concentration and sample stirring rate were investigated in order to optimize extraction efficiency. The method's viability is evaluated by measuring the linearity and detection limit for five OCPs in aqueous solution. Detection linearity for the OCPs has been achieved over a range of concentrations between 1 and 500 μg/L (r² > 0.930), with a detection limit of 0.1 μg/L for each OCP. Copyright 2004 John Wiley & Sons, Ltd.

  18. Metamodeling and the Critic-based approach to multi-level optimization.

    PubMed

    Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J

    2012-08-01

    Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress in both hardware (multiprocessor systems and specialized processors) and software (Gurobi), we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide a further increase in the performance necessary for daily operation. In this paper, we present the theoretical basis and related experiments for solving multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows multiple queries to the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results compared to the iterative MIP approach. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. Our method also achieves high-quality results like the state-of-the-art optimization-based techniques, but runs ~10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
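
    One 1D subsystem of such a smoother amounts to solving a tridiagonal system (I + λW)u = f with edge-aware weights, which the linear-time Thomas algorithm handles; below is a sketch using SciPy's banded solver, with an assumed exponential edge-stopping weight (not the paper's exact formulation).

```python
import numpy as np
from scipy.linalg import solve_banded

def smooth_1d(f, guide, lam=5.0, sigma=0.1):
    """Solve (I + lam*W) u = f, where W is a second-difference Laplacian
    with edge-aware weights derived from a guidance signal."""
    w = np.exp(-np.abs(np.diff(guide)) / sigma)      # edge-stopping weights
    lower = np.r_[-lam * w, 0.0]                     # sub-diagonal
    upper = np.r_[0.0, -lam * w]                     # super-diagonal
    diag = 1.0 + lam * (np.r_[w, 0.0] + np.r_[0.0, w])
    ab = np.vstack([upper, diag, lower])             # banded (1, 1) storage
    return solve_banded((1, 1), ab, f)               # O(n) tridiagonal solve

x = np.linspace(0, 1, 400)
f = (x > 0.5).astype(float) + 0.1 * np.random.default_rng(0).standard_normal(400)
u = smooth_1d(f, guide=f)        # smooths the noise while keeping the step edge
```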

  20. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problem is proposed. The method, which combines use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
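
    The Newton-Kleinman half of the hybrid can be sketched directly: each iteration solves a Lyapunov equation for the current closed loop and updates the gain, converging quadratically to the Riccati-optimal K from a stabilizing initial gain. The plant and initial gain below are illustrative, and the Chandrasekhar/Smith acceleration machinery is omitted.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, Q, R, K, iters=20):
    """Iterate K toward the optimal LQR gain without forming the Riccati
    equation directly; K must be stabilizing initially."""
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        Ac = A - B @ K
        # solve the Lyapunov equation Ac' P + P Ac = -(Q + K' R K)
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = Rinv @ B.T @ P
    return K, P

A = np.array([[0.0, 1.0], [0.0, -0.2]])
B = np.array([[0.0], [1.0]])
K0 = np.array([[1.0, 1.0]])                 # assumed stabilizing initial gain
K, P = newton_kleinman(A, B, np.eye(2), np.eye(1), K0)
```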

  1. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    NASA Astrophysics Data System (ADS)

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

    The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data as pivotal to better understanding of how the disease spreads and quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R₀ to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.

  2. Linear stability analysis of scramjet unstart

    NASA Astrophysics Data System (ADS)

    Jang, Ik; Nichols, Joseph; Moin, Parviz

    2015-11-01

    We investigate the bifurcation structure of unstart and restart events in a dual-mode scramjet using the Reynolds-averaged Navier-Stokes equations. The scramjet of interest (HyShot II, Laurence et al., AIAA 2011-2310) operates at a free-stream Mach number of approximately 8, and the length of the combustor chamber is 300 mm. A heat-release model is applied to mimic the combustion process. Pseudo-arclength continuation with Newton-Raphson iteration is used to calculate multiple solution branches. Stability analysis based on the linearized dynamics about the solution curves reveals a metric that optimally forewarns unstart. By combining direct and adjoint eigenmodes, structural sensitivity analysis suggests strategies for unstart mitigation, including changing the isolator length. This work is supported by DOE/NNSA and AFOSR.

  3. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirement bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to the science requirements.

  4. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering the uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.

  5. Proper Orthogonal Decomposition in Optimal Control of Fluids

    NASA Technical Reports Server (NTRS)

    Ravindran, S. S.

    1999-01-01

    In this article, we present a reduced-order modeling approach suitable for active control of fluid dynamical systems, based on proper orthogonal decomposition (POD). The rationale behind reduced-order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced-order models that lower the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics, by using the POD. The POD allows the extraction of an optimal set of basis functions, often remarkably few, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes the approach attractive for optimal control and estimation of systems governed by partial differential equations. We use it here in active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced-order model can be very efficient for the computations of optimization and control problems in unsteady flows. Finally, implementation issues and numerical experiments are presented for simulation and optimal control of fluid flow through channels.
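
    The POD/Galerkin construction sketched above can be illustrated in a few lines: given a snapshot matrix of flow states, the POD basis is obtained from a singular value decomposition, and a state is recovered as a linear combination of the leading modes. This is a minimal sketch of the generic procedure, not the authors' code; the snapshot data here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 40))   # synthetic snapshots: 500 dofs x 40 times

    # POD modes = left singular vectors of the mean-subtracted snapshot matrix.
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    r = 5                      # number of retained modes
    Phi = U[:, :r]             # POD basis

    # A snapshot is approximated as a linear combination of the basis functions:
    a = Phi.T @ Xc[:, 0]       # projection (Galerkin) coefficients
    x_hat = Phi @ a            # reduced-order reconstruction

    energy = s[:r]**2 / (s**2).sum()
    print(f"captured energy with {r} modes: {energy.sum():.1%}")
    print(f"relative reconstruction error: "
          f"{np.linalg.norm(Xc[:, 0] - x_hat) / np.linalg.norm(Xc[:, 0]):.2e}")
    ```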

  6. Energy management of three-dimensional minimum-time intercept. [for aircraft flight optimization

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Cliff, E. M.; Visser, H. G.

    1985-01-01

    A real-time computer algorithm to control and optimize aircraft flight profiles is described and applied to a three-dimensional minimum-time intercept mission. The proposed scheme has roots in two well-known techniques: singular perturbations and neighboring-optimal guidance. Singular-perturbation ideas are used in the assumed trajectory-family structure: a heading/energy family of prestored point-mass-model state-Euler solutions serves as the baseline in this scheme. The next step is to generate a near-optimal guidance law that transfers the aircraft to the vicinity of this reference family. The control commands fed to the autopilot (bank angle and load factor) consist of the reference controls plus correction terms which are linear combinations of the altitude and path-angle deviations from reference values, weighted by a set of precalculated gains. In this respect the proposed scheme resembles neighboring-optimal guidance. However, in contrast to the neighboring-optimal guidance scheme, the reference control and state variables as well as the feedback gains are stored as functions of energy and heading in the present approach. Some numerical results comparing open-loop optimal and approximate feedback solutions are presented.

  7. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073. Only a fragment of the abstract survives in this record: "...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been..." (truncated in source).

  8. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (CDC VERSION)

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1989. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
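
    The steady-state linear-quadratic regulator problem that ORACLS solves with Newton-type Riccati algorithms is a short exercise in modern scientific Python; the hedged sketch below solves the continuous-time algebraic Riccati equation with scipy.linalg.solve_continuous_are for an assumed double-integrator plant (the matrices are illustrative, not an ORACLS example).

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Illustrative double-integrator plant, not from ORACLS:
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)          # state weighting
    R = np.array([[1.0]])  # control weighting

    # Steady-state Riccati solution and constant-gain feedback u = -K x.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    closed_loop = A - B @ K
    print("K =", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))
    ```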

  9. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Frisch, H.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1986. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.

  10. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-01-01

    The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion are expressed using a reduced-order model, with total vehicle energy (kinetic plus potential) as the independent variable rather than time. The order reduction is achieved analytically without approximating the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained; these result in a 4th-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the relative weighting of the heating-rate and drag terms is varied. Simulations of the guidance trajectories are presented.

  11. Internal combustion engine report: Spark ignited ICE GenSet optimization and novel concept development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J.; Blarigan, P. Van

    1998-08-01

    In this manuscript the authors report on two projects, each of which has the goal of producing cost-effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine/generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free-piston configuration while minimizing all emissions. To this end the authors are developing a rapid-combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now being accomplished with internal combustion engines.

  12. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on one hand, exponential convergence as the number of collocation points increases with a fixed number of sub-intervals and, on the other hand, linear convergence as the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with an hp refinement strategy based on the residual error of the dynamic constraints, the proposed method can achieve a given precision within a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  13. Design Optimization and Fabrication of a Novel Structural SOI Piezoresistive Pressure Sensor with High Accuracy

    PubMed Central

    Li, Chuang; Cordovilla, Francisco; Jagdheesh, R.

    2018-01-01

    This paper presents a novel structural piezoresistive pressure sensor with a four-grooved membrane combined with a rood beam to measure low pressures. The design, optimization, fabrication, and measurement of the sensor are covered. By analyzing the stress distribution and deflection of the sensitive elements using the finite element method, a novel structure featuring a highly concentrated stress profile (HCSP) and a locally stiffened membrane (LSM) is built. Curve fittings of the mechanical stress and deflection based on the FEM simulation results are performed to establish the relationship between mechanical performance and structural dimensions. A combination of FEM and the curve-fitting method is used to determine the structural dimensions. The optimized sensor chip is fabricated on an SOI wafer by traditional MEMS bulk-micromachining and anodic bonding technology. When the applied pressure is 1 psi, the sensor achieves a sensitivity of 30.9 mV/V/psi, a pressure nonlinearity of 0.21% FSS and an accuracy of 0.30%, thereby alleviating the usual trade-off between sensitivity and linearity. In terms of size, accuracy and high-temperature characteristics, the proposed sensor is a proper choice for measuring pressures of less than 1 psi. PMID:29393916

  14. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reaction sets were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of the data provides a rich knowledge base for generating the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous-flow and batch steps. The linear model-generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and for generating environmental and other performance indicators, such as cost indicators. A further challenge identified is to automate model generation so as to evolve optimal multi-step chemical routes and optimal process configurations.

  15. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted-sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (accessed 25 June 2012). PMID:22936970
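
    To make the idea of a linear combination of kernels concrete, the hedged sketch below builds a convex combination K = w1*K1 + w2*K2 of two base kernels and fits an SVM on the combined Gram matrix (using scikit-learn, assumed available). The weights are fixed by hand rather than learned, so this illustrates only the combination step, not a full MKL solver; the data are synthetic.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # synthetic labels

    # Two base kernels on the same data.
    K1 = linear_kernel(X)
    K2 = rbf_kernel(X, gamma=0.1)

    # Convex combination with hand-fixed weights; a real MKL method learns these.
    w = np.array([0.3, 0.7])
    K = w[0] * K1 + w[1] * K2

    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy:", clf.score(K, y))
    ```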

  16. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

    Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian problem are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and of full order. An alternative to general parameter optimization methods for solving the problem is homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point, and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitations of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderbei, Robert J., E-mail: rvdb@princeton.edu; Pınar, Mustafa C., E-mail: mustafap@bilkent.edu.tr; Bozkaya, Efe B.

    An American option (or warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete time and discrete state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve the complementary slackness conditions in closed form, revealing an optimal stopping strategy which highlights the set of stock prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate), whereas it ceases to be an issue for the put.

  18. Substructural controller synthesis

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1989-01-01

    A decentralized design procedure which combines substructural synthesis, model reduction, decentralized controller design, subcontroller synthesis, and controller reduction is proposed for the control design of flexible structures. The structure to be controlled is decomposed into several substructures, which are modeled by component mode synthesis methods. For each substructure, a subcontroller is designed by using the linear quadratic optimal control theory. Then, a controller synthesis scheme called Substructural Controller Synthesis (SCS) is used to assemble the subcontrollers into a system controller, which is to be used to control the whole structure.

  19. Discrete Optimization Model for Vehicle Routing Problem with Scheduling Side Constraints

    NASA Astrophysics Data System (ADS)

    Juliandri, Dedy; Mawengkang, Herman; Bu'ulolo, F.

    2018-01-01

    The Vehicle Routing Problem (VRP) is an important element of many logistic systems which involve routing and scheduling of vehicles from a depot to a set of customer nodes. This is a hard combinatorial optimization problem, with the objective of finding an optimal set of routes used by a fleet of vehicles to serve the demands of a set of customers; the vehicles are required to return to the depot after serving the customers' demand. The problem incorporates time windows, fleet and driver scheduling, and pick-up and delivery over the planning horizon. The goal is to determine the fleet and driver schedules and the routing policies of the vehicles. The objective is to minimize the overall cost of all routes over the planning horizon. We model the problem as a linear mixed integer program and develop a combination of heuristics and an exact method for solving the model.
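
    The mixed-integer routing formulation mentioned above can be illustrated on a toy single-vehicle instance. The sketch below (using the PuLP modeler with its bundled CBC solver, and the classic Miller-Tucker-Zemlin subtour-elimination constraints) is a minimal stand-in for the paper's richer model; the cost matrix is hypothetical, and time windows and driver scheduling are omitted.

    ```python
    from itertools import product
    import pulp

    # Tiny illustrative instance: one vehicle, depot 0, three customers.
    cost = [[0, 4, 6, 8],
            [4, 0, 3, 5],
            [6, 3, 0, 2],
            [8, 5, 2, 0]]
    n = len(cost)
    nodes = list(range(n))
    arcs = [(i, j) for i, j in product(nodes, nodes) if i != j]

    prob = pulp.LpProblem("mini_routing", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", arcs, cat=pulp.LpBinary)
    u = pulp.LpVariable.dicts("u", nodes[1:], lowBound=1, upBound=n - 1)

    prob += pulp.lpSum(cost[i][j] * x[i, j] for i, j in arcs)
    for k in nodes:                      # leave and enter every node exactly once
        prob += pulp.lpSum(x[k, j] for j in nodes if j != k) == 1
        prob += pulp.lpSum(x[i, k] for i in nodes if i != k) == 1
    for i, j in arcs:                    # Miller-Tucker-Zemlin subtour elimination
        if i != 0 and j != 0:
            prob += u[i] - u[j] + n * x[i, j] <= n - 1

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    tour = sorted((i, j) for i, j in arcs if x[i, j].value() > 0.5)
    print("arcs used:", tour, "| cost:", pulp.value(prob.objective))
    ```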

  20. Optimization of the self-study room opening problem based on green and low-carbon campus construction

    NASA Astrophysics Data System (ADS)

    Liu, Baoyou

    2017-04-01

    Optimizing the self-study room opening arrangement in colleges and universities is conducive to accelerating the fine-grained management of the campus and promotes green and low-carbon campus construction. Firstly, combined with actual survey data, the self-study and living areas were divided into blocks, and the electricity consumption of each self-study room and the distances between the living and studying areas were normalized. Secondly, the minimum total satisfaction index and the minimum total electricity consumption were selected as the respective optimization targets. Linear programming models were established and solved with the LINGO software. The results showed that the minimum total satisfaction index was 4055.533 and the minimum total electricity consumption was 137216 W. Finally, some advice is put forward on how to realize efficient administration of the study rooms.
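
    The block-assignment models described above are ordinary linear programs. As a hedged illustration of the general form (minimize a linear cost subject to linear constraints), the sketch below solves a made-up room-assignment instance with SciPy's linprog rather than the LINGO software used in the paper; all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy instance: assign 3 student blocks to 2 candidate study rooms so that
    # every block is served and room capacities hold, minimizing total
    # walking-distance dissatisfaction. x[b, r] = fraction of block b in room r.
    dissat = np.array([[1.0, 3.0],    # block 0 to rooms 0, 1
                       [2.0, 1.5],    # block 1
                       [4.0, 1.0]])   # block 2
    demand = np.array([30, 50, 20])   # students per block
    cap = np.array([60, 60])          # seats per room

    c = (dissat * demand[:, None]).ravel()       # cost per unit assignment
    A_eq = np.kron(np.eye(3), np.ones((1, 2)))   # each block fully assigned
    b_eq = np.ones(3)
    A_ub = np.kron(demand[None, :], np.eye(2))   # room capacity constraints
    b_ub = cap

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    print("assignment fractions:\n", res.x.reshape(3, 2))
    print("total dissatisfaction:", res.fun)
    ```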

  1. Bernoulli substitution in the Ramsey model: Optimal trajectories under control constraints

    NASA Astrophysics Data System (ADS)

    Krasovskii, A. A.; Lebedev, P. D.; Tarasyev, A. M.

    2017-05-01

    We consider a neoclassical (economic) growth model. In the case of a Cobb-Douglas production function, the nonlinear Ramsey equation modeling capital dynamics is reduced to a linear differential equation via a Bernoulli substitution. This considerably facilitates the search for a solution to the optimal growth problem with logarithmic preferences. The study deals with solving the corresponding infinite-horizon optimal control problem. We consider the vector field of the Hamiltonian system in the Pontryagin maximum principle, taking into account the control constraints. We prove the existence of two alternative steady states, depending on the constraints. A proposed algorithm for constructing growth trajectories combines methods of open-loop control and closed-loop regulatory control. For some levels of constraints and initial conditions, a closed-form solution is obtained. We also demonstrate the impact of technological change on the economic equilibrium dynamics. The results are supported by computer calculations.
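
    As a hedged sketch of the substitution (the specific dynamics below are the textbook Solow-Ramsey form with Cobb-Douglas output, assumed here for illustration rather than copied from the paper):

    ```latex
    % Capital dynamics with Cobb--Douglas output (illustrative form):
    %   \dot{k} = s k^{\alpha} - \delta k, \quad 0 < \alpha < 1,
    % is a Bernoulli equation. The substitution z = k^{1-\alpha} linearizes it:
    \dot{z} = (1-\alpha) k^{-\alpha} \dot{k} = (1-\alpha)\left(s - \delta z\right),
    % a linear ODE with the closed-form solution
    z(t) = \frac{s}{\delta} + \Bigl(z(0) - \frac{s}{\delta}\Bigr) e^{-(1-\alpha)\delta t}.
    ```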

  2. Monitoring Pb in Aqueous Samples by Using Low Density Solvent on Air-Assisted Dispersive Liquid-Liquid Microextraction Coupled with UV-Vis Spectrophotometry.

    PubMed

    Nejad, Mina Ghasemi; Faraji, Hakim; Moghimi, Ali

    2017-04-01

    In this study, AA-DLLME combined with UV-Vis spectrophotometry was developed for the pre-concentration, microextraction and determination of lead in aqueous samples. Optimization of the independent variables was carried out according to chemometric methods in three steps. According to the screening and optimization study, 86 μL of 1-undecanol (extracting solvent), 12 syringe-pump cycles, pH 2.0, 0.00% salt and 0.1% DDTP (chelating agent) were chosen as the optimum independent variables for the microextraction and determination of lead. Under the optimized conditions, R = 0.9994 and the linear range was 0.01-100 µg mL⁻¹. The LOD and LOQ were 3.4 and 11.6 ng mL⁻¹, respectively. The method was applied to the analysis of real water samples, such as tap, mineral, river and waste water.
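
    Figures of merit like the correlation coefficient, LOD and LOQ quoted above are derived from a linear calibration curve. A minimal sketch follows, with made-up absorbance data and the common 3.3σ/slope and 10σ/slope conventions assumed (the paper may use a different convention):

    ```python
    import numpy as np

    # Made-up calibration data (not the paper's measurements):
    conc = np.array([0.01, 0.1, 1.0, 10.0, 50.0, 100.0])    # ug/mL
    absb = np.array([0.002, 0.011, 0.098, 0.97, 4.88, 9.81])

    slope, intercept = np.polyfit(conc, absb, 1)
    resid = absb - (slope * conc + intercept)
    s = resid.std(ddof=2)                 # residual standard deviation
    r = np.corrcoef(conc, absb)[0, 1]

    lod = 3.3 * s / slope                 # 3.3*sigma/slope convention
    loq = 10.0 * s / slope                # 10*sigma/slope convention
    print(f"r = {r:.4f}, LOD = {lod:.3g} ug/mL, LOQ = {loq:.3g} ug/mL")
    ```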

  3. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  4. A New Stochastic Technique for Painlevé Equation-I Using Neural Network Optimized with Swarm Intelligence

    PubMed Central

    Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor

    2012-01-01

    A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with the active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm as a viable global search method, hybridized with the active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371

  5. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms comprises several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible-directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  6. Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.

    PubMed

    Haber, Aleksandar; Verhaegen, Michel

    2016-11-15

    We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
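
    The computational core described above, a wavefront estimate obtained from one sparse matrix-vector product, can be sketched as follows. The sparse reconstruction matrix here is a random stand-in; in an actual system it would encode the localized slope-to-phase weights for a Hudgin or Fried sensor geometry.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom

    rng = np.random.default_rng(3)
    n_phase, n_slopes = 1024, 2048

    # Stand-in for the localized reconstructor: each wavefront point depends
    # on only a few nearby slope measurements, hence a very sparse matrix R.
    R = sprandom(n_phase, n_slopes, density=0.01, random_state=42, format="csr")

    s = rng.standard_normal(n_slopes)   # measured wavefront slopes
    phase_hat = R @ s                   # single sparse matrix-vector product

    print("estimate shape:", phase_hat.shape)
    print("average nonzeros per row:", R.nnz / n_phase)
    ```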

  7. Design of Life Extending Controls Using Nonlinear Parameter Optimization

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok

    1998-01-01

    This report presents the conceptual development of a life extending control system where the objective is to achieve high performance and structural durability of the plant. A life extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel and oxidizer turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. This design approach makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life extending controller module to augment a conventional performance controller of a rocket engine. The nonlinear aspect of the design is achieved using nonlinear parameter optimization of a prescribed control structure.

  8. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.

  9. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.

  10. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in recent decades, the optimal management of hydro-environmental systems remains a very active and challenging research area. The combination of multiple, often conflicting interests, strong non-linearities in the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto-)optimal management policies while preserving the original problem complexity. However, it suffers from a dual curse which, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with the state and control dimensions (Bellman's curse of dimensionality), so that SDP cannot be used with water systems whose state vector includes more than a few (2-3) units. (ii) An explicit model of each system component is required (curse of modelling) to anticipate the effects of the system transitions; i.e., any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with an associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, which adds to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity, and the associated computational requirements, of non-linear distributed process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.

  11. Analysis of optimal phenotypic space using elementary modes as applied to Corynebacterium glutamicum

    PubMed Central

    Gayen, Kalyan; Venkatesh, KV

    2006-01-01

    Background: Quantification of the metabolic network of an organism offers insights into possible ways of developing mutant strains for better productivity of an extracellular metabolite. The first step in this quantification is the enumeration of the stoichiometries of all reactions occurring in the metabolic network. The structural details of the network, in combination with experimentally observed accumulation rates of external metabolites, can yield the flux distribution at steady state. One such methodology for quantification is the use of elementary modes, which are minimal sets of enzymes connecting external metabolites. Here, we have used a linear objective function subject to elementary-mode constraints to determine the fluxes in the metabolic network of Corynebacterium glutamicum. The feasible phenotypic space was evaluated at various combinations of oxygen and ammonia uptake rates. Results: Quantification of the fluxes of the elementary modes in the metabolism of C. glutamicum was formulated as a linear program. The analysis demonstrated that the solution was dependent on the criteria of the objective function when fewer than four accumulation rates of the external metabolites were considered. The analysis yielded feasible ranges of fluxes of elementary modes that satisfy the experimental accumulation rates. In C. glutamicum, the elementary modes relating to biomass synthesis through glycolysis and the TCA cycle were predominantly operational in the initial growth phase. At a later time, the elementary modes contributing to lysine synthesis became active. The oxygen and ammonia uptake rates were shown to be bounded in the phenotypic space due to the stoichiometric constraints of the elementary modes. Conclusion: We have demonstrated the use of elementary modes and linear programming to quantify a metabolic network. We have used the methodology to quantify the network of C. glutamicum, evaluating the set of operational elementary modes at different phases of fermentation. The methodology was also used to determine the feasible solution space for a given set of substrate uptake rates under specific optimization criteria. Such an approach can be used to determine the optimality of the accumulation rates of any metabolite in a given network. PMID:17038164

  12. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.

  13. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.

  14. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
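
    As a hedged illustration of recasting (a generic Michaelis-Menten example assumed here, not one of the paper's case studies): a saturable rate is not a power law, but introducing an auxiliary variable recasts it exactly into GMA form.

    ```latex
    % Original saturable Michaelis--Menten rate:
    v = \frac{V_{\max}\, S}{K_M + S}
    % Introduce the auxiliary variable w = K_M + S (so \dot{w} = \dot{S}); then
    v = V_{\max}\, S\, w^{-1}, \qquad \dot{w} = \dot{S},
    % and every flux in the enlarged system is a product of power laws,
    % as required by the GMA canonical form.
    ```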

  15. Can Linear Superiorization Be Useful for Linear Optimization Problems?

    PubMed Central

    Censor, Yair

    2017-01-01

    Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660

  16. [Fast optimization of stepwise gradient conditions for ternary mobile phase in reversed-phase high performance liquid chromatography].

    PubMed

    Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan

    2002-07-01

    In high performance liquid chromatography, it is necessary to apply multi-composition gradient elution for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of multi-composition mobile phase and shorter analysis time of gradient elution. In practical separations, the separation selectivity of samples can be effectively adjusted by using ternary mobile phase. For the optimization of these parameters, the retention equation of samples must be obtained at first. Traditionally, several isocratic experiments are used to get the retention equation of solute. However, it is time consuming especially for the separation of complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was proposed based on the migration rule of solute in column. First, the coefficients of retention equation of solute are obtained by running several linear gradient experiments, then the optimal separation conditions are searched according to the hierarchical chromatography response function which acts as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of retention equation of each solute. For ternary mobile phase, only four linear gradient runs are needed to get the coefficients of retention equation. Then the retention times of solutes under arbitrary mobile phase composition can be predicted. The initial optimal mobile phase composition is obtained by resolution mapping for all of the solutes. A hierarchical chromatography response function is used to evaluate the separation efficiencies and search the optimal elution conditions. In subsequent optimization, the migrating distance of solute in the column is considered to decide the mobile phase composition and sustaining time of the latter steps until all the solutes are eluted out. Thus the first stepwise gradient elution conditions are predicted. If the resolution of samples under the predicted optimal separation conditions is satisfactory, the optimization procedure is stopped; otherwise, the coefficients of retention equation are adjusted according to the experimental results under the previously predicted elution conditions. Then the new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, the satisfactory separation conditions can be found only after six experiments by using the proposed method. In comparison with the traditional optimization method, the time needed to finish the optimization procedure can be greatly reduced. The method has been validated by its application to the separation of several samples such as amino acid derivatives, aromatic amines, in which satisfactory separations were obtained with predicted resolution.

  17. Optimal monochromatic color combinations for fusion imaging of FDG-PET and diffusion-weighted MR images.

    PubMed

    Kamei, Ryotaro; Watanabe, Yuji; Sagiyama, Koji; Isoda, Takuro; Togao, Osamu; Honda, Hiroshi

    2018-05-23

    To investigate the optimal monochromatic color combination for fusion imaging of FDG-PET and diffusion-weighted MR images (DW) regarding lesion conspicuity of each image. Six linear monochromatic color-maps of red, blue, green, cyan, magenta, and yellow were assigned to each of the FDG-PET and DW images. Total perceptual color differences of the lesions were calculated based on the lightness and chromaticity measured with the photometer. Visual lesion conspicuity was also compared among the PET-only, DW-only and PET-DW-double positive portions with mean conspicuity scores. Statistical analysis was performed with a one-way analysis of variance and Spearman's rank correlation coefficient. Among all the 12 possible monochromatic color-map combinations, the 3 combinations of red/cyan, magenta/green, and red/green produced the highest conspicuity scores. Total color differences between PET-positive and double-positive portions correlated with conspicuity scores (ρ = 0.2933, p < 0.005). Lightness differences showed a significant negative correlation with conspicuity scores between the PET-only and DWI-only positive portions. Chromaticity differences showed a marginally significant correlation with conspicuity scores between DWI-positive and double-positive portions. Monochromatic color combinations can facilitate the visual evaluation of FDG-uptake and diffusivity as well as registration accuracy on the FDG-PET/DW fusion images, when red- and green-colored elements are assigned to FDG-PET and DW images, respectively.

  18. The effect of dropout on the efficiency of D-optimal designs of linear mixed models.

    PubMed

    Ortega-Azurduy, S A; Tan, F E S; Berger, M P F

    2008-06-30

    Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
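
    The quantity being traded off can be sketched numerically: under a linear mixed model with random intercept and slope, each candidate set of time points yields an expected information matrix in which a subject observed only up to a given point contributes only that truncated information, weighted by the dropout probabilities; designs are then ranked by the determinant (the D-criterion). The model values below are illustrative assumptions, not the paper's settings.

    ```python
    # A minimal sketch: D-optimal time points for a random intercept+slope
    # linear mixed model under monotone dropout with survival probability p(t).
    import numpy as np
    from itertools import combinations

    def info_matrix(times, G, sigma2, surv):
        times = np.asarray(times)
        n = len(times)
        M = np.zeros((2, 2))
        s = np.array([surv(t) for t in times])      # P(still in study at t_j)
        # probability of being observed for exactly the first j points
        p_exact = np.append(s[:-1] - s[1:], s[-1])
        for j in range(1, n + 1):
            X = np.column_stack([np.ones(j), times[:j]])   # fixed = random design
            V = X @ G @ X.T + sigma2 * np.eye(j)           # marginal covariance
            M += p_exact[j - 1] * X.T @ np.linalg.solve(V, X)
        return M

    G = np.array([[1.0, 0.1], [0.1, 0.5]])   # random-effects covariance (assumed)
    surv = lambda t: np.exp(-0.3 * t)        # monotone dropout model (assumed)
    candidates = np.linspace(0.0, 4.0, 9)
    best = max(combinations(candidates, 4),
               key=lambda d: np.linalg.det(info_matrix(d, G, 1.0, surv)))
    print("D-optimal 4-point design under dropout:", best)
    ```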

  19. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simplest form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can then be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
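
    As an illustration of the quadratic-to-linear reduction, the sketch below solves a crisp (non-fuzzy) linear portfolio model, the Konno-Yamazaki mean-absolute-deviation LP; the return data are synthetic, and the fuzzy multi-objective version described in the paper would replace the crisp return target with membership-based constraints.

    ```python
    # A sketch of a linearized portfolio optimization: minimize the mean
    # absolute deviation of portfolio returns subject to a return target.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    R = rng.normal(0.001, 0.02, size=(250, 4))   # T x n synthetic daily returns
    T, n = R.shape
    mu = R.mean(axis=0)
    target = mu.mean()                           # achievable (equal weights attain it)

    # variables: [w (n), d (T)]; minimize (1/T) * sum(d)
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    dev = R - mu                                 # centered returns
    A_ub, b_ub = [], []
    for t in range(T):                           # |dev_t . w| <= d_t, split in two
        e_t = (np.arange(T) == t).astype(float)
        A_ub.append(np.concatenate([dev[t], -e_t]))
        A_ub.append(np.concatenate([-dev[t], -e_t]))
        b_ub += [0.0, 0.0]
    A_ub.append(np.concatenate([-mu, np.zeros(T)]))   # expected return >= target
    b_ub.append(-target)
    A_eq = [np.concatenate([np.ones(n), np.zeros(T)])]  # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T))
    print("weights:", res.x[:n].round(3), " MAD:", res.fun)
    ```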

  20. Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown, by a singular perturbation argument, to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of the phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy the stability margins.

  1. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) has been written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure assisted by this approach seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
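
    A generic PSO loop of the kind such a code might embed is sketched below; the objective is a toy surrogate (matching the TM010 pillbox-cavity frequency to a target) standing in for the simplified tank model, and all bounds and coefficients are assumptions.

    ```python
    # A generic particle swarm optimization sketch, not the authors' code.
    import numpy as np

    c_light = 299792458.0
    f_target = 2.998e9                       # assumed design frequency (Hz)

    def objective(x):
        R, g = x                             # cavity radius, gap (m)
        f010 = 2.405 * c_light / (2 * np.pi * R)   # pillbox TM010 frequency
        # frequency match plus a toy penalty keeping the gap/radius ratio sane
        return (f010 / f_target - 1.0) ** 2 + 0.01 * (g / R - 0.3) ** 2

    rng = np.random.default_rng(1)
    lo, hi = np.array([0.02, 0.005]), np.array([0.08, 0.03])
    x = rng.uniform(lo, hi, size=(30, 2))    # swarm positions
    v = np.zeros_like(x)                     # swarm velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()]

    for _ in range(200):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()]

    print("best (R, g):", gbest, " objective:", pbest_f.min())
    ```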

  2. A reducing of a chaotic movement to a periodic orbit, of a micro-electro-mechanical system, by using an optimal linear control design

    NASA Astrophysics Data System (ADS)

    Chavarette, Fábio Roberto; Balthazar, José Manoel; Felix, Jorge L. P.; Rafikov, Marat

    2009-05-01

    This paper analyzes the non-linear dynamics, with chaotic behavior, of a particular micro-electro-mechanical system (MEMS). We use an optimal linear control technique to reduce the irregular (chaotic) oscillatory movement of the non-linear system to a periodic orbit. We use the mathematical model of a MEMS proposed by Luo and Wang.

  3. High efficiency machining technology and equipment for edge chamfer of KDP crystals

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Wang, Baorui; Chen, Jihong

    2016-10-01

    Potassium dihydrogen phosphate (KDP) is a type of nonlinear optical crystal material. To inhibit the transverse stimulated Raman scattering of the laser beam and thereby enhance the optical performance of the optics, the edges of large-sized KDP crystals need to be removed to form chamfered faces with high surface quality (RMS < 5 nm). However, since the depth of cut (DOC) in fly cutting is typically very small, its machining efficiency is too low to be acceptable for chamfering KDP crystals, where the amount of material to be removed is on the order of millimeters. This paper proposes a novel hybrid machining method, combining precision grinding with fly cutting, for crack-free and high-efficiency chamfering of KDP crystals. A specialized machine tool, which adopts an aerostatic-bearing linear slide and an aerostatic-bearing spindle, was developed for chamfering the KDP crystal. The aerostatic-bearing linear slide consists of an aerostatic guide with a linearity of 0.1 μm/100 mm and a linear motor, to achieve linear feeding with high precision and high dynamic performance. The vertical spindle consists of an aerostatic-bearing spindle with an axial rotation accuracy of 0.05 μm and a fork-type flexible-connection precision driving mechanism. Machining experiments on fly cutting and grinding were carried out, and optimized machining parameters were obtained through a series of experiments. A surface roughness of 2.4 nm was obtained. The machining efficiency can be improved six-fold by using the combined method to produce the same machined surface quality.

  4. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127

  5. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  6. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  7. A comparison of optimal semi-active suspension systems regarding vehicle ride comfort

    NASA Astrophysics Data System (ADS)

    Koulocheris, Dimitrios; Papaioannou, Georgios; Chrysos, Emmanouil

    2017-10-01

    The aim of this work is to present a comparison of the main semi-active suspension systems used in a passenger car, after optimizing the suspension systems of the vehicle model with respect to ride comfort and road holding. A half-car model, equipped with controllable dampers, along with a seat and a driver, was implemented. Semi-active suspensions have received a lot of attention since they seem to provide the best compromise between cost (energy consumption, actuator/sensor hardware) and performance in comparison with active and passive suspensions. In this work, the semi-active suspension systems studied are comfort oriented and consist of (a) two versions of skyhook control (two-state skyhook and the skyhook linear approximation damper), (b) the acceleration driven damper (ADD), (c) the power driven damper (PDD), (d) the combination of skyhook and ADD (mixed Skyhook-ADD), and (e) the combination of the two with the use of a sensor. The half-car model equipped with the above suspension systems was excited by a road bump and optimized using genetic algorithms (GA) with respect to ride comfort and road holding. This study aims to highlight how the optimization of the vehicle model can lead to the best compromise between ride comfort and road holding, overcoming their well-known trade-off. The optimum results were compared using important performance metrics for the vehicle's overall dynamic behaviour.

  8. Optimized principal component analysis on coronagraphic images of the Fomalhaut system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.

    We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
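
    The core PCA subtraction step can be sketched in a few lines: build principal components from a reference cube and remove their projection from the science frame. This is a schematic of the technique, not the authors' pipeline.

    ```python
    # A schematic PCA PSF-subtraction step on synthetic stand-in frames.
    import numpy as np

    def pca_subtract(science, cube, n_comp):
        # cube: (n_frames, ny, nx) reference PSF images; science: (ny, nx)
        flat = cube.reshape(cube.shape[0], -1)
        flat = flat - flat.mean(axis=1, keepdims=True)
        # principal components of the reference set via SVD
        _, _, vt = np.linalg.svd(flat, full_matrices=False)
        pcs = vt[:n_comp]                        # (n_comp, npix), orthonormal rows
        sci = science.ravel() - science.mean()
        model = pcs.T @ (pcs @ sci)              # projection onto the components
        return (sci - model).reshape(science.shape)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cube = rng.normal(size=(20, 64, 64))     # stand-in reference frames
        sci = rng.normal(size=(64, 64))          # stand-in science frame
        print(pca_subtract(sci, cube, n_comp=5).std())
    ```

    Increasing n_comp models the stellar PSF more aggressively but also removes more planet flux, which is exactly the trade-off the optimization above addresses.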

  9. MMASS: an optimized array-based method for assessing CpG island methylation.

    PubMed

    Ibrahim, Ashraf E K; Thorne, Natalie P; Baird, Katie; Barbosa-Morais, Nuno L; Tavaré, Simon; Collins, V Peter; Wyllie, Andrew H; Arends, Mark J; Brenton, James D

    2006-01-01

    We describe an optimized microarray method for identifying genome-wide CpG island methylation called microarray-based methylation assessment of single samples (MMASS) which directly compares methylated to unmethylated sequences within a single sample. To improve previous methods we used bioinformatic analysis to predict an optimized combination of methylation-sensitive enzymes that had the highest utility for CpG-island probes and different methods to produce unmethylated representations of test DNA for more sensitive detection of differential methylation by hybridization. Subtraction or methylation-dependent digestion with McrBC was used with optimized (MMASS-v2) or previously described (MMASS-v1, MMASS-sub) methylation-sensitive enzyme combinations and compared with a published McrBC method. Comparison was performed using DNA from the cell line HCT116. We show that the distribution of methylation microarray data is inherently skewed and requires exogenous spiked controls for normalization and that analysis of digestion of methylated and unmethylated control sequences together with linear fit models of replicate data showed superior statistical power for the MMASS-v2 method. Comparison with previous methylation data for HCT116 and validation of CpG islands from PXMP4, SFRP2, DCC, RARB and TSEN2 confirmed the accuracy of MMASS-v2 results. The MMASS-v2 method offers improved sensitivity and statistical power for high-throughput microarray identification of differential methylation.

  10. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visual method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forward by the authors. A total of 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the loess terraces well using an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a grid-based DEM (G-DEM) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method visualizes the terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.

  11. Multivariate modelling of prostate cancer combining magnetic resonance derived T2, diffusion, dynamic contrast-enhanced and spectroscopic parameters.

    PubMed

    Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M

    2015-05-01

    The objectives are to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA), and to compare the model's accuracy with that of an experienced radiologist. Multiparametric MRI was acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, the T2-defined peripheral zone (PZ), and the central gland (CG) were superimposed onto slice-matched parametric maps. T2, apparent diffusion coefficient, initial area under the gadolinium curve, vascular parameters (K(trans), Kep, Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). The area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90 % specificity, sensitivity was 41 % (MRSI voxel resolution) and 59 % per lesion. At this specificity, an expert observer achieved 28 % and 49 % sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
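
    A minimal sketch of the discriminant step is given below, with synthetic stand-ins for the per-voxel parameters (T2, ADC, IAUGC, K(trans), metabolite ratio); it shows how an AUROC and a sensitivity at 90% specificity of the kind quoted above are computed from an LDA decision score.

    ```python
    # A sketch of LDA-based tumour/non-tumour discrimination on synthetic data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    n = 400
    # columns: T2, ADC, IAUGC, Ktrans, (Cho+Cr)/Cit, all synthetic stand-ins
    X_tumour = rng.normal([80, 0.9, 2.0, 0.35, 0.9], 0.2, size=(n, 5))
    X_normal = rng.normal([110, 1.4, 1.2, 0.20, 0.4], 0.2, size=(n, 5))
    X = np.vstack([X_tumour, X_normal])
    y = np.r_[np.ones(n), np.zeros(n)]

    lda = LinearDiscriminantAnalysis().fit(X, y)
    score = lda.decision_function(X)             # linear discriminant score
    print("AUROC:", roc_auc_score(y, score))
    fpr, tpr, _ = roc_curve(y, score)
    print("sensitivity at 90% specificity:", tpr[fpr <= 0.10].max())
    ```

    On real data the model would of course be fitted and evaluated on separate voxels (cross-validation), since in-sample scoring inflates the AUROC.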

  12. New adaptive method to optimize the secondary reflector of linear Fresnel collectors

    DOE PAGES

    Zhu, Guangdong

    2017-01-16

    Performance of linear Fresnel collectors may depend largely on the secondary-reflector profile design when small-aperture absorbers are used. Optimization of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative optimization method is proposed to optimize the secondary-reflector profile of a generic linear Fresnel configuration. The method correctly and accurately captures the impacts of both the geometric and the optical aspects of a linear Fresnel collector on secondary-reflector design. The proposed method is an adaptive approach that does not assume a secondary shape of any particular form but, rather, starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed optimization method is applied to an industrial linear Fresnel configuration, and the results show that the derived optimal secondary reflector is able to redirect more than 90% of the power to the absorber over a wide range of incidence angles. The proposed method can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.

  13. New adaptive method to optimize the secondary reflector of linear Fresnel collectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Guangdong

    Performance of linear Fresnel collectors may depend largely on the secondary-reflector profile design when small-aperture absorbers are used. Optimization of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative optimization method is proposed to optimize the secondary-reflector profile of a generic linear Fresnel configuration. The method correctly and accurately captures the impacts of both the geometric and the optical aspects of a linear Fresnel collector on secondary-reflector design. The proposed method is an adaptive approach that does not assume a secondary shape of any particular form but, rather, starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed optimization method is applied to an industrial linear Fresnel configuration, and the results show that the derived optimal secondary reflector is able to redirect more than 90% of the power to the absorber over a wide range of incidence angles. The proposed method can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.

  14. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem which necessitates regularization. Bayesian methods are also suitable, because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, so model-refinement criteria, especially total least squares (TLS), must be used to treat the model error. TLS, however, is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) to treat the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.

  15. Development of a nearshore oscillating surge wave energy converter with variable geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, N. M.; Lawson, M. J.; Yu, Y. H.

    This paper presents an analysis of a novel wave energy converter concept that combines an oscillating surge wave energy converter (OSWEC) with control surfaces. The control surfaces allow for a variable device geometry that enables the hydrodynamic properties to be adapted with respect to structural loading, absorption range and power-take-off capability. The device geometry is adjusted on a sea-state-to-sea-state time scale and combined with wave-to-wave manipulation of the power take-off (PTO) to provide greater control over the capture efficiency, capacity factor, and design loads. This work begins with a sensitivity study of the hydrodynamic coefficients with respect to device width, support structure thickness, and geometry. A linear frequency-domain analysis is used to evaluate device performance in terms of absorbed power, foundation loads, and PTO torque. Because previous OSWEC studies have shown the importance of nonlinear hydrodynamics, a nonlinear model including a quadratic viscous damping torque was used and linearized via the Lorentz linearization. Inclusion of the quadratic viscous torque led to the construction of an optimization problem that incorporates motion and PTO constraints. Results from this study found that, when transitioning from moderate to large sea states, the novel OSWEC was capable of reducing structural loads while providing a near-constant power output.

  16. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    NASA Astrophysics Data System (ADS)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to remove unwanted information, such as spectral overlap and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of some nonlinear relation between spectra and components, OSC-RBF-PLS gave more satisfactory results than the OSC-PLS model, which indicated that OSC helped remove extrinsic deviations from linearity without eliminating the nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercial injection product of penicillin G salts.

  17. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828

  18. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
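
    The structure of the coupled Riccati-like recursion can be sketched for a two-mode system: each mode carries its own cost-to-go matrix, and the modes couple through the transition probabilities. All matrices below are illustrative, not taken from the paper.

    ```python
    # A sketch of the coupled Riccati-like backward recursion for a
    # two-mode discrete-time Markovian-jump LQ problem.
    import numpy as np

    A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.2], [0.0, 0.9]])]
    B = [np.array([[0.0], [0.1]]),           np.array([[0.0], [0.2]])]
    Q = [np.eye(2), np.eye(2)]
    Rc = [np.array([[1.0]]), np.array([[1.0]])]
    P_trans = np.array([[0.9, 0.1], [0.3, 0.7]])   # mode transition probabilities

    P = [Q[0].copy(), Q[1].copy()]                 # terminal costs
    for _ in range(200):                           # backward in time
        # expectation of next-step cost over the mode transition
        E = [sum(P_trans[i, j] * P[j] for j in range(2)) for i in range(2)]
        P_new = []
        for i in range(2):
            K = np.linalg.solve(Rc[i] + B[i].T @ E[i] @ B[i],
                                B[i].T @ E[i] @ A[i])      # mode-i gain
            P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K))
        P = P_new

    print("steady-state mode cost matrices:\n", P[0], "\n", P[1])
    ```

    Convergence of this recursion as the horizon grows corresponds to the constant stabilizing control laws whose existence conditions the paper derives.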

  19. The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.

    PubMed

    Narayanamoorthy, S; Kalyani, S

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear fuzzy transportation problems are solved by the dual simplex method, from which the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.

  20. Multiresidue analysis of 36 pesticides in soil using a modified quick, easy, cheap, effective, rugged, and safe method by liquid chromatography with tandem quadrupole linear ion trap mass spectrometry.

    PubMed

    Feng, Xue; He, Zeying; Wang, Lu; Peng, Yi; Luo, Ming; Liu, Xiaowei

    2015-09-01

    A new method for the simultaneous determination of 36 pesticides, including 15 organophosphorus, six carbamate, and some other pesticides in soil was developed by liquid chromatography with tandem quadrupole linear ion trap mass spectrometry. The extraction and clean-up steps were optimized based on the quick, easy, cheap, effective, rugged, and safe (QuEChERS) method. The data were acquired in multiple reaction monitoring mode combined with enhanced product ion scans to increase confidence in the analytical results. Validation experiments were performed on soil samples. The average recoveries of pesticides at four spiking levels (1, 5, 50, and 100 μg/kg) ranged from 63 to 126% with relative standard deviations below 20%. The limits of detection were 0.04-0.8 μg/kg, and the limits of quantification were 0.1-2.6 μg/kg. The correlation coefficients (r²) were higher than 0.990 in the linearity range of 0.5-200 μg/L for most of the pesticides. The method allowed for the analysis of the target pesticides in the low μg/kg concentration range. The optimized method was then applied to real soil samples obtained from several areas in China, confirming the feasibility of the method. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Is 3D true non-linear traveltime tomography reasonable?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies, and also earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D, which makes even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on a 3D true non-linear approach, which allows exploration of the model space and identification of an optimal velocity image. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  2. Linearly polarized GHz magnetization dynamics of spin helix modes in the ferrimagnetic insulator Cu2OSeO3.

    PubMed

    Stasinopoulos, I; Weichselbaumer, S; Bauer, A; Waizner, J; Berger, H; Garst, M; Pfleiderer, C; Grundler, D

    2017-08-01

    Linear dichroism, the polarization-dependent absorption of electromagnetic waves, is routinely exploited in applications as diverse as structure determination of DNA or polarization filters in optical technologies. Filamentary absorbers with a large length-to-width ratio are a prerequisite here. For magnetization dynamics in the few-GHz frequency regime, strictly linear dichroism had not been observed for more than eight decades. Here, we show that the bulk chiral magnet Cu2OSeO3 exhibits linearly polarized magnetization dynamics at an unexpectedly small frequency of about 2 GHz at zero magnetic field. Unlike optical filters that are assembled from filamentary absorbers, the magnet is shown to provide linear polarization as a bulk material for an extremely wide range of length-to-width ratios. In addition, the polarization plane of a given mode can be switched by 90° via a small variation in width. Our findings shed new light on magnetization dynamics, in that ferrimagnetic ordering combined with antisymmetric exchange interaction offers strictly linear polarization and cross-polarized modes for a broad spectrum of sample shapes at zero field. The discovery allows for novel design rules and the optimization of microwave-to-magnon transduction in emerging microwave technologies.

  3. Use of borated polyethylene to improve low energy response of a prompt gamma based neutron dosimeter

    NASA Astrophysics Data System (ADS)

    Priyada, P.; Ashwini, U.; Sarkar, P. K.

    2016-05-01

    The feasibility of using a combined sample of borated polyethylene and normal polyethylene to estimate the neutron ambient dose equivalent from measured prompt gamma emissions is investigated theoretically, to demonstrate improvements in the low-energy neutron dose response compared to polyethylene alone. Monte Carlo simulations have been carried out using the FLUKA code to calculate the response of boron, hydrogen and carbon prompt gamma emissions to monoenergetic neutrons. The weighted least-squares method is employed to arrive at the best linear combination of these responses that approximates the ICRP fluence-to-dose conversion coefficients well over the energy range of 10⁻⁸ MeV to 14 MeV. The configuration of the combined system is optimized through FLUKA simulations. The proposed method is validated theoretically with five different workplace neutron spectra, with satisfactory outcomes.
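
    The weighted least-squares step admits a compact sketch: stack the three response curves as columns and solve for the combination weights that track the dose-conversion curve in a relative sense. The response shapes and dose curve below are illustrative stand-ins, not FLUKA output or ICRP values.

    ```python
    # A sketch of finding the best linear combination of three prompt-gamma
    # responses to approximate a fluence-to-dose curve, by weighted least squares.
    import numpy as np

    E = np.logspace(-8, np.log10(14.0), 60)          # neutron energy grid (MeV)
    h = 4.0 + 380.0 * E / (E + 1.0)                  # toy dose-conversion curve
    R = np.column_stack([
        1.0 / np.sqrt(E / 2.5e-8),                   # 1/v-like boron response
        50.0 * E / (E + 0.5),                        # hydrogen recoil-like response
        np.where(E > 5.0, E - 5.0, 0.0),             # carbon threshold-like response
    ])
    W = np.diag(1.0 / h)                             # weight for relative accuracy
    w, *_ = np.linalg.lstsq(W @ R, W @ h, rcond=None)
    print("combination weights:", w)
    print("max relative error:", np.max(np.abs(R @ w - h) / h))
    ```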

  4. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, both by speeding up acquisition and by increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of the non-linear function fitting the three geometries. The results show that, with this combination, a higher quality of fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an "incomplete" point cloud, a situation where the point cloud does not cover a complete feature, e.g. covering only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
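
    A minimal sketch of the fitting step is shown below for the circle case: non-linear least squares on the radial residuals, restarted from scattered initial guesses as a stand-in for the chaos-optimized seeding of the Levenberg-Marquardt iteration. The data are a synthetic half-arc, i.e. an incomplete point cloud.

    ```python
    # A sketch of Levenberg-Marquardt circle fitting with multiple restarts.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    theta = rng.uniform(0, np.pi, 120)               # half-arc: incomplete cloud
    pts = np.c_[10 + 4 * np.cos(theta), -3 + 4 * np.sin(theta)]
    pts += rng.normal(0, 0.02, pts.shape)            # measurement noise

    def residuals(p):
        cx, cy, r = p
        return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

    best = None
    for _ in range(10):                              # scattered restart seeding
        p0 = np.append(pts.mean(axis=0) + rng.uniform(-2, 2, 2),
                       rng.uniform(1, 8))            # [cx, cy, r] initial guess
        sol = least_squares(residuals, p0, method="lm")
        if best is None or sol.cost < best.cost:
            best = sol

    print("centre, radius:", best.x, " residual norm:", np.sqrt(2 * best.cost))
    ```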

  5. Relevance of Linear Stability Results to Enhanced Oil Recovery

    NASA Astrophysics Data System (ADS)

    Ding, Xueru; Daripa, Prabir

    2012-11-01

    How relevant can results based on linear stability theory, for any problem for that matter, be to full-scale simulation results? Put differently, is the optimal design of a system based on linear stability results optimal, or even near optimal, for the complex nonlinear system with certain objectives of interest in mind? We will address these issues in the context of enhanced oil recovery by chemical flooding. This is based on ongoing work. Supported by the Qatar National Research Fund (a member of the Qatar Foundation).

  6. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function, the gradient of the log-likelihood with respect to the parameters, yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
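
    For the special case of Gaussian data with a parameter-dependent mean and known covariance, the compression reduces to t = (dmu/dtheta)^T C^{-1} (d - mu) evaluated at a fiducial point, and the Fisher matrix of the summaries equals that of the full data. The sketch below illustrates this with a synthetic two-parameter linear model.

    ```python
    # A sketch of score compression for Gaussian data with parameter-dependent
    # mean and fixed, known covariance (synthetic amplitude-and-slope model).
    import numpy as np

    rng = np.random.default_rng(3)
    N = 500                                          # data size
    x = np.linspace(0, 1, N)
    C = np.diag(np.full(N, 0.1 ** 2))                # known noise covariance
    Cinv = np.linalg.inv(C)

    def mean(theta):                                 # model mean: a + b*x
        return theta[0] + theta[1] * x

    theta_fid = np.array([1.0, 2.0])                 # fiducial parameters
    dmu = np.stack([np.ones(N), x])                  # (n, N) gradient of the mean
    d = mean(theta_fid) + rng.normal(0.0, 0.1, N)    # one noisy data realization

    t = dmu @ Cinv @ (d - mean(theta_fid))           # n = 2 compressed statistics
    F = dmu @ Cinv @ dmu.T                           # Fisher matrix is preserved
    print("summaries:", t, "\nFisher matrix:\n", F)
    ```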

  7. Cooperative global optimal preview tracking control of linear multi-agent systems: an internal model approach

    NASA Astrophysics Data System (ADS)

    Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang

    2017-09-01

    This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted to a global optimal regulation problem of an augmented system. Second, an optimal controller, which can guarantee the asymptotic stability of the augmented system, is obtained by means of the standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, meanwhile a cooperative global optimal controller with error integral and preview compensation is derived. Finally, the validity of theoretical results is demonstrated by a numerical simulation.

  8. Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness

    NASA Astrophysics Data System (ADS)

    Julich, R. J.

    2004-05-01

    The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells that maximizes the collection of important data and minimizes the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic-head observations whose net uncertainty is closest to the value obtained with all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; 2) how consistent are the solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.

  9. The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem

    PubMed Central

    Narayanamoorthy, S.; Kalyani, S.

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear fuzzy transportation problems are solved by the dual simplex method, from which the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713

  10. A single-phase axially-magnetized permanent-magnet oscillating machine for miniature aerospace power sources

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi

    2017-05-01

    A single-phase axially-magnetized permanent-magnet (PM) oscillating machine which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. Machine structure, operating principle and detent force characteristic are elaborately studied. With the sinusoidal speed characteristic of the mover considered, the proposed machine is designed by 2D finite-element analysis (FEA), and some main structural parameters such as air gap diameter, dimensions of PMs, pole pitches of both stator and mover, and the pole-pitch combinations, etc., are optimized to improve both the power density and force capability. Compared with the three-phase PM linear machines, the proposed single-phase machine features less PM use, simple control and low controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than the three-phase axially-magnetized PM linear machine.

  11. HgCdTe APD-based linear-mode photon counting components and ladar receivers

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.

    2011-05-01

    Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger-mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason, Raytheon has developed, and previously reported on, unique linear mode photon counting components and modules based on combining advanced APDs with advanced high-gain circuits. By using HgCdTe APDs we enable Poisson-number-preserving photon counting. Key metrics of photon counting technology are the dark count rate and the detection probability. In this paper we report on a performance breakthrough, resulting from improvements in design, process and readout operation, that enables a >10x reduction in dark count rate to ~10,000 cps and a >10⁴x reduction in surface dark current, enabling long 10 ms integration times. Our analysis of the key dark current contributors suggests that a substantial further reduction in DCR, to ~1/sec or less, can be achieved by optimizing wavelength, operating voltage and temperature.

  12. Applications of hybrid genetic algorithms in seismic tomography

    NASA Astrophysics Data System (ADS)

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos

    2011-11-01

    Almost all earth-science inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieve the aforementioned approximation through model representation and manipulation, and which have attracted the attention of the earth-sciences community during the last decade, with several applications already presented for various geophysical problems. In this paper, we examine the efficiency of combining typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local optimization method (LOM) and a global optimization method (GOM), in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.

  13. Exhaustive Search for Sparse Variable Selection in Linear Regression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of the exhaustive ES-K computation, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be reconstructed effectively by using the replica-exchange Monte Carlo method and the multiple-histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
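
    The ES-K idea is easy to state in code: enumerate every K-subset of variables, score each by, for instance, cross-validated least-squares error, and study the distribution of scores. The scoring rule and data below are illustrative choices, not the authors' exact procedure.

    ```python
    # A sketch of K-sparse exhaustive search on synthetic regression data.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    n, p, K = 80, 10, 3
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[[1, 4, 7]] = [2.0, -1.5, 1.0]   # true K-sparse model
    y = X @ beta + rng.normal(0, 0.5, n)

    def cv_error(cols):
        # 5-fold cross-validated squared error of OLS on the chosen columns
        err = 0.0
        for fold in np.array_split(np.arange(n), 5):
            train = np.setdiff1d(np.arange(n), fold)
            b, *_ = np.linalg.lstsq(X[np.ix_(train, cols)], y[train], rcond=None)
            err += np.sum((y[fold] - X[np.ix_(fold, cols)] @ b) ** 2)
        return err / n

    scores = {c: cv_error(list(c)) for c in combinations(range(p), K)}
    print("selected variables:", min(scores, key=scores.get))
    ```

    Collecting the scores of all subsets, rather than keeping only the minimizer, is what yields the density of states discussed above.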

  14. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples so as to minimize the variances of best linear unbiased estimators of linear combinations of the parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve this constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  15. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
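
    A minimal sketch of the linear-combination construction is given below, assuming a Gaussian spike-pair similarity as the single-neuron base kernel; this is one common choice, and the framework accepts any single-neuron spike train kernel.

    ```python
    # A sketch: a multineuron kernel as a weighted sum of a single-neuron
    # spike train kernel applied neuron by neuron.
    import numpy as np

    def single_neuron_kernel(s, t, tau=0.01):
        # sum over spike pairs of a Gaussian similarity (a common base kernel)
        if len(s) == 0 or len(t) == 0:
            return 0.0
        diff = np.subtract.outer(np.asarray(s), np.asarray(t))
        return np.exp(-diff ** 2 / (2 * tau ** 2)).sum()

    def multineuron_kernel(S, T, weights):
        # S, T: lists of per-neuron spike-time arrays; nonnegative weights
        # keep the combination positive semidefinite
        return sum(w * single_neuron_kernel(s, t)
                   for w, s, t in zip(weights, S, T))

    S = [[0.011, 0.052, 0.30], [0.20, 0.41]]         # two neurons, trial 1
    T = [[0.013, 0.049],       [0.22, 0.40, 0.77]]   # two neurons, trial 2
    print(multineuron_kernel(S, T, weights=[1.0, 0.5]))
    ```

    Optimizing the weights, for example by maximizing Gaussian-process marginal likelihood, is the step the abstract describes as tractable for the smaller-parameter subclasses.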

  16. Ultrasonic-assisted extraction and dispersive liquid-liquid microextraction combined with gas chromatography-mass spectrometry as an efficient and sensitive method for determining of acrylamide in potato chips samples.

    PubMed

    Zokaei, Maryam; Abedi, Abdol-Samad; Kamankesh, Marzieh; Shojaee-Aliababadi, Saeedeh; Mohammadi, Abdorreza

    2017-11-01

    In this research, for the first time, we successfully developed ultrasonic-assisted extraction and dispersive liquid-liquid microextraction combined with gas chromatography-mass spectrometry as a new, fast and highly sensitive method for determining acrylamide in potato chip samples. Xanthydrol was used as the derivatization reagent, and the parameters affecting the derivatization and microextraction steps were studied and optimized. Under optimum conditions, the calibration curves showed high linearity (R² > 0.9993) for acrylamide in the range of 2-500 ng mL⁻¹. The relative standard deviation (RSD) for seven analyses was 6.8%. The limit of detection (LOD) and limit of quantification (LOQ) were 0.6 ng g⁻¹ and 2 ng g⁻¹, respectively. The UAE-DLLME-GC-MS method demonstrated high sensitivity, good linearity, recovery, and enrichment factor. The performance of the newly proposed method was evaluated for the determination of acrylamide in various types of chip samples, and satisfactory results were obtained. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Magnetically suspended stepping motors for clean room and vacuum environments

    NASA Technical Reports Server (NTRS)

    Higuchi, Toshiro

    1994-01-01

    To answer the growing needs for super-clean or contact free actuators for uses in clean rooms, vacuum chambers, and space, innovative actuators which combine the functions of stepping motors and magnetic bearings in one body were developed. The rotor of the magnetically suspended stepping motor is suspended like a magnetic bearing and rotated and positioned like a stepping motor. The important trait of the motor is that it is not a simple mixture or combination of a stepping motor and conventional magnetic bearing, but an amalgam of a stepping motor and a magnetic bearing. Owing to optimal design and feed-back control, a toothed stator and rotor are all that are needed structurewise for stable suspension. More than ten types of motors such as linear type, high accuracy rotary type, two-dimensional type, and high vacuum type were built and tested. This paper describes the structure and design of these motors and their performance for such applications as precise positioning rotary table, linear conveyor system, and theta-zeta positioner for clean room and high vacuum use.

  18. Combining functional and structural tests improves the diagnostic accuracy of relevance vector machine classifiers

    PubMed Central

    Racette, Lyne; Chiou, Christine Y.; Hao, Jiucang; Bowd, Christopher; Goldbaum, Michael H.; Zangwill, Linda M.; Lee, Te-Won; Weinreb, Robert N.; Sample, Pamela A.

    2009-01-01

    Purpose To investigate whether combining optic disc topography and short-wavelength automated perimetry (SWAP) data improves the diagnostic accuracy of relevance vector machine (RVM) classifiers for detecting glaucomatous eyes compared to using each test alone. Methods One eye of 144 glaucoma patients and 68 healthy controls from the Diagnostic Innovations in Glaucoma Study were included. RVM were trained and tested with cross-validation on optimized (backward elimination) SWAP features (thresholds plus age; pattern deviation (PD); total deviation (TD)) and on Heidelberg Retina Tomograph II (HRT) optic disc topography features, independently and in combination. RVM performance was also compared to two HRT linear discriminant functions (LDF) and to SWAP mean deviation (MD) and pattern standard deviation (PSD). Classifier performance was measured by the area under the receiver operating characteristic curves (AUROCs) generated for each feature set and by the sensitivities at set specificities of 75%, 90% and 96%. Results RVM trained on combined HRT and SWAP thresholds plus age had significantly higher AUROC (0.93) than RVM trained on HRT (0.88) and SWAP (0.76) alone. AUROCs for the SWAP global indices (MD: 0.68; PSD: 0.72) offered no advantage over SWAP thresholds plus age, while the LDF AUROCs were significantly lower than RVM trained on the combined SWAP and HRT feature set and on HRT alone feature set. Conclusions Training RVM on combined optimized HRT and SWAP data improved diagnostic accuracy compared to training on SWAP and HRT parameters alone. Future research may identify other combinations of tests and classifiers that can also improve diagnostic accuracy. PMID:19528827

  19. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) in most cases for three scenarios: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both the genetic linkage and LD information of multiple genetic variants in a genome and of the similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY PERIODICALS, INC.

  20. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  1. Optimization with Fuzzy Data via Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Kosiński, Witold

    2010-09-01

    Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, in exactly the same way as with real numbers, were recently defined by the author and his two coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are presented in the form of step functions, with finite resolution, simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated. Its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.

  2. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.

  3. Simplified Design Method for Tension Fasteners

    NASA Astrophysics Data System (ADS)

    Olmstead, Jim; Barker, Paul; Vandersluis, Jonathan

    2012-07-01

    The design of tension-fastened joints has traditionally been an iterative trade-off between separation and strength requirements. This paper presents equations for the maximum external load that a fastened joint can support and the optimal preload to achieve this load. The equations, based on linear joint theory, account for separation and strength safety factors and for variations in joint geometry, materials, preload, load-plane factor, and thermal loading. The strength-normalized versions of the equations are applicable to any fastener and can be plotted to create a "Fastener Design Space" (FDS). Any combination of preload and tension that falls within the FDS represents a safe joint design. The equation for the FDS apex gives the optimal preload and load capacity of a set of joints. The method can be used for preliminary design or to evaluate multiple pre-existing joints.
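
    The record does not reproduce the paper's equations, but the apex calculation can be sketched from textbook linear joint theory: the bolt load is F_b = F_p + Φn·F_e, separation occurs when the residual clamp force reaches zero, and the apex is where the safety-factored separation and strength constraints are simultaneously active. All symbols and numbers below are illustrative assumptions, not the paper's formulation.

```python
def fds_apex(F_allow, phi_n, sf_sep, sf_str):
    """Optimal preload and maximum external load under linear joint theory.

    Bolt load:   F_b = F_p + phi_n * F_e  (phi_n: load factor * load-plane factor)
    Separation:  F_p >= sf_sep * (1 - phi_n) * F_e
    Strength:    F_p + phi_n * sf_str * F_e <= F_allow
    The FDS apex is where both safety-factored constraints are active.
    """
    F_e = F_allow / (sf_sep * (1.0 - phi_n) + sf_str * phi_n)
    F_p = sf_sep * (1.0 - phi_n) * F_e
    return F_p, F_e

# Hypothetical joint: 10 kN allowable bolt load, combined load factor 0.25,
# separation safety factor 1.2, strength safety factor 1.4.
F_p_opt, F_e_max = fds_apex(10e3, 0.25, 1.2, 1.4)
print(f"optimal preload = {F_p_opt:.0f} N, max external load = {F_e_max:.0f} N")
```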

  4. Development and application of a sol-gel immunosorbent-based method for the determination of isoproturon in surface water.

    PubMed

    Zhang, Xiuli; Martens, Dieter; Krämer, Petra M; Kettrup, Antonius A; Liang, Xinmiao

    2006-01-13

    An immunosorbent was fabricated by encapsulation of monoclonal anti-isoproturon antibodies in sol-gel matrix. The immunosorbent-based loading, rinsing and eluting processes were optimized. Based on these optimizations, the sol-gel immunosorbent (SG-IS) selectively extracted isoproturon from an artificial mixture of 68 pesticides. In addition to this high selectivity, the SG-IS proved to be reusable. The SG-IS was combined with liquid chromatography-tandem mass spectrometry (LC-MS-MS) to determine isoproturon in surface water, and the linear range was up to 2.2 microg/l with correlation coefficient higher than 0.99 and relative standard deviation (RSD) lower than 5% (n=8). The limit of quantitation (LOQ) for 25-ml surface water sample was 5 ng/l.

  5. Can linear superiorization be useful for linear optimization problems?

    NASA Astrophysics Data System (ADS)

    Censor, Yair

    2017-04-01

    Linear superiorization (LinSup) considers linear programming problems but, instead of attempting to solve them with linear optimization methods, it employs perturbation-resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) Does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
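
    A minimal sketch of the LinSup idea, assuming a cyclic orthogonal-projection method as the feasibility-seeking algorithm and summable steps in the direction −c as the superiorization perturbations (the paper's exact algorithmic choices may differ):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {y : a.y <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def linsup(A, b, c, x0, sweeps=200, beta0=1.0, gamma=0.99):
    """Cyclic projections, superiorized toward lower values of c.x.

    Before each projection sweep the iterate is perturbed by a summable
    step in the direction -c; this steers the feasibility-seeking process
    toward reduced (not necessarily minimal) target-function values.
    """
    x, d = x0.astype(float), -c / np.linalg.norm(c)
    for k in range(sweeps):
        x = x + beta0 * gamma**k * d          # superiorization perturbation
        for a_i, b_i in zip(A, b):            # one cyclic projection sweep
            x = project_halfspace(x, a_i, b_i)
    return x

# Tiny LP-like instance: reduce c.x over {x : A x <= b}.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])
c = np.array([1.0, 2.0])
x = linsup(A, b, c, np.array([3.0, 3.0]))
print("feasible point:", x, "target value:", c @ x)
```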

  6. Performance optimization for rotors in hover and axial flight

    NASA Technical Reports Server (NTRS)

    Quackenbush, T. R.; Wachspress, D. A.; Kaufman, A. E.; Bliss, D. B.

    1989-01-01

    Performance optimization for rotors in hover and axial flight is a topic of continuing importance to rotorcraft designers. The aim of this Phase 1 effort has been to demonstrate that a linear optimization algorithm could be coupled to an existing influence coefficient hover performance code. This code, dubbed EHPIC (Evaluation of Hover Performance using Influence Coefficients), uses a quasi-linear wake relaxation to solve for the rotor performance. The coupling was accomplished by expanding the matrix of linearized influence coefficients in EHPIC to accommodate design variables and deriving new coefficients for linearized equations governing perturbations in power and thrust. These coefficients formed the input to a linear optimization analysis, which used the flow tangency conditions on the blade and in the wake to impose equality constraints on the expanded system of equations; user-specified inequality constraints were also employed to bound the changes in the design. It was found that this locally linearized analysis could be invoked to predict a design change that would produce a reduction in the power required by the rotor at constant thrust. Thus, an efficient search for improved versions of the baseline design can be carried out while retaining the accuracy inherent in a free-wake/lifting-surface performance analysis.

  7. Combining information from 3 anatomic regions in the diagnosis of glaucoma with time-domain optical coherence tomography.

    PubMed

    Wang, Mingwu; Lu, Ake Tzu-Hui; Varma, Rohit; Schuman, Joel S; Greenfield, David S; Huang, David

    2014-03-01

    To improve the diagnosis of glaucoma by combining time-domain optical coherence tomography (TD-OCT) measurements of the optic disc, circumpapillary retinal nerve fiber layer (RNFL), and macular retinal thickness. Ninety-six age-matched normal and 96 perimetric glaucoma participants were included in this observational, cross-sectional study. Or-logic, support vector machine, relevance vector machine, and linear discrimination function were used to analyze the performances of combined TD-OCT diagnostic variables. The area under the receiver-operating curve (AROC) was used to evaluate the diagnostic accuracy and to compare the diagnostic performance of single and combined anatomic variables. The best RNFL thickness variables were the inferior (AROC=0.900), overall (AROC=0.892), and superior quadrants (AROC=0.850). The best optic disc variables were horizontal integrated rim width (AROC=0.909), vertical integrated rim area (AROC=0.908), and cup/disc vertical ratio (AROC=0.890). All macular retinal thickness variables had AROCs of 0.829 or less. Combining the top 3 RNFL and optic disc variables in optimizing glaucoma diagnosis, support vector machine had the highest AROC, 0.954, followed by or-logic (AROC=0.946), linear discrimination function (AROC=0.946), and relevance vector machine (AROC=0.943). All combination diagnostic variables had significantly larger AROCs than any single diagnostic variable. There are no significant differences among the combination diagnostic indices. With TD-OCT, RNFL and optic disc variables had better diagnostic accuracy than macular retinal variables. Combining top RNFL and optic disc variables significantly improved diagnostic performance. Clinically, or-logic classification was the most practical analytical tool with sufficient accuracy to diagnose early glaucoma.

  8. Electro-driven extraction of inorganic anions from water samples and water miscible organic solvents and analysis by ion chromatography.

    PubMed

    Nojavan, Saeed; Bidarmanesh, Tina; Memarzadeh, Farkhondeh; Chalavi, Soheila

    2014-09-01

    A simple electromembrane extraction (EME) procedure combined with ion chromatography (IC) was developed to quantify inorganic anions in different pure water samples and water miscible organic solvents. The parameters affecting extraction performance, such as supported liquid membrane (SLM) solvent, extraction time, pH of donor and acceptor solutions, and extraction voltage were optimized. The optimized EME conditions were as follows: 1-heptanol was used as the SLM solvent, the extraction time was 10 min, pHs of the acceptor and donor solutions were 10 and 7, respectively, and the extraction voltage was 15 V. The mobile phase used for IC was a combination of 1.8 mM sodium carbonate and 1.7 mM sodium bicarbonate. Under these optimized conditions, all anions had enrichment factors ranging from 67 to 117 with RSDs between 7.3 and 13.5% (n = 5). Good linearity ranging from 2 to 1200 ng/mL, with coefficients of determination (R²) between 0.987 and 0.999, was obtained. The LODs of the EME-IC method ranged from 0.6 to 7.5 ng/mL. The developed method was applied to different samples to evaluate the feasibility of the method for real applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
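
    The following hedged sketch illustrates the flavor of the approach: a second-order recursive filter is parameterized directly by pole/zero radius and angle in the z-plane (radii below one keep it stable and minimum phase by construction), and a global search minimizes gain at a parasitic-mode frequency subject to a phase constraint at the bending frequency. Frequencies, tolerances, and the penalty form are illustrative assumptions, not the paper's actual design values.

```python
import numpy as np
from scipy.signal import freqz
from scipy.optimize import differential_evolution

FS = 200.0                    # sample rate (Hz); all numbers are illustrative
F_BEND, F_MODE = 2.0, 25.0    # bending frequency / parasitic structural mode

def biquad(p):
    """Stable 2nd-order recursive filter from z-plane parameters.

    p = (r_p, th_p, r_z, th_z): pole/zero radius and angle. Restricting the
    radii to [0, 1) keeps the filter stable and minimum phase by design."""
    rp, tp, rz, tz = p
    a = np.array([1.0, -2 * rp * np.cos(tp), rp**2])   # pole polynomial
    b = np.array([1.0, -2 * rz * np.cos(tz), rz**2])   # zero polynomial
    b = b * (a.sum() / b.sum())                        # normalize DC gain to 1
    return b, a

def cost(p):
    b, a = biquad(p)
    _, h = freqz(b, a, worN=[F_BEND, F_MODE], fs=FS)
    phase_err = abs(np.angle(h[0]))    # phase-shaping constraint at bending freq
    atten = abs(h[1])                  # gain at the parasitic mode (minimize)
    return atten + 100.0 * max(0.0, phase_err - np.deg2rad(10.0))   # penalty

bounds = [(0.0, 0.95), (0.0, np.pi), (0.0, 0.95), (0.0, np.pi)]
res = differential_evolution(cost, bounds, seed=1)
print("optimal (r_p, th_p, r_z, th_z):", np.round(res.x, 3))
```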

  10. In situ magnetic compensation for potassium spin-exchange relaxation-free magnetometer considering probe beam pumping effect.

    PubMed

    Fang, Jiancheng; Wang, Tao; Quan, Wei; Yuan, Heng; Zhang, Hong; Li, Yang; Zou, Sheng

    2014-06-01

    A novel method to compensate the residual magnetic field of an atomic magnetometer consisting of two perpendicular polarized beams is demonstrated in this paper. The method achieves magnetic compensation in the case where the pumping rate of the probe beam cannot be ignored. In the experiment, the probe beam is nominally linearly polarized; in practice, however, it contains a residual circular component due to the imperfection of the polarizer, which leads to a pumping effect of the probe beam. A simulation of the probe beam's optical rotation and pumping rate was performed, and the wavelength of the probe beam was optimized to achieve the largest optical rotation. Although the circular component of the linearly polarized probe beam is small, the pumping rate of the probe beam is non-negligible at the optimized wavelength and, if ignored, would lead to inaccuracies in the magnetic field compensation. Therefore, the dynamic equation of spin evolution was solved taking the pumping effect of the probe beam into account. Based on the quasi-static solution, a novel magnetic compensation method was proposed that contains two main steps: (1) non-pumping compensation and (2) sequence compensation with a very specific sequence. After these two steps, three-axis in situ magnetic compensation was achieved. The compensation method is suitable for designing closed-loop spin-exchange relaxation-free magnetometers. By combining the magnetic compensation and the optimization, a magnetic field sensitivity of approximately 4 fT/Hz^(1/2) was achieved, dominated mainly by the noise of the magnetic shield.

  11. Modularity-like objective function in annotated networks

    NASA Astrophysics Data System (ADS)

    Xie, Jia-Rong; Wang, Bing-Hong

    2017-12-01

    We ascertain the modularity-like objective function whose optimization is equivalent to the maximum likelihood in annotated networks. We demonstrate that the modularity-like objective function is a linear combination of modularity and conditional entropy. In contrast with statistical inference methods, in our method, the influence of the metadata is adjustable; when its influence is strong enough, the metadata can be recovered. Conversely, when it is weak, the detection may correspond to another partition. Between the two, there is a transition. This paper provides a concept for expanding the scope of modularity methods.

  12. Convex Optimization Methods for Graphs and Statistical Modeling

    DTIC Science & Technology

    2011-06-01

    (Indexed snippet only: the surviving fragments concern tangent cones obtained by taking nonnegative linear combinations of elements of a set, the tangent cone of the ℓ∞ ball as a rotation of the nonnegative orthant, and a proof based on the Perron-Frobenius theorem.)

  13. Coherent detection of frequency-hopped quadrature modulations in the presence of jamming. I - QPSK and QASK modulations

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.

    1981-01-01

    This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.

  14. Optimization of insulation of a linear Fresnel collector

    NASA Astrophysics Data System (ADS)

    Ardekani, Mohammad Moghimi; Craig, Ken J.; Meyer, Josua P.

    2017-06-01

    This paper presents a simulation-based optimization of the insulation around the cavity receiver of a linear Fresnel collector (LFC). The optimization focuses on minimizing heat losses from the cavity receiver (maximizing plant thermal efficiency) while minimizing the insulation cross-sectional area (minimizing material cost and cavity dead load), which leads to a cheaper and thermally more efficient LFC cavity receiver.

  15. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
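
    For reference, the open-loop analysis step that such coders build on can be sketched as classical autocorrelation-method LPC; the joint optimization over quantization levels described above is not reproduced here, and the test signal is synthetic.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order=10):
    """Autocorrelation-method linear predictive coefficients for one frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Toeplitz normal equations R a = r[1:order+1] (Levinson-type system).
    return solve_toeplitz(r[:order], r[1:order + 1])

# Synthetic "speech" segment: a decaying two-resonance signal plus noise.
rng = np.random.default_rng(0)
n = np.arange(400)
frame = (np.sin(0.3 * n) + 0.5 * np.sin(0.11 * n)) * np.exp(-n / 300.0) \
        + 0.05 * rng.normal(size=n.size)

a = lpc(frame, order=10)
# Open-loop prediction x_hat[n] = sum_k a[k] * x[n-k], then the residual
# that a coder would quantize and transmit.
pred = np.convolve(frame, np.r_[0.0, a], mode="full")[:frame.size]
residual = frame - pred
print("prediction gain (dB):", 10 * np.log10(frame.var() / residual.var()))
```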

  16. Indirect synthesis of multi-degree of freedom transient systems. [linear programming for a kinematically linear system

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Chen, Y. H.

    1974-01-01

    An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.

  17. Optimal de novo design of MRM experiments for rapid assay development in targeted proteomics.

    PubMed

    Bertsch, Andreas; Jung, Stephan; Zerck, Alexandra; Pfeifer, Nico; Nahnsen, Sven; Henneges, Carsten; Nordheim, Alfred; Kohlbacher, Oliver

    2010-05-07

    Targeted proteomic approaches such as multiple reaction monitoring (MRM) overcome problems associated with classical shotgun mass spectrometry experiments. Developing MRM quantitation assays can be time-consuming, because relevant peptide representatives of the proteins must be found and their retention times and product ions must be determined. Given the transitions, hundreds to thousands of them can be scheduled into one experiment run. However, it is difficult to select which of the transitions should be included in a measurement. We present a novel algorithm that allows the construction of MRM assays from the sequence of the targeted proteins alone. This enables the rapid development of targeted MRM experiments without large libraries of transitions or peptide spectra. The approach relies on combinatorial optimization in combination with machine learning techniques to predict proteotypicity, retention time, and fragmentation of peptides. The resulting potential transitions are scheduled optimally by solving an integer linear program. We demonstrate that fully automated construction of MRM experiments from protein sequences alone is possible and over 80% coverage of the targeted proteins can be achieved without further optimization of the assay.
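
    A toy version of the scheduling step can be written as a small integer linear program; the scores, capacity, and per-peptide limits below are invented placeholders for the predictor outputs and instrument constraints described above.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical suitability scores for 8 candidate transitions (stand-ins for
# the proteotypicity / retention-time / fragmentation predictor outputs).
score = np.array([0.9, 0.8, 0.75, 0.6, 0.85, 0.4, 0.7, 0.55])
peptide = np.array([0, 0, 0, 1, 1, 1, 2, 2])   # owning peptide of each transition
n, capacity, per_peptide = score.size, 5, 2

# Maximize total score == minimize -score . x over binary x.
A_cap = np.ones((1, n))                              # total transitions scheduled
A_pep = np.array([(peptide == p).astype(float) for p in range(3)])

constraints = [
    LinearConstraint(A_cap, 0, capacity),            # instrument scheduling capacity
    LinearConstraint(A_pep, 1, per_peptide),         # 1..2 transitions per peptide
]
res = milp(c=-score, integrality=np.ones(n),
           bounds=Bounds(0, 1), constraints=constraints)
chosen = np.flatnonzero(res.x > 0.5)
print("selected transitions:", chosen, "total score:", score[chosen].sum())
```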

  18. Photonic crystal enhanced silicon cell based thermophotovoltaic systems

    DOE PAGES

    Yeng, Yi Xiang; Chan, Walker R.; Rinnerbauer, Veronika; ...

    2015-01-30

    We report the design, optimization, and experimental results of large area commercial silicon solar cell based thermophotovoltaic (TPV) energy conversion systems. Using global non-linear optimization tools, we demonstrate theoretically a maximum radiative heat-to-electricity efficiency of 6.4% and a corresponding output electrical power density of 0.39 W cm⁻² at temperature T = 1660 K when implementing both the optimized two-dimensional (2D) tantalum photonic crystal (PhC) selective emitter, and the optimized 1D tantalum pentoxide – silicon dioxide PhC cold-side selective filter. In addition, we have developed an experimental large area TPV test setup that enables accurate measurement of radiative heat-to-electricity efficiency for any emitter-filter-TPV cell combination of interest. In fact, the experimental results match extremely well with predictions of our numerical models. Our experimental setup achieved a maximum output electrical power density of 0.10 W cm⁻² and radiative heat-to-electricity efficiency of 1.18% at T = 1380 K using commercial wafer size back-contacted silicon solar cells.

  19. Enhancing Degradation of Low Density Polyethylene Films by Curvularia lunata SG1 Using Particle Swarm Optimization Strategy.

    PubMed

    Raut, Sangeeta; Raut, Smita; Sharma, Manisha; Srivastav, Chaitanya; Adhikari, Basudam; Sen, Sudip Kumar

    2015-09-01

    In the present study, artificial neural network (ANN) modelling coupled with a particle swarm optimization (PSO) algorithm was used to optimize the process variables for enhanced low density polyethylene (LDPE) degradation by Curvularia lunata SG1. In the non-linear ANN model, temperature, pH, contact time and agitation were used as input variables and polyethylene bio-degradation as the output variable. On application of PSO to the ANN model, the optimum values of the process parameters were as follows: pH = 7.6, temperature = 37.97 °C, agitation rate = 190.48 rpm and incubation time = 261.95 days. A comparison between the model results and experimental data gave a high correlation coefficient. Significant enhancement of LDPE bio-degradation using C. lunata SG1, by about 48%, was achieved under optimum conditions. Thus, the novelty of the work lies in the application of the combined ANN-PSO optimization strategy to enhance the bio-degradation of LDPE.
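
    A bare-bones global-best PSO of the kind used in such studies is sketched below; the smooth surrogate stands in for the trained ANN, and its peak location simply mirrors the optimum reported above, so all numbers are illustrative.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in smooth surrogate for the trained ANN (negated degradation %),
# peaking near pH 7.6, 38 C, 190 rpm, 260 days -- purely illustrative.
target = np.array([7.6, 38.0, 190.0, 260.0])
scale = np.array([1.0, 5.0, 40.0, 60.0])
surrogate = lambda p: -100.0 * np.exp(-np.sum(((p - target) / scale) ** 2))

best, val = pso(surrogate, [(5, 9), (25, 45), (50, 250), (30, 365)])
print("optimum:", np.round(best, 2), "predicted degradation:", -val)
```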

  20. Simulation study on single event burnout in linear doping buffer layer engineered power VDMOSFET

    NASA Astrophysics Data System (ADS)

    Yunpeng, Jia; Hongyuan, Su; Rui, Jin; Dongqing, Hu; Yu, Wu

    2016-02-01

    The addition of a buffer layer can improve the device's secondary breakdown voltage, thus improving the single event burnout (SEB) threshold voltage. In this paper, an N-type linear doping buffer layer is proposed. According to quasi-stationary avalanche simulation and heavy ion beam simulation, the results show that an optimized linear doping buffer layer is critical. When SEB is induced by heavy-ion impact, the electric field of a device with an optimized linear doping buffer is much lower than that with an optimized constant doping buffer layer at a given buffer layer thickness and the same biasing voltages. The secondary breakdown voltage and the parasitic bipolar turn-on current are much higher than those with the optimized constant doping buffer layer. The linear buffer layer is therefore more advantageous for improving the device's SEB performance. Project supported by the National Natural Science Foundation of China (No. 61176071), the Doctoral Fund of Ministry of Education of China (No. 20111103120016), and the Science and Technology Program of State Grid Corporation of China (No. SGRI-WD-71-13-006).

  1. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved which satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  2. A predictive machine learning approach for microstructure optimization and materials design

    NASA Astrophysics Data System (ADS)

    Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok

    2015-06-01

    This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.

  3. Design optimization using adjoint of Long-time LES for the trailing edge of a transonic turbine vane

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi

    2017-11-01

    Adjoint-based design optimization methods have been applied to low-fidelity simulation methods like Reynolds-Averaged Navier-Stokes (RANS) and are useful for designing fluid machinery components. But to reliably capture the complex flow phenomena involved in turbomachinery, high-fidelity simulations like large eddy simulation (LES) are required. Unfortunately, due to the chaotic dynamics of turbulence, the unsteady adjoint method for LES diverges and produces incorrect gradients. Using a viscosity-stabilized unsteady adjoint method developed for LES, the gradient can be obtained with reasonable accuracy. In this paper, design of the trailing edge of a gas turbine inlet guide vane is performed with the objective of reducing stagnation pressure loss and heat transfer over the surface of the vane. Slight changes in the shape of the trailing edge can significantly impact these quantities by altering the boundary layer development process and separation points. The trailing edge is parameterized using a linear combination of 5 convex designs. Bayesian optimization is used as a global optimizer, with the objective function evaluated from the LES and gradients obtained using the viscosity-stabilized adjoint method. Results from the optimization, performed on the supercomputer Mira, are presented.

  4. SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.

    PubMed

    Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou

    2015-11-01

    In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former preserves accurate spectral information of the Ms image, while the latter keeps sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity in the size of the output image at each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.

  5. Development and Validation of a Sensitive Method for Trace Nickel Determination by Slotted Quartz Tube Flame Atomic Absorption Spectrometry After Dispersive Liquid-Liquid Microextraction.

    PubMed

    Yolcu, Şükran Melda; Fırat, Merve; Chormey, Dotse Selali; Büyükpınar, Çağdaş; Turak, Fatma; Bakırdere, Sezgin

    2018-05-01

    In this study, dispersive liquid-liquid microextraction was systematically optimized for the preconcentration of nickel after forming a complex with diphenylcarbazone. The measurement output of the flame atomic absorption spectrometer was further enhanced by fitting a custom-cut slotted quartz tube to the flame burner head. The extraction method increased the amount of nickel reaching the flame, and the slotted quartz tube increased the residence time of nickel atoms in the flame to record higher absorbance. The two methods combined gave about a 90-fold enhancement in sensitivity over conventional flame atomic absorption spectrometry. The optimized method was applicable over a wide linear concentration range, and it gave a detection limit of 2.1 µg L⁻¹. Low relative standard deviations at the lowest concentration in the linear calibration plot indicated high precision for both the extraction process and the instrumental measurements. A coal fly ash standard reference material (SRM 1633c) was used to determine the accuracy of the method, and the experimental results were compatible with the certified value. Spiked recovery tests were also used to validate the applicability of the method.

  6. Optimization of dispersive liquid-phase microextraction based on solidified floating organic drop combined with high-performance liquid chromatography for the analysis of glucocorticoid residues in food.

    PubMed

    Huang, Yuan; Zheng, Zhiqun; Huang, Liying; Yao, Hong; Wu, Xiao Shan; Li, Shaoguang; Lin, Dandan

    2017-05-10

    A rapid, simple, cost-effective dispersive liquid-phase microextraction based on solidified floating organic drop (SFOD-LPME) was developed in this study. Along with high-performance liquid chromatography, we used the developed approach to determine and enrich trace amounts of four glucocorticoids, namely prednisone, betamethasone, dexamethasone, and cortisone acetate, in animal-derived food. We also investigated and optimized several important parameters that influenced the extraction efficiency of SFOD-LPME. These parameters include the extractant species, volumes of extraction and dispersant solvents, sodium chloride addition, sample pH, extraction time and temperature, and stirring rate. Under optimum experimental conditions, the calibration graph exhibited linearity over the range of 1.2-200.0 ng/mL for the four analytes, with good linearity (r²: 0.9990-0.9999). The enrichment factor was 142-276, and the detection limits were 0.39-0.46 ng/mL (0.078-0.23 μg/kg). This method was successfully applied to analyze actual food samples, and good spiked recoveries of 81.5%-114.3% were obtained. Copyright © 2017. Published by Elsevier B.V.

  7. Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots

    NASA Astrophysics Data System (ADS)

    Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs, each comprising a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, this method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by MPC. In this study, we propose a model predictive control considering the reachable ranges of wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. Thus, we formulate the MPC as a quadratic program with linear constraints for the otherwise nonlinear problem of longitudinal and lateral wheel position control, as sketched below. By the MPC optimization, the reference wheel positions are calculated, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
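
    A single-step sketch of the key idea, assuming the cvxpy modeling package: the wheel-position subproblem stays a QP because the reachable range enters only through linear (trapezoid) inequalities. The geometry and reference values are invented; the paper's full controller stacks such constraints over a prediction horizon.

```python
import numpy as np
import cvxpy as cp

# One step for a single wheel position (2D). The nonlinear reachable range
# of the wheel is approximated by a convex trapezoid, i.e. a set of linear
# inequalities A p <= b, so the problem remains a QP.
p = cp.Variable(2)                       # wheel position to optimize
p_ref = np.array([0.70, 0.20])           # reference position from the planner

# Hypothetical trapezoid (in the robot body frame): four half-planes.
A = np.array([[ 0.0, -1.0],              # p_y >= 0.05
              [ 0.0,  1.0],              # p_y <= 0.35
              [-2.0,  1.0],              # left slanted edge
              [ 2.0,  1.0]])             # right slanted edge
b = np.array([-0.05, 0.35, 0.30, 1.40])

prob = cp.Problem(cp.Minimize(cp.sum_squares(p - p_ref)), [A @ p <= b])
prob.solve()
print("reference:", p_ref, "-> reachable optimum:", np.round(p.value, 3))
```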

  8. Effects of Combined Surface and In-Depth Absorption on Ignition of PMMA

    PubMed Central

    Gong, Junhui; Chen, Yixuan; Li, Jing; Jiang, Juncheng; Wang, Zhirong; Wang, Jinghong

    2016-01-01

    A one-dimensional numerical model and theoretical analysis involving both surface and in-depth radiative heat flux absorption are utilized to investigate the influence of their combination on the ignition of PMMA (polymethyl methacrylate). Ignition time, transient temperature in the solid, and the optimized combination of these two absorption modes for black and clear PMMA are examined to understand the ignition mechanism. Based on the comparison, it is found that the selection of constant or variable thermal parameters of PMMA barely affects the simulated ignition time. The linear relation between t_ig^(-0.5) and heat flux no longer holds at high heat flux. Both the analytical and numerical models underestimate the surface temperature and overestimate the temperature in the solid beneath the heat penetration layer for pure in-depth absorption. Unlike the surface absorption case, the peak temperature lies in the vicinity of the surface, but not on the surface, for in-depth absorption. The numerical model predicts the ignition time better than the analytical model owing to the more reasonable ignition criterion selected. The surface temperature increases with increasing incident heat flux. Furthermore, it also increases with the fraction of surface absorption and the radiative extinction coefficient for a fixed heat flux. Finally, the combination is optimized with respect to ignition time, temperature distribution in the solid, and mass loss rate. PMID:28773940

  10. Simultaneous extraction, identification and quantification of phenolic compounds in Eclipta prostrata using microwave-assisted extraction combined with HPLC-DAD-ESI-MS/MS.

    PubMed

    Fang, Xinsheng; Wang, Jianhua; Hao, Jifu; Li, Xueke; Guo, Ning

    2015-12-01

    A simple and rapid method was developed using microwave-assisted extraction (MAE) combined with HPLC-DAD-ESI-MS/MS for the simultaneous extraction, identification, and quantification of phenolic compounds in Eclipta prostrata, a common herb and vegetable in China. The optimized MAE parameters were: 50% ethanol as solvent, microwave power 400 W, temperature 70 °C, liquid/solid ratio 30 mL/g, and extraction time 2 min. Compared to conventional extraction methods, the optimized MAE avoided degradation of the phenolic compounds and obtained the highest yields of all components faster, with lower consumption of solvent and energy. Six phenolic acids, six flavonoid glycosides, and one coumarin were identified for the first time. The phenolic compounds were quantified by HPLC-DAD with good linearity, precision, and accuracy. The extract obtained by MAE showed significant antioxidant activity. The proposed method provides a valuable and green analytical methodology for the investigation of phenolic components in natural plants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
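
    The sketch below substitutes a plain (mu, lambda) evolution strategy for CMA-ES to keep the example self-contained; it shows the joint encoding of a real-valued spatial filter and a feature-selection mask, evaluated by classification error on synthetic band-power features. All dimensions and effect sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_feat, n_trials = 8, 12, 300

# Synthetic band-power features per channel (n_trials x n_ch x n_feat)
# with a weak class-dependent pattern in a few channel/feature pairs.
y = rng.integers(0, 2, n_trials)
X = rng.normal(0, 1, (n_trials, n_ch, n_feat))
X[:, 2, 3] += 0.8 * y
X[:, 5, 7] += 0.8 * y

def error(sol):
    """Classification error for one EA individual.

    The individual encodes a real-valued spatial filter (first n_ch genes)
    and a feature-selection mask (remaining genes, thresholded at 0)."""
    w, mask = sol[:n_ch], sol[n_ch:] > 0
    if not mask.any():
        return 1.0
    feats = np.tensordot(X, w, axes=([1], [0]))[:, mask]  # spatially filtered
    # Nearest-class-mean classifier as a cheap fitness proxy.
    m0, m1 = feats[y == 0].mean(0), feats[y == 1].mean(0)
    pred = (np.linalg.norm(feats - m1, axis=1)
            < np.linalg.norm(feats - m0, axis=1)).astype(int)
    return np.mean(pred != y)

# Simple (mu, lambda) evolution strategy over the joint encoding.
mu, lam, sigma = 10, 40, 0.3
pop = rng.normal(0, 1, (lam, n_ch + n_feat))
for gen in range(50):
    fit = np.array([error(s) for s in pop])
    parents = pop[np.argsort(fit)[:mu]]
    pop = parents[rng.integers(0, mu, lam)] \
        + sigma * rng.normal(0, 1, (lam, n_ch + n_feat))
    sigma *= 0.97                                   # slow step-size decay
print("best training error:", min(error(s) for s in pop))
```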

  12. Modeling antibiotic treatment in hospitals: A systematic approach shows benefits of combination therapy over cycling, mixing, and mono-drug therapies.

    PubMed

    Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian

    2017-09-01

    Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture of when which strategy is optimal, and why. Extending earlier work by others and by us, we present a mathematical model capturing treatment strategies using two drugs, i.e., the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions governing the success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. Using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining the success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas with low biological plausibility, as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and a cost of double resistance considerably smaller than the sum of the costs of single resistance.

  13. Optimal blood glucose control in diabetes mellitus treatment using dynamic programming based on Ackerman’s linear model

    NASA Astrophysics Data System (ADS)

    Pradanti, Paskalia; Hartono

    2018-03-01

    Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of a diabetic patient is represented by Ackerman's linear model. The problem is then solved using a dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman's linear model solves the problem quite well.
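
    For a discrete-time linear model with a quadratic (Lagrange-form) performance index, the dynamic programming solution is the backward Riccati recursion. The sketch below applies it to an Ackerman-type two-state glucose/hormone model; the parameter values are illustrative, not clinical.

```python
import numpy as np

# Discretized Ackerman-type linear glucose/hormone model (illustrative
# parameter values, not clinical ones):
#   g[k+1] = g[k] + dt*(-m1*g[k] - m2*h[k])
#   h[k+1] = h[k] + dt*(-m3*h[k] + m4*g[k] + u[k])
dt, m1, m2, m3, m4 = 0.1, 0.03, 0.02, 0.05, 0.01
A = np.array([[1 - dt * m1, -dt * m2],
              [dt * m4, 1 - dt * m3]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.0])      # penalize glucose deviation from basal level
R = np.array([[0.1]])        # penalize insulin injection effort
N = 200                      # horizon steps

# Dynamic programming: backward Riccati recursion for the finite-horizon
# LQ problem with performance index sum(x'Qx + u'Ru).
P, K = Q.copy(), [None] * N
for k in reversed(range(N)):
    S = R + B.T @ P @ B
    K[k] = np.linalg.solve(S, B.T @ P @ A)      # optimal feedback gain
    P = Q + A.T @ P @ (A - B @ K[k])

# Simulate from an elevated glucose deviation of 100 mg/dL.
x = np.array([[100.0], [0.0]])
for k in range(N):
    u = -K[k] @ x                                # optimal insulin rate
    x = A @ x + B @ u
print("final glucose deviation:", float(x[0, 0]), "mg/dL")
```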

  14. Enriched Imperialist Competitive Algorithm for system identification of magneto-rheological dampers

    NASA Astrophysics Data System (ADS)

    Talatahari, Siamak; Rahbari, Nima Mohajer

    2015-10-01

    In the current research, the imperialist competitive algorithm is dramatically enhanced and a new optimization method, dubbed the Enriched Imperialist Competitive Algorithm (EICA), is introduced to deal with highly non-linear optimization problems. To examine its functionality and efficacy closely, the proposed metaheuristic optimization approach is employed for the parameter identification of two different types of hysteretic Bouc-Wen models simulating the non-linear behavior of MR dampers. Two types of experimental data are used in the optimization problems to examine the robustness of the proposed EICA. The obtained results demonstrate the high adaptability of EICA to such non-linear and hysteretic identification problems.

  15. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
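
    A minimal sketch of the quasi-optimality rule on a Tikhonov-regularized toy problem is given below; for brevity it applies the rule to the full regularized solution rather than to a smoothed linear functional of it, and all problem data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mildly ill-posed toy problem: smooth convolution-like kernel, noisy data.
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-50 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2 * np.pi * t)
y = A @ x_true + 1e-3 * rng.normal(size=n)

def tikhonov(alpha):
    """Tikhonov-regularized solution x_alpha = (A'A + alpha I)^-1 A'y."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Quasi-optimality: over a geometric grid alpha_k = alpha_0 * q^k, pick the
# alpha minimizing the increment ||x_{alpha_{k+1}} - x_{alpha_k}||.
alphas = 1e-1 * 0.7 ** np.arange(40)
xs = [tikhonov(a) for a in alphas]
jumps = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
k_star = int(np.argmin(jumps))
print("quasi-optimal alpha:", alphas[k_star])
print("relative error:",
      np.linalg.norm(xs[k_star] - x_true) / np.linalg.norm(x_true))
```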

  16. Optimal observables for multiparameter seismic tomography

    NASA Astrophysics Data System (ADS)

    Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner

    2014-08-01

    We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameters classes, as well as to 3-D heterogeneous earth models.

  17. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.

  18. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    NASA Astrophysics Data System (ADS)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% mass saving is expected when using active battery/supercapacitor hybridization instead of a single-battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution, as sketched below.
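
    The structure of such a sizing problem can be sketched as a two-variable LP: minimize total source mass subject to the mission's energy and peak-power coverage constraints. The specific energy and power densities below are generic placeholders, not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [m_batt, m_sc] (kg). Specific energy/power values
# are illustrative placeholders, not data from the paper.
e_batt, p_batt = 150.0, 300.0     # Wh/kg, W/kg for the battery
e_sc, p_sc = 5.0, 4000.0          # Wh/kg, W/kg for the supercapacitors

# Mission profile requirements: total energy and peak power to be covered
# jointly by the two sources (active hybridization splits the demand).
E_req, P_req = 2000.0, 30000.0    # Wh, W

c = np.ones(2)                                       # minimize total mass
A_ub = -np.array([[e_batt, e_sc],                    # energy coverage
                  [p_batt, p_sc]])                   # peak-power coverage
b_ub = -np.array([E_req, P_req])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
m_batt, m_sc = res.x
print(f"battery {m_batt:.1f} kg + supercaps {m_sc:.1f} kg "
      f"= {res.fun:.1f} kg total")

# Single-battery baseline: sized by whichever requirement dominates.
m_single = max(E_req / e_batt, P_req / p_batt)
print(f"single-battery solution: {m_single:.1f} kg "
      f"({100 * (1 - res.fun / m_single):.0f}% saved)")
```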

  19. Research on NC laser combined cutting optimization model of sheet metal parts

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    This paper studies the optimization problem of NC laser combined cutting of sheet metal parts. The problem comprises two subproblems: combined packing optimization and combined cutting-path optimization. For combined packing optimization, the method of “genetic algorithm + gravity center NFP + geometric transformation” was used to optimize the packing of the sheet metal parts. For combined cutting-path optimization, a mathematical model of cutting-path optimization was established based on the cutting constraint rules of internal-contour priority and cross cutting. The model plays an important role in the optimization calculation of NC laser combined cutting.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S. M.; Kim, K. Y.

    The printed circuit heat exchanger (PCHE) has recently been considered as a recuperator for the high-temperature gas-cooled reactor. In this work, the zigzag channels of a PCHE have been optimized by using three-dimensional Reynolds-Averaged Navier-Stokes (RANS) analysis and a response surface approximation (RSA) modeling technique to enhance thermal-hydraulic performance. The shear stress transport turbulence model is used as the turbulence closure. The objective function is defined as a linear combination of functions related to the heat transfer and the friction loss of the PCHE, respectively. Three geometric design variables, viz., the ratio of the fillet radius to the hydraulic diameter of the channels, the ratio of the wavelength to the hydraulic diameter, and the ratio of the wave height to the hydraulic diameter, are used for the optimization. Design points are selected through Latin hypercube sampling. The optimal design is determined through the RSA model, which uses RANS-derived calculations at the design points. The results show that the optimum shape enhances the thermal-hydraulic performance considerably compared with a reference shape. (authors)

  1. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem, where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
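
    The core computation can be sketched by scoring each candidate sensor suite with the steady-state Kalman estimation error obtained from the filtering Riccati equation; the toy model below covers only the sensor-suite part of the methodology (not tuning-parameter selection), and all matrices are invented.

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy linear engine model: 3 health-related states, 4 candidate sensors.
A = np.diag([0.98, 0.95, 0.90])
C_all = np.array([[1.0, 0.2, 0.0],
                  [0.0, 1.0, 0.3],
                  [0.5, 0.0, 1.0],
                  [0.3, 0.3, 0.3]])
Q = 0.01 * np.eye(3)                            # process noise covariance
r_sensor = np.array([0.05, 0.02, 0.08, 0.04])   # per-sensor noise variances

def steadystate_mse(sensors):
    """Trace of the steady-state Kalman error covariance for a sensor set.

    Uses the estimation/control duality: the filtering Riccati equation is
    solved by solve_discrete_are(A.T, C.T, Q, R)."""
    C = C_all[list(sensors)]
    R = np.diag(r_sensor[list(sensors)])
    P = solve_discrete_are(A.T, C.T, Q, R)
    return np.trace(P)

# Exhaustive search over all 2-sensor suites (small enough to enumerate).
best = min(itertools.combinations(range(4), 2), key=steadystate_mse)
print("optimal 2-sensor suite:", best,
      "MSE:", round(steadystate_mse(best), 4))
```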

  2. Optimal combination of illusory and luminance-defined 3-D surfaces: A role for ambiguity.

    PubMed

    Hartle, Brittney; Wilcox, Laurie M; Murray, Richard F

    2018-04-01

    The shape of the illusory surface in stereoscopic Kanizsa figures is determined by the interpolation of depth from the luminance edges of adjacent inducing elements. Despite ambiguity in the position of illusory boundaries, observers reliably perceive a coherent three-dimensional (3-D) surface. However, this ambiguity may contribute additional uncertainty to the depth percept beyond what is expected from measurement noise alone. We evaluated the intrinsic ambiguity of illusory boundaries by using a cue-combination paradigm to measure the reliability of depth percepts elicited by stereoscopic illusory surfaces. We assessed the accuracy and precision of depth percepts using 3-D Kanizsa figures relative to luminance-defined surfaces. The location of the surface peak was defined by illusory boundaries, luminance-defined edges, or both. Accuracy and precision were assessed using a depth-discrimination paradigm. A maximum likelihood linear cue combination model was used to evaluate the relative contribution of illusory and luminance-defined signals to the perceived depth of the combined surface. Our analysis showed that the standard deviation of depth estimates was consistent with an optimal cue combination model, but the points of subjective equality indicated that observers consistently underweighted the contribution of illusory boundaries. This systematic underweighting may reflect a combination rule that attributes additional intrinsic ambiguity to the location of the illusory boundary. Although previous studies show that illusory and luminance-defined contours share many perceptual similarities, our model suggests that ambiguity plays a larger role in the perceptual representation of illusory contours than of luminance-defined contours.
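
    The maximum likelihood linear cue combination model referred to above reduces to inverse-variance weighting; the sketch below computes the predicted weights and combined reliability for hypothetical cue means and standard deviations.

```python
import numpy as np

def mle_combination(mu, sigma):
    """Reliability-weighted (maximum likelihood) combination of two cues.

    Each cue i gives an estimate mu[i] with standard deviation sigma[i];
    the optimal linear combination weights cues by inverse variance."""
    inv_var = 1.0 / np.asarray(sigma) ** 2
    w = inv_var / inv_var.sum()
    mu_c = np.dot(w, mu)                    # combined estimate
    sigma_c = np.sqrt(1.0 / inv_var.sum())  # combined (reduced) uncertainty
    return w, mu_c, sigma_c

# Hypothetical depth estimates (mm) from the two cues: illusory boundaries
# are noisier, so the model predicts they receive the smaller weight.
w, mu_c, sigma_c = mle_combination(mu=[12.0, 10.0], sigma=[3.0, 1.5])
print("weights (illusory, luminance):", np.round(w, 2))
print("combined estimate:", round(mu_c, 2), "mm,  sd:", round(sigma_c, 2), "mm")
```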

  3. Statistical model based iterative reconstruction in clinical CT systems. Part III. Task-based kV/mAs optimization for radiation dose reduction

    PubMed Central

    Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-01-01

    Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise ratio (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem: minimize the objective function, Dose(kV,mAs), under the constraint that the achievable detectability index d′(kV,mAs) is not lower than the prescribed value d′_Rx for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV,mAs) and d′(kV,mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination that achieves the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700, with a sampling interval of 50 mA) at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. Using the proposed method, the optimal kV and mA that minimized dose for the prescribed detectability level of d′_Rx = 16 were determined. As another example, the optimal kV and mA for an 8 mm hyperattenuating liver lesion detection task were also measured using the developed framework. Both in vivo animal and human subject studies were used to demonstrate how the developed framework can be applied to the clinical workflow. Results: For the first task, the optimal kV and mAs were measured to be 100 and 500, respectively, for FBP, which corresponded to a dose level of 24 mGy. In comparison, the optimal kV and mAs for MBIR were 80 and 150, respectively, which corresponded to a dose level of 4 mGy. The topographies of the iso-d′ map and the iso-CNR map were the same for FBP; thus, the d′- and CNR-based optimization methods generated the same results for FBP. However, the topographies of the iso-d′ and iso-CNR maps were significantly different for MBIR; the CNR-based method overestimated the performance of MBIR, predicting an overly aggressive dose reduction factor. For the second task, the developed framework generated the following optimization results: for FBP, kV = 140, mA = 350, dose = 37.5 mGy; for MBIR, kV = 120, mA = 250, dose = 18.8 mGy. Again, the CNR-based method overestimated the performance of MBIR. Results of the preliminary in vivo studies were consistent with those of the phantom experiments. Conclusions: A unified and task-driven kV/mAs optimization framework has been developed in this work. The framework is applicable to both linear and nonlinear CT systems, including those using the MBIR method. As expected, the developed framework reduces to the conventional CNR-based kV/mAs optimization frameworks if the system is linear. For MBIR-based nonlinear CT systems, however, the developed task-based kV/mAs optimization framework is needed to achieve the maximal dose reduction while maintaining the desired diagnostic performance. PMID:26328971
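    On measured kV-mAs grids, the constrained optimization at the core of the framework (minimize dose subject to d′ ≥ d′_Rx) reduces to a masked search. A schematic sketch, with synthetic dose and detectability surfaces standing in for the measured maps:

```python
# Masked grid search for the minimum-dose setting meeting a prescribed d'.
# The dose and d' surfaces below are synthetic stand-ins, not measured data.
import numpy as np

kv_levels = np.array([80, 100, 120, 140])
ma_levels = np.arange(50, 701, 50)
KV, MA = np.meshgrid(kv_levels, ma_levels, indexing="ij")

dose = 1e-5 * KV ** 2 * MA            # synthetic dose surface (mGy)
dprime = 0.9 * np.sqrt(dose) * 4.0    # synthetic detectability surface

d_required = 16.0
feasible = dprime >= d_required       # constraint: meet prescribed detectability
dose_masked = np.where(feasible, dose, np.inf)
i, j = np.unravel_index(np.argmin(dose_masked), dose.shape)
print(f"optimal setting: {KV[i, j]} kV, {MA[i, j]} mA, dose {dose[i, j]:.1f} mGy")
```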

  4. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
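    As a loose illustration of a recurrent dynamics with a hard-limiting activation applied to a nonsmooth objective, the sketch below integrates a sign-activation flow for a least-absolute-deviation problem with forward Euler. The gain and step size are assumptions, and the paper's constrained network is more elaborate than this unconstrained toy.

```python
# Sign-activation flow dx/dt = -sigma * A^T sign(Ax - b) driving x toward the
# least-absolute-deviation solution; gain and step are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

sigma, dt = 5.0, 1e-3                 # gain parameter and Euler step (assumed)
x = np.zeros(3)
for _ in range(20000):
    x -= dt * sigma * A.T @ np.sign(A @ x - b)   # hard-limiting activation

print("recovered:", np.round(x, 3))   # should approach x_true
```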

  5. Optimal mode transformations for linear-optical cluster-state generation

    DOE PAGES

    Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...

    2015-06-15

    In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate, with a success rate of 1/9 per operation. The question of the optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient, and we show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entangling of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.

  6. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

    Assessing disease activity is a prerequisite for an adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk of complications due to the acquisition of biopsies and results in a delay of diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled technique that allows the real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurement, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels based on a linear classifier. Based on the automated prediction, the time to diagnosis is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.

  7. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA), combined separately with supervised linear discriminant analysis (LDA) and with unsupervised hierarchical clustering analysis (HCA), was used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining samples (100 negative and 200 positive, 300 samples in total) were used as the modeling set, and the modeling set was then subdivided into calibration (50 negative and 100 positive, 150 samples) and prediction sets (50 negative and 100 positive, 150 samples) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, and a total of 264 smoothing modes were used for screening. The pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to the model effect. Based on all divisions of the calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed on the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved. For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%, respectively. Vis-NIR spectroscopic pattern recognition combined with SG smoothing can be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
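    A compact sketch of an SG-PCA-LDA pipeline using common library routines; the spectra are simulated, and the smoothing parameters are illustrative rather than the 264 screened modes of the study.

```python
# Savitzky-Golay smoothing -> PCA scores -> LDA classification on simulated
# spectra; window/polyorder and data are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n, n_wavelengths = 300, 400
labels = rng.integers(0, 2, n)                      # 0 = negative, 1 = positive
spectra = rng.normal(size=(n, n_wavelengths)) \
        + labels[:, None] * np.linspace(0, 0.5, n_wavelengths)

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
scores = PCA(n_components=3).fit_transform(smoothed)  # first three PCs
clf = LinearDiscriminantAnalysis().fit(scores[:200], labels[:200])
print("validation accuracy:", clf.score(scores[200:], labels[200:]))
```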

  8. Robust energy harvesting from walking vibrations by means of nonlinear cantilever beams

    NASA Astrophysics Data System (ADS)

    Kluger, Jocelyn M.; Sapsis, Themistoklis P.; Slocum, Alexander H.

    2015-04-01

    In the present work we examine how mechanical nonlinearity can be appropriately utilized to achieve strong robustness of performance in an energy harvesting setting. More specifically, for energy harvesting applications, a great challenge is the uncertain character of the excitation. The combination of this uncertainty with the narrow range of good performance for linear oscillators creates the need for more robust designs that adapt to a wider range of excitation signals. A typical application of this kind is energy harvesting from walking vibrations. Depending on the particular characteristics of the person that walks as well as on the pace of walking, the excitation signal obtains completely different forms. In the present work we study a nonlinear spring mechanism that is composed of a cantilever wrapping around a curved surface as it deflects. While for the free cantilever, the force acting on the free tip depends linearly on the tip displacement, the utilization of a contact surface with the appropriate distribution of curvature leads to essentially nonlinear dependence between the tip displacement and the acting force. The studied nonlinear mechanism has favorable mechanical properties such as low frictional losses, minimal moving parts, and a rugged design that can withstand excessive loads. Through numerical simulations we illustrate that by utilizing this essentially nonlinear element in a 2 degrees-of-freedom (DOF) system, we obtain strongly nonlinear energy transfers between the modes of the system. We illustrate that this nonlinear behavior is associated with strong robustness over three radically different excitation signals that correspond to different walking paces. To validate the strong robustness properties of the 2DOF nonlinear system, we perform a direct parameter optimization for 1DOF and 2DOF linear systems as well as for a class of 1DOF and 2DOF systems with nonlinear springs similar to that of the cubic spring that are physically realized by the cantilever-surface mechanism. The optimization results show that the 2DOF nonlinear system presents the best average performance when the excitation signals have three possible forms. Moreover, we observe that while for the linear systems the optimal performance is obtained for small values of the electromagnetic damping, for the 2DOF nonlinear system optimal performance is achieved for large values of damping. This feature is of particular importance for the system's robustness to parasitic damping.

  9. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, and the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of the nearest neighbors obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and this weighted linear combination can be used to perform robust classification. Experiments conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), the AR face database (created by Aleix Martinez and Robert Benavente at the Computer Vision Center at U.A.B.), and the USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
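    A rough sketch of the progressive elimination idea: the subject contributing least to a least-squares representation of the test sample is dropped each round, and the survivors' weighted combination supports classification. The data, dimensions, and stopping rule are invented, and the DCT similarity step is omitted.

```python
# Progressive subject elimination via least-squares contributions (toy data).
import numpy as np

rng = np.random.default_rng(11)
n_subj, per_subj, dim = 5, 8, 30
D = rng.normal(size=(n_subj * per_subj, dim))     # training samples as rows
subj = np.repeat(np.arange(n_subj), per_subj)
y = D[3] + 0.1 * rng.normal(size=dim)             # test sample near subject 0

def contributions(keep):
    """Per-subject norm of its part of the least-squares representation of y."""
    a = np.linalg.lstsq(D[keep].T, y, rcond=None)[0]
    full = np.zeros(len(D))
    full[keep] = a
    return {s: np.linalg.norm(D[subj == s].T @ full[subj == s])
            for s in np.unique(subj[keep])}

keep = np.ones(len(D), dtype=bool)
for _ in range(n_subj - 2):                       # progressively drop subjects
    c = contributions(keep)
    weakest = min(c, key=c.get)
    keep[subj == weakest] = False

c = contributions(keep)
print("predicted subject:", max(c, key=c.get))    # should be subject 0
```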

  10. Stiffness optimization of non-linear elastic structures

    DOE PAGES

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    2017-11-13

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance (i.e., secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function, it is shown that although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels, the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  11. Stiffness optimization of non-linear elastic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance (i.e., secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function, it is shown that although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels, the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  12. Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation

    NASA Astrophysics Data System (ADS)

    Durlofsky, L. J.; He, J.; Jin, L. Z.

    2014-12-01

    A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
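    A minimal sketch of the POD reduction step at the heart of POD-TPWL: snapshots from training runs are compressed with an SVD, and a saved Jacobian is projected so each new time step becomes a small linear solve. The matrices below are random placeholders for saved training states.

```python
# POD basis from snapshots, then one reduced linear step (toy matrices).
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_snapshots, r = 1000, 60, 10
snapshots = rng.normal(size=(n_cells, n_snapshots))   # saved training states

Phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]  # POD basis

J = rng.normal(size=(n_cells, n_cells)) * 1e-3 + np.eye(n_cells)  # saved Jacobian
residual = rng.normal(size=n_cells)                   # residual at a saved state

Jr = Phi.T @ J @ Phi          # r x r reduced Jacobian (precomputable offline)
dz = np.linalg.solve(Jr, -Phi.T @ residual)           # low-dimensional solve
dx = Phi @ dz                                         # back to full state space
print("reduced system size:", Jr.shape)
```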

  13. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE PAGES

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    2016-12-07

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.

  14. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.

  15. Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.; Nickerson, J.

    1989-01-01

    The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
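    The LQG workflow summarized above (optimal state-feedback gain plus Kalman filter, with the weight matrices as the design knobs that generate the families of gain vectors) can be sketched in a few lines using standard Riccati solvers; the plant model below is a toy, not the 70-m antenna model.

```python
# LQR and Kalman gains from continuous algebraic Riccati equations (toy plant).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy servo dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])   # LQR weights (tunable)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                  # optimal state-feedback gain

W, V = np.eye(2) * 0.1, np.array([[0.01]])       # noise covariances (tunable)
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)                   # Kalman filter gain

print("LQR gain:", K, "\nKalman gain:", L.ravel())
```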

  16. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10-, and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were the dose and the number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
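    The weighted total-FIM construction can be illustrated schematically: per-state FIMs are combined with weights and scored by log-determinant, the D-optimality criterion. The FIMs below are random symmetric placeholders, not the sleep model's matrices.

```python
# Weighted sum of per-state FIMs scored by the D-criterion (log-determinant).
import numpy as np

rng = np.random.default_rng(5)
def random_fim(p=6):
    M = rng.normal(size=(p, p))
    return M @ M.T                      # symmetric positive semi-definite

fim_awake, fim_asleep = random_fim(), random_fim()
w_awake = 0.4                           # e.g. average probability of being awake
fim_total = w_awake * fim_awake + (1 - w_awake) * fim_asleep

d_criterion = np.linalg.slogdet(fim_total)[1]   # log det, the D-optimality score
print("log-det of total FIM:", round(d_criterion, 3))
```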

  17. A Linear Electromagnetic Piston Pump

    NASA Astrophysics Data System (ADS)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time-domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit calculates the optimized motor force within 10% of FEA in less than 1/1000 of the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned to account for additional unmodeled mechanical friction. Optimized results suggest that a 400% improvement over state-of-the-art power density is attainable with net efficiency as high as 85%. This demonstrates that a linear electromagnetic piston pump has the potential to serve as a more compact and efficient supply of fluid power for the human scale.

  18. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
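    The assimilation step described here, blending the wave-model forecast with the altimetric observation using their respective error covariances, is the standard Kalman update; a toy sketch with invented state and covariance values:

```python
# One Kalman update: forecast and observation blended by error covariances.
import numpy as np

x_fcst = np.array([1.2, 0.3])          # forecast wave amplitude/phase state
P_fcst = np.diag([0.5, 0.2])           # forecast (model) error covariance
H = np.array([[1.0, 0.0]])             # observe sea level ~ amplitude only
R = np.array([[0.1]])                  # observation error covariance
y = np.array([1.0])                    # Geosat-like sea-level observation

S = H @ P_fcst @ H.T + R               # innovation covariance
K = P_fcst @ H.T @ np.linalg.inv(S)    # Kalman gain: optimal blending weights
x_anal = x_fcst + K @ (y - H @ x_fcst)
P_anal = (np.eye(2) - K @ H) @ P_fcst

print("analysis state:", x_anal)
```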

  19. Application of modern control theory to scheduling and path-stretching maneuvers of aircraft in the near terminal area

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1974-01-01

    A design concept for the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker between successive landings, to allow for takeoffs, and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.

  20. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
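    The RDA regularization at the heart of the method interpolates between QDA and LDA by shrinking each class covariance toward the pooled covariance (λ) and then toward a scaled identity (γ). In the paper these two parameters are tuned by PSO; the sketch below fixes them for illustration, with placeholder data.

```python
# Two-stage RDA covariance shrinkage; lambda/gamma fixed here, not PSO-tuned.
import numpy as np

def rda_covariance(S_class, S_pooled, lam, gamma):
    """Blend class covariance toward pooled (lam), then toward identity (gamma)."""
    S = (1 - lam) * S_class + lam * S_pooled
    return (1 - gamma) * S + gamma * (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 5))                 # small-sample data for one class
S_class = np.cov(X, rowvar=False)
S_pooled = np.eye(5)                         # placeholder pooled covariance

S_reg = rda_covariance(S_class, S_pooled, lam=0.5, gamma=0.1)
print("conditioning improved:", np.linalg.cond(S_reg) < np.linalg.cond(S_class))
```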

  1. Optimization of Bread Enriched with Garcinia mangostana Pericarp Powder

    NASA Astrophysics Data System (ADS)

    Ibrahim, U. K.; Salleh, R. Mohd; Maqsood-ul-Hague, S. N. S.; Hashib, S. Abd; Karim, S. F. Abd

    2018-05-01

    The aim of the present work is to optimize the formulation of bread enriched with Garcinia mangostana pericarp powder together with the baking process conditions. The independent variables used were baking time (15-30 minutes), baking temperature (180-220°C), and pericarp powder concentration (0.5-2.0%). Physical and chemical properties of the bread samples, such as antioxidant activity, phenolic content, moisture, and colour parameters, were studied. Bread dough without fortification with pericarp powder was used as the control. The data obtained were analyzed by multiple regression, and significant linear and quadratic models with variable interactions were used. In conclusion, the optimum baking conditions were found to be a baking temperature of 213°C, a baking time of 23 minutes, and the addition of 0.87% Garcinia mangostana pericarp powder to the bread formulation.

  2. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. 3: A stochastic rain fade control algorithm for satellite link power via nonlinear Markov filtering theory

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1991-01-01

    The dynamic and composite nature of propagation impairments incurred on Earth-space communications links at frequencies in and above the 30/20 GHz Ka band (i.e., rain attenuation, cloud and/or clear-air scintillation, etc.), combined with the need to counter such degradations after the small link margins have been exceeded, necessitates the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.

  3. Mean First Passage Time and Stochastic Resonance in a Transcriptional Regulatory System with Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning

    The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR in the heavy-tailed noise case is smaller, while the optimal noise intensity in the bounded noise case is larger. These observations can be explained by the heavy-tailed noise easing random transitions.

  4. Control of mechanical systems by the mixed "time and expenditure" criterion

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    The optimal controlled motion of a mechanical system, described by a linear system of ODEs with constant coefficients and piecewise constant control components, is considered. The number of control switching points and the heights of the control steps are treated as preset. The optimized functional is a combination of the classical time criterion and an "expenditure" criterion equal to the total area of all steps of all control components. In the absence of control, the solution of the system is equal to the sum of components (frequency components) corresponding to the different eigenvalues of the matrix of the ODE system. Admissible controls are those that drive the previously chosen frequency components of the solution to zero at a time moment that is not predetermined. An algorithm for finding the control switching points, based on the necessary minimum conditions for the mixed criterion, is proposed.

  5. Linear quadratic regulators with eigenvalue placement in a specified region

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Dib, Hani M.; Ganesan, Sekar

    1988-01-01

    A linear optimal quadratic regulator is developed for optimally placing the closed-loop poles of multivariable continuous-time systems within the common region of an open sector, bounded by lines inclined at ±π/2k (k = 2 or 3) from the negative real axis with a sector angle of π/2 or less, and the left-hand side of a line parallel to the imaginary axis in the complex s-plane. The design method is mainly based on the solution of a linear matrix Liapunov equation, and the resultant closed-loop system with its eigenvalues in the desired region is optimal with respect to a quadratic performance index.

  6. Parametric optimal control of uncertain systems under an optimistic value criterion

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

    It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.

  7. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  8. An MILP-based cross-layer optimization for a multi-reader arbitration in the UHF RFID system.

    PubMed

    Choi, Jinchul; Lee, Chaewoo

    2011-01-01

    In RFID systems, the performance of each reader, such as interrogation range and tag recognition rate, may suffer from interference from other readers. Since reader interference can be mitigated by output signal power control and spectral and/or temporal separation among readers, the system performance depends on how the various reader arbitration metrics, such as time, frequency, and output power, are adapted to the system environment. However, the complexity and difficulty of the optimization problem increase with the variety of the arbitration metrics. Thus, most proposals in previous studies have primarily aimed to prevent reader collision, considering only one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control, not only to solve the reader interference problem but also to achieve multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design.

  9. An MILP-Based Cross-Layer Optimization for a Multi-Reader Arbitration in the UHF RFID System

    PubMed Central

    Choi, Jinchul; Lee, Chaewoo

    2011-01-01

    In RFID systems, the performance of each reader, such as interrogation range and tag recognition rate, may suffer from interference from other readers. Since reader interference can be mitigated by output signal power control and spectral and/or temporal separation among readers, the system performance depends on how the various reader arbitration metrics, such as time, frequency, and output power, are adapted to the system environment. However, the complexity and difficulty of the optimization problem increase with the variety of the arbitration metrics. Thus, most proposals in previous studies have primarily aimed to prevent reader collision, considering only one or two arbitration metrics. In this paper, we propose a novel cross-layer optimization design based on the concept of combining time division, frequency division, and power control, not only to solve the reader interference problem but also to achieve multiple objectives such as minimum interrogation delay, maximum reader utilization, and energy efficiency. Based on the priority of the multiple objectives, our cross-layer design optimizes the system sequentially by means of mixed-integer linear programming. In spite of the multi-stage optimization, the optimization design is formulated as a concise single mathematical form by properly assigning a weight to each objective. Numerical results demonstrate the effectiveness of the proposed optimization design. PMID:22163743
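    As a schematic of folding several prioritized objectives into one weighted MILP, the toy below uses SciPy's generic MILP interface; the variables, weights, and constraints are invented and far simpler than the RFID formulation.

```python
# Weighted multi-objective MILP toy: minimize a priority-weighted sum of
# metrics with one binary decision. All coefficients are illustrative.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# x = [delay, utilization, energy, assign_reader_to_slot (binary)]
w = np.array([10.0, -5.0, 1.0, 0.0])        # priority weights on the objectives
A = np.array([[1.0, 1.0, 0.0, -3.0],        # toy coupling of the metrics to the
              [0.0, 1.0, 1.0,  1.0]])       # binary scheduling decision
constraints = LinearConstraint(A, lb=[0.0, 1.0], ub=[5.0, 4.0])
integrality = np.array([0, 0, 0, 1])        # last variable is binary
bounds = Bounds(lb=[0, 0, 0, 0], ub=[10, 1, 10, 1])

res = milp(c=w, constraints=constraints, integrality=integrality, bounds=bounds)
print("solution:", res.x, "objective:", res.fun)
```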

  10. Pseudo-spectral control of a novel oscillating surge wave energy converter in regular waves for power optimization including load reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M.; Yu, Yi -Hsiang; Wright, Alan D.

    The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.

  11. Pseudo-spectral control of a novel oscillating surge wave energy converter in regular waves for power optimization including load reduction

    DOE PAGES

    Tom, Nathan M.; Yu, Yi -Hsiang; Wright, Alan D.; ...

    2017-04-18

    The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.
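    A heavily simplified frequency-domain sketch of the competing-objectives idea: choose a power-take-off damping that maximizes time-averaged power minus a weighted load term, under an assumed sinusoidal response. All coefficients, and the crude load surrogate itself, are invented; the paper's pseudo-spectral formulation optimizes the full torque trajectory instead.

```python
# 1-DOF oscillator: trade off mean absorbed power against a weighted load term.
import numpy as np
from scipy.optimize import minimize_scalar

omega, F_exc = 0.8, 1.0e5              # wave frequency (rad/s), excitation
I, K, B_rad = 5.0e5, 2.0e5, 1.0e5      # inertia, stiffness, radiation damping
penalty = 2.0e-6                       # weight on the load term (tunable)

def objective(B_pto):
    Z = B_rad + B_pto + 1j * (omega * I - K / omega)   # mechanical impedance
    v = F_exc / Z                                      # complex velocity amplitude
    power = 0.5 * B_pto * abs(v) ** 2                  # time-averaged power
    surge_load = F_exc * abs(v) / omega                # crude load surrogate
    return -(power - penalty * surge_load ** 2)        # maximize -> minimize

res = minimize_scalar(objective, bounds=(1e3, 1e7), method="bounded")
print(f"optimal PTO damping: {res.x:.3e}")
```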

  12. Unification theory of optimal life histories and linear demographic models in internal stochasticity.

    PubMed

    Oizumi, Ryo

    2014-01-01

    The life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by randomness in each individual life history, such as randomness in food intake, genetic character, and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect the population growth rate negatively. A recent theoretical study using a path-integral formulation in structured linear demographic models has shown that internal stochasticity can affect the population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on the population growth rate, the fittest organism exerts optimal control of its life history as affected by the stochasticity of its habitat. The study of this control is known as the optimal life schedule problem. In order to analyze optimal control under internal stochasticity, we need to make use of "stochastic control theory" in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-state linear demographic models and stochastic control theory via several mathematical formulations, such as the path integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a particular case. Our study shows that this unification theory can address risk hedges of life history in general age-state linear demographic models.

  13. Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity

    PubMed Central

    Oizumi, Ryo

    2014-01-01

    The life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by randomness in each individual life history, such as randomness in food intake, genetic character, and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect the population growth rate negatively. A recent theoretical study using a path-integral formulation in structured linear demographic models has shown that internal stochasticity can affect the population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on the population growth rate, the fittest organism exerts optimal control of its life history as affected by the stochasticity of its habitat. The study of this control is known as the optimal life schedule problem. In order to analyze optimal control under internal stochasticity, we need to make use of "stochastic control theory" in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-state linear demographic models and stochastic control theory via several mathematical formulations, such as the path integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a particular case. Our study shows that this unification theory can address risk hedges of life history in general age-state linear demographic models. PMID:24945258

  14. Linear quadratic regulators with eigenvalue placement in a horizontal strip

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Dib, Hani M.; Ganesan, Sekar

    1987-01-01

    A method for optimally shifting the imaginary parts of the open-loop poles of a multivariable control system to the desirable closed-loop locations is presented. The optimal solution with respect to a quadratic performance index is obtained by solving a linear matrix Liapunov equation.

  15. Quadratic correlation filters for optical correlators

    NASA Astrophysics Data System (ADS)

    Mahalanobis, Abhijit; Muise, Robert R.; Vijaya Kumar, Bhagavatula V. K.

    2003-08-01

    Linear correlation filters have been implemented in optical correlators and successfully used for a variety of applications. The output of an optical correlator is usually sensed using a square-law device (such as a CCD array), which forces the output to be the squared magnitude of the desired correlation. It is, however, not traditional practice to factor the effect of the square-law detector into the design of the linear correlation filters. In fact, the input-output relationship of an optical correlator is more accurately modeled as a quadratic operation than a linear operation. Quadratic correlation filters (QCFs) operate directly on the image data without the need for feature extraction or segmentation. In this sense, the QCFs retain the main advantages of conventional linear correlation filters while offering significant improvements in other respects. Not only is more processing required to detect peaks in the outputs of multiple linear filters, but choosing a winner among them is an error-prone task. In contrast, all channels in a QCF work together to optimize the same performance metric and produce a combined output that leads to considerable simplification of the post-processing. In this paper, we propose a novel approach to the design of quadratic correlation filters based on the Fukunaga-Koontz transform. Although quadratic filters are known to be optimal when the data are Gaussian, it is expected that they will perform as well as or better than linear filters in general. Preliminary performance results are provided that show that quadratic correlation filters perform better than their linear counterparts.
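    A sketch of a Fukunaga-Koontz construction: whitening the sum of the two class correlation matrices makes their eigenvectors complementary, so directions dominant for targets are, by construction, weakest for clutter, yielding a quadratic detection statistic. The patch data below are random placeholders with deliberately different class structure.

```python
# Fukunaga-Koontz transform on toy "target" vs "clutter" patch vectors.
import numpy as np

rng = np.random.default_rng(7)
scale_t = np.r_[np.full(8, 2.0), np.full(8, 0.5)]   # targets strong in dims 0-7
scale_c = np.r_[np.full(8, 0.5), np.full(8, 2.0)]   # clutter strong in dims 8-15
targets = rng.normal(size=(200, 16)) * scale_t      # flattened image patches
clutter = rng.normal(size=(200, 16)) * scale_c

R1 = targets.T @ targets / len(targets)             # class correlation matrices
R2 = clutter.T @ clutter / len(clutter)

evals, evecs = np.linalg.eigh(R1 + R2)
P = evecs @ np.diag(evals ** -0.5)                  # whitens R1 + R2 to identity

lam, V = np.linalg.eigh(P.T @ R1 @ P)               # eigenvalues lie in [0, 1]
F_t, F_c = P @ V[:, -4:], P @ V[:, :4]              # target- vs clutter-dominant
score = lambda x: np.sum((x @ F_t) ** 2) - np.sum((x @ F_c) ** 2)

print("target scored above clutter:", score(targets[0]) > score(clutter[0]))
```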

  16. Optimization of detectors for the ILC

    NASA Astrophysics Data System (ADS)

    Suehara, Taikan; ILD Group; SID Group

    2016-04-01

    The International Linear Collider (ILC) is a next-generation e+e- linear collider designed to explore the Higgs boson, beyond-Standard-Model physics, the top quark, and electroweak interactions with great precision. We are optimizing our two detectors, the International Large Detector (ILD) and the Silicon Detector (SiD), to maximize the physics reach expected at the ILC with reasonable detector cost and good reliability. Optimization studies on the vertex detectors, main trackers, and calorimeters are underway. We aim to conclude the optimization and establish final designs in a few years, finishing the detector TDR and proposal in reply to the expected "green sign" for the ILC project.

  17. TU-EF-204-03: Task-Based KV and MAs Optimization for Radiation Dose Reduction in CT: From FBP to Statistical Model-Based Iterative Reconstruction (MBIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez-Cardona, D; Li, K; Lubner, M G

    Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of the detectability index (d’) for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d’ values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that, for a prescribed detectability level d’, the optimal kV-mA can be determined from the crossing between the d’ contour and the dose contour that corresponds to the minimum dose. Results: Taking d’=16 as an example: the kV-mAs combinations on the measured iso-d’ line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where the values in parentheses are measured dose values. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by an NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC; Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.

  18. A comparative analysis of chaotic particle swarm optimizations for detecting single nucleotide polymorphism barcodes.

    PubMed

    Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong

    2016-10-01

    Evolutionary algorithms can overcome the computational limitations of statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset. Nine chaotic maps were used to improve the PSO results, and the searching ability of all the CPSO methods was compared. The XOR and ZZ disease models were used to compare all chaotic maps combined with the PSO method. Efficacy evaluations of the CPSO methods were based on statistical values from the chi-square test (χ²). The results showed that chaotic maps can improve the searching ability of the PSO method when populations are trapped in a local optimum. The minor allele frequency (MAF) analysis indicated that, amongst all CPSO methods, the highest χ² values across all numbers of SNPs, sample sizes, and datasets were obtained by the Sinai chaotic map combined with the PSO method. We used simple linear regression on the gbest values over all generations to compare all the methods. The Sinai chaotic map combined with the PSO method provided the highest β values (β≥0.32 in the XOR disease model and β≥0.04 in the ZZ disease model) and significant p-values (p<0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was found to effectively enhance the fitness values (χ²) of the PSO method, indicating that the Sinai chaotic map combined with the PSO method is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models.
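    A sketch of a chaotic PSO update in which the uniform random draws of the velocity update are replaced by a chaotic-map sequence; a logistic map is used here as a simple stand-in for the Sinai map that performed best in the study, and the fitness function and parameters are illustrative.

```python
# PSO with logistic-map "random" coefficients on a toy fitness function.
import numpy as np

rng = np.random.default_rng(8)
def fitness(x):                      # placeholder (chi-square statistic in paper)
    return np.sum(x ** 2, axis=1)

n, d = 20, 5
x = rng.uniform(-5, 5, (n, d))
v = np.zeros((n, d))
pbest = x.copy()
gbest = x[np.argmin(fitness(x))]
z = rng.uniform(0.1, 0.9, (n, d))    # chaotic state, one per draw

for _ in range(200):
    z = 4.0 * z * (1.0 - z)          # logistic map in its chaotic regime
    r1, r2 = z, 4.0 * z * (1.0 - z)  # chaotic coefficients replace rand()
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    better = fitness(x) < fitness(pbest)
    pbest[better] = x[better]
    gbest = pbest[np.argmin(fitness(pbest))]

print("best fitness:", fitness(gbest[None, :])[0])
```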

  19. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    PubMed

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, composed of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved spectroscopy (PRESS) sequence and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p<0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p<0.001). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p<0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG was not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG by linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentrations and high concentrations of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex post-processing. Copyright © 2017. Published by Elsevier GmbH.

  20. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed Central

    Rabow, A. A.; Scheraga, H. A.

    1996-01-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by (1) rigidly superposing one parent chain on the other, to make the relation between the Cartesian coordinates meaningful, and then (2) forming the child chains through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
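
    The two-step recombination, superpose then interpolate, can be sketched directly in coordinates. The following is a minimal illustration, assuming numpy arrays of C-alpha coordinates and a standard Kabsch superposition (the function names and the convex-combination weight are illustrative, not the paper's code):

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigidly superpose chain P onto chain Q (both n_atoms x 3); return the moved copy."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Pc @ R.T + Q.mean(0)

def cartesian_crossover(parent_a, parent_b, alpha=None, rng=None):
    """Child C-alpha trace as a convex combination of superposed parent coordinates."""
    rng = rng or np.random.default_rng()
    alpha = rng.uniform() if alpha is None else alpha
    b_on_a = kabsch_align(parent_b, parent_a)       # make coordinates comparable first
    return alpha * parent_a + (1.0 - alpha) * b_on_a
```

    Note that a plain convex combination does not preserve bond lengths, which is why the paper pairs the operator with a lattice fitting step that restores bond-length and bond-angle constraints.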

  1. The mean-square error optimal linear discriminant function and its application to incomplete data vectors

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1979-01-01

    In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.

  2. Aircraft adaptive learning control

    NASA Technical Reports Server (NTRS)

    Lee, P. S. T.; Vanlandingham, H. F.

    1979-01-01

    The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and measurement noise is developed.

  3. In situ magnetic compensation for potassium spin-exchange relaxation-free magnetometer considering probe beam pumping effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jiancheng; Wang, Tao, E-mail: wangtaowt@aspe.buaa.edu.cn; Quan, Wei

    2014-06-15

    A novel method to compensate the residual magnetic field for an atomic magnetometer employing two perpendicular polarized beams was demonstrated in this paper. The method can realize magnetic compensation in the case where the pumping rate of the probe beam cannot be ignored. In the experiment, the probe beam is nominally linearly polarized; however, it contains a residual circular component due to the imperfection of the polarizer, which leads to a pumping effect of the probe beam. A simulation of the probe beam's optical rotation and pumping rate was performed, and the wavelength of the probe beam was optimized to achieve the largest optical rotation. Although the circular component in the linearly polarized probe beam is small, the pumping rate of the probe beam is non-negligible at the optimized wavelength and, if ignored, would lead to inaccuracies in the magnetic field compensation. Therefore, the dynamic equation of spin evolution was solved by considering the pumping effect of the probe beam. Based on the quasi-static solution, a novel magnetic compensation method was proposed, which contains two main steps: (1) non-pumping compensation and (2) sequence compensation with a very specific sequence. After these two main steps, a three-axis in situ magnetic compensation was achieved. The compensation method is suitable for designing closed-loop spin-exchange relaxation-free magnetometers. By a combination of the magnetic compensation and the optimization, the magnetic field sensitivity was approximately 4 fT/Hz^(1/2), which was mainly dominated by the noise of the magnetic shield.

  4. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like the genetic algorithm (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  5. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level; the combination comprising the current sea level and the five previous values was found to be optimal. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian, and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient, and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error, and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose over all the prediction intervals.
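
    The MLR-based lag selection amounts to fitting a least-squares forecast for each candidate lag depth and keeping the depth at which the fit stops improving. A small sketch under that reading (the synthetic tidal series and the R-squared criterion are illustrative assumptions):

```python
import numpy as np

def mlr_r2(series, n_lags, horizon=1):
    """R^2 of a least-squares forecast of s[t+horizon] from s[t], ..., s[t-n_lags]."""
    s = np.asarray(series, float)
    t = np.arange(n_lags, len(s) - horizon)
    X = np.column_stack([s[t - k] for k in range(n_lags + 1)] + [np.ones(len(t))])
    y = s[t + horizon]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# Toy hourly "tide": two constituents, so a few lags suffice for a linear recursion.
hours = np.arange(5000)
sea = np.sin(2 * np.pi * hours / 12.42) + 0.3 * np.sin(2 * np.pi * hours / 24.0)
for n in range(1, 8):
    print(n, round(mlr_r2(sea, n), 4))   # pick the depth where R^2 plateaus
```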

  6. Theater-Level Gaming and Analysis Workshop for Force Planning. Volume II. Summary, Discussion of Issues and Requirements for Research. September 27- 29, 1977, Held at Xerox International Center for Training and Management Development, Leesburg, Virginia

    DTIC Science & Technology

    1981-05-01

    be allocated to targets on the battlefield and in the rear area. The speaker describes the VECTOR I/NUCLEAR model, a combination of the UNICORN target...outlined. UNICORN is compatible with VECTOR 1 in level of detail. It is an expected value damage model and uses linear programming to optimize the...and a growing appreciation for the power of simulation in addressing large, complex problems, it was only a few short years before these games had

  7. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm was developed for optimization. A Monte Carlo simulation study was conducted to assess the performance of the RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.
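
    The quantile-regression ingredient is the check (pinball) loss, whose minimization drives the model output toward the conditional tau-quantile. A minimal sketch follows, fitting a linear tau-quantile model by subgradient descent as a stand-in for the paper's network and MM algorithm (the learning rate, epoch count, and linear model are assumptions):

```python
import numpy as np

def pinball_loss(y, yhat, tau):
    """Check loss of quantile regression; tau is the target quantile level."""
    r = y - yhat
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

def fit_linear_quantile(X, y, tau=0.9, lr=0.05, epochs=2000):
    """Toy subgradient fit of a linear tau-quantile model."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        r = y - (X @ w + b)
        g = np.where(r > 0, -tau, 1.0 - tau)   # d(loss)/d(yhat) per sample
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```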

  8. An Extended Microcomputer-Based Network Optimization Package.

    DTIC Science & Technology

    1982-10-01

    Analysis, Laxenburg, Austria, 1981, pp. 781-808. 9. Anton, H., Elementary Linear Algebra, John Wiley & Sons, New York, 1977. 10. Koopmans, T. C... Keywords: network, generalized network, microcomputer, optimization, network with gains, linear ... network problem, in turn, can be viewed as a specialization of a linear programming problem having at most two non-zero entries in each

  9. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the conjugate gradient method proposed by Dai and Yuan to linear equality constrained optimization problems. It can be applied to large-scale linear equality constrained problems owing to its low storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
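
    One concrete way to realize such a method is to project gradients onto the null space of the constraint matrix, so every iterate stays feasible, while updating directions with the Dai-Yuan beta. The sketch below does this for a quadratic objective with an exact line search (the projection construction and the test problem are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def feasible_dy_cg(Q, c, A, b, x0, iters=200, tol=1e-10):
    """Dai-Yuan CG on min 0.5 x'Qx - c'x subject to A x = b, from a feasible x0.

    Feasibility is kept by projecting gradients onto null(A); beta follows the
    Dai-Yuan formula beta = ||g+||^2 / d'(g+ - g).
    """
    P = np.eye(len(x0)) - A.T @ np.linalg.solve(A @ A.T, A)  # projector onto null(A)
    x = x0.astype(float).copy()
    g = P @ (Q @ x - c)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ Q @ d)               # exact line search (quadratic f)
        x = x + alpha * d
        g_new = P @ (Q @ x - c)
        beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan update
        d = -g_new + beta * d
        g = g_new
    return x

# Tiny check: minimize ||x||^2/2 subject to x1 + x2 + x3 = 1 (solution: 1/3 each).
Q = np.eye(3)
c = np.zeros(3)
A = np.ones((1, 3))
b = np.array([1.0])
print(feasible_dy_cg(Q, c, A, b, x0=np.array([1.0, 0.0, 0.0])))
```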

  10. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite-dimensional Hilbert space. The schemes included in the framework yield finite-dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.

  11. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results have been obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  12. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on the Minimum Spanning Tree (MST), the Euclidean Steiner Minimal Tree (ESMT), or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes; linear programming is then applied to choose the optimal relay nodes and compute their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.

  13. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterizing the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator has yet been reported that yields an update free from inversions of linear operators when utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  14. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    PubMed

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation that enforces sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model, but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which can better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms compared with several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Quantum conductance oscillation in linear monatomic silicon chains

    NASA Astrophysics Data System (ADS)

    Liu, Fu-Ti; Cheng, Yan; Yang, Fu-Bin; Chen, Xiang-Rong

    2014-02-01

    The conductance of linear silicon atomic chains with n=1-8 atoms sandwiched between Au electrodes is investigated by using density functional theory combined with the non-equilibrium Green's function method. The results show that the conductance oscillates with a period of two atoms as the number of atoms in the chain is varied. We optimize the geometric structure of the nanoscale junctions at different electrode separations, and find that the average bond length of the silicon atoms in each chain at the equilibrium positions is 2.15±0.03 Å. The oscillation of the average Si-Si bond length explains the conductance oscillation in terms of the geometric structure of the atomic chains. We calculate the transmission spectrum of the chains at the equilibrium positions, and explain the conductance oscillation in terms of the electronic structure. The transport channel is mainly contributed by the px and py orbital electrons of the silicon atoms. The even-odd oscillation is robust under external voltages up to 1.2 V.

  16. Automated discrimination of dementia spectrum disorders using extreme learning machine and structural T1 MRI features.

    PubMed

    Jongin Kim; Boreom Lee

    2017-07-01

    The classification of neuroimaging data for the diagnosis of Alzheimer's Disease (AD) is one of the main research goals of the neuroscience and clinical fields. In this study, we applied an extreme learning machine (ELM) classifier to discriminate AD and mild cognitive impairment (MCI) from normal controls (NC). We compared the performance of the ELM with that of a linear kernel support vector machine (SVM) for 718 structural MRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data consisted of normal controls, MCI converters (MCI-C), MCI non-converters (MCI-NC), and AD patients. We employed the SVM-based recursive feature elimination (RFE-SVM) algorithm to find the optimal subset of features. In this study, we found that the RFE-SVM feature selection approach in combination with the ELM shows superior classification accuracy compared with the linear kernel SVM for structural T1 MRI data.

  17. Optimal control of an invasive species using a reaction-diffusion model and linear programming

    USGS Publications Warehouse

    Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.

    2017-01-01

    Managing an invasive species is particularly challenging as little is generally known about the species' biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species' characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost, or that minimizes the control cost for containment or for protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty about the possible invasion scenarios. Then, a best strategy can be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegu (Salvator merianae) in South Florida. There is uncertainty about tegu demography, and we considered several combinations of model parameters exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegu dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrate control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. For the establishment of a containment area, assuming exponential growth, we show that with current control methods it might not be possible to implement such a strategy for some of the models that we considered. Including different possible models allows an examination of how the strategy is expected to perform in different scenarios. A strategy that accounts for the risk attitude of the decision-maker can then be designed.
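
    Stripped to one time step, the linear programming formulation allocates removal effort across cells to minimize the projected number of invaders under a budget constraint. A toy sketch with scipy follows (all population, growth, cost, and budget numbers are hypothetical; the full method couples this to a reaction-diffusion model over many steps):

```python
import numpy as np
from scipy.optimize import linprog

n = np.array([40.0, 25.0, 10.0, 5.0])   # current invaders per cell (hypothetical)
r = np.array([1.6, 1.4, 1.3, 1.2])      # per-cell growth multipliers, one scenario
cost = np.array([1.0, 1.2, 1.5, 2.0])   # removal cost per individual per cell
budget = 30.0

# Next-step population is sum_i r_i * (n_i - e_i); minimizing it over effort e
# is equivalent to maximizing sum_i r_i * e_i, so linprog gets c = -r.
res = linprog(c=-r, A_ub=[cost], b_ub=[budget],
              bounds=list(zip(np.zeros(4), n)), method="highs")
effort = res.x
print("removal per cell:", effort, "remaining:", r @ (n - effort))
```

    Solving the same LP under several growth scenarios and keeping the allocation with the best worst-case outcome mirrors the risk-averse variant described above.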

  18. Flight control optimization from design to assessment application on the Cessna Citation X business aircraft

    NASA Astrophysics Data System (ADS)

    Boughari, Yamina

    New methodologies have been developed to optimize the integration, testing and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft's model was obtained from the data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of the aircraft systems, optimizations of two different flight control designs were performed: 1) the Linear Quadratic Regulation and Proportional Integral controllers were optimized using the Differential Evolution algorithm with the level 1 handling qualities as the objective function; the results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the H-infinity control method was applied to the stability and control augmentation systems. To minimize the time required for flight control design and its validation, an optimization of the controller design was performed using the Differential Evolution (DE) and Genetic (GA) algorithms. The DE algorithm proved to be more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for flight controller assessment. Matlab software was used to validate the different optimization algorithms' results. Research platforms of the aircraft's linear and nonlinear models were developed and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria. Nonlinear simulations of the maneuver criteria were also investigated during this research to assess the Cessna Citation X flight controller clearance and, therefore, its anticipated certification.

  19. Collective Human Mobility Pattern from Taxi Trips in Urban Area

    PubMed Central

    Peng, Chengbin; Jin, Xiaogang; Wong, Ka-Chun; Shi, Meixia; Liò, Pietro

    2012-01-01

    We analyze the passenger traffic patterns of 1.58 million taxi trips in Shanghai, China. By employing non-negative matrix factorization and optimization methods, we find that people travel on workdays mainly for three purposes: commuting between home and workplace, traveling from workplace to workplace, and others such as leisure activities. Therefore, the traffic flow in one area or between any pair of locations can be approximated by a linear combination of three basis flows, corresponding to the three purposes respectively. We call the coefficients in the linear combination traffic powers, each of which indicates the strength of one basis flow. The traffic powers on different days are typically different, even for the same location, due to the uncertainty of human motion. Therefore, we provide a probability distribution function for the relative deviation of the traffic power. This distribution function is expressed in terms of a series of normalized binomial distribution functions; it can be well explained by statistical theories and is verified by empirical data. These findings are applicable in predicting road traffic, tracing traffic patterns, and diagnosing traffic-related abnormal events. These results can also be used to infer land uses of urban areas quite parsimoniously. PMID:22529917
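
    The decomposition step, factoring a day-by-flow trip matrix into three non-negative basis flows and their per-day traffic powers, can be reproduced with an off-the-shelf NMF. A small sketch with scikit-learn on synthetic data (the matrix shape, rank-3 structure, and noise model are assumptions standing in for the trip records):

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical trip matrix: rows = workdays, columns = origin-destination pairs.
rng = np.random.default_rng(0)
basis = rng.gamma(2.0, 1.0, (3, 400))        # three ground-truth basis flows
powers = rng.gamma(2.0, 1.0, (60, 3))        # per-day strengths of each flow
trips = powers @ basis + rng.normal(0, 0.05, (60, 400)).clip(0)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(trips)   # traffic powers: strength of each basis flow per day
H = model.components_            # basis flows (commuting, work-to-work, leisure, ...)

# Day-to-day relative deviation of the traffic powers, as studied in the paper.
rel_dev = (W - W.mean(axis=0)) / W.mean(axis=0)
```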

  20. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    NASA Astrophysics Data System (ADS)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators owing to its unique properties. This paper proposes a generalized analytical model for linear generators, combining slotted stator pole-shifting with the implementation of a Halbach array for the first time. Initially, the magnetization components of the Halbach array were determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution was derived employing specially treated boundary conditions. FEM analysis was conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM array was constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model was developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequencies.

  1. Linearization of microwave photonic link based on nonlinearity of distributed feedback laser

    NASA Astrophysics Data System (ADS)

    Kang, Zi-jian; Gu, Yi-ying; Zhu, Wen-wu; Fan, Feng; Hu, Jing-jing; Zhao, Ming-shan

    2016-02-01

    A microwave photonic link (MPL) with spurious-free dynamic range (SFDR) improvement utilizing the nonlinearity of a distributed feedback (DFB) laser is proposed and demonstrated. First, the relationship between the bias current and the nonlinearity of a semiconductor DFB laser is experimentally studied. On this basis, the proposed linear optimization of the MPL is realized by combining an external intensity modulation MPL, based on a Mach-Zehnder modulator (MZM), with a direct modulation MPL in which the DFB laser operates nonlinearly. In the external modulation MPL, the MZM is biased at its linear point to achieve radio frequency (RF) signal transmission. In the direct modulation MPL, third-order intermodulation (IMD3) components are generated to enhance the SFDR of the external modulation MPL. When the center frequency of the input RF signal is 5 GHz and the two-tone signal interval is 10 kHz, the experimental results show that the IMD3 of the system is effectively suppressed by 29.3 dB and the SFDR is increased by 7.7 dB.

  2. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which were transformed into optimal operating rules by embedding them into the two FRB systems previously created. As a benchmark, historical records were used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows depending on the present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the optimal DSS created using the FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).

  3. Optimization design of LED heat dissipation structure based on strip fins

    NASA Astrophysics Data System (ADS)

    Xue, Lingyun; Wan, Wenbin; Chen, Qingguang; Rao, Huanle; Xu, Ping

    2018-03-01

    To solve the heat dissipation problem of LEDs, a radiator structure based on strip fins is designed, and a method to optimize the structure parameters of the strip fins is proposed in this paper. The combination of RBF neural networks and the particle swarm optimization (PSO) algorithm is used for modeling and optimization, respectively. During the experiment, 150 datasets of LED junction temperature for different values of the structure parameters (the number of strip fins and the length, width and height of the fins) were obtained with ANSYS software. An RBF neural network is then applied to build the non-linear regression model, and the structure parameters are optimized with this model using the particle swarm optimization algorithm. The experimental results show that the lowest LED junction temperature reaches 43.88°C when the number of hidden layer nodes in the RBF neural network is 10, the two learning factors in the particle swarm optimization algorithm are 0.5 and 0.5, the inertia factor is 1, and the maximum number of iterations is 100; the corresponding design has 64 fins in an 8×8 distribution structure, with fin length, width and height of 4.3 mm, 4.48 mm and 55.3 mm, respectively. To check the modeling and optimization results, the LED junction temperature at the optimized structure parameters was simulated; the result, 43.592°C, approximately equals the optimum found by the model. Compared with the ordinary plate-fin-type radiator structure, whose temperature is 56.38°C, the proposed structure greatly enhances the heat dissipation performance.
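
    The surrogate-plus-swarm loop is straightforward to prototype: fit an RBF model to the simulated samples, then let PSO search the design box on the surrogate. A hedged sketch follows, using scipy's RBFInterpolator and the swarm constants quoted above; the sample generator, parameter bounds, and response function are placeholders for the ANSYS data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
lo = np.array([16, 2.0, 2.0, 30.0])            # assumed bounds: (n_fins, L, W, H)
hi = np.array([100, 6.0, 6.0, 60.0])
X = rng.uniform(lo, hi, (150, 4))              # stand-in for the 150 ANSYS samples

def fake_junction_temp(x):                     # placeholder for the simulated response
    return 60 - 0.05 * x[:, 0] - 0.12 * x[:, 3] + 0.3 * (x[:, 1] - 4.3) ** 2

T = fake_junction_temp(X)
surrogate = RBFInterpolator(X, T, smoothing=1e-6)   # RBF regression model

# Minimal PSO on the surrogate: inertia 1, learning factors 0.5/0.5, 100 iterations.
p = rng.uniform(lo, hi, (40, 4))
v = np.zeros_like(p)
pb, pbf = p.copy(), surrogate(p)
gb = pb[pbf.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((2, 40, 4))
    v = 1.0 * v + 0.5 * r1 * (pb - p) + 0.5 * r2 * (gb - p)
    p = np.clip(p + v, lo, hi)
    f = surrogate(p)
    improved = f < pbf
    pb[improved], pbf[improved] = p[improved], f[improved]
    gb = pb[pbf.argmin()].copy()
print("predicted optimum:", gb, float(surrogate(gb[None])[0]))
```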

  4. Deployment-based lifetime optimization for linear wireless sensor networks considering both retransmission and discrete power control.

    PubMed

    Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui

    2017-01-01

    A sophisticated method for node deployment can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Many node deployment based lifetime optimization methods for WSNs have been proposed; however, previous studies often neglect the retransmission mechanism and treat power control as continuous rather than discrete, although both are widely used in practice and have a large effect on network energy consumption. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in realistic situations. To illustrate the effectiveness of our method, both one-tiered and two-tiered, uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and comparisons between our optimal results and those based on relatively inaccurate lifetime evaluations show the advantage of our method when investigating WSN lifetime optimization problems.
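
    The energy effect of the two modeling choices is easy to see in isolation: with per-hop retransmission until success, the expected number of transmissions over a link with success probability p is 1/p, and the transmit power must be chosen from a finite set rather than tuned continuously. A purely illustrative sketch (the power levels, link model, and constants are invented for the example, not taken from the paper):

```python
import numpy as np

POWER_LEVELS_mW = np.array([1.0, 2.5, 6.3, 15.8])   # assumed discrete radio settings

def link_success(d, p_tx, gamma=2.0, k=0.05):
    """Toy monotone link model: success probability rises with power, falls with distance."""
    return np.clip(1.0 - np.exp(-k * p_tx / d**gamma), 0.01, 0.999)

def energy_per_packet(d, t_packet=1e-3):
    """Pick the discrete level minimizing expected energy = P * t / p_success(d, P)."""
    expected = POWER_LEVELS_mW * t_packet / link_success(d, POWER_LEVELS_mW)
    i = expected.argmin()
    return POWER_LEVELS_mW[i], expected[i]

for d in (5.0, 10.0, 20.0):
    print(d, energy_per_packet(d))   # chosen level and expected mJ per delivered packet
```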

  5. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  6. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. It is intuitive to some extent to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally avoids the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability.

  7. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control of the linear stochastic control system into the optimal output tracking problem of the augmented error system. With the method of dynamic programming from stochastic control theory, the optimal controller with previewable signals for the augmented error system, which also serves as the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  8. The importance of functional form in optimal control solutions of problems in population dynamics

    USGS Publications Warehouse

    Runge, M.C.; Johnson, F.A.

    2002-01-01

    Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood profile may end up producing alternatives that do not differ as importantly as if different functional forms had been used. We recommend that biological knowledge be used to bracket a range of possible functional forms, and robustness of conclusions be checked over this range.

  9. Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications

    PubMed Central

    2013-01-01

    Background: Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As the EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods: To accurately model the EEG, the band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for the BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By construction, the BMFLC with Kalman filter/smoother provides an accurate time-frequency decomposition of the band-limited signal. Results: The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions: The results show that the proposed algorithm can provide optimal time-frequency resolution as compared to the STFT and CWT. For ERD detection, BMFLC-KF outperforms the STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
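
    In state-space form, the BMFLC state is the vector of Fourier coefficients over a fixed frequency grid, and each EEG sample is a linear observation of that state, so a standard Kalman filter applies. A minimal sketch under that reading (the band edges, grid spacing, and noise levels q and r are assumed values, and only the filter, not the smoother, is shown):

```python
import numpy as np

def bmflc_kalman(y, fs, f_lo=8.0, f_hi=12.0, df=0.5, q=1e-5, r=1e-2):
    """Random-walk Kalman filter over band-limited Fourier coefficients.

    State = [a_1..a_K, b_1..b_K] at frequencies f_lo..f_hi in steps of df;
    observation model: y_t = sum_k a_k sin(2 pi f_k t) + b_k cos(2 pi f_k t).
    """
    freqs = np.arange(f_lo, f_hi + df / 2, df)
    K = len(freqs)
    w = np.zeros(2 * K)                    # Fourier coefficients (the filter state)
    P = np.eye(2 * K)
    est = np.empty(len(y))
    amps = np.empty((len(y), K))
    for t, yt in enumerate(y):
        arg = 2.0 * np.pi * freqs * t / fs
        H = np.concatenate([np.sin(arg), np.cos(arg)])
        P = P + q * np.eye(2 * K)          # predict: coefficients follow a random walk
        S = H @ P @ H + r                  # innovation variance (scalar)
        Kg = P @ H / S                     # Kalman gain
        w = w + Kg * (yt - H @ w)          # correct with the new sample
        P = P - np.outer(Kg, H) @ P
        est[t] = H @ w
        amps[t] = np.hypot(w[:K], w[K:])   # time-frequency amplitude per band
    return freqs, amps, est
```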

  10. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known as an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Thus, current studies on the JSSP concentrate mainly on applying different methods of improving heuristics for optimizing the JSSP. However, there still exist problems of low efficiency and poor reliability, which can easily trap the optimization process of the JSSP in local optima. Therefore, to solve this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with the constraint satisfaction model; (2) satisfying the constraints by considering consistency technology and a constraint spreading algorithm in order to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. Consequently, the results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.

  11. A predictive machine learning approach for microstructure optimization and materials design

    DOE PAGES

    Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; ...

    2015-06-23

    This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, the multi-objective design requirement and the non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures satisfying both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods, with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.

  12. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  13. Mitigation of epidemics in contact networks through optimal contact adaptation

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of the total infection cases and minimization of the contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results indicate the infection level at which the mitigation strategies are effectively applied to the contact weights. PMID:23906209

  14. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of the total infection cases and minimization of the contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results indicate the infection level at which the mitigation strategies are effectively applied to the contact weights.
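
    The Bang-Bang heuristic switches the contact reduction between its extremes depending on the epidemic state. A toy illustration on a well-mixed SIR model (the threshold rule, rates, and control bound are invented for the example; the paper works on weighted contact networks with Pontryagin-derived controls):

```python
def sir_bang_bang(beta=0.3, gamma=0.1, u_max=0.6, threshold=0.02,
                  i0=1e-3, days=200, dt=0.1):
    """Euler-integrated SIR with bang-bang contact reduction; returns the attack rate."""
    s, i, r = 1.0 - i0, i0, 0.0
    total_new = 0.0
    for _ in range(int(days / dt)):
        u = u_max if i > threshold else 0.0     # bang-bang control signal
        new = beta * (1.0 - u) * s * i * dt     # contact weights scaled by (1 - u)
        s, i, r = s - new, i + new - gamma * i * dt, r + gamma * i * dt
        total_new += new
    return total_new

print("attack rate, no control :", round(sir_bang_bang(u_max=0.0), 3))
print("attack rate, bang-bang  :", round(sir_bang_bang(), 3))
```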

  15. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    NASA Astrophysics Data System (ADS)

    Zhu, Z. W.; Zhang, W. D.; Xu, J.

    2014-03-01

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to interpret the hysteretic phenomena of the GMF, and a non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability was improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful for the engineering applications of GMFs.

  16. Performance optimization of the Varian aS500 EPID system.

    PubMed

    Berger, Lucie; François, Pascal; Gaboriaud, Geneviève; Rosenwald, Jean-Claude

    2006-01-01

    Today, electronic portal imaging devices (EPIDs) are widely used as a replacement for portal films for patient position verification, but the image quality is not always optimal. The general aim of this study was to optimize the acquisition parameters of an amorphous silicon EPID commercially available for clinical use in radiation therapy, with a view to avoiding saturation of the system. Special attention was paid to selection of the parameter corresponding to the number of rows acquired between accelerator pulses (NRP) for various beam energies and dose rates. The image acquisition system (IAS2) has been studied, and portal image acquisition was found to be strongly dependent on the accelerator pulse frequency. This frequency is set for each "energy - dose rate" combination of the linear accelerator. For all combinations, the image acquisition parameters were systematically changed to determine their influence on the performance of the Varian aS500 EPID system. New parameters such as the maximum number of rows (MNR) and the number of pulses per frame (NPF) were introduced to explain portal image acquisition theory. Theoretical and experimental values of MNR and NPF were compared and found to be in good agreement. Other results showed that NRP had a major influence on detector saturation and dose per image. A rule of thumb was established to determine the optimum NRP value to be used. This practical application was illustrated by a clinical example in which the saturation of the aSi EPID was avoided by NRP optimization. Moreover, an additional study showed that image quality was relatively insensitive to this parameter.

  17. CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.

    PubMed

    Zahery, Mahsa; Maes, Hermine H; Neale, Michael C

    2017-08-01

    We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN; Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation, available as part of the NLOPT collection; Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
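
    For readers who want to try an SQP solver of this family directly, SciPy ships Kraft's SLSQP code (the same implementation wrapped by the NLOPT collection referenced above) as method="SLSQP". A minimal sketch on a toy nonlinearly constrained problem, not one of the paper's benchmark models:

```python
# Toy nonlinearly constrained problem solved with the SLSQP SQP code in SciPy.
import numpy as np
from scipy.optimize import minimize

objective = lambda z: z[0] ** 2 + z[1] ** 2                 # minimize distance to origin
constraints = [
    {"type": "ineq", "fun": lambda z: z[0] * z[1] - 1.0},   # nonlinear: x*y >= 1
    {"type": "ineq", "fun": lambda z: z[0] - 0.1},          # linear:    x >= 0.1
]
res = minimize(objective, x0=np.array([2.0, 2.0]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)   # optimum near (1, 1) with objective value 2
```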

  18. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
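
    The cluster-then-regress idea behind multiple linear mappings can be sketched compactly: partition the training samples, fit one least-squares mapping per partition, and re-assign each sample to the mapping that reconstructs it best, iterating EM-style. The snippet below uses synthetic feature vectors in place of real low/high-resolution patch pairs; the dimensions and cluster count are arbitrary, and this is a schematic of the idea rather than the paper's algorithm.

```python
# EM-flavored multiple linear mappings on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, d_lo, d_hi, K = 2000, 16, 64, 8
X = rng.normal(size=(n, d_lo))                      # stand-ins for LR patch features
M_true = rng.normal(size=(K, d_hi, d_lo))
z = rng.integers(K, size=n)
Y = np.einsum("kij,nj->nik", M_true, X)[np.arange(n), :, z]   # HR targets

labels = rng.integers(K, size=n)                    # random initial clustering
for _ in range(10):
    # M-step: one least-squares mapping per cluster
    maps = []
    for k in range(K):
        idx = labels == k
        if idx.sum() < d_lo:                        # guard against tiny/empty clusters
            maps.append(rng.normal(size=(d_lo, d_hi)))
        else:
            maps.append(np.linalg.lstsq(X[idx], Y[idx], rcond=None)[0])
    # E-step: assign each sample to the mapping with minimal reconstruction error
    errs = np.stack([np.sum((X @ M - Y) ** 2, axis=1) for M in maps], axis=1)
    labels = errs.argmin(axis=1)
print("mean reconstruction error:", errs.min(axis=1).mean())
```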

  19. LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.

    PubMed

    Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong

    2017-03-01

    In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues to a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region shrinks as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by sufficiently utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to address the (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
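
    The single-agent building block of such a design is the discrete-time LQR gain obtained from the algebraic Riccati equation; the distributed, graph-dependent part of the paper is omitted in this minimal sketch, and the dynamics and weights below are illustrative only.

```python
# Discrete-time LQR gain from the discrete algebraic Riccati equation (DARE).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # double-integrator-like agent dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state weighting
R = np.array([[1.0]])                     # input weighting

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```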

  20. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator that involves linear and nonlinear estimators for fractional and bilinear terms which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method only requires a few nodes in the branch and bound search.

  1. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with optimization Genetic Algorithm (GA) to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested groundwater optimal monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models included as separate modules within the Groundwater Modeling System (GMS) are used to develop three dimensional groundwater flow and contamination transport simulation. The groundwater flow and contamination simulation results are introduced as input to the optimization model, using Genetic Algorithm (GA) to identify the groundwater optimal monitoring network design, based on several candidate monitoring locations. The groundwater monitoring network design model is used Genetic Algorithms with binary variables representing potential monitoring location. As the number of decision variables and constraints increase, the non-linearity of the objective function also increases which make difficulty to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique, which is capable of finding the optimal solution for many complex problems. In this study, the GA approach capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4X 1018 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and global optimality of the solution obtained using GA, it is necessary that appropriate GA parameter values be specified. The sensitivity analysis of genetic algorithms parameters such as random number, crossover probability, mutation probability, and elitism are discussed for solution of monitoring network design.
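
    A minimal sketch of such a binary-coded GA is given below. A synthetic per-site "detection value" stands in for the MODFLOW/MT3DMS plume-tracking objective, and the GA parameters (population size, crossover, mutation rate, elitism) are exactly the knobs whose sensitivity the study examines; all numbers are invented for illustration.

```python
# Bare-bones binary GA for selecting monitoring wells (synthetic fitness).
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_wells, pop_size, gens = 40, 6, 60, 100
value = rng.uniform(size=n_sites)            # stand-in "detection value" per site

def fitness(bits):
    # reward plume-detection value, penalize exceeding the well budget
    return value @ bits - 10.0 * max(0, bits.sum() - n_wells)

pop = (rng.random((pop_size, n_sites)) < n_wells / n_sites).astype(int)
for _ in range(gens):
    f = np.array([fitness(ind) for ind in pop])
    # tournament selection
    i, j = rng.integers(pop_size, size=(2, pop_size))
    parents = pop[np.where(f[i] > f[j], i, j)]
    # one-point crossover
    cut = rng.integers(1, n_sites, size=pop_size // 2)
    kids = parents.copy()
    for k, c in enumerate(cut):
        kids[2 * k, c:], kids[2 * k + 1, c:] = parents[2 * k + 1, c:], parents[2 * k, c:].copy()
    # bit-flip mutation, then elitism (keep the best individual)
    flip = rng.random(kids.shape) < 0.01
    kids[flip] ^= 1
    kids[0] = pop[f.argmax()]
    pop = kids
best = pop[np.array([fitness(ind) for ind in pop]).argmax()]
print("selected sites:", np.flatnonzero(best))
```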

  2. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of the various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    PubMed Central

    Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729
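
    The hybrid decomposition can be sketched in a few lines: fit ARIMA to the series, then train an SVR on lagged ARIMA residuals and add the two one-step forecasts. The sketch below uses a synthetic series and fixed SVR hyperparameters where the paper tunes them with PSO.

```python
# Minimal ARIMA + SVR hybrid: ARIMA for the linear part, SVR for the residuals.
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
t = np.arange(200, dtype=float)
y = 0.05 * t + np.sin(t / 6.0) + rng.normal(scale=0.2, size=t.size)

train = y[:180]
arima = ARIMA(train, order=(1, 1, 1)).fit()
resid = arima.resid                          # nonlinear leftovers

p = 4                                        # lag order for the residual model
Xr = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
svr = SVR(C=10.0, gamma="scale").fit(Xr, resid[p:])

linear_fc = arima.forecast(steps=1)[0]       # one-step linear forecast
resid_fc = svr.predict(resid[-p:][None, :])[0]
print("hybrid one-step forecast:", linear_fc + resid_fc, "actual:", y[180])
```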

  4. Nonlocal games and optimal steering at the boundary of the quantum set

    NASA Astrophysics Data System (ADS)

    Zhen, Yi-Zheng; Goh, Koon Tong; Zheng, Yu-Lin; Cao, Wen-Fei; Wu, Xingyao; Chen, Kai; Scarani, Valerio

    2016-08-01

    The boundary between classical and quantum correlations is well characterized by linear constraints called Bell inequalities. It is much harder to characterize the boundary of the quantum set itself in the space of no-signaling correlations. For the points on the quantum boundary that violate maximally some Bell inequalities, J. Oppenheim and S. Wehner [Science 330, 1072 (2010), 10.1126/science.1192065] pointed out a complex property: Alice's optimal measurements steer Bob's local state to the eigenstate of an effective operator corresponding to its maximal eigenvalue. This effective operator is the linear combination of Bob's local operators induced by the coefficients of the Bell inequality, and it can be interpreted as defining a fine-grained uncertainty relation. It is natural to ask whether the same property holds for other points on the quantum boundary, using the Bell expression that defines the tangent hyperplane at each point. We prove that this is indeed the case for a large set of points, including some that were believed to provide counterexamples. The price to pay is to acknowledge that the Oppenheim-Wehner criterion does not respect equivalence under the no-signaling constraint: for each point, one has to look for specific forms of writing the Bell expressions.

  5. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors

    PubMed Central

    Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC

    2015-01-01

    The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB) – adapted from Dunnett's multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890

  6. Stationary variational estimates for the effective response and field fluctuations in nonlinear composites

    NASA Astrophysics Data System (ADS)

    Ponte Castañeda, Pedro

    2016-11-01

    This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.

  7. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    PubMed

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  8. Electromagnetically induced transparency and Autler-Townes splitting in superconducting flux quantum circuits

    NASA Astrophysics Data System (ADS)

    Sun, Hui-Chen; Liu, Yu-xi; Ian, Hou; You, J. Q.; Il'ichev, E.; Nori, Franco

    2014-06-01

    We study the microwave absorption of a driven three-level quantum system, which is realized by a superconducting flux quantum circuit (SFQC), with a magnetic driving field applied to the two upper levels. The interaction between the three-level system and its environment is studied within the Born-Markov approximation, and we take into account the effects of the driving field on the damping rates of the three-level system. We study the linear response of the driven three-level SFQC to a weak probe field. The linear magnetic susceptibility of the SFQC can be changed by both the driving field and the bias magnetic flux. When the bias magnetic flux is at the optimal point, the transition from the ground state to the second-excited state is forbidden and the three-level SFQC has a ladder-type transition. Thus, the SFQC responds to the probe field like natural atoms with ladder-type transitions. However, when the bias magnetic flux deviates from the optimal point, the three-level SFQC has a cyclic transition, thus it responds to the probe field like a combination of natural atoms with ladder-type transitions and natural atoms with Λ-type transitions. In particular, we provide detailed discussions on the conditions for realizing electromagnetically induced transparency and Autler-Townes splitting in three-level SFQCs.

  9. Optimization of heavy metals total emission, case study: Bor (Serbia)

    NASA Astrophysics Data System (ADS)

    Ilić, Ivana; Bogdanović, Dejan; Živković, Dragana; Milošević, Novica; Todorović, Boban

    2011-07-01

    The town of Bor (Serbia) is one of the most polluted towns in southeastern Europe. The copper smelter situated in the centre of the town is the main pollutant, mostly because of its old technology, which leads to environmental pollution caused by elevated concentrations of SO2 and PM10. These facts show that this is a very polluted region of Europe which, apart from harming human health in the region itself, poses a particular danger to the wider area of southeastern Europe. Optimization of the heavy metals' total emission was undertaken because years of contamination of the soil with heavy metals of anthropogenic origin created a danger that those heavy metals may enter the food chains of animals and people, which could lead to disastrous consequences. This work demonstrates the use of a Geographic Information System (GIS) for establishing a multifactor assessment model to quantitatively delineate polluted zones and for selecting control sites in a linear programming model, combined with the PROMETHEE/GAIA method, the Screen View modeling system, and a linear programming model. The results show that emissions at some control sites need to be cut by about 40%. In order to control the background of heavy metal pollution in Bor, the ecological environment must be improved.
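
    The linear-programming step can be illustrated with a toy version of the problem: choose fractional emission cuts at each control site so that modeled concentrations at receptor points stay below a limit, at minimum total reduction. All emissions, transfer coefficients, and limits below are invented for the sketch.

```python
# Toy emission-reduction LP solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

e = np.array([120.0, 80.0, 60.0])        # current emissions per control site
a = np.array([[0.004, 0.002, 0.001],     # transfer coefficients: site -> receptor
              [0.001, 0.003, 0.002]])
limit = np.array([0.45, 0.35])           # allowed concentration at each receptor

# concentration constraint a @ (e * (1 - r)) <= limit rearranged for linprog:
#   -(a * e) @ r <= limit - a @ e
res = linprog(c=e,                       # minimize total tonnage removed, sum(e_i * r_i)
              A_ub=-(a * e), b_ub=limit - a @ e,
              bounds=[(0.0, 0.6)] * 3)   # cut at most 60% at any one site
print("optimal cuts (fraction):", res.x.round(3))
```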

  10. Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis

    PubMed Central

    Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2015-01-01

    We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
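
    The region-of-interest Hotelling observer reduces to a template w = K^-1 Δs and a detectability index SNR^2 = Δs' K^-1 Δs, where Δs is the mean signal difference and K the ROI covariance. A self-contained numerical sketch on simulated correlated noise (signal shape, noise model, and sample counts are illustrative assumptions):

```python
# Hotelling observer on a simulated region of interest.
import numpy as np

rng = np.random.default_rng(4)
m = 64                                            # pixels in the ROI
signal = np.exp(-np.linspace(-3, 3, m) ** 2)      # microcalcification-like bump

# correlated background noise with covariance K = L L^T
L = np.tril(rng.normal(size=(m, m))) * 0.05 + np.eye(m)
K = L @ L.T
absent = rng.multivariate_normal(np.zeros(m), K, size=500)
present = absent + signal

K_hat = np.cov(np.vstack([absent, present - signal]).T)   # pooled covariance estimate
w = np.linalg.solve(K_hat, signal)                        # Hotelling template
print("Hotelling detectability SNR^2:", signal @ w)
t_abs, t_pres = absent @ w, present @ w                   # test statistic per image
print("class separation:", (t_pres.mean() - t_abs.mean()) / t_abs.std())
```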

  11. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the operating temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080

  12. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the operating temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
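
    KELM has essentially the same closed form as kernel ridge regression, so the two-parameter search can be sketched with scikit-learn and SciPy: Nelder-Mead polishes the regularization and RBF kernel parameters against cross-validated error. The CSA stage, which in the paper supplies good starting points, is replaced here by a single fixed start, and the data are synthetic stand-ins for the sensor calibration set.

```python
# Nelder-Mead tuning of a kernel ridge (KELM-like) compensation model.
import numpy as np
from scipy.optimize import minimize
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 2))        # stand-ins for (temperature, static pressure)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=200)

def cv_error(log_params):
    # search in log space so both parameters stay positive
    alpha, gamma = np.exp(log_params)
    model = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()

res = minimize(cv_error, x0=np.log([1e-2, 1.0]), method="Nelder-Mead")
print("best (alpha, gamma):", np.exp(res.x), "CV MSE:", res.fun)
```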

  13. Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.

    PubMed

    Hsu, Wei-Yen

    2013-12-01

    In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove the electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with approaches without artifact elimination, with feature selection using a genetic algorithm (GA), and with feature classification using Fisher's linear discriminant (FLD), on MI data from two data sets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.

  14. Low Density Solvent-Based Dispersive Liquid-Liquid Microextraction for the Determination of Synthetic Antioxidants in Beverages by High-Performance Liquid Chromatography

    PubMed Central

    Çabuk, Hasan; Köktürk, Mustafa

    2013-01-01

    A simple and efficient method was established for the determination of synthetic antioxidants in beverages by using dispersive liquid-liquid microextraction combined with high-performance liquid chromatography with ultraviolet detection. Butylated hydroxytoluene, butylated hydroxyanisole, and tert-butylhydroquinone were the antioxidants evaluated. Experimental parameters including extraction solvent, dispersive solvent, pH of the sample solution, salt concentration, and extraction time were optimized. Under optimal conditions, the extraction recoveries ranged from 53 to 96%. Good linearity was observed, with squared correlation coefficients ranging from 0.9975 to 0.9997. The relative standard deviations ranged from 1.0 to 5.2% for all of the analytes. Limits of detection ranged from 0.85 to 2.73 ng mL−1. The method was successfully applied for the determination of synthetic antioxidants in undiluted beverage samples with satisfactory recoveries. PMID:23853535

  15. Fast determination of total ginsenosides content in ginseng powder by near infrared reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang

    2006-01-01

    Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in Ginseng (Panax Ginseng) powder. The spectra were analyzed with the multiplicative signal correction (MSC) correlation method. The spectral regions best correlated with the total ginsenosides content were 1660~1880 nm and 2230~2380 nm. NIR calibration models for the ginsenosides were built with multiple linear regression (MLR), principal component regression (PCR) and partial least squares (PLS) regression, respectively. The results showed that the calibration model built with PLS combined with MSC on the optimal spectral region was the best one. The correlation coefficient and the root mean square error of calibration (RMSEC) of the best calibration model were 0.98 and 0.15%, respectively. The optimal spectral region for calibration was 1204~2014 nm. The results suggest that using NIR to rapidly determine the total ginsenosides content in ginseng powder is feasible.
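
    A minimal version of the MSC-plus-PLS calibration pipeline is sketched below on synthetic spectra with multiplicative scatter and additive offsets; the wavelength grid, concentrations, and component count are invented for illustration.

```python
# Multiplicative signal correction followed by PLS calibration (synthetic spectra).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
wl = np.linspace(1204, 2014, 300)                 # optimal region, nm
conc = rng.uniform(1, 6, size=40)                 # ginsenoside content, %
band = np.exp(-((wl - 1700) / 120.0) ** 2)        # concentration-dependent band
mult = rng.uniform(0.8, 1.2, (40, 1))             # multiplicative scatter
add = rng.uniform(-0.1, 0.1, (40, 1))             # additive offset
spectra = mult * (conc[:, None] * band + 0.5) + add

def msc(S):
    # regress each spectrum on the mean spectrum and undo slope/offset
    ref = S.mean(axis=0)
    out = np.empty_like(S)
    for i, s in enumerate(S):
        b, a = np.polyfit(ref, s, 1)              # s ~ b * ref + a
        out[i] = (s - a) / b
    return out

pls = PLSRegression(n_components=3).fit(msc(spectra), conc)
pred = pls.predict(msc(spectra)).ravel()
print("calibration correlation:", np.corrcoef(pred, conc)[0, 1].round(3))
```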

  16. Jerk Minimization Method for Vibration Control in Buildings

    NASA Technical Reports Server (NTRS)

    Abatan, Ayo O.; Yao, Leummim

    1997-01-01

    In many vibration minimization control problems for high-rise buildings subject to strong earthquake loads, the emphasis has been on a combination of minimizing the displacement, the velocity and the acceleration of the motion of the building. In most cases, the accelerations involved are not necessarily large, but the changes in them (jerk) are abrupt. These changes in magnitude or direction are responsible for most building damage and also create discomfort, such as motion sickness, for inhabitants of these structures because of the element of surprise. We propose a method of also minimizing the jerk, which is the sudden change in acceleration, i.e., the derivative of the acceleration, using classical linear quadratic optimal control. This was done through the introduction of a quadratic performance index involving the cost due to the jerk, a special change of variable, and the use of the jerk as a control variable. The values of the optimal control are obtained using the Riccati equation.
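
    Taking the jerk as the control input makes the plant a triple integrator in [displacement, velocity, acceleration], and the jerk-penalizing performance index becomes a standard LQR problem solved through the Riccati equation, as the abstract describes. A sketch with illustrative weights:

```python
# LQR with jerk as the control variable: triple-integrator plant.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])     # u = jerk (da/dt)
Q = np.diag([10.0, 1.0, 1.0])           # displacement / velocity / acceleration costs
R = np.array([[0.5]])                   # cost due to the jerk itself

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)         # optimal feedback u = -K x
print("jerk feedback gains:", K.round(3))
```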

  17. First-Principles-Driven Model-Based Optimal Control of the Current Profile in NSTX-U

    NASA Astrophysics Data System (ADS)

    Ilhan, Zeki; Barton, Justin; Wehner, William; Schuster, Eugenio; Gates, David; Gerhardt, Stefan; Kolemen, Egemen; Menard, Jonathan

    2014-10-01

    Regulation in time of the toroidal current profile is one of the main challenges toward the realization of the next-step operational goals for NSTX-U. A nonlinear, control-oriented, physics-based model describing the temporal evolution of the current profile is obtained by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. In this work, the proposed model is embedded into the control design process to synthesize a time-variant, linear-quadratic-integral, optimal controller capable of regulating the safety factor profile around a desired target profile while rejecting disturbances. Neutral beam injectors and the total plasma current are used as actuators to shape the current profile. The effectiveness of the proposed controller in regulating the safety factor profile in NSTX-U is demonstrated via closed-loop predictive simulations carried out in PTRANSP. Supported by PPPL.

  18. Optimizing basin-scale coupled water quantity and water quality management with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Engelund Holm, Peter; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2015-04-01

    Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen concentrations. Inelastic water demands, fixed water allocation curtailment costs and fixed wastewater treatment costs (before and after use) are estimated for the water users (agriculture, industry and domestic). If the BOD concentration exceeds a given user pollution threshold, the user will need to pay for pre-treatment of the water before use. Similarly, treatment of the return flow can reduce the BOD load to the river. A traditional SDP approach is used to solve one-step-ahead sub-problems for all combinations of discrete reservoir storage, Markov Chain inflow classes and monthly time steps. Pollution concentration nodes are introduced for each user group, and untreated return flow from the users contributes to increased BOD concentrations in the river. The pollutant concentrations in each node depend on multiple decision variables (allocation and wastewater treatment), rendering the objective function non-linear. Therefore, the pollution concentration decisions are outsourced to a genetic algorithm, which calls a linear program to determine the remainder of the decision variables. This hybrid formulation keeps the optimization problem computationally feasible and represents a flexible and customizable method. The method has been applied to the Ziya River basin, an economic hotspot located on the North China Plain in Northern China. The basin is subject to severe water scarcity, and the rivers are heavily polluted with wastewater and nutrients from diffuse sources. The coupled hydro-economic optimization model can be used to assess the costs of meeting additional constraints such as minimum water quality, or to prioritize investments in wastewater treatment facilities based on economic criteria.
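
    The Streeter-Phelps step referred to above maps a BOD load into the downstream dissolved-oxygen sag via D(t) = (kd·L0/(ka − kd))·(e^(−kd·t) − e^(−ka·t)) + D0·e^(−ka·t). A small sketch with illustrative rate constants (not the study's calibrated values):

```python
# Streeter-Phelps oxygen-sag curve: BOD load -> minimum dissolved oxygen.
import numpy as np

def oxygen_deficit(t, L0, D0, kd=0.35, ka=0.7):
    """Deficit D(t) downstream of a BOD load L0 with initial deficit D0 (rates in 1/day)."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
           + D0 * np.exp(-ka * t)

t = np.linspace(0, 15, 300)                  # travel time, days
D = oxygen_deficit(t, L0=12.0, D0=1.0)       # mg/L
DO_sat = 9.0                                 # assumed saturation concentration, mg/L
print("minimum DO: %.2f mg/L after %.1f days" % (DO_sat - D.max(), t[D.argmax()]))
```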

  19. SU-G-TeP3-01: A New Approach for Calculating Variable Relative Biological Effectiveness in IMPT Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Randeniya, K; Grosshans, D

    2016-06-15

    Purpose: To investigate the impact of a new approach for calculating relative biological effectiveness (RBE) in intensity-modulated proton therapy (IMPT) optimization on RBE-weighted dose distributions. This approach includes the nonlinear RBE for the high linear energy transfer (LET) region, which was revealed by recent experiments at our institution. In addition, this approach utilizes RBE data as a function of LET without using dose-averaged LET in calculating RBE values. Methods: We used a two-piece function for calculating RBE from LET. Within the Bragg peak, RBE is linearly correlated with LET. Beyond the Bragg peak, we use a nonlinear (quadratic) RBE function of LET based on our experimental data. The IMPT optimization was devised to incorporate variable RBE by maximizing biological effect (based on the Linear Quadratic model) in tumor and minimizing biological effect in normal tissues. Three glioblastoma patients were retrospectively selected from our institution in this study. For each patient, three optimized IMPT plans were created based on three RBE resolutions, i.e., fixed RBE of 1.1 (RBE-1.1), variable RBE based on a linear RBE and LET relationship (RBE-L), and variable RBE based on a linear and quadratic relationship (RBE-LQ). The RBE-weighted dose distributions of each optimized plan were evaluated in terms of the different RBE values, i.e., RBE-1.1, RBE-L and RBE-LQ. Results: The RBE-weighted doses recalculated from RBE-1.1 based optimized plans demonstrated an increasing pattern from using RBE-1.1, RBE-L to RBE-LQ consistently for all three patients. The variable RBE (RBE-L and RBE-LQ) weighted dose distributions recalculated from RBE-L and RBE-LQ based optimization were more homogeneous within the targets and showed better sparing of the critical structures than the ones recalculated from RBE-1.1 based optimization. Conclusion: We implemented a new approach for RBE calculation and optimization and demonstrated potential benefits of improving tumor coverage and normal tissue sparing in IMPT planning.
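
    The two-piece RBE(LET) function can be sketched as below: linear up to a transition LET within the Bragg peak, quadratic beyond it, matched for continuity at the transition point. Every coefficient here is an illustrative placeholder, not the institution's measured values.

```python
# Piecewise linear/quadratic RBE(LET) with continuity at the transition LET.
import numpy as np

def rbe(let, a=1.0, b=0.04, let_star=10.0, c2=0.002):
    """Linear below let_star; quadratic in (LET - let_star) above it."""
    let = np.asarray(let, dtype=float)
    lin = a + b * let
    quad = a + b * let_star + b * (let - let_star) + c2 * (let - let_star) ** 2
    return np.where(let <= let_star, lin, quad)

print(rbe([2.0, 10.0, 15.0]).round(3))   # smooth handoff at the transition LET
```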

  20. Fleet Assignment Using Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.

    2004-01-01

    Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).

  1. Optimal inventories for overhaul of repairable redundant systems - A Markov decision model

    NASA Technical Reports Server (NTRS)

    Schaefer, M. K.

    1984-01-01

    A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.

  2. The solution of the optimization problem of small energy complexes using linear programming methods

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.; Director, L. B.

    2016-11-01

    Linear programming methods were used for solving the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, applied to energy complexes that include renewable energy installations (solar, wind), diesel generators and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was made. The results of optimization calculations for energy complexes, operated autonomously and as part of a distribution grid, are presented.

  3. Neighboring extremal optimal control design including model mismatch errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, T.J.; Hull, D.G.

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  4. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV, the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited state contamination from the two-nucleon correlation function at as early a time as the single nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited state contamination in the region where the single calm baryon displays no excited state contamination.

  5. Effect of simultaneous variation in temperature and ammonia concentration on percent fertilization and hatching in Crassostrea ariakensis.

    PubMed

    Hui, Wang; Jiahui, Liu; Hongshuai, Yang; Jin, Liu; Zhigang, Liu

    2014-04-01

    The combined effects of temperature and ammonia concentration on the percent fertilization and percent hatching in Crassostrea ariakensis were examined under laboratory conditions using the central composite design and response surface methodology. The results indicated: (1) The linear effects of temperature and ammonia concentration on the percent fertilization were significant (P<0.05), and the quadratic effects were highly significant (P<0.01). The interactive effect between temperature and ammonia concentration on the percent fertilization was not significant (P>0.05). (2) The linear effect of temperature on the percent hatching was highly significant (P<0.01), and that of ammonia concentration was nonsignificant (P>0.05). The quadratic effects of temperature and ammonia concentration on the percent hatching were highly significant (P<0.01). The interaction on the percent hatching was not significant (P>0.05). Temperature was more important than ammonia in influencing the fertilization and hatching in C. ariakensis. (3) Model equations of the percent fertilization and hatching with respect to temperature and ammonia concentration were established, with coefficients of determination R² = 99.4% and 99.76%, respectively. The lack-of-fit test showed these models to be highly adequate. The predictive coefficients of determination for the two model equations were as high as 94.6% and 98.03%, respectively, showing that they could be used for practical projection. (4) Via the statistical simultaneous optimization technique, the optimal factor level combination, i.e., 25°C / 0.038 mg mL⁻¹, was derived, at which the greatest percent fertilization (95.25%) and hatching (83.26%) were achieved, with the desirability being 97.81%. Our results may provide useful guidelines for the successful reproduction of C. ariakensis. Copyright © 2014 Elsevier Ltd. All rights reserved.
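
    The response-surface step can be reproduced in miniature: fit a full quadratic in the two coded factors by least squares and locate its constrained optimum. The synthetic data below are constructed to peak near the center of the coded design; they are not taken from the study.

```python
# Quadratic response surface in two coded factors, then stationary-point search.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
y = 95 - 20 * x1**2 - 12 * x2**2 + 2 * x1 * x2 + rng.normal(scale=0.5, size=x1.size)

# design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

surface = lambda v: -(beta @ np.array([1, v[0], v[1], v[0]**2, v[1]**2, v[0]*v[1]]))
opt = minimize(surface, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("coded optimum:", opt.x.round(3), "predicted response:", -opt.fun.round(2))
```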

  6. Optimization of classification and regression analysis of four monoclonal antibodies from Raman spectra using collaborative machine learning approach.

    PubMed

    Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric

    2018-07-01

    The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. An analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods, in which preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP). This allowed solutions from about 300 data scientists to be combined in collaborative work. Using machine learning, the prediction of the four mAbs samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% using the linear approach. The concentration and classification errors were 5.8% and 0.7%, respectively; only three of the 429 test-set spectra were misclassified. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined errors (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.

  7. An experimentally validated model for geometrically nonlinear plucking-based frequency up-conversion in energy harvesting

    NASA Astrophysics Data System (ADS)

    Kathpalia, B.; Tan, D.; Stern, I.; Erturk, A.

    2018-01-01

    It is well known that plucking-based frequency up-conversion can enhance the power output in piezoelectric energy harvesting by enabling cyclic free vibration at the fundamental bending mode of the harvester even for very low excitation frequencies. In this work, we present a geometrically nonlinear plucking-based framework for frequency up-conversion in piezoelectric energy harvesting under quasistatic excitations associated with low-frequency stimuli such as walking and similar rigid body motions. Axial shortening of the plectrum is essential to enable plucking excitation, which requires a nonlinear framework relating the plectrum parameters (e.g. overlap length between the plectrum and harvester) to the overall electrical power output. Von Kármán-type geometrically nonlinear deformation of the flexible plectrum cantilever is employed to relate the overlap length between the flexible (nonlinear) plectrum and the stiff (linear) harvester to the transverse quasistatic tip displacement of the plectrum, and thereby the tip load on the linear harvester in each plucking cycle. By combining the nonlinear plectrum mechanics and linear harvester dynamics with two-way electromechanical coupling, the electrical power output is obtained directly in terms of the overlap length. Experimental case studies and validations are presented for various overlap lengths and a set of electrical load resistance values. Further analysis results are reported regarding the combined effects of plectrum thickness and overlap length on the plucking force and harvested power output. The experimentally validated nonlinear plectrum-linear harvester framework proposed herein can be employed to design and optimize frequency up-conversion by properly choosing the plectrum parameters (geometry, material, overlap length, etc) as well as the harvester parameters.

  8. Flood Nowcasting With Linear Catchment Models, Radar and Kalman Filters

    NASA Astrophysics Data System (ADS)

    Pegram, Geoff; Sinclair, Scott

    A pilot study using real time rainfall data as input to a parsimonious linear distributed flood forecasting model is presented. The aim of the study is to deliver an operational system capable of producing flood forecasts, in real time, for the Mgeni and Mlazi catchments near the city of Durban in South Africa. The forecasts can be made at time steps which are of the order of a fraction of the catchment response time. To this end, the model is formulated in finite difference form in an equation similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the state-space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. Input to the model is a combined "Best Estimate" spatial rainfall field, derived from a combination of weather radar and satellite rainfield estimates with point rainfall given by a network of telemetering raingauges. Several strategies are employed to overcome the uncertainties associated with forecasting. Principal among these are the use of optimal (double Kalman) filtering techniques to update the model states and parameters in response to current streamflow observations, and the application of short-term forecasting techniques to provide future estimates of the rainfield as input to the model.
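
    The state-updating step can be illustrated with a basic (single, not double) Kalman filter on a two-linear-reservoir cascade written in state-space form; the paper's double filter additionally updates the model parameters. The reservoir constants, noise levels, and rainfall input below are invented, and the input matrix is a simple one-step approximation.

```python
# Kalman filtering of a two-linear-reservoir cascade from streamflow observations.
import numpy as np
from scipy.linalg import expm

dt, k1, k2 = 1.0, 5.0, 10.0                      # reservoir response constants (hours)
Ac = np.array([[-1 / k1, 0.0], [1 / k1, -1 / k2]])
A = expm(Ac * dt)                                # discretely coincident transition matrix
B = np.array([[1.0], [0.0]]) * dt                # rainfall enters the upper reservoir (approx.)
H = np.array([[0.0, 1 / k2]])                    # observed flow = lower storage / k2
Qn, Rn = np.eye(2) * 0.01, np.array([[0.05]])    # process / observation noise covariances

rng = np.random.default_rng(8)
x_true = np.zeros((2, 1)); x = np.zeros((2, 1)); P = np.eye(2)
for _ in range(48):
    u = np.array([[max(0.0, rng.normal(1.0, 1.0))]])   # rainfall input
    x_true = A @ x_true + B @ u + rng.normal(scale=0.1, size=(2, 1))
    z = H @ x_true + rng.normal(scale=0.2, size=(1, 1))
    # predict
    x = A @ x + B @ u
    P = A @ P @ A.T + Qn
    # update with the observed streamflow
    S = H @ P @ H.T + Rn
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + Kg @ (z - H @ x)
    P = (np.eye(2) - Kg @ H) @ P
print("filtered storages:", x.ravel().round(2), "true:", x_true.ravel().round(2))
```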

  9. Landscape resistance and habitat combine to provide an optimal model of genetic structure and connectivity at the range margin of a small mammal.

    PubMed

    Marrotte, R R; Gonzalez, A; Millien, V

    2014-08-01

    We evaluated the effect of habitat and landscape characteristics on the population genetic structure of the white-footed mouse. We develop a new approach that uses numerical optimization to define a model that combines site differences and landscape resistance to explain the genetic differentiation between mouse populations inhabiting forest patches in southern Québec. We used ecological distance computed from resistance surfaces with Circuitscape to infer the effect of the landscape matrix on gene flow. We calculated site differences using a site index of habitat characteristics. A model that combined site differences and resistance distances explained a high proportion of the variance in genetic differentiation and outperformed models that used geographical distance alone. Urban and agriculture-related land uses were, respectively, the most and the least resistant landscape features influencing gene flow. Our method detected the effect of rivers and highways as highly resistant linear barriers. The density of grass and shrubs on the ground best explained the variation in the site index of habitat characteristics. Our model indicates that movement of white-footed mouse in this region is constrained along routes of low resistance. Our approach can generate models that may improve predictions of future northward range expansion of this small mammal. © 2014 John Wiley & Sons Ltd.

  10. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
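
    The convexity-preserving idea is easiest to see in one dimension. With the minimax-concave penalty, used here as a representative parameterized non-convex regularizer (not necessarily the thesis's exact choice), the scalar cost 0.5*(y − x)^2 + φ(x; λ, γ) remains convex whenever γ > 1, and its minimizer is the firm-threshold function, which shrinks large values less than the soft threshold does.

```python
# Firm thresholding: the prox of the MCP under the scalar convexity condition.
import numpy as np

def firm_threshold(y, lam, gamma):
    """Minimizer of 0.5*(y-x)^2 + MCP(x; lam, gamma); a convex cost needs gamma > 1."""
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) <= lam, 0.0,
           np.where(np.abs(y) <= gamma * lam,
                    np.sign(y) * gamma * (np.abs(y) - lam) / (gamma - 1.0),
                    y))

y = np.array([-3.0, -1.2, 0.3, 1.5, 4.0])
print("gamma near 1 (hard-threshold-like):", firm_threshold(y, lam=1.0, gamma=1.01).round(2))
print("larger gamma (closer to soft):     ", firm_threshold(y, lam=1.0, gamma=5.0).round(2))
```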

  11. Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.

    PubMed

    Mazandarani, Mehran; Pariz, Naser

    2018-05-01

    This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative, and all the coefficients and initial conditions can be uncertain. The criterion for assessing optimality is a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using relative-distance-measure (RDM) fuzzy interval arithmetic and the calculus of variations, the optimal control law is presented as fuzzy state-variable feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified, which results in the sub-optimal control law. This paper also sheds light on the restrictions imposed by approaches that are based on fuzzy standard interval arithmetic (FSIA) and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The notion of granular eigenvalues is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in modeling the phenomenon, the FSIA-based approaches may obtain eigenvalue sets that differ from the inherent eigenvalue set of the fuzzy dynamical system; this is not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented. Moreover, a sub-optimal control for regulating a Boeing 747 in the longitudinal direction with uncertain initial conditions and parameters is obtained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  12. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
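    A minimal numerical sketch of the MV criterion for a two-point design in a weighted simple linear regression; the logistic-type weight, the grid search, and the helper name are illustrative choices of mine, not the paper's Mathematica applet.

    ```python
    import numpy as np
    from itertools import product

    def variances(xs, ps, w):
        """Variances of the two parameter estimates for a design {(x_i, p_i)}
        in the weighted model y = theta0 + theta1*x, weight function w."""
        M = sum(p * w(x) * np.outer([1.0, x], [1.0, x]) for x, p in zip(xs, ps))
        return np.diag(np.linalg.inv(M))

    w = lambda x: np.exp(x) / (1.0 + np.exp(x)) ** 2   # logistic-type weight (illustrative)

    a, b = -3.0, 3.0                                    # design space [a, b]
    grid = np.linspace(a, b, 61)
    best = (np.inf, None)
    for x1, x2 in product(grid, grid):
        if x2 <= x1:
            continue
        for p in np.linspace(0.05, 0.95, 19):
            try:
                v = variances([x1, x2], [p, 1.0 - p], w)
            except np.linalg.LinAlgError:
                continue
            crit = v.max()                              # MV criterion: max variance
            if crit < best[0]:
                best = (crit, (round(x1, 2), round(x2, 2), round(p, 2)))
    print(best)                                         # criterion value, (x1, x2, p)
    ```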

  13. Approaches to the Optimal Nonlinear Analysis of Microcalorimeter Pulses

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.; Pappas, C. G.; Alpert, B. K.; Doriese, W. B.; O'Neil, G. C.; Ullom, J. N.; Swetz, D. S.

    2018-03-01

    We consider how to analyze microcalorimeter pulses for quantities that are nonlinear in the data, while preserving the signal-to-noise advantages of linear optimal filtering. We successfully apply our chosen approach to compute the electrothermal feedback energy deficit (the "Joule energy") of a pulse, which has been proposed as a linear estimator of the deposited photon energy.

  14. Scheduled Relaxation Jacobi method: Improvements and applications

    NASA Astrophysics Data System (ADS)

    Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.

    2016-09-01

    Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever-growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2^15 points per dimension, allowing for acceleration factors larger than several hundred with respect to the Jacobi method for typical resolutions and, in some high-resolution cases, close to 1000. Most of the success in finding optimal SRJ schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
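    The SRJ idea can be sketched on the 1D Poisson problem: weighted Jacobi sweeps whose relaxation factors cycle through a prescribed schedule. The helper name and the two-level schedule below are illustrative placeholders; computing schedules that actually deliver the large acceleration factors is precisely what the paper's methodology does.

    ```python
    import numpy as np

    def scheduled_jacobi(f, h, schedule, cycles):
        """Solve -u'' = f on a uniform grid (u = 0 at both ends) with weighted
        Jacobi sweeps; `schedule` is a list of (omega, repetitions) pairs."""
        u = np.zeros_like(f)
        for _ in range(cycles):
            for omega, reps in schedule:
                for _ in range(reps):
                    jac = u.copy()
                    jac[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
                    u = (1.0 - omega) * u + omega * jac   # weighted Jacobi update
        return u

    n = 65
    h = 1.0 / (n - 1)
    f = np.ones(n)
    # Illustrative two-level cycle: over-relaxed sweeps (which alone would
    # diverge on oscillatory modes) balanced by under-relaxed sweeps.
    u = scheduled_jacobi(f, h, schedule=[(1.8, 2), (0.6, 6)], cycles=400)
    residual = np.max(np.abs(f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2))
    print(residual)   # remaining residual after the scheduled sweeps
    ```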

  15. CAMELOT: A machine learning approach for coarse-grained simulations of aggregation of block-copolymeric protein sequences

    PubMed Central

    Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.

    2015-01-01

    We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all-atom simulations to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all-atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach show that the adsorption and unfolding of the wild-type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608

  16. Hybrid Genetic Algorithms and Line Search Method for Industrial Production Planning with Non-Linear Fitness Function

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2008-10-01

    Many engineering, science, information technology and management optimization problems can be considered as non-linear programming real-world problems in which all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers, represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and the profit of the company.

  17. Time-saving design of experiment protocol for optimization of LC-MS data processing in metabolomic approaches.

    PubMed

    Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine

    2013-08-06

    We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches, including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default settings, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized by using CCD for further improvement. The approach combining optimal parameter settings and the threshold method improved the reliability index about 9.5 times for the standard mixture and 14.5 times for the human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
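    To make the screening step concrete, here is a minimal Plackett-Burman sketch: an 8-run design for up to 7 two-level factors built from the standard cyclic generator row, with main effects estimated by contrast. The synthetic response merely stands in for the reliability index; the helper name and toy model are mine, not the paper's XCMS parameters.

    ```python
    import numpy as np

    def plackett_burman_8():
        """8-run PB design for up to 7 two-level factors: 7 cyclic shifts of the
        standard generator row plus a closing row of all -1 (columns orthogonal)."""
        gen = np.array([1, 1, 1, -1, 1, -1, -1])
        return np.vstack([np.roll(gen, i) for i in range(7)] + [-np.ones(7, int)])

    X = plackett_burman_8()                      # runs x factors, entries +/-1
    rng = np.random.default_rng(0)
    # Synthetic screening response: factors 0 and 3 matter, the rest are noise.
    y = 10.0 + 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(8)

    effects = X.T @ y / 4.0                      # mean(high) - mean(low) per factor
    print(np.round(effects, 2))                  # large |effect| flags the active factors
    ```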

  18. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    NASA Astrophysics Data System (ADS)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  19. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, provides a good starting point for the subsequent optimization, and also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.
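    Column generation is the workhorse here. The sketch below shows the generic loop on a toy cutting-stock LP rather than the treatment-planning problem: solve a restricted master LP, read off the duals, price out the most beneficial new column, and stop once no column improves the objective. It assumes SciPy's HiGHS backend, which exposes duals via res.ineqlin.marginals; all data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    widths, demand, roll = np.array([3, 5, 7]), np.array([25, 20, 15]), 16

    # Start with one trivial cutting pattern per piece width.
    cols = [np.eye(3)[i] * (roll // w) for i, w in enumerate(widths)]

    for _ in range(20):
        A = np.column_stack(cols)
        # Restricted master LP: minimize rolls used s.t. A x >= demand.
        res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, bounds=(0, None))
        duals = -res.ineqlin.marginals            # dual prices of the demand rows
        # Pricing subproblem: brute-force knapsack for the most valuable pattern.
        best_val, best_pat = 1.0, None
        for n1 in range(roll // 3 + 1):
            for n2 in range(roll // 5 + 1):
                for n3 in range(roll // 7 + 1):
                    pat = np.array([n1, n2, n3])
                    if pat @ widths <= roll and pat @ duals > best_val + 1e-9:
                        best_val, best_pat = pat @ duals, pat
        if best_pat is None:                      # no improving column: LP optimal
            break
        cols.append(best_pat.astype(float))       # add the priced-out column

    print(len(cols), res.fun)                     # patterns generated, rolls used
    ```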

  1. Controlling the high frequency response of H2 by ultra-short tailored laser pulses: A time-dependent configuration interaction study

    NASA Astrophysics Data System (ADS)

    Schönborn, Jan Boyke; Saalfrank, Peter; Klamroth, Tillmann

    2016-01-01

    We combine the stochastic pulse optimization (SPO) scheme with the time-dependent configuration interaction singles method in order to control the high frequency response of a simple molecular model system to a tailored femtosecond laser pulse. For this purpose, we use H2 treated in the fixed nuclei approximation. The SPO scheme, like similar genetic algorithms, is especially suited to control highly non-linear processes, which we consider here in the context of high harmonic generation. We demonstrate that SPO can be used to realize a "non-harmonic" response of H2 to a laser pulse. Specifically, we show how adding low-intensity side frequencies to the dominant carrier frequency of the laser pulse and stochastically optimizing their contribution can create a high-frequency spectral signal of significant intensity that is not harmonic to the carrier frequency. At the same time, it is possible to suppress the harmonic signals in the same spectral region, although the carrier frequency is kept dominant during the optimization.

  2. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    PubMed

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the perspective of image processing. Using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to account for the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimality criterion of minimum Shannon entropy is used to find the image whose identified AE source locations and occurrence time most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
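    The minimum-entropy criterion is easy to demonstrate in isolation: a well-focused image concentrates its energy and therefore has low Shannon entropy. The synthetic focal spots below stand in for the migration images at trial occurrence times; only the entropy criterion itself is taken from the abstract.

    ```python
    import numpy as np

    def shannon_entropy(img, eps=1e-12):
        """Entropy of an image treated as a probability mass; sharper
        (better focused) images score lower."""
        p = np.abs(img).ravel()
        p = p / (p.sum() + eps)
        return -np.sum(p * np.log(p + eps))

    # Synthetic stand-in for back-propagated images: the correct trial time
    # yields a tight focal spot, wrong times smear it out.
    xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    def focal_image(blur):
        return np.exp(-(xx**2 + yy**2) / (2.0 * blur**2))

    for blur, label in [(0.05, "correct t0"), (0.2, "early t0"), (0.4, "late t0")]:
        print(label, round(shannon_entropy(focal_image(blur)), 3))
    # The sharpest image ("correct t0") attains the minimum entropy.
    ```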

  3. Modeling and Optimization of Optical Half Adder in Two Dimensional Photonic Crystals

    NASA Astrophysics Data System (ADS)

    Sonth, Mahesh V.; Soma, Savita; Gowre, Sanjaykumar C.; Biradar, Nagashettappa

    2018-05-01

    The output of photonic integrated devices is enhanced using crystal waveguides and cavities, but the optimization of these devices remains a topic of research. In this paper, optimization of an optical half adder in two-dimensional (2-D) linear photonic crystals using four symmetric T-shaped waveguides with 180° phase-shifted inputs is proposed. The input section of a T-waveguide acts as a beam splitter, and the output section acts as a power combiner; constructive and destructive interference then determine the output optical power. Output port Cout receives in-phase power through the 180° phase-shifter cavity designed near the junction. The optical half adder is modeled in a 2-D photonic crystal using the finite difference time domain (FDTD) method. It consists of a cubic lattice with an array of 39 × 43 silicon rods of radius r = 0.12 μm and lattice constant a = 0.6 μm. Extinction ratios r_e of 11.67 dB and 12.51 dB are achieved at the output ports using the RSoft FullWAVE-6.1 software package.

  4. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method that makes the measurement signals sensitive to wavelength and effectively decreases the ill-conditioning of the coefficient matrix of the linear system, enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the experimentally measured ASD over Harbin, China is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
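    SciPy ships LSQR directly, so the inversion step can be sketched in a few lines. The exponential kernel below is a synthetic stand-in for the ADA forward model, and the damping value is illustrative; `damp` acts as a Tikhonov-style regularizer that stabilizes the retrieval under noise.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)

    # Ill-conditioned kernel mapping a size distribution (30 bins) to
    # extinction signals at 8 wavelengths (synthetic stand-in for ADA).
    n_wav, n_bins = 8, 30
    A = np.exp(-np.linspace(0.1, 3, n_wav)[:, None]
               * np.linspace(0.1, 2, n_bins)[None, :])
    x_true = np.exp(-0.5 * ((np.arange(n_bins) - 12) / 4.0) ** 2)  # single mode
    b = A @ x_true + 1e-3 * rng.standard_normal(n_wav)             # noisy signal

    x_hat = lsqr(A, b, damp=1e-3)[0]          # damped LSQR retrieval
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```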

  6. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    PubMed

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.

  8. Optimal exposure techniques for iodinated contrast enhanced breast CT

    NASA Astrophysics Data System (ADS)

    Glick, Stephen J.; Makeev, Andrey

    2016-03-01

    Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need for improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.

  9. Polarized atomic orbitals for self-consistent field electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Head-Gordon, Martin

    1997-12-01

    We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.

  10. Optical Correlation of Images With Signal-Dependent Noise Using Constrained-Modulation Filter Devices

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.
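    The key point, that a logarithmic transform converts multiplicative (speckle-like) noise into additive noise so that linear correlation filtering applies again, can be sketched numerically. The scene, the noise model, and the plain FFT matched filter below are illustrative; they are not the paper's SLM-constrained filter designs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Target in a scene with multiplicative speckle: I = S * n, n >= 0.
    scene = np.zeros((64, 64)); scene[20:30, 20:30] = 1.0
    speckle = rng.exponential(1.0, scene.shape)
    noisy = (scene + 0.1) * speckle            # small offset keeps the log finite

    # Nonlinear (log) preprocessing turns the multiplicative noise additive,
    # after which a linear matched filter is applicable again.
    pre = np.log(noisy)
    ref = np.log(scene + 0.1)
    corr = np.fft.ifft2(np.fft.fft2(pre) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    print("correlation peak at", peak)         # expect (0, 0) for an aligned reference
    ```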

  11. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard ℓ0-norm-based optimization problem. Existing methods usually utilize a relaxation of the original ℓ0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the ℓ0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
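    A stripped-down stand-in for the search described above: encode each library entry as one bit, score a selection by (reconstruction error, sparsity), and evolve a nondominated archive by random bit flips. The data and loop are illustrative, not the paper's algorithm or its recovery guarantee.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Library of m spectra (columns); the observed pixel mixes 3 of them.
    m, bands = 40, 100
    D = rng.random((bands, m))
    truth = rng.choice(m, size=3, replace=False)
    y = D[:, truth] @ np.array([0.5, 0.3, 0.2])

    def objectives(z):
        """(reconstruction error, sparsity) for a binary selection z."""
        S = D[:, z.astype(bool)]
        if S.shape[1] == 0:
            return (float(np.linalg.norm(y)), 0)
        a, *_ = np.linalg.lstsq(S, y, rcond=None)   # abundances for selected spectra
        return (float(np.linalg.norm(S @ a - y)), int(z.sum()))

    def dominates(f, g):
        return f[0] <= g[0] and f[1] <= g[1] and f != g

    pop = [rng.integers(0, 2, m) for _ in range(20)]
    archive = []                                     # nondominated selections
    for _ in range(2000):
        z = pop[rng.integers(len(pop))].copy()
        z[rng.integers(m)] ^= 1                      # flip one library entry
        f = objectives(z)
        if not any(dominates(objectives(a), f) for a in archive):
            archive = [a for a in archive if not dominates(f, objectives(a))]
            archive.append(z)
            pop[rng.integers(len(pop))] = z          # feed back into the population
    print(sorted(objectives(a) for a in archive)[:5])
    ```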

  12. Antiproliferative activity of Curcuma phaeocaulis Valeton extract using ultrasonic assistance and response surface methodology.

    PubMed

    Wang, Xiaoqin; Jiang, Ying; Hu, Daode

    2017-01-02

    The objective of the study was to optimize the ultrasonic-assisted extraction of curdione, furanodienone, curcumol, and germacrone from Curcuma phaeocaulis Valeton (Val.) and investigate the antiproliferative activity of the extract. Under suitable high-performance liquid chromatography conditions, the calibration curves for the four tested compounds showed high levels of linearity, and the recoveries of the four compounds were between 97.9 and 104.3%. Response surface methodology (RSM) combining central composite design and a desirability function (DF) was used to define the optimal extraction parameters. The results of RSM and DF revealed that the optimum conditions were a liquid-solid ratio of 8 mL/g, 70% ethanol concentration, and 20 min of ultrasonic time. It was found that the surface structures of the sonicated herbal materials were fluffy and irregular. The C. phaeocaulis Val. extract significantly inhibited the proliferation of RKO and HT-29 cells in vitro. The results reveal that RSM can be effectively used for optimizing the ultrasonic-assisted extraction of bioactive components from C. phaeocaulis Val. for antiproliferative activity.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
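    For readers who want to experiment without installing any of the packages above, SciPy bundles the open-source HiGHS solver behind scipy.optimize.linprog; the toy LP below is merely illustrative of the kind of test problem such surveys use.

    ```python
    from scipy.optimize import linprog

    # Maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
    res = linprog(c=[-3, -2],                 # linprog minimizes, so negate
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                    # optimum at x=4, y=0, objective 12
    ```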

  14. Effect of alkaline addition on anaerobic sludge digestion with combined pretreatment of alkaline and high pressure homogenization.

    PubMed

    Fang, Wei; Zhang, Panyue; Zhang, Guangming; Jin, Shuguang; Li, Dongyi; Zhang, Meixia; Xu, Xiangzhe

    2014-09-01

    To improve anaerobic digestion efficiency, a combined alkaline and high pressure homogenization pretreatment was applied to sewage sludge. The effect of alkaline dosage on anaerobic sludge digestion was investigated in detail. The SCOD of the sludge supernatant significantly increased with increasing alkaline dosage after the combined pretreatment because of sludge disintegration. Organics were significantly degraded after the anaerobic digestion, and the maximal SCOD, TCOD and VS removals were 73.5%, 61.3% and 43.5%, respectively. Cumulative biogas production, methane content in the biogas and biogas production rate clearly increased with increasing alkaline dosage. Considering both the biogas production and the alkaline dosage, the optimal alkaline dosage was selected as 0.04 mol/L. Relationships between biogas production and sludge disintegration showed that the cumulative biogas was mainly enhanced by the sludge disintegration. The methane yield increased linearly with the degree of disintegration (DDCOD) as methane yield (mL/g VS) = 4.66 DDCOD − 9.69. Copyright © 2014 Elsevier Ltd. All rights reserved.
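    The fitted relation is straightforward to apply; a one-function sketch, assuming DDCOD is expressed in percent (the abstract does not state the units):

    ```python
    def methane_yield(ddcod_percent):
        """Linear fit from the abstract: yield (mL/g VS) = 4.66 * DDCOD - 9.69,
        assuming DDCOD (degree of sludge disintegration) is given in percent."""
        return 4.66 * ddcod_percent - 9.69

    print(methane_yield(20.0))   # e.g. 20% disintegration -> ~83.5 mL/g VS
    ```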

  15. Cost-effective imprinting combining macromolecular crowding and a dummy template for the fast purification of punicalagin from pomegranate husk extract.

    PubMed

    Sun, Guang-Ying; Wang, Chao; Luo, Yu-Qin; Zhao, Yong-Xin; Yang, Jian; Liu, Zhao-Sheng; Aisa, Haji Akber

    2016-05-01

    The combination of molecular crowding and virtual imprinting was employed to develop a cost-effective method for preparing molecularly imprinted polymers. Using the linear polymer polystyrene as a macromolecular crowding agent, an imprinted polymer that recognizes punicalagin was successfully synthesized with punicalin as the dummy template. The resulting punicalin-imprinted polymer presented remarkable selectivity for punicalagin, with an imprinting factor of 3.17, even at extremely low consumption of the template (template/monomer ratio of 1:782). In contrast, the imprinted polymer synthesized without the crowding agent did not show any imprinting effect at such a low template amount. The imprinted polymers made by the combination of molecular crowding and virtual imprinting can be utilized for the fast separation of punicalagin from pomegranate husk extract after optimizing the solid-phase extraction protocol, with a recovery of 85.3 ± 1.2%. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Mum, why do you keep on growing? Impacts of environmental variability on optimal growth and reproduction allocation strategies of annual plants.

    PubMed

    De Lara, Michel

    2006-05-01

    In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, covering both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of the different mathematical assumptions.

  17. Sensor placement algorithm development to maximize the efficiency of acid gas removal unit for integrated gasification combined cycle (IGCC) power plant with CO2 capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, P.; Bhattacharyya, D.; Turton, R.

    2012-01-01

    Future integrated gasification combined cycle (IGCC) power plants with CO{sub 2} capture will face stricter operational and environmental constraints. Accurate values of relevant states/outputs/disturbances are needed to satisfy these constraints and to maximize the operational efficiency. Unfortunately, a number of these process variables cannot be measured, while a number of them can be measured but have low precision, reliability, or signal-to-noise ratio. In this work, a sensor placement (SP) algorithm is developed for optimal selection of sensor location, number, and type that can maximize the plant efficiency and result in a desired precision of the relevant measured/unmeasured states. The SP algorithm is developed for a selective, dual-stage Selexol-based acid gas removal (AGR) unit for an IGCC plant with pre-combustion CO{sub 2} capture. A comprehensive nonlinear dynamic model of the AGR unit is developed in Aspen Plus Dynamics® (APD) and used to generate a linear state-space model that is used in the SP algorithm. The SP algorithm is developed with the assumption that an optimal Kalman filter will be implemented in the plant for state and disturbance estimation, assuming steady-state Kalman filtering and steady-state operation of the plant. The control system is considered to operate based on the estimated states, so the algorithm captures the effects of sensor placement on the overall plant efficiency. The optimization problem is solved by a Genetic Algorithm (GA) considering both linear and nonlinear equality and inequality constraints. Due to the very large number of candidate sensor sets and the long time needed to solve the constrained optimization problem, which includes more than 1000 states, the solution of this problem is computationally expensive. To reduce the computation time, parallel computing is performed using the Distributed Computing Server (DCS®) and the Parallel Computing® toolbox from Mathworks®. In this presentation, we will share our experience in setting up parallel computing with GA in the MATLAB® environment and present the overall approach for achieving higher computational efficiency in this framework.
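    The core evaluation inside such an SP search can be sketched with a steady-state Kalman filter: for each candidate sensor subset, solve the filtering Riccati equation and score the trace of the error covariance. The greedy loop below is a simple stand-in for the GA, and all matrices are random illustrative data.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(3)

    n, m = 6, 4                      # states, candidate sensors
    A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
    C_all = rng.standard_normal((m, n))
    Q = 0.1 * np.eye(n)              # process noise covariance
    r = 0.5                          # per-sensor measurement noise variance

    def estimation_cost(sensors):
        """Trace of the steady-state Kalman error covariance for a sensor
        subset, via the filtering Riccati equation (duality: DARE with A^T, C^T)."""
        C = C_all[list(sensors)]
        P = solve_discrete_are(A.T, C.T, Q, r * np.eye(len(sensors)))
        return np.trace(P)

    chosen = []
    for _ in range(2):               # budget of two sensors (greedy selection)
        best = min((s for s in range(m) if s not in chosen),
                   key=lambda s: estimation_cost(chosen + [s]))
        chosen.append(best)
    print(chosen, estimation_cost(chosen))
    ```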

  19. Low-complexity stochastic modeling of wall-bounded shear flows

    NASA Astrophysics Data System (ADS)

    Zare, Armin

    Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
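    The forward problem underlying the covariance-completion framework in Part I is linear-algebraic: for stochastically driven stable linear dynamics, the steady-state state covariance solves a Lyapunov equation. A minimal sketch with illustrative dynamics follows; the completion problem itself (inferring forcing statistics from partially known covariances) is the dissertation's contribution and is not reproduced here.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Steady-state covariance of dx = A x dt + B dw for stable A:
    # solves the Lyapunov equation  A P + P A^T + B B^T = 0.
    A = np.array([[0.0, 1.0], [-2.0, -0.6]])     # illustrative stable dynamics
    B = np.array([[0.0], [1.0]])                  # white-in-time forcing channel
    P = solve_continuous_lyapunov(A, -B @ B.T)
    print(P)
    # Covariance completion runs this in reverse: given some entries of P from
    # data, find a forcing (possibly colored in time) consistent with A.
    ```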

  20. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
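    The paper's network is a specific continuous-time dynamical system with a finite-time convergence proof; as a loose discrete-time analogue of gradient-based LP solving, here is plain subgradient descent on an exact penalty of a small LP. All data, the penalty weight, and the step size are illustrative.

    ```python
    import numpy as np

    # min c^T x  s.t.  x1 + x2 >= 3,  x >= 0   (optimum at x = (3, 0))
    c = np.array([1.0, 2.0])
    A = np.array([[-1.0, -1.0]])   # constraint written as A x <= b
    b = np.array([-3.0])
    sigma, eta = 50.0, 1e-3        # penalty weight, step size

    x = np.zeros(2)
    for _ in range(20000):
        viol = ((A @ x - b) > 0).astype(float)            # violated inequalities
        g = c + sigma * (A.T @ viol) - sigma * (x < 0)    # penalty subgradient
        x -= eta * g                                       # descent step
    print(np.round(x, 2))                                  # approximately [3., 0.]
    ```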
