Sample records for optimal weighting function

  1. Weighted finite impulse response filter for chromatic dispersion equalization in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui

    2018-01-01

Time-domain CD equalization using a finite impulse response (FIR) filter is now a common approach in coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single-channel signal into account and propose weighted FIR filters to improve the performance of CD equalization. The key issue in weighted FIR filters is the selection and optimization of the weighting function. To compare the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The SRRC-FIR and GS-FIR filters are optimized in terms of the bit error rate (BER) of QPSK and 16QAM coherently detected signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, symbol rate, and length of the transmission fiber. With the optimized weighted FIR filters, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR and GS-FIR filters, the principle of weighted FIR can also be extended to other symmetric functions, such as the super-Gaussian function and the hyperbolic secant function.
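As a rough illustration of the scheme described above, the sketch below builds constant-modulus CD-compensation taps from the widely used truncated-impulse-response formula and then applies a Gaussian weighting window (the GS-FIR idea); the fiber parameters and the window width sigma are illustrative choices, not the paper's optimized values.

```python
import cmath
import math

C = 299_792_458.0  # speed of light [m/s]

def cd_fir_taps(D, lam, z, T):
    """Constant-modulus FIR taps from the truncated CD impulse response.

    D: dispersion [s/m^2], lam: wavelength [m], z: fiber length [m],
    T: sampling period [s].
    """
    half = int(abs(D) * lam**2 * z / (2 * C * T**2))  # truncation point
    scale = cmath.sqrt(1j * C * T**2 / (D * lam**2 * z))
    return [scale * cmath.exp(-1j * math.pi * C * T**2 * k**2 / (D * lam**2 * z))
            for k in range(-half, half + 1)]

def gaussian_weighted(taps, sigma):
    """Apply a Gaussian weighting window (the GS-FIR idea): taps near the
    filter edges, which fall outside the signal bandwidth, are attenuated."""
    half = len(taps) // 2
    return [a * math.exp(-((k - half) ** 2) / (2 * sigma**2))
            for k, a in enumerate(taps)]

# 17 ps/(nm km) fiber, 100 km, 1550 nm, 2 samples/symbol at 28 GBd (illustrative)
taps = cd_fir_taps(D=17e-6, lam=1550e-9, z=100e3, T=1 / 56e9)
weighted = gaussian_weighted(taps, sigma=len(taps) / 6)
```

Before weighting, every tap has the same modulus; the Gaussian window tapers the outer taps, which is where the improvement over the plain truncated filter comes from.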

  2. Near-Optimal Operation of Dual-Fuel Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Chou, H. C.; Bowles, J. V.

    1996-01-01

A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
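The weighted mass-volume cost functional can be sketched in a few lines; the densities and the volume weighting factor below are illustrative placeholders, not values from the paper.

```python
def fuel_cost(m_fuel, rho, k_volume):
    """Weighted sum of fuel mass and fuel volume, J = m + k * V.

    k_volume trades a unit of tank volume against a unit of propellant mass;
    in the paper this factor is chosen to minimize vehicle empty weight.
    """
    volume = m_fuel / rho
    return m_fuel + k_volume * volume

# Illustrative densities [kg/m^3]: liquid hydrogen ~71, a dense hydrocarbon ~800.
# For the same mass, low-density LH2 is penalized more by the volume term.
cost_lh2 = fuel_cost(1000.0, rho=71.0, k_volume=50.0)
cost_rp1 = fuel_cost(1000.0, rho=800.0, k_volume=50.0)
```

This is why a pure mass objective would understate the cost of hydrogen: the volume term captures the larger, heavier tankage it requires.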

  3. Mitigation of epidemics in contact networks through optimal contact adaptation *

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results show the infection level at which the mitigation strategies are effectively applied to the contact weights. PMID:23906209
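A toy, network-free version of the bang-bang heuristic: a well-mixed SIR model in which the contact weight is cut to a preserved minimum whenever prevalence exceeds a threshold. All rates and thresholds below are illustrative, not values from the paper.

```python
def sir_with_control(beta, gamma, i0, w_min, threshold, dt=0.01, t_end=100.0):
    """Euler-integrated SIR where the contact weight w(t) is bang-bang:
    contacts drop to w_min while infection prevalence exceeds a threshold,
    and a global minimum contact level (w_min > 0) is preserved."""
    s, i, r = 1.0 - i0, i0, 0.0
    t = 0.0
    while t < t_end:
        w = w_min if i > threshold else 1.0
        new_inf = beta * w * s * i * dt
        rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        t += dt
    return r  # fraction ever infected so far

uncontrolled = sir_with_control(0.5, 0.2, 0.01, w_min=1.0, threshold=1.0)
controlled = sir_with_control(0.5, 0.2, 0.01, w_min=0.4, threshold=0.02)
```

With contacts reduced while prevalence is high, the epidemic final size drops well below the uncontrolled case, at the price of the contact reduction the objective function also penalizes.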

  4. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results show the infection level at which the mitigation strategies are effectively applied to the contact weights.

  5. Adopting epidemic model to optimize medication and surgical intervention of excess weight

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2017-01-01

We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.

  6. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, J; Chan, T; Lee, T

    2014-06-15

Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7 cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.
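The flavor of the prediction step can be sketched as follows; the logistic coefficients and the fixed femoral-head/conformity weights are hypothetical placeholders, not fitted values from the study.

```python
import math

def predict_objective_weights(ovr, coef=4.0, intercept=-2.0, fixed=None):
    """Logistic model for the rectum share of the bladder+rectum weight budget.

    ovr: overlap volume ratio of bladder and rectum for a given PTV expansion.
    The femoral-head and conformity weights are held fixed post hoc, and the
    bladder/rectum weights are normalized into the remaining budget.
    """
    if fixed is None:  # hypothetical fixed weights for the other structures
        fixed = {"l_fem_head": 0.05, "r_fem_head": 0.05, "conformity": 0.10}
    rectum_share = 1.0 / (1.0 + math.exp(-(intercept + coef * ovr)))
    budget = 1.0 - sum(fixed.values())
    weights = dict(fixed)
    weights["rectum"] = budget * rectum_share
    weights["bladder"] = budget * (1.0 - rectum_share)
    return weights

w = predict_objective_weights(ovr=0.7)  # OVR at the 0.7 cm expansion
```

Predicting only the rectum share suffices because, with the other weights fixed, the bladder weight is the remainder of the budget.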

  7. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. 
The proposed optimization algorithm yielded high-quality plans for all of the cases without human planner intervention. A comparison with the optimized solutions obtained using a similar optimization model but with human planner intervention revealed that the proposed algorithm produced optimized plans superior to those developed manually. The proposed algorithm can generate admissible solutions within reasonable computational times and can be used to develop fully automated IMRT treatment planning methods, thus reducing human planners' workloads during iterative processes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
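A stripped-down sketch of the outer PSO loop, with a cheap quadratic score standing in for the inner plan-optimization solver; the target weights and the PSO constants are illustrative, not the paper's settings.

```python
import random

def evaluate(weights):
    """Stand-in for solving the plan at a given weight vector and scoring the
    resulting DVH key points; the true optimum here is (0.3, 0.5, 0.2)."""
    target = (0.3, 0.5, 0.2)
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def pso(n_particles=20, n_dims=3, iters=60, inertia=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(n_dims)] for _ in range(n_particles)]
    vs = [[0.0] * n_dims for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [evaluate(x) for x in xs]
    g = list(pbest[min(range(n_particles), key=lambda i: pbest_f[i])])
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_dims):
                # velocity update: inertia + pull toward personal and global bests
                vs[i][d] = (inertia * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = evaluate(xs[i])  # step (ii): solve and score the plan
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
        g = list(pbest[min(range(n_particles), key=lambda i: pbest_f[i])])
    return g, evaluate(g)

best, score = pso()
```

In the real method each `evaluate` call is an entire plan optimization, so the swarm size and iteration count dominate the computational cost.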

  9. Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Lepsch, Roger A., Jr.

    2000-01-01

    Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414 aspect ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of4365 lb (2.2 %) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level, approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single stage vehicles.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim

Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. 
Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.
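The weighted K-nearest-neighbor variant can be sketched as follows; the training pairs of geometry features and weights below are hypothetical, not data from the study.

```python
def knn_predict(features, train_x, train_y, k=3, eps=1e-9):
    """Distance-weighted K-nearest-neighbor prediction of objective weights.

    Each training case pairs geometry features (OV, OVSR, OVSB) with the
    optimal weight vector recovered by inverse optimization; the k nearest
    neighbors vote with inverse-distance weights.
    """
    dists = sorted(
        ((sum((a - b) ** 2 for a, b in zip(features, x)) ** 0.5, y)
         for x, y in zip(train_x, train_y)),
        key=lambda p: p[0])[:k]
    wsum = sum(1.0 / (d + eps) for d, _ in dists)
    n = len(train_y[0])
    return [sum(y[j] / (d + eps) for d, y in dists) / wsum for j in range(n)]

# Hypothetical training set: geometry features -> (bladder, rectum) weights.
train_x = [(0.2, 0.1, 0.3), (0.5, 0.2, 0.1), (0.8, 0.4, 0.2)]
train_y = [(0.6, 0.4), (0.5, 0.5), (0.3, 0.7)]
pred = knn_predict((0.21, 0.10, 0.29), train_x, train_y, k=2)
```

Because each neighbor contributes a full weight vector, KNN predicts multiple weights simultaneously, which is what lets it capture trade-offs between OARs.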

  11. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain from this energy gap. We construct an objective function considering food consumption, eating habits, and survival rate to measure utility. By applying mathematical tools from optimal control methods and the qualitative theory of differential equations, we obtain some results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories to the steady-state weight in each case. Depending on the value of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.

  12. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are analytically determined as a function of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain the effect of a particular cost function, composed of several objective functions with different weighting factors for various mission requirements, without any additional effort.

  13. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
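The warping rule can be sketched as a normalized, distance-weighted exponential sum of landmark displacement vectors; the decay length tau and the landmark-specific factors below are illustrative stand-ins for the values an evolution strategy would optimize.

```python
import math

def warp_displacement(x, landmarks, displacements, factors, tau=1.0):
    """Displacement at point x as a weighted sum over landmarks.

    Each landmark p_i contributes its displacement vector d_i with weight
    w_i * exp(-|x - p_i| / tau), where w_i is the landmark-specific
    weighting factor; the weights are normalized to sum to one.
    """
    weights = []
    for p, w in zip(landmarks, factors):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, p)))
        weights.append(w * math.exp(-dist / tau))
    total = sum(weights)
    dim = len(displacements[0])
    return [sum(wt * d[j] for wt, d in zip(weights, displacements)) / total
            for j in range(dim)]

# Two landmarks in 3D with known displacement vectors (illustrative).
landmarks = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
displacements = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
d = warp_displacement((0.5, 0.0, 0.0), landmarks, displacements,
                      factors=[1.0, 1.0])
```

Near a landmark, that landmark's displacement dominates; the per-landmark factors let the optimizer enlarge or shrink each landmark's zone of influence.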

  14. Optimal linear reconstruction of dark matter from halo catalogues

    DOE PAGES

    Cai, Yan -Chuan; Bernstein, Gary; Sheth, Ravi K.

    2011-04-01

The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distribution. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of the halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15× noisier than the optimal. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.
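The mixture idea can be written down directly; the mass bins, bias values, and mixing fraction below are illustrative, not the paper's derived optimum.

```python
def halo_weight(mass, bias, f_mass):
    """Mixture of mass-weighting and bias-weighting for a halo of given mass.

    f_mass = 1 recovers pure mass-weighting and f_mass = 0 the bias-weighting
    of the standard biased-Poisson model; the optimal mixture depends on the
    halo mass range used.
    """
    return f_mass * mass + (1.0 - f_mass) * bias

def weighted_overdensity(counts, masses, biases, f_mass):
    """Weighted halo field: per-cell weighted counts, normalized to mean 1,
    returned as an overdensity delta = field/mean - 1."""
    field = [sum(halo_weight(m, b, f_mass) * n
                 for m, b, n in zip(masses, biases, cell))
             for cell in counts]
    mean = sum(field) / len(field)
    return [v / mean - 1.0 for v in field]

masses = [1.0, 10.0]               # two mass bins (arbitrary units)
biases = [0.8, 2.0]                # hypothetical linear bias per bin
counts = [[5, 1], [3, 2], [4, 0]]  # halo counts per cell and mass bin
delta = weighted_overdensity(counts, masses, biases, f_mass=0.5)
```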

  15. Optimal trajectories for hypersonic launch vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas

    1994-01-01

In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.

  16. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
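The two best-fitting forms reported here are standard and easy to write down; the parameter values used in the checks below are illustrative, not fitted values from the experiment.

```python
import math

def prelec2(p, alpha, beta):
    """Prelec's two-parameter weighting function:
    w(p) = exp(-beta * (-ln p)^alpha), for p in (0, 1]."""
    return math.exp(-beta * (-math.log(p)) ** alpha)

def linear_in_log_odds(p, gamma, delta):
    """Linear-in-log-odds form (Gonzalez & Wu):
    w(p) = delta * p^gamma / (delta * p^gamma + (1 - p)^gamma)."""
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)
```

Both reduce to the identity w(p) = p when their parameters equal 1, which is precisely the qualitative similarity that makes the forms hard to discriminate without adaptively chosen designs.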

  17. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  18. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization with risk tolerance when the utility function is quadratic. It is assumed that asset returns follow a certain distribution, and the risk of the portfolio is measured using Value-at-Risk (VaR). The portfolio optimization is then carried out on the Mean-VaR model using a matrix-algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the model is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and the risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. From the return data of these five stocks, the weight-composition vector and the efficient-surface plot of the portfolio are obtained; both can be used as a guide for investors in making investment decisions.
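For the quadratic-utility case the classical matrix-algebra solution with risk tolerance tau (maximize tau * mu'w - 0.5 * w'Sw subject to 1'w = 1, the mean-variance counterpart of the Mean-VaR weighting-vector equation) has a closed form; the two-asset sketch below uses made-up return and covariance numbers.

```python
def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mv_weights(mu, cov, tau):
    """w = tau * S^-1 mu + ((1 - tau * 1'S^-1 mu) / (1'S^-1 1)) * S^-1 1,
    the Lagrange-multiplier solution of the budget-constrained problem."""
    s = inv2(cov)
    s_mu = [s[0][0] * mu[0] + s[0][1] * mu[1],
            s[1][0] * mu[0] + s[1][1] * mu[1]]
    s_one = [s[0][0] + s[0][1], s[1][0] + s[1][1]]
    lam = (1.0 - tau * sum(s_mu)) / sum(s_one)
    return [tau * sm + lam * so for sm, so in zip(s_mu, s_one)]

w = mv_weights(mu=[0.10, 0.05], cov=[[0.04, 0.01], [0.01, 0.02]], tau=0.5)
```

With tau = 0 the formula collapses to the minimum-variance portfolio; increasing tau tilts the weights toward the higher-return asset, tracing out the efficient surface mentioned in the abstract.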

  19. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  20. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final state of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.

  1. CHOW PARAMETERS IN THRESHOLD LOGIC,

    DTIC Science & Technology

    respect to threshold functions, they provide the optimal test-synthesis method for completely specified 7-argument (or less) functions, reflect the...signs and relative magnitudes of realizing weights and threshold , and can be used themselves as approximating weights. Results are reproved in a

  2. Eigenvectors of optimal color spectra.

    PubMed

    Flinkman, Mika; Laamanen, Hannu; Tuomela, Jukka; Vahimaa, Pasi; Hauta-Kasari, Markku

    2013-09-01

Principal component analysis (PCA) and weighted PCA were applied to spectra of optimal colors belonging to the outer surface of the object-color solid, the so-called MacAdam limits. The correlation matrix formed from these data is a circulant matrix whose biggest eigenvalue is simple and whose corresponding eigenvector is constant. All other eigenvalues are double, and the eigenvectors can be expressed with trigonometric functions. The trigonometric functions found can be used as a general basis to reconstruct all possible smooth reflectance spectra. When the spectral data are weighted with an appropriate weight function, the essential part of the color information is compressed into the first three components, and the shapes of the first three eigenvectors correspond to one achromatic response function and two chromatic response functions, the latter corresponding approximately to Munsell opponent-hue directions 9YR-9B and 2BG-2R.
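The circulant structure is easy to verify numerically: for a symmetric circulant matrix, cosine (and sine) vectors are eigenvectors and the constant vector carries the largest eigenvalue. A small sketch with an illustrative correlation profile:

```python
import math

def circulant(first_row):
    """Circulant matrix: row i is the first row cyclically shifted by i."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def matvec(mat, v):
    return [sum(a * b for a, b in zip(row, v)) for row in mat]

n = 8
# Symmetric first row (c_j = c_{n-j}), mimicking a correlation profile.
row = [1.0, 0.6, 0.2, 0.05, 0.0, 0.05, 0.2, 0.6]
C = circulant(row)

k = 1  # a cosine mode; its eigenvalue is the cosine transform of the row
v = [math.cos(2 * math.pi * k * m / n) for m in range(n)]
lam = sum(row[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
Cv = matvec(C, v)
```

The corresponding sine vector shares the same eigenvalue, which is why all non-leading eigenvalues come in pairs, while the constant vector (k = 0) has eigenvalue sum(row), the largest for a correlation-like profile.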

  3. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  4. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner made of composite and metallic materials. The design is formulated for an accepted level of risk, or reliability. The design variables, the weight, and the constraints became functions of reliability. Uncertainties in the load, strength, and material properties, as well as in the design variables, were modeled as random parameters with specified distributions, such as normal, Weibull, or Gumbel functions. The objective function and each constraint, or failure mode, became derived functions of the risk level. Solution of the problem produced the optimum design with weight, variables, and constraints as functions of the risk level. Optimum weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design can be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.

  5. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale-invariant feature transform (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity. PMID:24683317

  6. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale-invariant feature transform (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity.
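    The weighted-fusion decision rule described in these two records reduces to a weighted sum of per-subregion matching scores compared against a threshold. A minimal sketch, with scores, weights, and the acceptance threshold all invented for illustration (the papers obtain the weights via particle swarm optimization):

```python
# Hypothetical per-subregion similarity scores and PSO-derived weights.
scores = [0.82, 0.64, 0.91, 0.55]   # one matching score per iris subregion
weights = [0.4, 0.1, 0.35, 0.15]    # assumed weights, summing to 1

# Weighted fusion of the subregion scores into one decision score.
final_score = sum(w * s for w, s in zip(weights, scores))
decision = final_score >= 0.7       # illustrative acceptance threshold
```

    Subregions that discriminate well receive larger weights, so their scores dominate the fused decision.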

  7. Aeroelastic Tailoring Study of N+2 Low Boom Supersonic Commercial Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

    The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft was optimized in this study through the use of a multidisciplinary design optimization tool developed at the National Aeronautics and Space Administration Armstrong Flight Research Center. A total of 111 design variables were used in the first optimization run. Total structural weight was the objective function in this optimization run. Design requirements for strength, buckling, and flutter were selected as constraint functions during the first optimization run. The MSC Nastran code was used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses were based on the ZAERO code, and landing and ground control loads were computed using an in-house code. The weight penalty to satisfy all the design requirements during the first optimization run was 31,367 lb, a 9.4% increase from the baseline configuration. The second optimization run was based on the big-bang big-crunch algorithm. Six composite ply angles for the second and fourth composite layers were selected as discrete design variables for the second optimization run. Composite ply angle changes could not improve the weight configuration of the N+2 LSCT aircraft. However, this second optimization run creates more tolerance in the active and near-active strength constraint values for future weight optimization runs.

  8. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.
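    The weighted Tchebycheff scalarization at the heart of this approach minimizes the largest weighted deviation from the ideal (utopia) point. A minimal sketch over a finite candidate set; the objective values and weights are invented, whereas the article solves a continuous model with a branch-and-cut algorithm in GAMS:

```python
# Candidate grinding plans with two objectives (time, cost), both minimized.
# Values are purely illustrative.
candidates = {
    "A": (3.0, 9.0),
    "B": (5.0, 5.0),
    "C": (9.0, 2.0),
}

# Ideal point z*: best value of each objective taken separately.
ideal = (min(f[0] for f in candidates.values()),
         min(f[1] for f in candidates.values()))
w = (0.5, 0.5)  # assumed objective weights

def tchebycheff(f):
    # Weighted Tchebycheff metric: largest weighted deviation from z*.
    return max(w[i] * abs(f[i] - ideal[i]) for i in range(2))

best = min(candidates, key=lambda k: tchebycheff(candidates[k]))
```

    The minimizer of this metric is a compromise solution; sweeping the weights traces out different Pareto-optimal trade-offs between the objectives.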

  9. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software tool. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk, or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to unity for reliability. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator, or FPI module, of the NESSUS software was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.

  10. A new inertia weight control strategy for particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Xianming; Wang, Hongbo

    2018-04-01

    Particle Swarm Optimization (PSO) is a member of the family of swarm intelligence algorithms, inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, is crucial because it balances the exploration and exploitation performance of the algorithm. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy provides the PSO with better performance.
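    The role of the inertia weight is easy to see in a minimal PSO. The sketch below minimizes the 2-D sphere benchmark with the classic linearly decreasing inertia weight as a baseline; the paper's new control strategy is not reproduced here, and all parameter values are conventional defaults, not taken from the source.

```python
import random

random.seed(1)

def sphere(x):
    # Sphere benchmark: global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

n, iters, dim = 20, 60, 2
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0   # conventional PSO settings

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]                  # personal bests
gbest = min(pbest, key=sphere)[:]            # global best
start = sphere(gbest)

for t in range(iters):
    # Linearly decreasing inertia weight: explore early, exploit late.
    w = w_max - (w_max - w_min) * t / (iters - 1)
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]
```

    A large early inertia weight keeps particles moving across the search space; the small late value lets the swarm converge, which is the exploration/exploitation balance the abstract refers to.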

  11. Topology optimization of a gas-turbine engine part

    NASA Astrophysics Data System (ADS)

    Faskhutdinov, R. N.; Dubrovskaya, A. S.; Dongauzer, K. A.; Maksimov, P. V.; Trufanov, N. A.

    2017-02-01

    One of the key goals of the aerospace industry is reduction of gas turbine engine weight. The solution of this task consists in the design of gas turbine engine components with reduced weight that retain their functional capabilities. Topology optimization of the part geometry leads to an efficient weight reduction. A complex geometry can be achieved in a single operation with the Selective Laser Melting technology. It should be noted that the complexity of the structural features does not affect the product cost in this case. A step-by-step procedure of topology optimization is considered using the example of a gas turbine engine part.

  12. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  13. Nonlinear program based optimization of boost and buck-boost converter designs

    NASA Astrophysics Data System (ADS)

    Rahman, S.; Lee, F. C.

    The facility of an Augmented Lagrangian (ALAG) multiplier based nonlinear programming technique is demonstrated for minimum-weight design optimizations of boost and buck-boost power converters. Certain important features of ALAG are presented in the framework of a comprehensive design example for buck-boost power converter design optimization. The study provides refreshing design insight of power converters and presents such information as weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.

  14. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of the weighting coefficients for nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. The empirical methods, however, may not yield optimal solutions, especially when the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining the weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates the control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller that drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. Comparisons verify the improved performance of the controller for weights computed with the PSO-based technique.

  15. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on the genetic algorithm (GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network through a new hybrid encoding scheme: binary encoding encodes the number of hidden-layer neurons, and real encoding encodes the connection weights. The hidden-layer neuron number and the connection weights are optimized simultaneously in the new algorithm. However, the connection weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
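    The hybrid chromosome described here can be illustrated by decoding one: a binary prefix for the hidden-neuron count, followed by real-valued hidden-to-output weights. The layout, lengths, and all values below are assumptions for the sketch, not the paper's actual encoding.

```python
import math

# Hypothetical hybrid chromosome: 3 binary genes encode the hidden-neuron
# count, followed by real-valued hidden-to-output connection weights.
chromosome = [1, 0, 1] + [0.5, -1.2, 0.8, 0.0, 0.3]

n_hidden = int("".join(map(str, chromosome[:3])), 2)  # 0b101 -> 5 neurons
out_weights = chromosome[3:3 + n_hidden]

def rbf_output(x, centers, width, weights):
    # Output of a Gaussian RBF network for a scalar input x.
    return sum(w * math.exp(-((x - c) ** 2) / (2 * width ** 2))
               for w, c in zip(weights, centers))

centers = [-2.0, -1.0, 0.0, 1.0, 2.0]   # assumed fixed RBF centers
y = rbf_output(0.0, centers, 1.0, out_weights)
```

    A GA evolves such chromosomes with crossover and mutation, so the network structure (neuron count) and the connection weights are searched simultaneously, as the abstract states.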

  16. Technology forecasting for space communication. Task one report: Cost and weight tradeoff studies for EOS and TDRS

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Weight and cost optimized EOS communication links are determined for 2.25, 7.25, 14.5, 21, and 60 GHz systems and for a 10.6 micron homodyne detection laser system. EOS to ground links are examined for 556, 834, and 1112 km EOS orbits, with ground terminals at the Network Test and Tracking Facility and at Goldstone. Optimized 21 GHz and 10.6 micron links are also examined. For the EOS to Tracking and Data Relay Satellite to ground link, signal-to-noise ratios of the uplink and downlink are also optimized for minimum overall cost or spaceborne weight. Finally, the optimized 21 GHz EOS to ground link is determined for various precipitation rates. All system performance parameters and mission dependent constraints are presented, as are the system cost and weight functional dependencies. The features and capabilities of the computer program to perform the foregoing analyses are described.

  17. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint, or failure mode, is specified as a function of reliability p. Solution of the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be chosen differently for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas the reliability can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or FPI module, of the NESSUS software was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
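    The inverted-S weight-versus-reliability curve that recurs in these records can be illustrated with a toy model; this is a sketch of the qualitative behavior only, not the paper's actual methodology. Assume the required capacity scales with the inverse normal CDF of the reliability p, with an invented mean and spread:

```python
from statistics import NormalDist

# Assumed mean optimum weight and spread (illustrative values only).
mu, sigma = 100.0, 15.0

def optimum_weight(p):
    # Toy model: weight grows with the standard-normal quantile of p,
    # so it is flat near p = 0.5 and steep as p -> 0 or p -> 1.
    return mu + sigma * NormalDist().inv_cdf(p)

curve = [(p / 100, optimum_weight(p / 100)) for p in range(1, 100)]
```

    Plotting `curve` gives the inverted-S shape: the center (p = 0.5) sits at the mean-valued design, weight blows up as p approaches 1, and drops to small values for the most failure-prone designs.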

  18. The invariant of the stiffness filter function with the weight filter function of the power function form

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Sui, Yun-Kang

    2012-12-01

    Based on the independent continuous mapping (ICM) method and the homogenization method, a research model is constructed to propose and deduce a theorem and corollary concerning the invariant between the weight filter function and the corresponding stiffness filter function of power-function form. The efficiency of the search for the optimum solution is raised via the choice of rational filter functions, so the above-mentioned results are very important to the further study of structural topology optimization.

  19. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP problem at the same time. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper firstly discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, very close to the results of exhaustive searches. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305

  20. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.

  1. Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.

    2017-10-01

    A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least square error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, a stochastic noise with systematic and non-systematic components is introduced to the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.

  2. Truss topology optimization with simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Sankaranarayanan, S.; Haftka, Raphael T.; Kapania, Rakesh K.

    1992-01-01

    Strategies for topology optimization of trusses for minimum weight subject to stress and displacement constraints by Simultaneous Analysis and Design (SAND) are considered. The ground structure approach is used. A penalty function formulation of SAND is compared with an augmented Lagrangian formulation. The efficiency of SAND in handling combinations of general constraints is tested. A strategy for obtaining an optimal topology by minimizing the compliance of the truss is compared with a direct weight minimization solution to satisfy stress and displacement constraints. It is shown that for some problems, starting from the ground structure and using SAND is better than starting from a minimum compliance topology design and optimizing only the cross sections for minimum weight under stress and displacement constraints. A member elimination strategy to save CPU time is discussed.

  3. Calibration of neural networks using genetic algorithms, with application to optimal path planning

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Pitney, Gilbert A.; Greenwood, Daniel

    1987-01-01

    Genetic algorithms (GA) are used to search the synaptic weight space of artificial neural systems (ANS) for weight vectors that optimize some network performance function. GAs do not suffer from some of the architectural constraints involved with other techniques and it is straightforward to incorporate terms into the performance function concerning the metastructure of the ANS. Hence GAs offer a remarkably general approach to calibrating ANS. GAs are applied to the problem of calibrating an ANS that finds optimal paths over a given surface. This problem involves training an ANS on a relatively small set of paths and then examining whether the calibrated ANS is able to find good paths between arbitrary start and end points on the surface.

  4. On the optimization of a mixed speaker array in an enclosed space using the virtual-speaker weighting method

    NASA Astrophysics Data System (ADS)

    Peng, Bo; Zheng, Sifa; Liao, Xiangning; Lian, Xiaomin

    2018-03-01

    In order to achieve sound field reproduction in a wide frequency band, multiple-type speakers are used. The reproduction accuracy is not only affected by the signals sent to the speakers, but also depends on the position and the number of each type of speaker. The method of optimizing a mixed speaker array is investigated in this paper. A virtual-speaker weighting method is proposed to optimize both the position and the number of each type of speaker. In this method, a virtual-speaker model is proposed to quantify the increment of controllability of the speaker array when the speaker number increases. While optimizing a mixed speaker array, the gain of the virtual-speaker transfer function is used to determine the priority orders of the candidate speaker positions, which optimizes the position of each type of speaker. Then the relative gain of the virtual-speaker transfer function is used to determine whether the speakers are redundant, which optimizes the number of each type of speaker. Finally the virtual-speaker weighting method is verified by reproduction experiments of the interior sound field in a passenger car. The results validate that the optimum mixed speaker array can be obtained using the proposed method.

  5. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
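    For scalar weights, the optimal average quaternion can be computed as the principal eigenvector of the weighted outer-product matrix of the quaternions, which is the construction this Note is associated with. A NumPy sketch (the component ordering x, y, z, w and the sample quaternions are assumptions for the example):

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted quaternion average: principal eigenvector of
    M = sum_i w_i * q_i q_i^T, for unit quaternions with scalar weights."""
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    vals, vecs = np.linalg.eigh(M)
    q_avg = vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
    return q_avg / np.linalg.norm(q_avg)

# Two nearby unit quaternions in an assumed (x, y, z, w) ordering:
# identity, and a 0.1 rad rotation about z.
q1 = np.array([0.0, 0.0, 0.0, 1.0])
q2 = np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])
q_avg = average_quaternion([q1, q2], [0.5, 0.5])
```

    Because the outer product q q^T is unchanged by q -> -q, this average is insensitive to the sign ambiguity of quaternions, which naive component-wise averaging is not.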

  6. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  7. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; the variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  8. Integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Walsh, Joanne L.; Riley, Michael F.

    1989-01-01

    An integrated aerodynamic/dynamic optimization procedure is used to minimize blade weight and 4-per-rev vertical hub shear for a rotor blade in forward flight. The coupling of aerodynamics and dynamics is accomplished through the inclusion of airloads that vary with the design variables during the optimization process. Both single and multiple objective functions are used in the optimization formulation. The Global Criteria Approach is used to formulate the multiple-objective optimization, and results are compared with those obtained by using single-objective formulations. Constraints are imposed on natural frequencies, autorotational inertia, and centrifugal stress. The program CAMRAD is used for the blade aerodynamic and dynamic analyses, and the program CONMIN is used for the optimization. Since the spanwise and azimuthal variations of loading are responsible for most rotor vibration and noise, the vertical airload distributions on the blade, before and after optimization, are compared. The total power required by the rotor to produce the same amount of thrust for a given area is also calculated before and after optimization. Results indicate that integrated optimization can significantly reduce the blade weight, the hub shear, the amplitude of the vertical airload distributions on the blade, and the total power required by the rotor.
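
The Global Criteria Approach mentioned above combines objectives by normalizing each one against its single-objective optimum and minimizing the summed squared relative deviations. The toy analytic objectives and grid search below are assumed stand-ins for the CAMRAD analyses and the CONMIN optimizer, purely to illustrate the formulation:

```python
# Illustrative sketch of the Global Criteria Approach on two toy objectives
# (NOT the blade model): f1 plays the role of blade weight, f2 of hub shear.

def f1(x):          # surrogate "weight" objective, minimum f1* = 1 at x = 1
    return (x - 1.0) ** 2 + 1.0

def f2(x):          # surrogate "hub shear" objective, minimum f2* = 2 at x = 3
    return (x - 3.0) ** 2 + 2.0

F1_STAR, F2_STAR = 1.0, 2.0    # single-objective optima found separately

def global_criterion(x):
    # Sum of squared relative deviations from the single-objective optima.
    return ((f1(x) - F1_STAR) / F1_STAR) ** 2 + ((f2(x) - F2_STAR) / F2_STAR) ** 2

# A simple grid search stands in for the gradient-based optimizer.
xs = [i * 0.001 for i in range(4001)]      # x in [0, 4]
x_best = min(xs, key=global_criterion)     # compromise between x = 1 and x = 3
```

The minimizer lands strictly between the two individual optima, which is the compromise behavior the multi-objective formulation is designed to produce.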

  9. Multiple Interactive Pollutants in Water Quality Trading

    NASA Astrophysics Data System (ADS)

    Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl

    2008-10-01

    Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
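
The weighted-sum permit idea can be sketched in a few lines: several pollutants are aggregated into one tradable commodity via weighting factors, and a trade is admissible when it leaves the total weighted load unchanged. The pollutant names and weights below are hypothetical placeholders; in the paper the weights come from water-quality impacts, marginal damages, and marginal treatment costs at optimality:

```python
# Minimal sketch of weighted-sum permit accounting for two pollutants.
WEIGHTS = {"BOD": 1.0, "nitrogen": 2.5}   # permit units per unit discharge (assumed)

def weighted_load(discharges):
    """Aggregate several pollutants into a single tradable quantity."""
    return sum(WEIGHTS[p] * q for p, q in discharges.items())

firm_a = {"BOD": 40.0, "nitrogen": 10.0}
firm_b = {"BOD": 20.0, "nitrogen": 30.0}
before = weighted_load(firm_a) + weighted_load(firm_b)

# A trade: firm A cuts 10 units of BOD (10 weighted units) and sells the
# allowance; firm B may then emit 10 / 2.5 = 4 more units of nitrogen.
firm_a["BOD"] -= 10.0
firm_b["nitrogen"] += 10.0 / WEIGHTS["nitrogen"]
after = weighted_load(firm_a) + weighted_load(firm_b)
```

The invariant `before == after` is what makes the single-market design work: heterogeneous discharges become fungible through the weights.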

  10. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
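
The core mechanism, minimizing a weighted l2-norm of the residual and steering the solution by the choice of weighting, can be shown in the finite-dimensional setting. This is only the weighted least-squares idea, not the stochastic Galerkin/LSPG machinery of the paper; the data below are arbitrary:

```python
# Weighted least squares via the normal equations (A^T W^2 A) x = A^T W^2 b,
# for a tall matrix with two columns, solved with Cramer's rule.

def weighted_lstsq_2col(A, b, w):
    """Solve min_x ||W(Ax - b)||_2 where W = diag(w)."""
    m00 = sum(wi * wi * a[0] * a[0] for a, wi in zip(A, w))
    m01 = sum(wi * wi * a[0] * a[1] for a, wi in zip(A, w))
    m11 = sum(wi * wi * a[1] * a[1] for a, wi in zip(A, w))
    r0 = sum(wi * wi * a[0] * bi for a, bi, wi in zip(A, b, w))
    r1 = sum(wi * wi * a[1] * bi for a, bi, wi in zip(A, b, w))
    det = m00 * m11 - m01 * m01
    return ((m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det)

A = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]
b = [0.0, 1.0, 4.0]
x_uniform = weighted_lstsq_2col(A, b, [1.0, 1.0, 1.0])
x_weighted = weighted_lstsq_2col(A, b, [1.0, 1.0, 10.0])  # emphasize last row
```

The two minimizers differ: reweighting the residual changes which solution is "optimal," which is exactly the degree of freedom the paper exploits to target error norms or goal-oriented seminorms.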

  11. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.

  12. Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards

    PubMed Central

    Ackermann, John F.; Landy, Michael S.

    2014-01-01

    Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects’ criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822

  13. Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards.

    PubMed

    Ackermann, John F; Landy, Michael S

    2015-02-01

    Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects' criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.
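
The mechanism described in the two records above, a criterion that maximizes expected gain shifting toward neutral when probabilities are subjectively distorted, can be sketched for a yes/no detection task. The d', γ, and unit rewards below are illustrative values, not the parameters fitted in the study:

```python
# Expected-gain-maximizing criterion under true vs. Prospect-Theory-weighted
# signal probability (equal unit rewards, Gaussian equal-variance SDT).
import math

def Phi(z):                      # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_criterion(p_signal, d_prime=1.0):
    """Grid-search the criterion c maximizing expected gain:
       EG(c) = p*P(hit) + (1-p)*P(correct rejection)."""
    def eg(c):
        return p_signal * (1.0 - Phi(c - d_prime)) + (1.0 - p_signal) * Phi(c)
    grid = [i * 0.001 - 3.0 for i in range(6001)]
    return max(grid, key=eg)

def pt_weight(p, gamma=0.6):     # Tversky-Kahneman probability weighting
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

c_objective = best_criterion(0.25)              # optimal for the true p = 0.25
c_subjective = best_criterion(pt_weight(0.25))  # with the distorted probability
```

Because low probabilities are overweighted (w(0.25) > 0.25), the subjectively optimal criterion sits closer to the neutral criterion than the objectively optimal one, i.e., it reproduces conservatism qualitatively.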

  14. Functional annotation by sequence-weighted structure alignments: statistical analysis and case studies from the Protein 3000 structural genomics project in Japan.

    PubMed

    Standley, Daron M; Toh, Hiroyuki; Nakamura, Haruki

    2008-09-01

    A method to functionally annotate structural genomics targets, based on a novel structural alignment scoring function, is proposed. In the proposed score, position-specific scoring matrices are used to weight structurally aligned residue pairs to highlight evolutionarily conserved motifs. The functional form of the score is first optimized for discriminating domains belonging to the same Pfam family from domains belonging to different families but the same CATH or SCOP superfamily. In the optimization stage, we consider four standard weighting functions as well as our own, the "maximum substitution probability," and combinations of these functions. The optimized score achieves an area of 0.87 under the receiver-operating characteristic curve with respect to identifying Pfam families within a sequence-unique benchmark set of domain pairs. Confidence measures are then derived from the benchmark distribution of true-positive scores. The alignment method is next applied to the task of functionally annotating 230 query proteins released to the public as part of the Protein 3000 structural genomics project in Japan. Of these queries, 78 were found to align to templates with the same Pfam family as the query or had sequence identities ≥ 30%. Another 49 queries were found to match more distantly related templates. Within this group, the template predicted by our method to be the closest functional relative was often not the most structurally similar. Several nontrivial cases are discussed in detail. Finally, 103 queries matched templates at the fold level, but not the family or superfamily level, and remain functionally uncharacterized. 2008 Wiley-Liss, Inc.

  15. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms

    PubMed Central

    Vázquez, Roberto A.

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
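
The weight-evolution component of the methodology above can be sketched with basic PSO training a fixed 2-2-1 sigmoid network on XOR. The paper additionally evolves the architecture and transfer functions; here only the synaptic weights are evolved, with textbook PSO settings chosen by assumption:

```python
# Basic PSO evolving the 9 weights of a fixed 2-2-1 sigmoid network (XOR task).
import math, random

random.seed(0)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sig(x):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

def mse(w):
    # w packs: 2x2 hidden weights + 2 hidden biases, 2 output weights + 1 bias.
    err = 0.0
    for (a, b), t in DATA:
        h1 = sig(w[0] * a + w[1] * b + w[2])
        h2 = sig(w[3] * a + w[4] * b + w[5])
        y = sig(w[6] * h1 + w[7] * h2 + w[8])
        err += (y - t) ** 2
    return err / len(DATA)

N, DIM, W, C1, C2 = 30, 9, 0.7, 1.5, 1.5
pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]
pbest_f = [mse(p) for p in pos]
g = min(range(N), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]
initial_f = gbest_f

for _ in range(200):
    for i in range(N):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        f = mse(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f < gbest_f:
                gbest, gbest_f = pos[i][:], f
```

The MSE fitness here corresponds to the simplest of the eight fitness functions discussed; the classification-error and connection-count terms would be added on top of it.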

  16. Computer simulations of optimum boost and buck-boost converters

    NASA Technical Reports Server (NTRS)

    Rahman, S.

    1982-01-01

    The development of mathematical models suitable for minimum-weight boost and buck-boost converter designs is presented. The utility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum-weight design optimization of boost and buck-boost power converters. ALAG-based computer simulation results for those two minimum-weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides fresh design insight into power converters and presents such information as weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.

  17. Color Feature-Based Object Tracking through Particle Swarm Optimization with Improved Inertia Weight

    PubMed Central

    Guo, Siqiu; Zhang, Tao; Song, Yulong

    2018-01-01

    This paper presents a particle swarm tracking algorithm with improved inertia weight based on color features. The weighted color histogram is used as the target feature to reduce the contribution of target edge pixels in the target feature, which makes the algorithm insensitive to the target non-rigid deformation, scale variation, and rotation. Meanwhile, the influence of partial obstruction on the description of target features is reduced. The particle swarm optimization algorithm can complete the multi-peak search, which can cope well with the object occlusion tracking problem. This means that the target is located precisely where the similarity function appears multi-peak. When the particle swarm optimization algorithm is applied to the object tracking, the inertia weight adjustment mechanism has some limitations. This paper presents an improved method. The concept of particle maturity is introduced to improve the inertia weight adjustment mechanism, which could adjust the inertia weight in time according to the different states of each particle in each generation. Experimental results show that our algorithm achieves state-of-the-art performance in a wide range of scenarios. PMID:29690610

  18. Color Feature-Based Object Tracking through Particle Swarm Optimization with Improved Inertia Weight.

    PubMed

    Guo, Siqiu; Zhang, Tao; Song, Yulong; Qian, Feng

    2018-04-23

    This paper presents a particle swarm tracking algorithm with improved inertia weight based on color features. The weighted color histogram is used as the target feature to reduce the contribution of target edge pixels in the target feature, which makes the algorithm insensitive to the target non-rigid deformation, scale variation, and rotation. Meanwhile, the influence of partial obstruction on the description of target features is reduced. The particle swarm optimization algorithm can complete the multi-peak search, which can cope well with the object occlusion tracking problem. This means that the target is located precisely where the similarity function appears multi-peak. When the particle swarm optimization algorithm is applied to the object tracking, the inertia weight adjustment mechanism has some limitations. This paper presents an improved method. The concept of particle maturity is introduced to improve the inertia weight adjustment mechanism, which could adjust the inertia weight in time according to the different states of each particle in each generation. Experimental results show that our algorithm achieves state-of-the-art performance in a wide range of scenarios.
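
The abstracts above (records 17 and 18) contrast the paper's per-particle "maturity"-based inertia adjustment with the standard scheme; since the exact maturity rule is not given in the abstract, the sketch below shows only the common linearly decreasing baseline that such adaptive rules improve upon:

```python
# Standard linearly decreasing inertia weight for PSO: large w early for
# global exploration, small w late for local refinement.

W_MAX, W_MIN = 0.9, 0.4   # conventional bounds (assumed, not from the paper)

def inertia(t, t_max):
    """Inertia weight at iteration t of t_max."""
    return W_MAX - (W_MAX - W_MIN) * t / t_max

schedule = [inertia(t, 100) for t in range(101)]
```

An adaptive rule of the kind the paper proposes would replace the shared iteration index `t` with a per-particle state, so each particle's inertia reflects its own search progress.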

  19. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
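
The exterior penalty behavior described above can be shown on a one-variable toy problem: minimize a "weight-like" objective f(x) = x subject to g(x) = 1 - x ≤ 0. The penalized minimizers approach the constrained optimum x* = 1 from the infeasible side as the penalty parameter grows, which is the signature of the exterior approach (an interior method would approach from the feasible side instead):

```python
# Exterior quadratic penalty: P(x; mu) = f(x) + mu * max(0, g(x))^2.

def penalized(x, mu):
    g = max(0.0, 1.0 - x)          # constraint violation
    return x + mu * g * g

def minimize_1d(f, lo=-5.0, hi=5.0, iters=200):
    """Ternary search; adequate because the penalized function is convex."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

solutions = [minimize_1d(lambda x, m=mu: penalized(x, m))
             for mu in (1.0, 10.0, 100.0, 1000.0)]
# Analytically the minimizer for each mu is x = 1 - 1/(2*mu), always < 1.
```

Each iterate is slightly infeasible, and feasibility is recovered only in the limit of large mu; this is why penalty-parameter scheduling matters in the structural designs discussed above.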

  20. Optimal Network-based Intervention in the Presence of Undetectable Viruses.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2014-08-01

    This letter presents an optimal control framework to reduce the spread of viruses in networks. The network is modeled as an undirected graph of nodes and weighted links. We consider the spread of viruses in a network as a system, and the total number of infected nodes as the state of the system, while the control function is the weight reduction leading to slow/reduce spread of viruses. Our epidemic model overcomes three assumptions that were extensively used in the literature and produced inaccurate results. We apply the optimal control formulation to crucial network structures. Numerical results show the dynamical weight reduction and reveal the role of the network structure and the epidemic model in reducing the infection size in the presence of indiscernible infected nodes.

  1. Optimal Network-based Intervention in the Presence of Undetectable Viruses

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2014-01-01

    This letter presents an optimal control framework to reduce the spread of viruses in networks. The network is modeled as an undirected graph of nodes and weighted links. We consider the spread of viruses in a network as a system, and the total number of infected nodes as the state of the system, while the control function is the weight reduction leading to slow/reduce spread of viruses. Our epidemic model overcomes three assumptions that were extensively used in the literature and produced inaccurate results. We apply the optimal control formulation to crucial network structures. Numerical results show the dynamical weight reduction and reveal the role of the network structure and the epidemic model in reducing the infection size in the presence of indiscernible infected nodes. PMID:25422579

  2. Optimization of locations of diffusion spots in indoor optical wireless local area networks

    NASA Astrophysics Data System (ADS)

    Eltokhey, Mahmoud W.; Mahmoud, K. R.; Ghassemlooy, Zabih; Obayya, Salah S. A.

    2018-03-01

    In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. The uniformity of performance across users is addressed by using the CFO algorithm and adopting different objective-function configurations, considering maximization of the signal-to-noise ratio and minimization of the delay spread. We also investigate the effect of varying the objective-function weights on the system and per-user performance as part of the adaptation process. The results show that the proposed configuration-based optimization of the objective function offers an improvement of 65% in the standard deviation of individual receivers' performance.

  3. Size-exclusion chromatography (HPLC-SEC) technique optimization by simplex method to estimate molecular weight distribution of agave fructans.

    PubMed

    Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María

    2017-12-15

    Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food, so practical analysis tools to characterize them are needed. In view of the importance of molecular weight to the functional properties of agave fructans, this study aims to optimize a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7°C using tri-distilled water without salt, an adjusted pH of 5.4, and a flow rate of 0.36 mL/min. The exclusion range covers degrees of polymerization from 1 to 49 (180-7966 Da). This proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. Its industrial applications might include quality control, the study of fractionation processes, and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
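
The simplex method used above is a derivative-free search over the operating conditions. The sketch below hand-rolls a basic Nelder-Mead simplex on a hypothetical smooth surrogate whose optimum is placed at the reported conditions (61.7 °C, pH 5.4); the surrogate and starting simplex are assumptions, as the real objective was measured chromatographic performance:

```python
# Basic Nelder-Mead simplex search on a 2-parameter surrogate response.

def response(params):
    temp, ph = params
    # hypothetical "badness" of the separation (lower is better)
    return ((temp - 61.7) / 10.0) ** 2 + (ph - 5.4) ** 2

def nelder_mead(f, simplex, iters=200):
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    for _ in range(iters):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        cx = [(best[d] + second[d]) / 2.0 for d in range(2)]   # centroid (n = 2)
        xr = [cx[d] + alpha * (cx[d] - worst[d]) for d in range(2)]  # reflect
        if f(xr) < f(best):
            xe = [cx[d] + gamma * (cx[d] - worst[d]) for d in range(2)]  # expand
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(second):
            simplex[-1] = xr
        else:
            xc = [cx[d] + rho * (worst[d] - cx[d]) for d in range(2)]  # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                          # shrink toward the best vertex
                simplex = [best] + [[best[d] + sigma * (p[d] - best[d])
                                     for d in range(2)] for p in simplex[1:]]
    return min(simplex, key=f)

opt = nelder_mead(response, [[50.0, 3.0], [55.0, 3.0], [50.0, 4.0]])
```

In the laboratory setting each `response` evaluation is an actual chromatographic run, which is why a method needing only function values, not derivatives, fits the problem.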

  4. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin-packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range in size from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are given in detail for reproducibility.
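
One of the cheapest baselines a GA is measured against in comparisons like the one above is simple bit-flip hill climbing. The sketch below solves OneMax (maximize the count of ones, a standard toy problem assumed here for illustration); note that even a modest n-bit encoding spans a search space of size 2**n:

```python
# Next-ascent single-bit-flip hill climbing on OneMax.
import random

random.seed(1)
n = 32
x = [random.randint(0, 1) for _ in range(n)]
fitness = sum(x)
evaluations = 0

improved = True
while improved:                # sweep until no single-bit flip helps
    improved = False
    for i in range(n):
        x[i] ^= 1              # tentative flip
        evaluations += 1
        if sum(x) > fitness:
            fitness = sum(x)   # keep the improving flip
            improved = True
        else:
            x[i] ^= 1          # revert
```

On separable problems like this one, the hill climber reaches the optimum in a handful of sweeps, which is the kind of result behind the report's conclusion that GAs offer no automatic advantage on static functions.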

  5. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of the grey prediction model and a Markov chain, and it shows clear advantages for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjective real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into state division to calculate the possibility of the research values lying in each state, reflecting preference degrees for different states in an objective way. In addition, background value optimization is applied in the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
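
The grey-model core of the approach can be sketched as a plain GM(1,1) forecast with the traditional background value z(k) = 0.5*(x1(k) + x1(k-1)), which is precisely the quantity the paper optimizes. The 10%-growth series is hypothetical, and the Markov state division and whitenization weights are not reproduced here:

```python
# Plain GM(1,1) grey forecast on a hypothetical geometric series.
import math

x0 = [100.0, 110.0, 121.0, 133.1]                  # raw series
x1 = [sum(x0[:k + 1]) for k in range(len(x0))]     # accumulated (1-AGO) series
z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]  # background values

# Least-squares fit of the grey equation x0(k) = -a*z(k) + b (normal equations).
m = len(z)
szz = sum(v * v for v in z)
sz = sum(z)
szy = sum(v * y for v, y in zip(z, x0[1:]))
sy = sum(x0[1:])
det = m * szz - sz * sz
neg_a = (m * szy - sz * sy) / det        # slope coefficient on z
b = (szz * sy - sz * szy) / det
a = -neg_a                               # development coefficient (a < 0: growth)

def predict(k):
    """Forecast x0 at 1-based index k+1 from the grey exponential model."""
    return (x0[0] - b / a) * (1.0 - math.exp(a)) * math.exp(-a * k)

next_value = predict(len(x0))            # one-step-ahead forecast
```

Replacing the 0.5 in the background value with an optimized interpolation coefficient is the "background value optimization" the abstract refers to; the Markov chain then corrects the residual sequence.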

  6. Predicting objective function weights from patient anatomy in prostate IMRT treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.

    2013-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.


  8. Optimization and evaluation of a proportional derivative controller for planar arm movement.

    PubMed

    Jagodnik, Kathleen M; van den Bogert, Antonie J

    2010-04-19

    In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. Copyright 2009 Elsevier Ltd. All rights reserved.

  9. Optimization and evaluation of a proportional derivative controller for planar arm movement

    PubMed Central

    Jagodnik, Kathleen M.; van den Bogert, Antonie J.

    2013-01-01

    In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. PMID:20097345
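A minimal sketch of the weighted cost idea in this record, using a toy single-joint, torque-driven arm in place of the authors' six-muscle musculoskeletal model; all constants (inertia, damping, gains, weights) are hypothetical illustration values, not the paper's:

```python
import numpy as np

def simulate_pd(kp, kd, w_err=1.0, w_eff=1e-3, dt=0.001, t_end=1.0):
    """Toy single-joint reach under PD torque control.

    The accumulated cost mirrors the paper's weighted sum of position
    errors and effort (torque stands in for muscle forces here).
    """
    I, b = 0.05, 0.1          # hypothetical inertia (kg m^2) and damping
    theta, omega = 0.0, 0.0   # start at rest
    target = np.deg2rad(45.0)
    cost = 0.0
    for _ in range(int(t_end / dt)):
        err = target - theta
        u = kp * err - kd * omega            # PD feedback torque
        alpha = (u - b * omega) / I          # joint dynamics
        omega += alpha * dt                  # explicit Euler integration
        theta += omega * dt
        cost += (w_err * err**2 + w_eff * u**2) * dt
    return theta, cost

theta_final, cost = simulate_pd(kp=5.0, kd=0.5)
```

Sweeping `kp` and `kd` to minimize `cost` is the essence of the gain optimization step; with an over-weighted effort term the optimizer would accept slower, less accurate reaches, which is why the cost weighting must be tuned.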

  10. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bezier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.

  11. An entropy-assisted musculoskeletal shoulder model.

    PubMed

    Xu, Xu; Lin, Jia-Hua; McGorry, Raymond W

    2017-04-01

    Optimization combined with a musculoskeletal shoulder model has been used to estimate mechanical loading of musculoskeletal elements around the shoulder. Traditionally, the objective function is to minimize the summation of the total activities of the muscles with forces, moments, and stability constraints. Such an objective function, however, tends to neglect the antagonist muscle co-contraction. In this study, an objective function including an entropy term is proposed to address muscle co-contractions. A musculoskeletal shoulder model is developed to apply the proposed objective function. To find the optimal weight for the entropy term, an experiment was conducted. In the experiment, participants generated various 3-D shoulder moments in six shoulder postures. The surface EMG of 8 shoulder muscles was measured and compared with the predicted muscle activities based on the proposed objective function using Bhattacharyya distance and concordance ratio under different weight of the entropy term. The results show that a small weight of the entropy term can improve the predictability of the model in terms of muscle activities. Such a result suggests that the concept of entropy could be helpful for further understanding the mechanism of muscle co-contractions as well as developing a shoulder biomechanical model with greater validity. Copyright © 2017 Elsevier Ltd. All rights reserved.
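The effect of the entropy term can be illustrated with a hypothetical two-muscle (agonist/antagonist) reduction. The moment constraint, force numbers, and entropy weight below are illustrative assumptions, not the paper's shoulder model:

```python
import numpy as np

def entropy(a):
    """Shannon entropy of normalized activations."""
    p = a / a.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def solve(w_ent, delta=0.05):
    """Scan co-contraction level c for two antagonists whose net-moment
    constraint fixes a1 - a2 = delta (illustrative numbers only)."""
    c = np.linspace(0.0, 1.0 - delta, 2001)
    a1, a2 = c + delta, c
    H = np.array([entropy(np.array([x, y])) for x, y in zip(a1, a2)])
    J = a1**2 + a2**2 - w_ent * H   # activation cost minus weighted entropy
    i = J.argmin()
    return a1[i], a2[i]

a1_min, a2_min = solve(w_ent=0.0)   # classic min-activation: no co-contraction
a1_ent, a2_ent = solve(w_ent=0.05)  # small entropy weight: some co-contraction
```

With `w_ent=0` the antagonist is fully silent; a small entropy weight spreads activation across both muscles, reproducing the co-contraction behavior the objective is designed to capture.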

  12. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
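Decomposition-based algorithms such as MOEA/D and EMOSA score each solution on a subproblem through a weighted aggregation function. A minimal sketch of the two standard scalarizations (weighted sum and weighted Tchebycheff; the example vectors are arbitrary):

```python
import numpy as np

def weighted_sum(f, w):
    """Weighted-sum scalarization of an objective vector f."""
    return float(np.dot(w, f))

def tchebycheff(f, w, z_star):
    """Weighted Tchebycheff scalarization with ideal point z_star;
    unlike the weighted sum, it can reach non-convex Pareto regions."""
    return float(np.max(w * np.abs(f - z_star)))

f = np.array([0.4, 0.7])   # objective values of one solution (minimization)
w = np.array([0.5, 0.5])   # one subproblem's weight vector
z = np.array([0.0, 0.0])   # ideal point
g_ws, g_tch = weighted_sum(f, w), tchebycheff(f, w, z)
```

EMOSA's adaptation step amounts to perturbing `w` per subproblem at the lowest annealing temperature so that neighboring subproblems steer toward unexplored parts of the Pareto front.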

  13. A method for automatically optimizing medical devices for treating heart failure: designing polymeric injection patterns.

    PubMed

    Wenk, Jonathan F; Wall, Samuel T; Peterson, Robert C; Helgerson, Sam L; Sabbah, Hani N; Burger, Mike; Stander, Nielen; Ratcliffe, Mark B; Guccione, Julius M

    2009-12-01

    Heart failure continues to present a significant medical and economic burden throughout the developed world. Novel treatments involving the injection of polymeric materials into the myocardium of the failing left ventricle (LV) are currently being developed, which may reduce elevated myofiber stresses during the cardiac cycle and act to retard the progression of heart failure. A finite element (FE) simulation-based method was developed in this study that can automatically optimize the injection pattern of the polymeric "inclusions" according to a specific objective function, using commercially available software tools. The FE preprocessor TRUEGRID® was used to create a parametric axisymmetric LV mesh matched to experimentally measured end-diastole and end-systole metrics from dogs with coronary microembolization-induced heart failure. Passive and active myocardial material properties were defined by a pseudo-elastic strain energy function and a time-varying elastance model of active contraction, respectively, that were implemented in the FE software LS-DYNA. The companion optimization software LS-OPT was used to communicate directly with TRUEGRID® to determine FE model parameters, such as defining the injection pattern and inclusion characteristics. The optimization resulted in an intuitive optimal injection pattern (i.e., the one with the greatest number of inclusions) when the objective function was weighted to minimize mean end-diastolic and end-systolic myofiber stress and ignore LV stroke volume. In contrast, the optimization resulted in a nonintuitive optimal pattern (i.e., 3 inclusions longitudinally × 6 inclusions circumferentially) when both myofiber stress and stroke volume were incorporated into the objective function with different weights.

  14. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.

  15. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed weighted kurtosis index is constructed by using kurtosis index and correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match with the input signal can be obtained by GWO algorithm using the maximum weighted kurtosis index as objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) owning the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of TVF-EMD method for signal decomposition, and meanwhile verify the fact that bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
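A plausible reading of the weighted kurtosis index (kurtosis of a candidate IMF weighted by the magnitude of its correlation with the raw signal) can be sketched as follows; the exact combination used in the paper may differ, and the signals here are synthetic:

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment; large for impulsive (fault-like) signals."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

def weighted_kurtosis_index(imf, raw):
    """Kurtosis weighted by |correlation with the raw signal|, so an IMF
    must be both impulsive and related to the measurement to score high."""
    rho = np.corrcoef(imf, raw)[0, 1]
    return kurtosis(imf) * abs(rho)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
impulses = (np.sin(2 * np.pi * 25 * t) *          # fault-like impulsive train
            (np.sin(2 * np.pi * 5 * t) > 0.95))
raw = impulses + 0.1 * rng.standard_normal(t.size)
wki_fault = weighted_kurtosis_index(impulses, raw)
wki_noise = weighted_kurtosis_index(rng.standard_normal(t.size), raw)
```

In the paper's scheme, GWO searches the TVF-EMD parameters that maximize this index, and the IMF with the largest index is taken as the sensitive component for fault feature extraction.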

  16. Role of the parameters involved in the plan optimization based on the generalized equivalent uniform dose and radiobiological implications

    NASA Astrophysics Data System (ADS)

    Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.

    2008-03-01

    We investigated the role and weight of the parameters involved in intensity-modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a properly chosen weight factor, while still guaranteeing planning target volume coverage, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax values with fixed a = 1. Most notably, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we evaluated plans obtained with gEUD-based optimization against ones based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria appeared to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose and the V40 and V50 values to the rectal wall were decreased by about 10%, and the mean dose to the parotids by about 20-30%. Beyond OAR sparing, we highlight the halving of the OAR optimization time achieved with the gEUD-based cost function. Using NTCP models, we found differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
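The gEUD underlying this optimization is the generalized mean of the voxel doses; a minimal sketch (the voxel doses are illustrative):

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose of a voxel dose array.

    a = 1 gives the mean dose (parallel organs); large positive a
    approaches the maximum dose (serial organs).
    """
    doses = np.asarray(doses, dtype=float)
    return (np.mean(doses**a))**(1.0 / a)

voxels = np.array([10.0, 30.0, 50.0, 70.0])   # illustrative voxel doses (Gy)
eud_mean = gEUD(voxels, a=1)      # equals the mean dose, 40 Gy
eud_serial = gEUD(voxels, a=20)   # close to the 70 Gy maximum
```

A gEUD-based cost term then penalizes `max(0, gEUD(voxels, a) - gEUD_max)` for each organ at risk, weighted by the factor the abstract discusses.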

  17. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper, a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. This model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formulation for the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference directions and weighted sums. By varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  18. Multi-objective design optimization and control of magnetorheological fluid brakes for automotive applications

    NASA Astrophysics Data System (ADS)

    Shamieh, Hadi; Sedaghati, Ramin

    2017-12-01

    The magnetorheological brake (MRB) is an electromechanical device that generates a retarding torque by employing magnetorheological (MR) fluids. The objective of this paper is to design, optimize and control an MRB for automotive applications. The dynamic range of a disk-type MRB, expressing the ratio of generated torque at the on and off states, has been formulated as a function of the rotational speed, geometrical and material properties, and applied electrical current. Analytical magnetic circuit analysis has been conducted to derive the relation between the magnetic field intensity and the applied electrical current as a function of the MRB geometrical and material properties. A multidisciplinary design optimization problem has then been formulated to identify the optimal brake geometrical parameters that maximize the dynamic range and minimize the response time and weight of the MRB under weight, size and magnetic flux density constraints. The optimization problem has been solved using combined genetic and sequential quadratic programming algorithms. Finally, the performance of the optimally designed MRB has been investigated in a quarter vehicle model. A PID controller has been designed to regulate the applied current required by the MRB in order to control vehicle slip on different road conditions.

  19. A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.

    PubMed

    Leung, Chi-Sing; Wan, Wai Yan; Feng, Ruibin

    2017-06-01

    Many existing results on fault-tolerant algorithms focus on the single fault source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we then develop two learning algorithms, one batch mode and one online mode. Besides, the convergent conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained from our approach. This formula helps us to optimize some tuning parameters, such as RBF width.

  20. Mineral inversion for element capture spectroscopy logging based on optimization theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning

    2017-12-01

    Understanding the mineralogical composition of a formation is an essential key step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. This method used the converting relationship between elements and minerals as response equations and took into account the statistical uncertainty of the element measurements and established an optimization function for ECS. Objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inversion mineral results had a good agreement with x-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights formed the foundation for the subsequent applications based on ECS.
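One common way to invert elemental dry weights to mineral dry weights is non-negative least squares over an element-mineral response matrix; the matrix below is hypothetical and far simpler than a real ECS response model, and NNLS is just one standard solver for this kind of constrained inversion:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical response matrix: rows = elements (Si, Ca, Fe),
# columns = minerals (quartz, calcite, pyrite); each entry is the
# element dry-weight fraction contributed by a unit of that mineral.
A = np.array([
    [0.47, 0.00, 0.00],   # Si
    [0.00, 0.40, 0.00],   # Ca
    [0.00, 0.00, 0.47],   # Fe
])
true_minerals = np.array([0.6, 0.3, 0.1])
measured = A @ true_minerals            # noise-free synthetic elemental logs

weights, residual = nnls(A, measured)   # non-negative mineral dry weights
reconstructed = A @ weights             # reconstructed elemental logs
```

The abstract's quality checks correspond to `residual` (objective function value) and to comparing `reconstructed` against the measured elemental logs; a real application would also weight each element by its statistical uncertainty.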

  1. Design of pilot studies to inform the construction of composite outcome measures.

    PubMed

    Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing

    2017-06-01

    Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to calculate reliably optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects age 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. 
Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
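A standard construction for such signal-to-noise-optimal composite weights sets w proportional to the inverse covariance of the component changes times their mean-change vector; the inputs below are illustrative placeholders, not the ADCS estimates:

```python
import numpy as np

def optimal_weights(mu, sigma):
    """Weights maximizing the composite's signal-to-noise ratio
    (mean change over SD): w proportional to inv(Sigma) @ mu."""
    w = np.linalg.solve(sigma, mu)
    return w / np.abs(w).sum()          # scale is arbitrary; normalize

# Illustrative mean decline and covariance of three z-scored components.
mu = np.array([0.30, 0.20, 0.10])
sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])

w = optimal_weights(mu, sigma)
snr_opt = (w @ mu) / np.sqrt(w @ sigma @ w)

w_eq = np.ones(3) / 3                   # standard equal-weight composite
snr_eq = (w_eq @ mu) / np.sqrt(w_eq @ sigma @ w_eq)
```

By the Cauchy-Schwarz inequality, `snr_opt` can never fall below `snr_eq`; the pilot-study question in the abstract is how precisely `mu` and `sigma` must be estimated before this advantage is realized in practice.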

  2. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
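The core computation, a spatial filter obtained from a generalized eigendecomposition of two covariance matrices, can be sketched on synthetic data. Channel counts, frequencies and amplitudes are arbitrary, and a real analysis would estimate the reference covariance from separate baseline data rather than from the known noise:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_ch, n_t = 8, 5000
noise = rng.standard_normal((n_ch, n_t))
pattern = rng.standard_normal(n_ch)[:, None]             # source projection
signal = np.sin(2 * np.pi * 6 * np.linspace(0, 5, n_t))  # 6 Hz "theta"
data = noise + 3.0 * pattern * signal                    # mixed channels

S = np.cov(data)            # covariance of signal-containing data
R = np.cov(noise)           # reference covariance
evals, evecs = eigh(S, R)   # generalized eigendecomposition (S v = lambda R v)
w = evecs[:, -1]            # spatial filter with the largest eigenvalue
component = w @ data        # one time series: weighted channel combination
```

The paper's second stage applies the same eigendecomposition machinery to a time-delay-embedded version of `component`, yielding an optimal temporal basis function without any sinusoidal narrowband filter.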

  3. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.

  4. Aeroelastic Tailoring Study of N+2 Low-Boom Supersonic Commercial Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2015-01-01

    The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft is optimized in this study through the use of a multidisciplinary design optimization tool developed at the NASA Armstrong Flight Research Center. A total of 111 design variables are used in the first optimization run. Total structural weight is the objective function in this optimization run. Design requirements for strength, buckling, and flutter are selected as constraint functions during the first optimization run. The MSC Nastran code is used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses are based on the ZAERO code, and landing and ground control loads are computed using an in-house code.

  5. Neural networks: What non-linearity to choose

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.; Quintana, Chris

    1991-01-01

    Neural networks are now one of the most successful learning formalisms. Neurons transform inputs (x(sub 1),...,x(sub n)) into an output f(w(sub 1)x(sub 1) + ... + w(sub n)x(sub n)), where f is a non-linear function and the w(sub i) are adjustable weights. What f to choose? Usually the logistic function is chosen, but sometimes the use of different functions improves the practical efficiency of the network. The problem of choosing f is formulated as a mathematical optimization problem and solved under different optimality criteria. As a result, a list of functions f that are optimal under these criteria is determined. This list includes both functions that were empirically proved to be the best for some problems and some new functions that may be worth trying.
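A small illustration of the neuron transform f(w·x) with two common choices of f. The identity tanh(x) = 2σ(2x) − 1 (a fact, easy to verify algebraically) shows that the logistic and tanh functions differ only by an affine rescaling, which is why optimality criteria that are invariant under such rescalings treat them as equivalent; the weights and inputs below are arbitrary:

```python
import numpy as np

def logistic(x):
    """The logistic (sigmoid) function, the usual default choice of f."""
    return 1.0 / (1.0 + np.exp(-x))

# A neuron computes f(w . x): the choice of f changes only the output
# nonlinearity, not the weighted aggregation of the inputs.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.5, 0.25])
s = w @ x                        # weighted input
y_logistic = logistic(s)
y_tanh = np.tanh(s)

# tanh is an affine rescaling of the logistic function:
xs = np.linspace(-3, 3, 101)
same_family = np.allclose(np.tanh(xs), 2 * logistic(2 * xs) - 1)
```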

  6. Formation Flight System Extremum-Seeking-Control Using Blended Performance Parameters

    NASA Technical Reports Server (NTRS)

    Ryan, John J. (Inventor)

    2018-01-01

    An extremum-seeking control system for formation flight that uses blended performance parameters in a conglomerate performance function that better approximates drag reduction than performance functions formed from individual measurements. Generally, a variety of different measurements are taken and fed to a control system, the measurements are weighted, and are then subjected to a peak-seeking control algorithm. As measurements are continually taken, the aircraft will be guided to a relative position which optimizes the drag reduction of the formation. Two embodiments are discussed, reflecting two approaches for determining the relative weightings: an "a priori" approach, in which they are determined in advance by minimizing the error between the conglomerate function and the drag reduction function, and an adaptive approach, in which the weightings are periodically updated as the formation evolves.

  7. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and foreground-background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantized evaluations.
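A sketch of histogram equalization constrained by double plateau thresholds. Here the thresholds are fixed by hand, whereas the paper selects them with particle swarm optimization, and the plain intensity histogram stands in for the local entropy weighted one:

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_up, n_bins=256):
    """Histogram equalization with lower/upper plateau thresholds:
    bins above t_up are clipped (limits over-enhancement by large
    uniform regions and noise), and nonzero bins below t_low are
    raised (protects weak details from being washed out)."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, n_bins))
    mod = hist.astype(float)
    mod[mod > t_up] = t_up
    mod[(mod > 0) & (mod < t_low)] = t_low
    cdf = np.cumsum(mod)
    lut = np.round((n_bins - 1) * cdf / cdf[-1]).astype(img.dtype)
    return lut[img]

rng = np.random.default_rng(2)
img = rng.integers(90, 110, size=(64, 64), dtype=np.uint8)  # low contrast
out = double_plateau_equalize(img, t_low=5, t_up=200)
```

The narrow band of occupied gray levels is stretched across the full output range, which is the contrast gain the thresholds then keep in check.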

  8. SU-G-TeP3-01: A New Approach for Calculating Variable Relative Biological Effectiveness in IMPT Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Randeniya, K; Grosshans, D

    2016-06-15

    Purpose: To investigate the impact of a new approach for calculating relative biological effectiveness (RBE) in intensity-modulated proton therapy (IMPT) optimization on RBE-weighted dose distributions. This approach includes the nonlinear RBE for the high linear energy transfer (LET) region, which was revealed by recent experiments at our institution. In addition, this approach utilizes RBE data as a function of LET without using dose-averaged LET in calculating RBE values. Methods: We used a two-piece function for calculating RBE from LET. Within the Bragg peak, RBE is linearly correlated to LET. Beyond the Bragg peak, we use a nonlinear (quadratic) RBE function of LET based on our experimental data. The IMPT optimization was devised to incorporate variable RBE by maximizing biological effect (based on the linear-quadratic model) in tumor and minimizing biological effect in normal tissues. Three glioblastoma patients were retrospectively selected from our institution in this study. For each patient, three optimized IMPT plans were created based on three RBE schemes, i.e., fixed RBE of 1.1 (RBE-1.1), variable RBE based on a linear RBE-LET relationship (RBE-L), and variable RBE based on a linear and quadratic relationship (RBE-LQ). The RBE-weighted dose distributions of each optimized plan were evaluated in terms of the different RBE values, i.e., RBE-1.1, RBE-L and RBE-LQ. Results: The RBE-weighted doses recalculated from RBE-1.1-based optimized plans demonstrated an increasing pattern from RBE-1.1 through RBE-L to RBE-LQ consistently for all three patients. The variable RBE (RBE-L and RBE-LQ) weighted dose distributions recalculated from RBE-L and RBE-LQ based optimization were more homogeneous within the targets and better spared the critical structures than the ones recalculated from RBE-1.1-based optimization.
Conclusion: We implemented a new approach for RBE calculation and optimization and demonstrated its potential benefits for improving tumor coverage and normal tissue sparing in IMPT planning.

  9. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points. The application of the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter enables the recovery of long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.

  10. An approach of traffic signal control based on NLRSQP algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective function of the model minimizes the weighted total queue length at the end of each cycle. A combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is then proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are presented to study how the initial solution of the algorithm should be set so that a better local optimal solution can be obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the local optimal solution that can be obtained.

  11. Intelligent Ensemble Forecasting System of Stock Market Fluctuations Based on Symmetric and Asymmetric Wavelet Functions

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim; Boukadoum, Mounir

    2015-08-01

    We present a new ensemble system for stock market returns prediction in which the continuous wavelet transform (CWT) is used to analyze return series, and backpropagation neural networks (BPNNs) are used to process the CWT-based coefficients, determine the optimal ensemble weights, and provide the final forecasts. Particle swarm optimization (PSO) is used to find the optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on data from three Asian stock markets: the Hang Seng, the KOSPI, and the Taiwan stock market. Three statistical metrics were used to evaluate forecasting accuracy: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute deviation (MAD). Experimental results showed that the proposed ensemble system outperformed the individual CWT-ANN models, each using a different wavelet function, as well as the conventional autoregressive moving average process. As a result, the proposed ensemble system is suitable for capturing symmetry/asymmetry in financial data fluctuations for better prediction accuracy.
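    The weight-finding step described above can be illustrated with a minimal particle swarm optimizer. The sketch below is not the authors' implementation; the function names, toy forecasts, and PSO coefficients are illustrative assumptions. It searches for ensemble weights that minimize the RMSE of the combined forecast:

```python
import random

def rmse(w, preds, target):
    """Root mean squared error of the weighted combination of model forecasts."""
    n = len(target)
    err = 0.0
    for t in range(n):
        combo = sum(wi * p[t] for wi, p in zip(w, preds))
        err += (combo - target[t]) ** 2
    return (err / n) ** 0.5

def pso_weights(preds, target, n_particles=20, iters=100, seed=0):
    """Find ensemble weights minimizing RMSE with a standard global-best PSO."""
    rng = random.Random(seed)
    dim = len(preds)
    pos = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [rmse(p, preds, target) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w_in, c1, c2 = 0.7, 1.4, 1.4          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w_in * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = rmse(pos[i], preds, target)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

    With two toy "models", one biased high and one low, the swarm drives the combined forecast toward the target series.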

  12. Minimizing the stochasticity of halos in large-scale structure surveys

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Seljak, Uroš; Desjacques, Vincent; Smith, Robert E.; Baldauf, Tobias

    2010-08-01

    In recent work (Seljak, Hamaus, and Desjacques 2009) it was found that weighting central halo galaxies by halo mass can significantly suppress their stochasticity relative to the dark matter, well below the Poisson model expectation. This is useful for constraining relations between galaxies and the dark matter, such as the galaxy bias, especially in situations where sampling variance errors can be eliminated. In this paper we extend this study with the goal of finding the optimal mass-dependent halo weighting. We use N-body simulations to perform a general analysis of halo stochasticity and its dependence on halo mass. We investigate the stochasticity matrix, defined as Cij≡⟨(δi-biδm)(δj-bjδm)⟩, where δm is the dark matter overdensity in Fourier space, δi the halo overdensity of the i-th halo mass bin, and bi the corresponding halo bias. In contrast to the Poisson model predictions we detect nonvanishing correlations between different mass bins. We also find the diagonal terms to be sub-Poissonian for the highest-mass halos. The diagonalization of this matrix results in one large and one low eigenvalue, with the remaining eigenvalues close to the Poisson prediction 1/n¯, where n¯ is the mean halo number density. The eigenmode with the lowest eigenvalue contains most of the information and the corresponding eigenvector provides an optimal weighting function to minimize the stochasticity between halos and dark matter. We find this optimal weighting function to match linear mass weighting at high masses, while at the low-mass end the weights approach a constant whose value depends on the low-mass cut in the halo mass function. This weighting further suppresses the stochasticity as compared to the previously explored mass weighting. Finally, we employ the halo model to derive the stochasticity matrix and the scale-dependent bias from an analytical perspective. 
It is remarkably successful in reproducing our numerical results and predicts that the stochasticity between halos and the dark matter can be reduced further when going to halo masses lower than we can resolve in current simulations.
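    The diagonalization step above can be illustrated for the simplest case of two halo mass bins, where the stochasticity matrix is a symmetric 2 × 2 matrix and its eigen-decomposition is available in closed form. This is illustrative only (the paper works with many mass bins measured from N-body simulations); the function name and sample matrix values are assumptions:

```python
import math

def eig_sym2(a, b, c):
    """Eigen-decomposition of the symmetric 2x2 matrix [[a, b], [b, c]].

    Returns (lam_lo, v_lo, lam_hi, v_hi); eigenvectors have unit length.
    """
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_lo, lam_hi = tr / 2.0 - disc, tr / 2.0 + disc

    def unit_vec(lam):
        # (A - lam I) v = 0  =>  v is proportional to (b, lam - a); if b == 0
        # the matrix is already diagonal, so pick the matching axis.
        if abs(b) > 1e-15:
            vx, vy = b, lam - a
        else:
            vx, vy = (1.0, 0.0) if abs(a - lam) < abs(c - lam) else (0.0, 1.0)
        n = math.hypot(vx, vy)
        return (vx / n, vy / n)

    return lam_lo, unit_vec(lam_lo), lam_hi, unit_vec(lam_hi)
```

    The eigenvector belonging to the lowest eigenvalue then plays the role of the optimal weighting function across the two mass bins.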

  13. Status of integrated multidisciplinary rotorcraft optimization research at the Langley Research Center

    NASA Technical Reports Server (NTRS)

    Mantay, Wayne R.; Adelman, Howard M.

    1990-01-01

    This paper describes a joint NASA/Army research activity at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and the interdisciplinary interactions are defined in terms of the information that must be transferred among disciplinary analyses as well as the trade-offs between disciplines in determining the details of the design. At this writing, some significant progress has been made. Results given in the paper represent accomplishments in rotor aerodynamic performance optimization for minimum horsepower, rotor dynamic optimization for vibration reduction, approximate analysis of frequencies and mode shapes, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  14. Optimal graph search segmentation using arc-weighted graph for simultaneous surface detection of bladder and prostate.

    PubMed

    Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan

    2009-01-01

    We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface-interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. Soft a priori shape-compliance (smoothness) terms are introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be obtained simultaneously by solving a maximum-flow problem on an arc-weighted graph representation. Representing the segmentation problem in an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, and it admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The results were quite encouraging in spite of the low saliency of the bladder and prostate in CT images.

  15. The optimal design of UAV wing structure

    NASA Astrophysics Data System (ADS)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction in the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to the optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. The adequacy of the numerical model is verified against experimental results obtained on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.

  16. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
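    The PGADP method itself relies on an actor-critic structure with the method of weighted residuals, but the underlying idea of improving a Q-function from logged data alone can be illustrated with plain batch Q-learning over a fixed set of transitions. This is a generic stand-in, not the authors' algorithm; the toy MDP used in testing is invented:

```python
def batch_q_learning(transitions, n_states, n_actions, gamma=0.9, alpha=0.1, sweeps=500):
    """Offline (data-based) Q-learning: repeatedly sweep a fixed batch of logged
    transitions (s, a, r, s') with the stochastic-approximation update
        Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)),
    so no mathematical model of the system dynamics is ever required."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(sweeps):
        for s, a, r, s2 in transitions:
            target = r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
    return Q
```

    The greedy policy read off the converged Q-table is the data-based controller; PGADP replaces the table with a parameterized Q-function updated by gradient descent.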

  17. Multi-Objective Programming for Lot-Sizing with Quantity Discount

    NASA Astrophysics Data System (ADS)

    Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung

    2011-11-01

    Multi-objective programming (MOP) is one of the popular methods for decision making in a complex environment. In a MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, so a Pareto-optimal solution is usually sought. Methods such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for them, are used to derive a compromise solution. The ɛ-constraint method is a modified weighting method: one of the objective functions is optimized while the other objective functions are treated as constraints and incorporated into the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed integer programming (MIP) model based on the ɛ-constraint method. An illustrative example demonstrates the practicality of the proposed model. The results show that the model is an effective and accurate tool for determining a manufacturer's replenishment from multiple suppliers over multiple periods.
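    The ɛ-constraint idea described above can be sketched on a toy single-variable lot-sizing example: minimize cost while the secondary objective (expected shortage) is moved into the constraints with a tunable bound ɛ. All numbers and function names below are illustrative assumptions, solved by brute-force enumeration rather than a MIP solver:

```python
def epsilon_constraint(f1, f2, candidates, eps):
    """Minimize the primary objective f1 over the candidates, treating the
    secondary objective as a constraint f2(x) <= eps (epsilon-constraint method)."""
    feasible = [x for x in candidates if f2(x) <= eps]
    if not feasible:
        return None
    return min(feasible, key=f1)

# Hypothetical two-objective toy: order quantity q in 0..100,
# f1 = purchase + congestion cost past a capacity of 40, f2 = shortage vs demand 60.
cost = lambda q: 2.0 * q + 0.1 * max(q - 40, 0) ** 2
shortage = lambda q: max(60 - q, 0)

frontier = [(eps, epsilon_constraint(cost, shortage, range(101), eps))
            for eps in (0, 10, 20, 30)]
```

    Sweeping ɛ traces out a set of compromise solutions: each relaxation of the shortage bound admits a cheaper order quantity.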

  18. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. 
Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
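    The Gaussian-weighted contribution described above can be sketched as a kernel density estimate of the simulated ensemble evaluated at the experimental conformations, here collapsed to a single 1-D coordinate for illustration. This is an assumption for compactness; the actual target function works with full conformational distances and Boltzmann probabilities:

```python
import math

def log_likelihood(exp_points, sim_points, sigma):
    """Maximum-likelihood target: sum over experimental conformations of the log
    of a Gaussian-kernel density estimate built from the simulated ensemble.
    Conformations are represented by a single 1-D coordinate for illustration."""
    norm = 1.0 / (len(sim_points) * sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for x in exp_points:
        dens = norm * sum(math.exp(-0.5 * ((x - s) / sigma) ** 2) for s in sim_points)
        total += math.log(dens)
    return total
```

    An ensemble simulated near the experimental conformations scores a higher target value than one shifted away from them, which is what the decoy-generation/optimization cycles exploit.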

  19. Influence of Body Weight on Bone Mass, Architecture, and Turnover

    PubMed Central

    Iwaniec, Urszula T.; Turner, Russell T.

    2016-01-01

    Weight-dependent loading of the skeleton plays an important role in establishing and maintaining bone mass and strength. This review focuses on mechanical signaling induced by body weight as an essential mechanism for maintaining bone health. In addition, the skeletal effects of deviation from normal weight are discussed. The magnitude of mechanical strain experienced by bone during normal activities is remarkably similar among vertebrates, regardless of size, supporting the existence of a conserved regulatory mechanism, or mechanostat, that senses mechanical strain. The mechanostat functions as an adaptive mechanism to optimize bone mass and architecture based on prevailing mechanical strain. Changes in weight, due to altered mass, weightlessness (spaceflight), and hypergravity (modeled by centrifugation), induce an adaptive skeletal response. However, the precise mechanisms governing the skeletal response are incompletely understood. Furthermore, establishing whether the adaptive response maintains the mechanical competence of the skeleton has proven difficult, necessitating development of surrogate measures of bone quality. The mechanostat is influenced by regulatory inputs to facilitate non-mechanical functions of the skeleton, such as mineral homeostasis, as well as hormones and energy/nutrient availability that support bone metabolism. While the skeleton is very capable of adapting to changes in weight, the mechanostat has limits. At the limits, extreme deviations from normal weight and body composition are associated with impaired optimization of bone strength to prevailing body size. PMID:27352896

  20. Extended Information Ratio for Portfolio Optimization Using Simulated Annealing with Constrained Neighborhood

    NASA Astrophysics Data System (ADS)

    Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi

    Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. It is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed over a past period to maintain its performance in a future period. In order to preserve good portfolio performance, we propose the extended information ratio as an objective function, built from the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio under the proposed ratio. For the SA, we generate neighbors by an operation that changes the structure of the weights in the portfolio. In numerical experiments, we show that our portfolios maintain good performance even when the market trend of the future period differs from that of the past period.
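    The constrained-neighborhood move described above (changing the structure of the weights while keeping a valid portfolio) can be sketched as follows. The objective here is a simple mean/volatility ratio standing in for the extended information ratio, and all names and parameter values are assumptions:

```python
import math
import random

def perf(w, returns):
    """Mean/volatility ratio of the weighted portfolio return series
    (a simple stand-in for the information-ratio-style objective)."""
    series = [sum(wi * r[t] for wi, r in zip(w, returns)) for t in range(len(returns[0]))]
    mu = sum(series) / len(series)
    var = sum((x - mu) ** 2 for x in series) / len(series)
    return mu / math.sqrt(var) if var > 0 else float("inf")

def anneal(returns, iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing whose neighborhood move transfers weight between two
    assets, so every visited portfolio stays on the probability simplex."""
    rng = random.Random(seed)
    n = len(returns)
    w = [1.0 / n] * n                      # start from equal weights
    f = perf(w, returns)
    best, best_f, temp = w[:], f, t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        delta = rng.uniform(0, w[i])       # constrained move: shift weight i -> j
        cand = w[:]
        cand[i] -= delta
        cand[j] += delta                   # weights stay nonnegative, sum to 1
        cf = perf(cand, returns)
        if cf > f or rng.random() < math.exp((cf - f) / temp):
            w, f = cand, cf
            if f > best_f:
                best, best_f = w[:], f
        temp *= cooling
    return best, best_f
```

    Because each move only transfers weight between two assets, the simplex constraint never has to be repaired after the fact.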

  1. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while retaining the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.

  2. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175

  3. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
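    A minimal version of cuckoo search with Lévy flights, using Mantegna's algorithm for the heavy-tailed step lengths, can be sketched as follows. This is illustrative only: the paper applies the method to a weighted Bayesian energy functional for curve fitting, whereas this sketch minimizes a plain quadratic, and all parameter values are assumptions:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, alpha=0.01, seed=0):
    """Minimize f over [-1, 1]^dim with a basic cuckoo search."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # generate a new solution by a Levy flight around a random nest
        i = rng.randrange(n_nests)
        cand = [x + alpha * levy_step(rng) for x in nests[i]]
        j = rng.randrange(n_nests)        # random nest to compare against
        if f(cand) < fit[j]:
            nests[j], fit[j] = cand, f(cand)
        # abandon a fraction pa of the worst nests and rebuild them at random
        order = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = [rng.uniform(-1, 1) for _ in range(dim)]
            fit[k] = f(nests[k])
    best = min(range(n_nests), key=lambda k: fit[k])
    return nests[best], fit[best]
```

    The simplicity highlighted in the abstract is visible here: beyond population size, essentially only the abandonment fraction pa needs tuning.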

  4. Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems.

    PubMed

    Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan

    2015-01-01

    Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple input and multiple output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided where one is an action network to generate an optimal control signal and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on two NNs. In the previous approaches, the weights of critic and action networks are updated based on the gradient descent rule and the estimations of optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: 1) only two parameters are needed to be adjusted, and thus the number of the adaptation laws is smaller than the previous results and 2) the updating parameters do not depend on the number of the subsystems for MIMO systems and the tuning rules are replaced by adjusting the norms on optimal weight vectors in both action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using Lyapunov analysis method. The simulation examples are employed to illustrate the effectiveness of the proposed algorithm.

  5. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined using 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion average (AICA), Bates-Granger average (BGA), Bayes information criterion average (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighting method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighting methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
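    Of the averaging methods compared, the Granger-Ramanathan variants are simple to state: the weights are a least-squares fit of the observed hydrograph on the member simulations. The sketch below implements that reading for an unconstrained, no-intercept variant (an assumption; the study's exact variants A/B/C differ in such details), solving the normal equations with plain Gaussian elimination:

```python
def gra_weights(sims, obs):
    """Granger-Ramanathan-style averaging weights: the ordinary least-squares
    solution of  obs ~ sum_k w_k * sim_k  (no intercept, unconstrained weights),
    obtained from the normal equations  (S^T S) w = S^T y."""
    k, n = len(sims), len(obs)
    A = [[sum(sims[i][t] * sims[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(sims[i][t] * obs[t] for t in range(n)) for i in range(k)]
    # solve A w = b by Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, k))) / A[r][r]
    return w
```

    When the observations are an exact linear blend of the members, the recovered weights match the blend.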

  6. Metabolic Cost, Mechanical Work, and Efficiency during Normal Walking in Obese and Normal-Weight Children

    ERIC Educational Resources Information Center

    Huang, Liang; Chen, Peijie; Zhuang, Jie; Zhang, Yanxin; Walt, Sharon

    2013-01-01

    Purpose: This study aimed to investigate the influence of childhood obesity on energetic cost during normal walking and to determine if obese children choose a walking strategy optimizing their gait pattern. Method: Sixteen obese children with no functional abnormalities were matched by age and gender with 16 normal-weight children. All…

  7. Displacement Based Multilevel Structural Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Striz, A. G.

    1996-01-01

    In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.

  8. Weight optimal design of lateral wing upper covers made of composite materials

    NASA Astrophysics Data System (ADS)

    Barkanov, Evgeny; Eglītis, Edgars; Almeida, Filipe; Bowering, Mark C.; Watson, Glenn

    2016-09-01

    The present investigation is devoted to the development of a new optimal design of lateral wing upper covers made of advanced composite materials, with special emphasis on closer conformity of the developed finite element analysis and operational requirements for aircraft wing panels. In the first stage, 24 weight optimization problems based on linear buckling analysis were solved for the laminated composite panels with three types of stiffener, two stiffener pitches and four load levels, taking into account manufacturing, reparability and damage tolerance requirements. In the second stage, a composite panel with the best weight/design performance from the previous study was verified by nonlinear buckling analysis and optimization to investigate the effect of shear and fuel pressure on the performance of stiffened panels, and their behaviour under skin post-buckling. Three rib-bay laminated composite panels with T-, I- and HAT-stiffeners were modelled with ANSYS, NASTRAN and ABAQUS finite element codes to study their buckling behaviour as a function of skin and stiffener lay-ups, stiffener height, stiffener top and root width. Owing to the large dimension of numerical problems to be solved, an optimization methodology was developed employing the method of experimental design and response surface technique. Optimal results obtained in terms of cross-sectional areas were verified successfully using ANSYS and ABAQUS shared-node models and a NASTRAN rigid-linked model, and were used later to estimate the weight of the Advanced Low Cost Aircraft Structures (ALCAS) lateral wing upper cover.

  9. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. 
    We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
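
    The kernel-based weighting scheme mentioned above can be sketched in a few lines. The Gaussian kernel, the bandwidth, and the weighted-average decision estimate are illustrative assumptions rather than the thesis' exact formulation:

```python
import numpy as np

def kernel_weights(states, query, bandwidth=1.0):
    """Gaussian kernel weights for observed states relative to a query state.

    A minimal sketch of kernel-based weighting for state-dependent stochastic
    search; the kernel and bandwidth choices are illustrative assumptions.
    """
    d2 = np.sum((states - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()  # normalize so the weights form a distribution

def weighted_decision_value(observed_outcomes, weights):
    """Estimate the expected outcome at the query state as a weighted average."""
    return np.dot(weights, observed_outcomes)
```

    Observations whose states lie near the query state dominate the estimate, which is how the exogenous state variable shapes the inferred objective.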

  10. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
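
    A minimal poll-and-refine sketch of generalized pattern search on continuous variables; the mixed-variable extension for categorical parameters such as transfer functions, and the surrogate acceleration, are omitted here:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Basic generalized pattern search: poll coordinate directions,
    move on improvement, otherwise halve the mesh size."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = x[:]
                cand[i] += d
                fc = f(cand)
                if fc < fx:            # accept the first improving poll point
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step /= 2.0                # refine the mesh and poll again
    return x
```

    No gradients are required, which is what makes the method attractive for expensive or non-smooth objective functions.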

  11. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C trace experiments is crucial for quantifying intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with a penalty function to solve the 13C-based metabolic flux estimation problem. The constrained problem is transformed into an unconstrained one by penalizing the stoichiometric constraints and building a single objective function, which in turn is minimized using the PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. From simulation results, it is shown that the proposed algorithm has superior performance and fast convergence ability when compared to other existing algorithms.
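
    The penalty-function idea can be sketched generically as follows. The quadratic penalty, the g(x) <= 0 constraint convention, and the PSO constants are illustrative assumptions, not the paper's actual flux model:

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized(x, f, constraints, mu=1e3):
    """Single objective: f(x) plus a quadratic penalty on violated constraints.

    `constraints` is a list of functions g with the convention g(x) <= 0.
    """
    viol = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + mu * viol

def pso(f, constraints, dim, n=30, iters=200, lo=-5.0, hi=5.0):
    """Standard global-best PSO minimizing the penalized objective."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([penalized(p, f, constraints) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([penalized(p, f, constraints) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest
```

    The penalty weight `mu` trades constraint satisfaction against fit; large values effectively enforce feasibility.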

  12. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around a fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for the weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that a simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.

  13. On the Performance of the Martin Digital Filter for High- and Low-pass Applications

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1979-01-01

    A nonrecursive numerical filter is described in which the weighting sequence is optimized by minimizing the excursion from the ideal rectangular filter in a least squares sense over the entire domain of normalized frequency. Additional corrections to the weights in order to reduce overshoot oscillations (Gibbs phenomenon) and to insure unity gain at zero frequency for the low pass filter are incorporated. The filter is characterized by a zero phase shift for all frequencies (due to a symmetric weighting sequence), a finite memory and stability, and it may readily be transformed to a high pass filter. Equations for the filter weights and the frequency response function are presented, and applications to high and low pass filtering are examined. A discussion of optimization of high pass filter parameters for a rather stringent response requirement is given in an application to the removal of aircraft low frequency oscillations superimposed on remotely sensed ocean surface profiles. Several frequency response functions are displayed, both in normalized frequency space and in period space. A comparison of the performance of the Martin filter with some other commonly used low pass digital filters is provided in an application to oceanographic data.
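
    The weighting-sequence construction above (least-squares truncation of the ideal rectangular response, an overshoot correction, and unity gain at zero frequency) can be sketched as follows. The Hann taper stands in for the Martin filter's actual overshoot correction, which is an assumption on my part:

```python
import numpy as np

def lowpass_weights(cutoff, half_width):
    """Symmetric low-pass FIR weighting sequence.

    Truncating the Fourier series of the ideal rectangular response gives the
    least-squares-optimal sinc weights; a Hann taper (illustrative) damps the
    Gibbs overshoot; the final normalization enforces unity gain at zero
    frequency. `cutoff` is in cycles per sample (0 < cutoff < 0.5).
    """
    n = np.arange(-half_width, half_width + 1)
    w = 2.0 * cutoff * np.sinc(2.0 * cutoff * n)   # ideal-response truncation
    w *= np.hanning(len(n) + 2)[1:-1]              # overshoot (Gibbs) suppression
    return w / w.sum()                             # unity DC gain

def frequency_response(w, freqs):
    """Real-valued response; zero phase because the weights are symmetric."""
    n = np.arange(-(len(w) // 2), len(w) // 2 + 1)
    return np.array([np.sum(w * np.cos(2 * np.pi * f * n)) for f in freqs])
```

    The corresponding high-pass response is simply one minus the low-pass response, mirroring the transformation mentioned in the abstract.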

  14. Getting the most out of additional guidance information in deformable image registration by leveraging multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Bosman, Peter A. N.; Bel, Arjan

    2015-03-01

    Incorporating additional guidance information, e.g., landmark/contour correspondence, in deformable image registration is often desirable and is typically done by adding constraints or cost terms to the optimization function. Commonly, deciding between a "hard" constraint and a "soft" additional cost term as well as the weighting of cost terms in the optimization function is done on a trial-and-error basis. The aim of this study is to investigate the advantages of exploiting guidance information by taking a multi-objective optimization perspective. To this end, next to objectives related to match quality and amount of deformation, we define a third objective related to guidance information. Multi-objective optimization eliminates the need to tune a weighting of objectives a priori in a single optimization function, as well as the strict requirement of fulfilling hard guidance constraints. Instead, Pareto-efficient trade-offs between all objectives are found, effectively making the introduction of guidance information straightforward, independent of its type or scale. Further, since complete Pareto fronts also contain less interesting parts (i.e., solutions with near-zero deformation effort), we study how adaptive steering mechanisms can be incorporated to automatically focus more on solutions of interest. We performed experiments on artificial and real clinical data with large differences, including disappearing structures. Results show the substantial benefit of using additional guidance information. Moreover, compared to the 2-objective case, additional computational cost is negligible. Finally, with the same computational budget, use of the adaptive steering mechanism provides superior solutions in the area of interest.
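
    A hypothetical sketch of extracting the Pareto-efficient trade-offs among objective vectors (here plain tuples of objective values to be minimized, e.g. match quality, deformation effort, and guidance mismatch):

```python
def pareto_front(solutions):
    """Return the Pareto-efficient subset of a list of objective tuples.

    A solution is dominated if some other solution is at least as good in
    every objective and different in at least one; no weights need to be
    chosen in advance.
    """
    front = []
    for s in solutions:
        dominated = any(
            all(o <= so for o, so in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

    An adaptive steering mechanism would then restrict attention to the part of this front with non-trivial deformation effort.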

  15. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
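
    The unequal weighting of negative and positive residuals can be sketched as follows; the function name and the two-weight interface are illustrative, not MODOPTIM's actual input format:

```python
def asymmetric_ssq(simulated, observed, w_neg, w_pos):
    """Weighted sum of squared residuals with unequal one-sided weights.

    Weighting positive residuals far more heavily than negative ones (or
    vice versa) turns an observation into an inequality-like constraint,
    e.g. a maximum chloride concentration or a minimum water level.
    """
    total = 0.0
    for sim, obs in zip(simulated, observed):
        r = sim - obs
        total += (w_pos if r > 0 else w_neg) * r * r
    return total
```

    With `w_pos` much larger than `w_neg`, exceeding the observed value is heavily punished while undershooting it is nearly free.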

  16. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, Jack

    2017-01-01

    Two methods for constructing performance functions for formation flight for drag reduction, suitable for use with an extremum-seeking control system, are presented. The first method approximates an a priori measured or estimated drag-reduction performance function by combining real-time measurements of readily available parameters. The parameters are combined with weightings determined from a least-squares optimization to form a blended performance function.
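
    The least-squares blending step can be sketched as follows; the matrix-of-parameters interface is an assumption for illustration, not the report's actual formulation:

```python
import numpy as np

def blend_weights(measured_params, target_performance):
    """Least-squares weightings that blend parameters into one function.

    Columns of `measured_params` are the readily available real-time
    parameters; the returned weights make their linear blend best
    approximate the target performance values in the least-squares sense.
    """
    w, *_ = np.linalg.lstsq(measured_params, target_performance, rcond=None)
    return w

def blended_performance(measured_params, w):
    """Evaluate the blended performance function."""
    return measured_params @ w
```

    The blended function can then serve as the feedback signal for an extremum-seeking controller in place of a direct drag measurement.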

  17. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error of 0.9844 inches in the starting configuration is reduced to 0.00367 inch by the end of the third optimization run.

  18. The universal scissor component: Optimization of a reconfigurable component for deployable scissor structures

    NASA Astrophysics Data System (ADS)

    Alegria Mira, Lara; Thrall, Ashley P.; De Temmerman, Niels

    2016-02-01

    Deployable scissor structures are well equipped for temporary and mobile applications since they are able to change their form and functionality. They are structural mechanisms that transform from a compact state to an expanded, fully deployed configuration. A barrier to the current design and reuse of scissor structures, however, is that they are traditionally designed for a single purpose. Alternatively, a universal scissor component (USC)-a generalized element which can achieve all traditional scissor types-introduces an opportunity for reuse in which the same component can be utilized for different configurations and spans. In this article, the USC is optimized for structural performance. First, an optimized length for the USC is determined based on a trade-off between component weight and structural performance (measured by deflections). Then, topology optimization, using the simulated annealing algorithm, is implemented to determine a minimum weight layout of beams within a single USC component.
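
    A toy simulated-annealing sketch of minimum-weight layout selection under a stiffness requirement. The cost model, penalty, and cooling schedule are illustrative assumptions, not a structural model of the USC:

```python
import math, random

random.seed(1)

def anneal(n_beams, stiffness, required, steps=5000, t0=1.0):
    """Simulated annealing for a minimum-weight beam layout.

    Each beam has unit weight and an (illustrative) stiffness contribution
    stiffness[i]; the layout must keep total stiffness above `required`.
    """
    def cost(layout):
        s = sum(k for k, keep in zip(stiffness, layout) if keep)
        penalty = 1000.0 * max(0.0, required - s)   # infeasibility penalty
        return sum(layout) + penalty

    layout = [1] * n_beams                          # start fully populated
    best, best_cost = layout[:], cost(layout)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9        # linear cooling schedule
        cand = layout[:]
        cand[random.randrange(n_beams)] ^= 1        # flip one beam in/out
        d = cost(cand) - cost(layout)
        if d < 0 or random.random() < math.exp(-d / t):
            layout = cand
            if cost(layout) < best_cost:
                best, best_cost = layout[:], cost(layout)
    return best
```

    Early in the schedule, weight-increasing flips are occasionally accepted, letting the search escape local minima before the temperature drops.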

  19. Functional weight-bearing mobilization after Achilles tendon rupture enhances early healing response: a single-blinded randomized controlled trial.

    PubMed

    Valkering, Kars P; Aufwerber, Susanna; Ranuccio, Francesco; Lunini, Enricomaria; Edman, Gunnar; Ackermann, Paul W

    2017-06-01

    Functional weight-bearing mobilization may improve repair of Achilles tendon rupture (ATR), but the underlying mechanisms and outcome were unknown. We hypothesized that functional weight-bearing mobilization by means of increased metabolism could improve both early and long-term healing. In this prospective randomized controlled trial, patients with acute ATR were randomized to either direct post-operative functional weight-bearing mobilization (n = 27) in an orthosis or to non-weight-bearing (n = 29) plaster cast immobilization. During the first two post-operative weeks, 15°-30° of plantar flexion was allowed and encouraged in the functional weight-bearing mobilization group. At 2 weeks, patients in the non-weight-bearing cast immobilization group received a stiff orthosis, while the functional weight-bearing mobilization group continued with increased range of motion. At 6 weeks, all patients discontinued immobilization. At 2 weeks, healing metabolites and markers of procollagen type I (PINP) and III (PIIINP) were examined using microdialysis. At 6 and 12 months, functional outcome using the heel-rise test was assessed. Healing tendons of both groups exhibited increased levels of metabolites glutamate, lactate, pyruvate, and of PIIINP (all p < 0.05). Patients in the functional weight-bearing mobilization group demonstrated significantly higher concentrations of glutamate compared to the non-weight-bearing cast immobilization group (p = 0.045). The upregulated glutamate levels were significantly correlated with the concentrations of PINP (r = 0.5, p = 0.002) as well as with improved functional outcome at 6 months (r = 0.4; p = 0.014). Heel-rise tests at 6 and 12 months did not display any differences between the two groups. Functional weight-bearing mobilization enhanced the early healing response of ATR. In addition, early ankle range of motion was improved without the risk of Achilles tendon elongation and without altering long-term functional outcome.
The relationship between functional weight-bearing mobilization-induced upregulation of glutamate and enhanced healing suggests novel opportunities to optimize post-operative rehabilitation.

  20. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveform is known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained based on symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is considered for the optimal symbol search. Simulations are carried out to show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal feature to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of the symbol search by nearly one hundred times, achieving a global optimal solution.

  1. Multi-objective optimization for model predictive control.

    PubMed

    Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry

    2007-06-01

    This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics must outweigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated Variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.

  2. Inverse optimization of objective function weights for treatment planning using clinical dose-volume histograms

    NASA Astrophysics Data System (ADS)

    Babier, Aaron; Boutilier, Justin J.; Sharpe, Michael B.; McNiven, Andrea L.; Chan, Timothy C. Y.

    2018-05-01

    We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate ‘inverse plans’ that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.

  3. Inverse optimization of objective function weights for treatment planning using clinical dose-volume histograms.

    PubMed

    Babier, Aaron; Boutilier, Justin J; Sharpe, Michael B; McNiven, Andrea L; Chan, Timothy C Y

    2018-05-10

    We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate 'inverse plans' that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.

  4. Low Temperature Performance of High-Speed Neural Network Circuits

    NASA Technical Reports Server (NTRS)

    Duong, T.; Tran, M.; Daud, T.; Thakoor, A.

    1995-01-01

    Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for such tasks as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, they offer an orders-of-magnitude speed advantage. The basic building blocks of the architecture are processing elements called neurons, implemented as nonlinear operational amplifiers with a sigmoidal transfer function, interconnected through weighted connections called synapses, which are implemented with weight-storage and multiply circuitry in an analog, digital, or hybrid scheme.

  5. Adaptive critic designs for optimal control of uncertain nonlinear systems with unmatched interconnections.

    PubMed

    Yang, Xiong; He, Haibo

    2018-05-26

    In this paper, we develop a novel optimal control strategy for a class of uncertain nonlinear systems with unmatched interconnections. To begin with, we present a stabilizing feedback controller for the interconnected nonlinear systems by modifying an array of optimal control laws of auxiliary subsystems. We also prove that this feedback controller ensures a specified cost function to achieve optimality. Then, under the framework of adaptive critic designs, we use critic networks to solve the Hamilton-Jacobi-Bellman equations associated with auxiliary subsystem optimal control laws. The critic network weights are tuned through the gradient descent method combined with an additional stabilizing term. By using the newly established weight tuning rules, we no longer need the initial admissible control condition. In addition, we demonstrate that all signals in the closed-loop auxiliary subsystems are stable in the sense of uniform ultimate boundedness by using classic Lyapunov techniques. Finally, we provide an interconnected nonlinear plant to validate the present control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. The tensor distribution function.

    PubMed

    Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M

    2009-01-01

    Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.

  7. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method is proposed that aims to stitch images with overlapping areas more seamlessly. Because the traditional gradient-domain optimal seam method measures color differences poorly and fusion algorithms are time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied. The proposed method performs better at eliminating the stitching seam than the traditional gradient optimal seam method and is more efficient than the multi-band blending algorithm.
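
    A generic dynamic-programming optimal-seam sketch; the paper's HSV-based energy function is replaced here by an arbitrary energy map, which is an assumption:

```python
def optimal_seam(energy):
    """Minimum-energy vertical seam through an energy map over the overlap.

    `energy` is a list of rows; the returned column indices trace the
    lowest-cost stitching path, with the seam allowed to shift by at most
    one column per row.
    """
    rows, cols = len(energy), len(energy[0])
    cost = [energy[0][:]]                      # cumulative cost table
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            row.append(energy[r][c] + min(prev[lo:hi]))
        cost.append(row)
    # backtrack from the cheapest bottom cell
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols, c + 2)
        seam.append(min(range(lo, hi), key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam
```

    A pixel correction or weighted average applied along the returned path would then blend the two images across the seam.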

  8. Accumulating Data to Optimally Predict Obesity Treatment (ADOPT) Core Measures: Psychosocial Domain.

    PubMed

    Sutin, Angelina R; Boutelle, Kerri; Czajkowski, Susan M; Epel, Elissa S; Green, Paige A; Hunter, Christine M; Rice, Elise L; Williams, David M; Young-Hyman, Deborah; Rothman, Alexander J

    2018-04-01

    Within the Accumulating Data to Optimally Predict obesity Treatment (ADOPT) Core Measures Project, the psychosocial domain addresses how psychosocial processes underlie the influence of obesity treatment strategies on weight loss and weight maintenance. The subgroup for the psychosocial domain identified an initial list of high-priority constructs and measures that ranged from relatively stable characteristics about the person (cognitive function, personality) to dynamic characteristics that may change over time (motivation, affect). This paper describes (a) how the psychosocial domain fits into the broader model of weight loss and weight maintenance as conceptualized by ADOPT; (b) the guiding principles used to select constructs and measures for recommendation; (c) the high-priority constructs recommended for inclusion; (d) domain-specific issues for advancing the science; and (e) recommendations for future research. The inclusion of similar measures across trials will help to better identify how psychosocial factors mediate and moderate the weight loss and weight maintenance process, facilitate research into dynamic interactions with factors in the other ADOPT domains, and ultimately improve the design and delivery of effective interventions. © 2018 The Obesity Society.

  9. A partition function-based weighting scheme in force field parameter development using ab initio calculation results in global configurational space.

    PubMed

    Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng

    2013-06-05

    In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
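
    The partition-function normalization at the heart of the scheme can be sketched as follows; this reproduces only the general Boltzmann weighting, not the paper's error-damping refinement:

```python
import math

def partition_weights(energies, kT=1.0):
    """Boltzmann-style weights normalized by the partition function Z.

    Configurations with lower target energy receive exponentially larger
    weight in the fit, and the shared normalizer keeps the weights a
    proper distribution.
    """
    e0 = min(energies)                       # shift for numerical stability
    boltz = [math.exp(-(e - e0) / kT) for e in energies]
    z = sum(boltz)                           # partition function
    return [b / z for b in boltz]
```

    The effective temperature `kT` controls the discrimination power: small values concentrate the fit on the low-energy region of the PES.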

  10. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because the time-variant target behavior and the obstacle avoidance behavior occur simultaneously. Considering the safety and real-time requirements of the intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle's driving characteristics, the distance between the intelligent vehicle and obstacles, and the distance between the intelligent vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, simulation results show that the optimization method and its fitness function reduce the perturbations of the planned vehicle path and improve real-time performance and reliability.

  12. On the Impact of Local Taxes in a Set Cover Game

    NASA Astrophysics Data System (ADS)

    Escoffier, Bruno; Gourvès, Laurent; Monnot, Jérôme

    Given a collection C of weighted subsets of a ground set E, the set cover problem is to find a minimum-weight subcollection of C that covers all elements of E. We study a strategic game defined upon this classical optimization problem. Every element of E is a player which chooses one set of C in which it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function with the following features: it can be implemented in a distributed manner, the existence of an equilibrium is guaranteed, and the social cost of these equilibria is minimized.
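A natural baseline tax function of the kind described is proportional cost sharing, where each set's weight is split equally among the players who selected it. The sets, weights, and strategy profile below are invented for illustration; the paper studies which such tax functions guarantee good equilibria.

```python
# Illustrative proportional ("fair-share") tax for a set cover game.
# Sets, weights, and the strategy profile are made up for this sketch.
sets = {"S1": {"weight": 4.0, "elements": {"a", "b"}},
        "S2": {"weight": 3.0, "elements": {"b", "c"}},
        "S3": {"weight": 5.0, "elements": {"a", "c", "d"}}}

# Strategy profile: each element (player) picks one set containing it.
choice = {"a": "S1", "b": "S1", "c": "S2", "d": "S3"}

def tax(player, profile):
    s = profile[player]
    n_users = sum(1 for p, t in profile.items() if t == s)
    return sets[s]["weight"] / n_users  # equal share of the set's weight

costs = {p: tax(p, choice) for p in choice}
social_cost = sum(sets[s]["weight"] for s in set(choice.values()))
print(costs, social_cost)
```

The social cost is the total weight of the distinct sets actually used; a good tax function makes players' selfish choices drive this total down.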

  13. Automated Surgical Approach Planning for Complex Skull Base Targets: Development and Validation of a Cost Function and Semantic Atlas.

    PubMed

    Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A

    2018-06-01

    Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach for another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In the one case where the actual surgical approach differed, revision surgery was required and was performed utilizing the computationally derived approach pathway.
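The two multiobjective strategies contrasted above differ in when the surgeon's preferences enter: before the search (weighted sum) or after it (Pareto set selection). A minimal sketch, with invented two-objective costs for a handful of candidate pathways:

```python
import numpy as np

# Hypothetical two-objective costs for candidate pathways (e.g., a
# tissue-damage cost vs. a path-length cost); all values are invented.
costs = np.array([[1.0, 5.0],
                  [2.0, 2.0],
                  [3.0, 4.0],   # dominated by (2, 2)
                  [5.0, 1.0]])

# Approach 1: weighted sum, with preference weights fixed before the search.
w = np.array([0.7, 0.3])
best_weighted = costs[(costs * w).sum(axis=1).argmin()]

# Approach 2: compute the Pareto-optimal set, then choose from it afterward.
def pareto_mask(c):
    keep = np.ones(len(c), bool)
    for i in range(len(c)):
        for j in range(len(c)):
            if i != j and np.all(c[j] <= c[i]) and np.any(c[j] < c[i]):
                keep[i] = False  # candidate i is dominated by j
    return keep

pareto_set = costs[pareto_mask(costs)]
print(best_weighted, pareto_set)
```

Note that every weighted-sum optimum is Pareto-optimal, but the Pareto set also retains trade-off solutions a single weight vector would discard.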

  14. Near-Optimal Re-Entry Trajectories for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Chou, H.-C.; Ardema, M. D.; Bowles, J. V.

    1997-01-01

    A near-optimal guidance law for the descent trajectory for earth orbit re-entry of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. A methodology is developed to investigate using both bank angle and altitude as control variables and selecting parameters that maximize various performance functions. The method is based on the energy-state model of the aircraft equations of motion. The major task of this paper is to obtain optimal re-entry trajectories under a variety of performance goals: minimum time, minimum surface temperature, minimum heating, and maximum heading change; four classes of trajectories were investigated: no banking, optimal left turn banking, optimal right turn banking, and optimal bank chattering. The cost function is in general a weighted sum of all performance goals. In particular, the trade-off between minimizing heat load into the vehicle and maximizing cross range distance is investigated. The results show that the optimization methodology can be used to derive a wide variety of near-optimal trajectories.

  15. An A Priori Multiobjective Optimization Model of a Search and Rescue Network

    DTIC Science & Technology

    1992-03-01

    sequences. Classical sensitivity analysis and tolerance analysis were used to analyze the frequency assignments generated by the different weight...function for excess coverage of a frequency. Sensitivity analysis is used to investigate the robustness of the frequency assignments produced by the...interest. The linear program solution is used to produce classical sensitivity analysis for the weight ranges. 17 III. Model Formulation This chapter

  16. A Survey of Reliability, Maintainability, Supportability, and Testability Software Tools

    DTIC Science & Technology

    1991-04-01

    designs in terms of their contributions toward forced mission termination and vehicle or function loss. Includes the ability to treat failure modes of...ABSTRACT: Inputs: MTBFs, MTTRs, support equipment costs, equipment weights and costs, available targets, military occupational specialty skill level and...US Army CECOM NAME: SPARECOST ABSTRACT: Calculates expected number of failures and performs spares holding optimization based on cost, weight, or

  17. Effects of lighting and air-conditioning systems on growth weight and functional composition of frill-lettuce produced in plant factory

    NASA Astrophysics Data System (ADS)

    Yoshida, Atsumasa; Okamura, Nobuya; Furukawa, Hajime; Myojin, Chiho; Moriuchi, Koji; Kinoshita, Shinichi

    2017-06-01

    The aim of the present study was to develop optimal air-conditioning systems for plant factories. To verify the effect of particular air-conditioning and lighting systems, cultivation experiments were performed with frill-lettuce for two weeks. In the present study, the relationship between the cultivation conditions, the yield (i.e., increase in edible portion weight), and the functional components was discussed. Based on the measured data, increased photosynthetic photon flux density increased both antioxidative activity and edible portion weight, possibly because high light intensities are stressful for frill-lettuce. Antioxidative activity also increased under conditions of low CO2 concentration, weak and strong winds, and high air temperature, because these conditions stress the plants. However, a decrease in edible portion weight was observed under these conditions, implying a negative correlation between antioxidative activity and edible portion weight.

  18. Optimal design of a bank of spatio-temporal filters for EEG signal classification.

    PubMed

    Higashi, Hiroshi; Tanaka, Toshihisa

    2011-01-01

    The spatial weights for electrodes called common spatial pattern (CSP) are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot handle multiple brain activities described by different frequency bands and different spatial patterns, such as the activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design multiple filters and the associated spatial weights that extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function which is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery.
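Standard CSP, which the method above extends, computes spatial weights as the extreme eigenvectors of a generalized eigenvalue problem between the two class covariance matrices. A minimal sketch on synthetic two-class data (the data, sizes, and seed are invented for illustration):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Synthetic two-class EEG-like data: (trials, channels, samples).
# Class 1 has extra variance on channel 0, class 2 on channel 1.
X1 = rng.standard_normal((30, 4, 200)); X1[:, 0] *= 3.0
X2 = rng.standard_normal((30, 4, 200)); X2[:, 1] *= 3.0

def mean_cov(X):
    # Trace-normalized trial covariances, averaged over trials.
    covs = [x @ x.T / np.trace(x @ x.T) for x in X]
    return np.mean(covs, axis=0)

C1, C2 = mean_cov(X1), mean_cov(X2)

# CSP solves C1 w = lambda (C1 + C2) w; the extreme eigenvectors give
# spatial weights maximizing the between-class variance ratio.
evals, evecs = eigh(C1, C1 + C2)
w_max = evecs[:, -1]  # filter emphasizing class-1 variance
w_min = evecs[:, 0]   # filter emphasizing class-2 variance
print(evals)
```

The proposed method additionally optimizes the temporal FIR filters jointly with these spatial weights, which this sketch does not attempt.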

  19. Weighting of Acoustic Cues to a Manner Distinction by Children With and Without Hearing Loss

    PubMed Central

    Lowenstein, Joanna H.

    2015-01-01

    Purpose Children must develop optimal perceptual weighting strategies for processing speech in their first language. Hearing loss can interfere with that development, especially if cochlear implants are required. The three goals of this study were to measure, for children with and without hearing loss: (a) cue weighting for a manner distinction, (b) sensitivity to those cues, and (c) real-world communication functions. Method One hundred and seven children (43 with normal hearing [NH], 17 with hearing aids [HAs], and 47 with cochlear implants [CIs]) performed several tasks: labeling of stimuli from /bɑ/-to-/wɑ/ continua varying in formant and amplitude rise time (FRT and ART), discrimination of ART, word recognition, and phonemic awareness. Results Children with hearing loss were less attentive overall to acoustic structure than children with NH. Children with CIs, but not those with HAs, weighted FRT less and ART more than children with NH. Sensitivity could not explain cue weighting. FRT cue weighting explained significant amounts of variability in word recognition and phonemic awareness; ART cue weighting did not. Conclusion Signal degradation inhibits access to spectral structure for children with CIs, but cannot explain their delayed development of optimal weighting strategies. Auditory training could strengthen the weighting of spectral cues for children with CIs, thus aiding spoken language acquisition. PMID:25813201

  20. Neural-network-based online HJB solution for optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems.

    PubMed

    Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong

    2014-12-01

    In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using a neural-network-based online solution of the Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is precisely the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced to help verify stability; it reinforces the updating process of the weight vector and relaxes the requirement for an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.

  1. Enhancing fatigue life of cylinder-crown integrated structure by optimizing dimension

    NASA Astrophysics Data System (ADS)

    Zhang, Weiwei; Wang, Xiaosong; Wang, Zhongren; Yuan, Shijian

    2015-03-01

    The cylinder-crown integrated hydraulic press (CCIHP) is a new press structure. The hemispherical hydraulic cylinder also functions as a main portion of the crown, which gives lower weight and a higher section modulus compared with the conventional hydraulic cylinder and press crown. As a result, the material strength capacity is better utilized. During the engineering design of the cylinder-crown integrated structure, in order to increase the fatigue life, structural optimization based on adaptive macro genetic algorithms (AMGA) is first conducted to both reduce weight and decrease peak stress. It is shown that the magnitude of the maximum principal stress is decreased by 28.6%, while the total weight is reduced by 4.4%. Subsequently, a strain-controlled fatigue test is carried out, and the stress-strain hysteresis loops and cyclic hardening curve are obtained. Based on a linear fit, the fatigue properties are calculated and used for fatigue life prediction. The predicted fatigue life is significantly increased, from 157,000 to 1,070,000 cycles, after structural optimization. Finally, according to the optimized design, a 6300 kN CCIHP has been manufactured, and a priority application has also been suggested.

  2. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in the quadratic cost function. The control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is effectively a minimization of the effects of disturbances, and can be interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of determinants. This approach yields an objective measure for the design of a control system, with the measure determined by the system requirements.
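The sensitivity of an LQR design to its weighting matrices, which motivates the optimal-weighting approach above, is easy to see on a toy model. This sketch uses a double integrator rather than the paper's helicopter dynamics, and simply compares two hand-picked Q matrices rather than the determinant-ratio criterion:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator model (position/velocity); illustrative only,
# not the helicopter model of the paper.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def lqr_gain(Q, R):
    # Solve the continuous algebraic Riccati equation, then K = R^-1 B' P.
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_soft = lqr_gain(np.diag([1.0, 1.0]), np.array([[1.0]]))
K_fast = lqr_gain(np.diag([100.0, 1.0]), np.array([[1.0]]))  # heavier state weight
print(K_soft, K_fast)
```

Increasing the state weighting in Q raises the feedback gains and speeds the response at the cost of larger control effort, which is exactly the accuracy/response-speed trade-off the abstract describes.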

  3. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation, with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works as well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian-Eulerian (ALE) methods.

  4. Horsetail matching: a flexible approach to optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
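The horsetail-matching idea of comparing a design's output distribution against a target can be sketched with empirical quantiles. This is a simplified illustration, not the paper's kernel-smoothed, differentiable implementation; the designs and target are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def horsetail_metric(samples, target_inverse_cdf):
    # Compare sampled quantiles of the quantity of interest against the
    # target quantiles at matching probability levels.
    q = np.sort(samples)
    h = (np.arange(len(q)) + 0.5) / len(q)  # empirical CDF levels
    t = target_inverse_cdf(h)               # target quantiles
    return np.mean((q - t) ** 2)

# Two candidate "designs": same mean output, different spread under
# uncertainty (values invented).
design_a = rng.normal(1.0, 0.1, 2000)
design_b = rng.normal(1.0, 0.5, 2000)

target = lambda h: np.full_like(h, 1.0)  # ideal: deterministic value 1
print(horsetail_metric(design_a, target), horsetail_metric(design_b, target))
```

The lower-spread design scores better against the deterministic target, which is the robustness behavior the formulation rewards; the paper's kernel treatment makes this metric smooth enough for gradient-based optimization.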

  5. Complex-energy approach to sum rules within nuclear density functional theory

    DOE PAGES

    Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...

    2015-04-27

    The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish anmore » efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random- phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.« less

  6. EUD-based biological optimization for carbon ion therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brüningk, Sarah C., E-mail: sarah.brueningk@icr.ac.uk; Kamp, Florian; Wilkens, Jan J.

    2015-11-15

    Purpose: Treatment planning for carbon ion therapy requires accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable for carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and implement it as a basis for biological treatment plan optimization for carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect, using an effect-volume parameter to account for different organ architectures, and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach.
In agreement with previous results from photon therapy, the optimization by biological objective functions resulted in slightly superior treatment plans in terms of final EUD for the organs at risk (OARs) compared to voxel-based optimization approaches. This observation was made independent of the underlying objective function metric. An absolute gain in OAR sparing was observed for quadratic objective functions, whereas intersecting DVHs were found for logistic approaches. Even for considerable under- or overestimations of the effect- or dose-volume parameters used during the optimization, treatment plans were obtained that were of similar quality to the results of a voxel-based optimization. Conclusions: EUD-based optimization with either of the presented concepts can successfully be applied to treatment plan optimization. This makes EUE-based optimization for carbon ion therapy a useful tool to optimize more specifically in the sense of biological outcome, while voxel-to-voxel variations of the biological effectiveness are still properly accounted for. This may be advantageous in terms of computational cost during treatment plan optimization, but also enables a straightforward comparison of different fractionation schemes or treatment modalities.
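The generalized mean underlying the classical EUD concept mentioned above can be written as EUD = (sum_i v_i d_i^a)^(1/a), with a the volume-effect parameter. The voxel doses, volumes, and parameter values below are illustrative; the paper's EUE replaces the dose d_i by a biological effect in the same functional form.

```python
import numpy as np

# Illustrative RBE-weighted dose per voxel (Gy) and fractional volumes.
d = np.array([10.0, 20.0, 30.0, 40.0])
v = np.full(4, 0.25)

def eud(dose, vol, a):
    # Generalized mean: a = 1 gives the mean dose; large a is dominated
    # by the hottest voxels (serial-organ behavior).
    return (np.sum(vol * dose ** a)) ** (1.0 / a)

print(eud(d, v, 1.0))  # plain mean dose
print(eud(d, v, 8.0))  # pulled toward the maximum voxel dose
```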

  7. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
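The weighted-PCA building block referred to above amounts to a weighted mean removal followed by an eigendecomposition of the weighted covariance matrix. The data, weighting choice, and seed below are illustrative only (the paper works with kernel PCA and adjoint-derived weights):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 100 realizations of a 2-D "field" with much more variance
# along the first axis; the linear weights simply up-weight later
# realizations (an illustrative choice, not the paper's scheme).
X = rng.standard_normal((100, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])
w = np.linspace(0.5, 1.5, 100)
w = w / w.sum()

# Weighted PCA: weighted mean removal, then eigendecomposition of the
# weighted covariance matrix.
mu = w @ X
Xc = X - mu
C = (Xc * w[:, None]).T @ Xc
evals, evecs = np.linalg.eigh(C)
pc1 = evecs[:, -1]  # leading principal direction (largest eigenvalue)
print(evals, pc1)
```

Re-weighting the realizations changes C and hence the basis, which is the lever the paper's gradient-based re-weighting pulls to accelerate convergence.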

  8. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
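The linear half of the hybrid scheme can be sketched directly: with the nonlinear parameters (RBF centres and widths) held fixed, the output-layer weights solve a linear least-squares problem handled robustly via SVD. The data, network size, and values below are illustrative; a full hybrid loop would alternate this solve with gradient updates of the nonlinear parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D regression data.
x = np.linspace(-1, 1, 50)[:, None]
y = np.sin(3 * x).ravel() + 0.05 * rng.standard_normal(50)

# "Nonlinear" parameters, fixed here for the sketch.
centres = np.linspace(-1, 1, 8)
width = 0.3
Phi = np.exp(-(x - centres) ** 2 / (2 * width ** 2))  # hidden-layer outputs

# SVD-based least-squares solve for the linear output weights.
w_lin, *_ = np.linalg.lstsq(Phi, y, rcond=None)
residual = np.mean((Phi @ w_lin - y) ** 2)
print(residual)
```

Solving the linear weights exactly at every step, instead of descending on them, is what gives the hybrid scheme its edge when the linear-to-nonlinear parameter ratio is large.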

  9. Integrated multidisciplinary optimization of rotorcraft: A plan for development

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Editor); Mantay, Wayne R. (Editor)

    1989-01-01

    This paper describes a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed, validation strategies are described, and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, significant progress has been made, principally in the areas of single discipline optimization. Accomplishments are described in areas of rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.

  10. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, P; Xing, L; Ungun, B

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of improving VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were thrown out. Tomore » avoid being trapped in a local minimum, a stochastic gradient descent method was employed which also greatly increased the convergence rate of the objective function. The above procedure repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexities of aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. The SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed by using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicates that SPORT is an intriguing alternative treatment modality.« less

  11. Using the Firefly optimization method to weight an ensemble of rainfall forecasts from the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS)

    NASA Astrophysics Data System (ADS)

    dos Santos, A. F.; Freitas, S. R.; de Mattos, J. G. Z.; de Campos Velho, H. F.; Gan, M. A.; da Luz, E. F. P.; Grell, G. A.

    2013-09-01

    In this paper we consider an optimization problem applying the metaheuristic Firefly algorithm (FY) to weight an ensemble of rainfall forecasts from daily precipitation simulations with the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) over South America during January 2006. The method is addressed as a parameter estimation problem to weight the ensemble of precipitation forecasts carried out using different options of the convective parameterization scheme. Ensemble simulations were performed using different choices of closures, representing different formulations of dynamic control (the modulation of convection by the environment) in a deep convection scheme. The optimization problem is solved as an inverse problem of parameter estimation. The application and validation of the methodology is carried out using daily precipitation fields, defined over South America and obtained by merging remote sensing estimations with rain gauge observations. The quadratic difference between the model and observed data was used as the objective function to determine the best combination of the ensemble members to reproduce the observations. To reduce the model rainfall biases, the set of weights determined by the algorithm is used to weight members of an ensemble of model simulations in order to compute a new precipitation field that represents the observed precipitation as closely as possible. The validation of the methodology is carried out using classical statistical scores. The algorithm has produced the best combination of the weights, resulting in a new precipitation field closest to the observations.
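A minimal firefly algorithm applied to a toy version of this ensemble-weighting problem is sketched below. The ensemble members, "observations", and all algorithm settings are invented; the real application weights precipitation forecasts against merged gauge/satellite fields.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem: recover weights for three ensemble members so the weighted
# forecast matches the "observations" (a known weighted combination).
members = rng.standard_normal((3, 40))  # 3 forecasts on 40 grid points
true_w = np.array([0.2, 0.5, 0.3])
obs = true_w @ members

def objective(w):
    # Quadratic model-observation misfit, as in the paper.
    return np.sum((w @ members - obs) ** 2)

n_ff, n_iter = 15, 200
pop = rng.uniform(0.0, 1.0, (n_ff, 3))
beta0, gamma, alpha = 1.0, 1.0, 0.05
best = min(pop, key=objective).copy()
for _ in range(n_iter):
    vals = np.array([objective(p) for p in pop])
    for i in range(n_ff):
        for j in range(n_ff):
            if vals[j] < vals[i]:  # firefly j is brighter: attract i
                beta = beta0 * np.exp(-gamma * np.sum((pop[i] - pop[j]) ** 2))
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(3) - 0.5)
                vals[i] = objective(pop[i])
                if vals[i] < objective(best):
                    best = pop[i].copy()
    alpha *= 0.98  # damp the random walk as the swarm converges
print(best)
```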

  12. Green ultrasound-assisted extraction of anthocyanin and phenolic compounds from purple sweet potato using response surface methodology

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenzhou; Guan, Qingyan; Guo, Ying; He, Jingren; Liu, Gang; Li, Shuyi; Barba, Francisco J.; Jaffrin, Michel Y.

    2016-01-01

    Response surface methodology was used to optimize experimental conditions for ultrasound-assisted extraction of valuable components (anthocyanins and phenolics) from purple sweet potatoes using water as a solvent. The Box-Behnken design was used for optimizing the extraction responses of anthocyanin extraction yield, phenolic extraction yield, and specific energy consumption. Conditions to obtain maximal anthocyanin extraction yield, maximal phenolic extraction yield, and minimal specific energy consumption were different; an overall desirability function was used to search for the overall optimal conditions: an extraction temperature of 68 °C, an ultrasonic treatment time of 52 min, and a liquid/solid ratio of 20. The optimized anthocyanin extraction yield, phenolic extraction yield, and specific energy consumption were 4.91 mg/100 g fresh weight, 3.24 mg/g fresh weight, and 2.07 kWh/g, respectively, with a desirability of 0.99. This study indicates that ultrasound-assisted extraction can contribute to a green process for the valorization of purple sweet potatoes.

  13. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

    There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.
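An under-determined approximation can be made concrete with the minimum-norm least-squares solution: with fewer training pairs than coefficients, the pseudo-inverse selects the smallest-norm coefficient vector that still interpolates the data. A sketch with hypothetical data (not one of the paper's three example problems):

```python
import numpy as np

# Under-determined polynomial approximation: 3 training pairs but
# 6 polynomial coefficients. The pseudo-inverse returns the
# minimum-norm coefficient vector that interpolates the data.
x = np.array([0.0, 0.5, 1.0])          # hypothetical training inputs
y = np.array([1.0, 1.25, 2.0])         # hypothetical responses
degree = 5                             # 6 unknown coefficients

A = np.vander(x, degree + 1, increasing=True)  # 3 x 6 design matrix
coef = np.linalg.pinv(A) @ y                   # minimum-norm solution

# The approximation reproduces all training pairs exactly
print(np.allclose(A @ coef, y))  # prints True
```

Between training points the minimum-norm surface is only one of infinitely many interpolants, which is exactly the behavior the study set out to characterize.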

  14. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute the optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for 1) the reinforcement-learning-based training method to the optimal solution, 2) the training error, and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and provides comprehensive feedback solutions online, though it is trained offline.
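Nonquadratic cost functions for bounded controls are commonly built from an integral of the inverse hyperbolic tangent, so the cost grows without bound as the control approaches its limit. The paper's exact functional is not reproduced here, so this closed-form variant is an assumption:

```python
import math

def nonquadratic_cost(u, u_max, r=1.0):
    """Closed form of the bounded-control cost
        W(u) = 2*r * integral_0^u u_max*atanh(v/u_max) dv
             = 2*r*u_max**2 * (s*atanh(s) + 0.5*ln(1 - s**2)),  s = u/u_max,
    a standard construction for enforcing |u| < u_max via a
    nonquadratic penalty (the paper's exact form may differ)."""
    s = u / u_max
    return 2.0 * r * u_max**2 * (s * math.atanh(s) + 0.5 * math.log(1.0 - s * s))

# The penalty vanishes at u = 0 and grows steeply near the bound u_max = 1
costs = [nonquadratic_cost(u, 1.0) for u in (0.0, 0.5, 0.9)]
```

Minimizing a cost built this way yields an optimal control of saturating tanh form, which is why the constraint never needs to be imposed explicitly.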

  15. Analysis of Intergrade Variables In The Fuzzy C-Means And Improved Algorithm Cat Swarm Optimization (FCM-ISO) In Search Segmentation

    NASA Astrophysics Data System (ADS)

    Saragih, Jepronel; Salim Sitompul, Opim; Situmorang, Zakaria

    2017-12-01

    Clustering is one of the well-known techniques in data mining. An image segmentation process does not always represent the actual image, which occurs when the combined algorithm fails to obtain optimal cluster centers. This research searches for the smallest error in the results of a Fuzzy C-Means process optimized with an improved Cat Swarm Optimization algorithm, developed by adding an inertia weight to the tracing-mode update. With this parameter, the most optimal cluster centers, those closest to the data, can be determined and used to form the clusters. The inertia weights examined in this research are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9. The results for the different inertia values (W) are then compared and the smallest is taken. From this weighting analysis, the inertia value that produces the smallest cost function can be identified.
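The sweep over inertia weights can be sketched generically: run the optimization once per candidate weight and keep the weight with the smallest final cost. The inner routine below is a toy inertia-weighted search standing in for the FCM-ISO run, which is not reproduced here:

```python
import numpy as np

def cost_after_run(w, target, seed=0):
    """Toy stand-in for one FCM + cat-swarm run with inertia weight w:
    an inertia-weighted velocity update drives a candidate point
    toward the target, and the final squared error is returned.
    (Hypothetical dynamics, not the FCM-ISO algorithm itself.)"""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=2)          # candidate cluster-center offset
    v = np.zeros(2)                 # tracing-mode velocity state
    for _ in range(50):
        v = w * v + 0.5 * rng.normal(size=2) - 0.2 * x  # inertia-weighted update
        x = x + v
    return float(np.sum((x - target) ** 2))

target = np.array([0.3, -0.1])
weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
costs = {w: cost_after_run(w, target) for w in weights}
best_w = min(costs, key=costs.get)  # inertia value with the smallest cost
```

Fixing the random seed across runs makes the comparison between inertia values fair, mirroring the paper's use of identical data for each W.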

  16. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  17. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequence of unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.
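An extended interior penalty replaces the classical barrier -1/g near the constraint boundary with a polynomial so the function stays finite for slightly violated constraints. A sketch of a cubic extension for constraints written g(x) <= 0; the transition value eps and the cubic's leading coefficient d are assumptions, since the CEIPF chooses them to minimize Hessian-approximation error:

```python
def cubic_extended_penalty(g, eps=0.1, d=0.0):
    """Cubic extended interior penalty for a constraint g(x) <= 0.
    For g <= -eps the classical interior barrier -1/g is used; beyond
    the transition point the barrier is continued by a cubic matching
    value, slope, and curvature at g = -eps. The leading coefficient d
    is left free here (set to 0; the paper's CEIPF selects it to
    reduce the error in the Hessian approximation)."""
    if g <= -eps:
        return -1.0 / g
    t = g + eps
    return 1.0 / eps + t / eps**2 + t**2 / eps**3 + d * t**3

# Continuity across the transition point g = -eps
a = cubic_extended_penalty(-0.1 - 1e-9)
b = cubic_extended_penalty(-0.1 + 1e-9)
```

Unlike the pure barrier, the extension is defined for g > 0, so an infeasible starting design still yields a finite penalty that SUMT can reduce.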

  18. Periconception Weight Loss: Common Sense for Mothers, but What about for Babies?

    PubMed Central

    Barrett, Helen L.; Callaway, Leonie K.; Nitert, Marloes Dekker

    2014-01-01

    Obesity in the childbearing population is increasingly common. Obesity is associated with increased risk for a number of maternal and neonatal pregnancy complications. Some of these complications, such as gestational diabetes, are risk factors for long-term disease in both mother and baby. While clinical practice guidelines advocate for healthy weight prior to pregnancy, there is not a clear directive for achieving healthy weight before conception. There are known benefits to even moderate weight loss prior to pregnancy, but there are potential adverse effects of restricted nutrition during the periconceptional period. Epidemiological and animal studies point to differences in offspring conceived during a time of maternal nutritional restriction. These include changes in hypothalamic-pituitary-adrenal axis function, body composition, glucose metabolism, and cardiovascular function. The periconceptional period is therefore believed to play an important role in programming offspring physiological function and is sensitive to nutritional insult. This review summarizes the evidence to date for offspring programming as a result of maternal periconception weight loss. Further research is needed in humans to clearly identify benefits and potential risks of losing weight in the months before conceiving. This may then inform us of clinical practice guidelines for optimal approaches to achieving a healthy weight before pregnancy. PMID:24804085

  19. Periconception weight loss: common sense for mothers, but what about for babies?

    PubMed

    Matusiak, Kristine; Barrett, Helen L; Callaway, Leonie K; Nitert, Marloes Dekker

    2014-01-01

    Obesity in the childbearing population is increasingly common. Obesity is associated with increased risk for a number of maternal and neonatal pregnancy complications. Some of these complications, such as gestational diabetes, are risk factors for long-term disease in both mother and baby. While clinical practice guidelines advocate for healthy weight prior to pregnancy, there is not a clear directive for achieving healthy weight before conception. There are known benefits to even moderate weight loss prior to pregnancy, but there are potential adverse effects of restricted nutrition during the periconceptional period. Epidemiological and animal studies point to differences in offspring conceived during a time of maternal nutritional restriction. These include changes in hypothalamic-pituitary-adrenal axis function, body composition, glucose metabolism, and cardiovascular function. The periconceptional period is therefore believed to play an important role in programming offspring physiological function and is sensitive to nutritional insult. This review summarizes the evidence to date for offspring programming as a result of maternal periconception weight loss. Further research is needed in humans to clearly identify benefits and potential risks of losing weight in the months before conceiving. This may then inform us of clinical practice guidelines for optimal approaches to achieving a healthy weight before pregnancy.

  20. Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient

    PubMed Central

    Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming

    2016-01-01

    This paper proposes an improved artificial bee colony algorithm named Weighted Global ABC (WGABC) algorithm, which is designed to improve the convergence speed in the search stage of solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides the expression of global factor weights. Experiment on benchmark functions proved that the algorithm can improve the convergence speed greatly. We arrive at the gas diffusion concentration based on the theory of CFD and then simulate the gas diffusion model with the influence of buildings based on the algorithm. Simulation verified the effectiveness of the WGABC algorithm in improving the convergence speed in optimal deployment scheme of gas sensors. Finally, it is verified that the optimal deployment method based on WGABC algorithm can improve the monitoring efficiency of sensors greatly as compared with the conventional deployment methods. PMID:27322262
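Global-best-guided ABC variants modify the solution-search equation by adding a weighted term pulling toward the best solution found so far. One candidate-generation step is sketched below; the fixed global-factor weight w is hypothetical, since WGABC derives its own weight expression:

```python
import numpy as np

def sphere(x):
    """Benchmark objective with known optimum at the origin."""
    return float(np.sum(x * x))

def gbest_abc_candidate(x, neighbor, gbest, w, rng):
    """One candidate-solution update of a global-best-guided ABC:
        v_j = x_j + phi*(x_j - neighbor_j) + w*psi*(gbest_j - x_j)
    on one randomly chosen dimension j, with phi ~ U(-1, 1) and
    psi ~ U(0, 1.5). The weight w on the global term is hypothetical;
    WGABC provides its own expression for it."""
    v = x.copy()
    j = rng.integers(len(x))
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, 1.5)
    v[j] = x[j] + phi * (x[j] - neighbor[j]) + w * psi * (gbest[j] - x[j])
    return v

rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, size=4)
start = sphere(x)
gbest = np.zeros(4)  # known optimum used as the global-best guide
for _ in range(200):
    neighbor = x + rng.normal(size=4)
    v = gbest_abc_candidate(x, neighbor, gbest, w=0.8, rng=rng)
    if sphere(v) < sphere(x):  # greedy selection, as in standard ABC
        x = v
```

The greedy selection guarantees the objective is non-increasing, while the weighted global term is what accelerates movement toward gbest.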

  1. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    NASA Astrophysics Data System (ADS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
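In ST, a proposed temperature move is accepted with a Metropolis probability depending on the configuration energy, the inverse-temperature difference, and the weight difference; EPSW changes only how the weights are chosen, not this acceptance rule. A sketch under the usual convention pi(x, m) proportional to exp(-beta_m*E(x) + w_m), which is an assumed sign convention:

```python
import math

def st_acceptance(E, beta_old, beta_new, w_old, w_new):
    """Metropolis acceptance probability for a temperature move in
    simulated tempering, assuming the expanded-ensemble distribution
    pi(x, m) ~ exp(-beta_m * E(x) + w_m)."""
    log_ratio = -(beta_new - beta_old) * E + (w_new - w_old)
    return min(1.0, math.exp(log_ratio))

# Equal temperatures and equal weights: the move is always accepted
p_same = st_acceptance(E=-50.0, beta_old=1.0, beta_new=1.0,
                       w_old=0.0, w_new=0.0)
# Moving to a colder temperature at positive energy is penalized
p_cold = st_acceptance(E=10.0, beta_old=1.0, beta_new=1.2,
                       w_old=0.0, w_new=0.0)
```

Choosing w_m as the free energy makes visits uniform over temperatures; EPSW instead tunes the w_m to minimize the round-trip time between the metastable states of interest.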

  2. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao; Huang, Xuhui, E-mail: xuhuihuang@ust.hk

    2016-04-21

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  3. Optimized Reduction of Unsteady Radial Forces in a Single-Channel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
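Two building blocks of this workflow are easy to sketch: a Latin hypercube design, which places exactly one sample per bin in every one-dimensional projection, and the weighted-sum scalarization that merges the two objectives. The weighting factor below is hypothetical, since the paper does not state its value:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube design on [0, 1]^d: each 1-D projection has
    exactly one sample per equal-width bin."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])  # decouple the dimensions
    return u

rng = np.random.default_rng(0)
pts = latin_hypercube(12, 2, rng)  # twelve design points, two variables

def combined_objective(f_sweep_area, f_center_dist, alpha=0.5):
    """Weighted-sum scalarization of the two (normalized) objectives;
    the weight alpha is a hypothetical choice."""
    return alpha * f_sweep_area + (1.0 - alpha) * f_center_dist
```

The surrogate response surface would then be fitted to the combined-objective values at `pts` before searching it for the optimum.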

  4. Adjoint optimization of natural convection problems: differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.

    2017-12-01

    Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but such flows have rarely been investigated theoretically. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity at three Prandtl numbers (Pr = 0.15 to 7) at super-critical conditions. All results and implementations were done with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizons and weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as the optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions. To the authors' knowledge, this behavior is illustrated here for the first time.

  5. Optimizing Synchronization Stability of the Kuramoto Model in Complex Networks and Power Grids

    NASA Astrophysics Data System (ADS)

    Li, Bo; Wong, K. Y. Michael

    Maintaining the stability of synchronization state is crucial for the functioning of many natural and artificial systems. For the Kuramoto model on general weighted networks, the synchronization stability, measured by the dominant Lyapunov exponent at the steady state, is shown to have intricate and nonlinear dependence on the network topology and the dynamical parameters. Specifically, the dominant Lyapunov exponent corresponds to the algebraic connectivity of a meta-graph whose edge weight depends nonlinearly on the steady states. In this study, we utilize the cut-set space (DC) approximation to estimate the nonlinear steady state and simplify the calculation of the stability measure, based on which we further derive efficient algorithms to optimize the synchronization stability. The properties of the optimized networks and application in power grid stability are also discussed. This work is supported by a Grant from the Research Grant Council of Hong Kong (Grant Numbers 605813 and 16322616).
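The algebraic connectivity that governs the dominant Lyapunov exponent here is the second-smallest eigenvalue of a (meta-)graph Laplacian whose edge weights depend on the steady state. A minimal computation on an ordinary weighted graph, using a complete graph as a check case:

```python
import numpy as np

def algebraic_connectivity(W):
    """Second-smallest eigenvalue of the weighted graph Laplacian
    L = D - W (the stability-related quantity discussed above)."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    return float(np.sort(np.linalg.eigvalsh(L))[1])

# Complete graph on three nodes with unit edge weights: lambda_2 = 3
K3 = np.array([[0.0, 1.0, 1.0],
               [1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0]])
lam2 = algebraic_connectivity(K3)
```

For the Kuramoto setting, the edge weights of the meta-graph would be the cosine-modulated couplings evaluated at the (DC-approximated) steady state rather than raw coupling strengths.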

  6. Local classifier weighting by quadratic programming.

    PubMed

    Cevikalp, Hakan; Polikar, Robi

    2008-10-01

    It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- has been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
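The local-accuracy estimation step can be sketched directly; the paper finds the final weights by solving a convex QP, so the simple proportional normalization below is a stand-in for that step, and all data and helper names are hypothetical:

```python
import numpy as np

def local_accuracy_weights(X_train, y_train, preds_train, x_query, k=5):
    """Estimate each classifier's accuracy on the k training samples
    nearest the query, then normalize into nonnegative weights.
    (The paper obtains the weights from a convex QP; this proportional
    scheme is a simplified stand-in for that optimization.)

    preds_train: array (n_classifiers, n_train) of each classifier's
                 predictions on the training set.
    """
    d = np.linalg.norm(X_train - x_query, axis=1)
    nn = np.argsort(d)[:k]                                 # k nearest neighbors
    acc = (preds_train[:, nn] == y_train[nn]).mean(axis=1)  # local accuracies
    if acc.sum() == 0:
        return np.full(len(preds_train), 1.0 / len(preds_train))
    return acc / acc.sum()

def weighted_vote(weights, classifier_preds):
    """Combine hard class predictions for the query by weighted voting."""
    classes = np.unique(classifier_preds)
    scores = [(weights[classifier_preds == c].sum(), c) for c in classes]
    return max(scores)[1]

# Tiny hypothetical example: classifier B is accurate near the query
X_train = np.array([[0.0], [0.1], [5.0], [5.1]])
y_train = np.array([0, 0, 1, 1])
preds_train = np.array([[0, 0, 0, 0],    # classifier A: always class 0
                        [0, 0, 1, 1]])   # classifier B: always correct
w = local_accuracy_weights(X_train, y_train, preds_train, np.array([5.05]), k=2)
label = weighted_vote(w, np.array([0, 1]))  # B dominates near the query
```

The QP formulation additionally regularizes the weights, which matters when several classifiers tie on the small neighborhood sample.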

  7. Graph Design via Convex Optimization: Online and Distributed Perspectives

    NASA Astrophysics Data System (ADS)

    Meng, De

    Networks and graphs have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits, etc. As a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems, but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism for computation and operation as well as limits the size of local problems. More interestingly, graphs can be defined and constructed to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, which establish a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. The GDMP is the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) by exploiting specific problem structures to solve large-scale GDMP, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination. Distributed optimization aims to optimize a global objective function, formed by the summation of coupled local functions, over a graph via only local communication and computation. We develop a weighted proximal ADMM for distributed optimization using graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game, player one first chooses a candidate edge and reveals it to player two, who then decides whether to accept it; player two can only accept a limited number of edges and makes online decisions with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity, and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game, and analyze the algorithm performance by the competitive ratio and regret.

  8. Event-Triggered Distributed Control of Nonlinear Interconnected Systems Using Online Reinforcement Learning With Exploration.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-09-07

    In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented by using a novel hybrid-learning-scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and subsequently, a near optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated, and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.

  9. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square (χ²) statistic. Utilizes a nonlinear optimization algorithm to calculate the best statistically weighted values of the parameters of the fitting function such that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
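NLINEAR itself is a FORTRAN 77 routine for nonlinear models; to show the flavor of the statistically weighted χ² it minimizes, here is a linear-in-parameters sketch (the data, uncertainties, and basis functions are hypothetical):

```python
import numpy as np

def weighted_fit(x, y, sigma, basis):
    """Weighted least-squares fit of  y ~ sum_k p_k * basis_k(x),
    minimizing  chi^2 = sum_i ((y_i - model_i) / sigma_i)^2.
    Returns the fitted parameters and the minimized chi^2."""
    A = np.column_stack([b(x) for b in basis]) / sigma[:, None]
    b_vec = y / sigma
    params, *_ = np.linalg.lstsq(A, b_vec, rcond=None)
    chi2 = float(np.sum((A @ params - b_vec) ** 2))
    return params, chi2

# Hypothetical noiseless data on a line: chi^2 should be ~0
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x
sigma = np.full_like(x, 0.1)
params, chi2 = weighted_fit(x, y, sigma, [np.ones_like, lambda t: t])
```

For a genuinely nonlinear model, the same χ² would be minimized iteratively via its quadratic expansion about the current parameter estimate, which is the approach the abstract describes.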

  10. Wildlife tradeoffs based on landscape models of habitat preference

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.; White, M.

    2000-01-01

    Wildlife tradeoffs based on landscape models of habitat preference were presented. Multiscale logistic regression models were used and based on these models a spatial optimization technique was utilized to generate optimal maps. The tradeoffs were analyzed by gradually increasing the weighting on a single species in the objective function over a series of simulations. Results indicated that efficiency of habitat management for species diversity could be maximized for small landscapes by incorporating spatial context.

  11. Ethnic variation in localized prostate cancer: a pilot study of preferences, optimism, and quality of life among black and white veterans.

    PubMed

    Knight, Sara J; Siston, Amy K; Chmiel, Joan S; Slimack, Nicholas; Elstein, Arthur S; Chapman, Gretchen B; Nadler, Robert B; Bennett, Charles L

    2004-06-01

    Ethnic variations that may influence the preferences and outcomes associated with prostate cancer treatment are not well delineated. Our objective was to evaluate prospectively preferences, optimism, involvement in care, and quality of life (QOL) in black and white veterans newly diagnosed with localized prostate cancer. A total of 95 men who identified themselves as black/African-American or white who had newly diagnosed, localized prostate cancer completed a "time trade-off" task to assess utilities for current health and mild, moderate, and severe functional impairment; importance rankings for attributes associated with prostate cancer (eg, urinary function); and baseline and follow-up measures of optimism, involvement in care, and QOL. Interviews were scheduled before treatment, and at 3 and 12 months after treatment. At baseline, both blacks and whites ranked pain, bowel, and bladder function as their most important concerns. Optimism, involvement in care, and QOL were similar. Utilities for mild impairment were lower for blacks than whites, but were similar for moderate and severe problems. Decline in QOL at 3 and 12 months compared to baseline occurred for both groups. However, even with adjustment for marital status, education level, and treatment, blacks had less increase in nausea and vomiting and more increase in difficulty with sexual interest and weight gain compared with whites. Black and white veterans entered localized prostate cancer treatment with similar priorities, optimism, and involvement in care. Quality-of-life declines were common to both groups during the first year after diagnosis, but ethnic variation occurred with respect to nausea and vomiting, sexual interest, and weight gain.

  12. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data are retrieved from January 2005 to December 2013 and first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
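The PSO-for-network-weights step can be sketched compactly: each particle is a flattened weight vector of a small network, and the swarm minimizes the training MSE. The tiny architecture, the smooth target standing in for one intrinsic mode function, and the PSO constants below are all hypothetical, and the EMD front-end is omitted:

```python
import numpy as np

def mse(w, X, y):
    """Training loss of a tiny 1-hidden-layer network whose weights are
    the flat vector w (1 input, 4 hidden tanh units, 1 linear output)."""
    W1, b1 = w[:4].reshape(4, 1), w[4:8]
    W2, b2 = w[8:12], w[12]
    h = np.tanh(X @ W1.T + b1)
    pred = h @ W2 + b2
    return float(np.mean((pred - y) ** 2))

def pso(loss, dim, n_particles=20, iters=100, seed=0):
    """Plain global-best PSO over the network's weight vector."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([loss(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Hypothetical smooth series standing in for one intrinsic mode function
X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
y = np.sin(2.5 * X[:, 0])
w_best, final_loss = pso(lambda w: mse(w, X, y), dim=13)
```

In the paper's pipeline, one such PSO-initialized network is trained per intrinsic mode function, and the individual forecasts are summed to give the final prediction.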


  14. Computational techniques for design optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.

  15. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, John J.

    2017-01-01

    This paper presents two methods for constructing an approximate performance function of a desired parameter using correlated parameters. The methods are useful when real-time measurements of a desired performance function are not available to applications such as extremum-seeking control systems. The first method approximates an a priori measured or estimated desired performance function by combining real-time measurements of readily available correlated parameters. The parameters are combined using a weighting vector determined from a minimum-squares optimization to form a blended performance function. The blended performance function better matches the desired performance function minimum than single-measurement performance functions. The second method expands upon the first by replacing the a priori data with near-real-time measurements of the desired performance function. The resulting blended performance function weighting vector is updated when measurements of the desired performance function are available. Both methods are applied to data collected during formation-flight-for-drag-reduction flight experiments.
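    The first method's weighting step can be sketched as an ordinary least-squares fit over the a priori record. All numbers below are invented, and only two correlated parameters are used so the normal equations stay a 2×2 solve:

    ```python
    # Hypothetical a priori record of the desired performance function f (e.g. a
    # drag measure) and two readily measurable correlated parameters p1 and p2.
    f  = [3.0, 2.2, 1.8, 2.1, 2.9, 4.0]
    p1 = [1.0, 0.8, 0.7, 0.8, 1.0, 1.3]
    p2 = [2.0, 1.4, 1.1, 1.3, 1.9, 2.7]

    # Normal equations for the weighting vector (w1, w2) minimizing
    # sum((w1*p1 + w2*p2 - f)^2) over the a priori data.
    a11 = sum(x * x for x in p1)
    a12 = sum(x * y for x, y in zip(p1, p2))
    a22 = sum(y * y for y in p2)
    b1 = sum(x * z for x, z in zip(p1, f))
    b2 = sum(y * z for y, z in zip(p2, f))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det

    # The blended performance function, computable in real time from p1 and p2
    # alone once the weighting vector is fixed.
    blend = [w1 * x + w2 * y for x, y in zip(p1, p2)]
    ```

    Because the blend draws on both correlated measurements, its residual against the desired function can be no worse than the best single-measurement fit, which is the point of blending.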

  16. Displacement based multilevel structural optimization

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.

    1995-01-01

    Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. 
This partitioning will also allow for the use of parallel computing, first, by sending the system and subsystems level computations to two different processors, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate processors. It is expected that the subsystems level optimizations can be further improved through the use of controlled growth, a method which reduces an optimization to a more efficient analysis with only a slight degradation in accuracy. The efficiency of all proposed techniques is being evaluated relative to the performance of the standard single level optimization approach where the complete structure is weight minimized under the action of all given constraints by one processor and to the performance of simultaneous analysis and design which combines analysis and optimization into a single step. It is expected that the present approach can be expanded to include additional structural constraints (buckling, free and forced vibration, etc.) or other disciplines (passive and active controls, aerodynamics, etc.) for true MDO.

  17. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface.

    PubMed

    Huo, Ju; Zhang, Guiyang; Yang, Ming

    2018-04-20

    This paper is concerned with the anisotropic and non-identical gray distribution of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of uncertainty to the objective function of the feature-point measuring error is analyzed. A novel error objective function based on the spatial collinear error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for the iterative solution, so that poor convergence is avoided and the uncertainty is significantly resisted. Hence, the optimized GOI algorithm extends the field-of-view applications and improves the accuracy and robustness of the measuring results through the redundant information. Finally, simulation and practical experiments show that the maximum error of the re-projected image coordinates of the target is less than 0.110 pixels. Within a space of 3000 mm×3000 mm×4000 mm, the maximum estimation errors of static and dynamic measurement of rocket nozzle motion are better than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.

  18. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh dissipation function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring-mass systems are shown to illustrate key points.
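    The flavor of a closed-form Riccati solution can be shown in the scalar case; this is only an illustrative analogue, not the paper's structural-mechanical matrix result, and all plant numbers are hypothetical:

    ```python
    import math

    def lqr_scalar(a, b, q, r):
        # Closed-form LQR for the scalar plant xdot = a*x + b*u with cost
        # integral(q*x**2 + r*u**2) dt: take the positive root of the algebraic
        # Riccati equation 2*a*p - (b*p)**2/r + q = 0, then the gain k = b*p/r.
        p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
        k = b * p / r
        return p, k

    p, k = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
    # Closed-loop dynamics a - b*k = -sqrt(a**2 + b**2*q/r): always stable,
    # with the weighting q/r setting how aggressively the state is damped.
    ```

    Raising the state weight q relative to the control weight r moves the closed-loop pole further left, which is the scalar version of the weighting-matrix trade-off the paper discusses.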

  19. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
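    The iterative position-weighted CoG idea can be sketched on a simulated 4 × 4 array. The event position, light-spread model, and Gaussian width below are hypothetical stand-ins, not the paper's measured values:

    ```python
    import math

    PITCH = 3.17                      # SiPM pitch in mm, as in the single-layer setup
    TRUE_X, TRUE_Y = 4.2, 6.1         # hypothetical true event position (mm)
    SPREAD = 2.5                      # hypothetical light-spread sigma (mm)

    centres = [((i + 0.5) * PITCH, (j + 0.5) * PITCH)
               for j in range(4) for i in range(4)]
    signals = [math.exp(-((cx - TRUE_X) ** 2 + (cy - TRUE_Y) ** 2)
                        / (2 * SPREAD ** 2))
               for cx, cy in centres]

    def weighted_cog(sigma, iterations):
        # Start from the plain centre of gravity ...
        total = sum(signals)
        x = sum(s * cx for s, (cx, cy) in zip(signals, centres)) / total
        y = sum(s * cy for s, (cx, cy) in zip(signals, centres)) / total
        # ... then repeatedly re-weight each channel by a Gaussian centred
        # on the current position estimate.
        for _ in range(iterations):
            w = [s * math.exp(-((cx - x) ** 2 + (cy - y) ** 2) / (2 * sigma ** 2))
                 for s, (cx, cy) in zip(signals, centres)]
            total = sum(w)
            x = sum(wi * cx for wi, (cx, cy) in zip(w, centres)) / total
            y = sum(wi * cy for wi, (cx, cy) in zip(w, centres)) / total
        return x, y

    x0, y0 = weighted_cog(sigma=3.0, iterations=0)   # plain centre of gravity
    x1, y1 = weighted_cog(sigma=3.0, iterations=20)  # position-weighted iteration
    ```

    On this toy array the plain CoG is biased toward the array centre by edge truncation; the Gaussian re-weighting suppresses distant channels and pulls the estimate back toward the true position, mirroring the flood-histogram improvement reported.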

  20. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    NASA Astrophysics Data System (ADS)

    Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.

    2018-03-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
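    The payoff of statistically optimal weighting shows up already in the simplest estimation problem. The sketch below assumes a single constant mass anomaly observed with a diagonal covariance (invented numbers; the paper propagates full covariance matrices):

    ```python
    # Hypothetical observations (Gt) of one mass anomaly, each with its own
    # error variance taken from a (here diagonal) covariance matrix.
    obs = [-12.0, -9.0, -15.0, -10.0]
    var = [1.0, 4.0, 9.0, 1.0]

    # Generalized least squares x = (A'C^-1 A)^-1 A'C^-1 y, with A a column of
    # ones and C = diag(var), reduces to an inverse-variance weighted mean.
    w = [1.0 / v for v in var]
    x_weighted = sum(wi * yi for wi, yi in zip(w, obs)) / sum(w)
    x_unweighted = sum(obs) / len(obs)   # what ignoring the covariance gives

    # Formal variances of the two estimators: proper weighting never loses.
    var_weighted = 1.0 / sum(w)
    var_unweighted = sum(var) / len(var) ** 2
    ```

    Down-weighting the noisy observations lowers the formal variance of the estimate, which is the mechanism behind the 35-69% noise reduction reported per drainage system.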

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan; Yu, Yi-Hsiang; Wright, Alan

    The focus of this paper is to balance power absorption against structural loading for a novel fixed-bottom oscillating surge wave energy converter in both regular and irregular wave environments. The power-to-load ratio will be evaluated using pseudospectral control (PSC) to determine the optimum power-takeoff (PTO) torque based on a multiterm objective function. This paper extends the pseudospectral optimal control problem to not just maximize the time-averaged absorbed power but also include measures for the surge-foundation force and PTO torque in the optimization. The objective function may now potentially include three competing terms that the optimizer must balance. Separate weighting factors are attached to the surge-foundation force and PTO control torque that can be used to tune the optimizer performance to emphasize either power absorption or load shedding. To correct the pitch equation of motion, derived from linear hydrodynamic theory, a quadratic viscous-drag torque has been included in the system dynamics; however, to continue the use of quadratic programming solvers, an iteratively obtained linearized drag coefficient was utilized that provided good accuracy in the predicted pitch motion. Furthermore, the analysis considers the use of a nonideal PTO unit to more accurately evaluate controller performance. The PTO efficiency is not directly included in the objective function; rather, the weighting factors are utilized to limit the PTO torque amplitudes, thereby reducing the losses resulting from the bidirectional energy flow through a nonideal PTO. Results from PSC show that shedding a portion of the available wave energy can lead to greater reductions in structural loads, peak-to-average power ratio, and reactive power requirement.

  2. Optimal apodization design for medical ultrasound using constrained least squares part I: theory.

    PubMed

    Guenther, Drake A; Walker, William F

    2007-02-01

    Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
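    The first CLS approach has a compact closed form when the quadratic is paired with a single linear constraint: minimize w'Rw subject to c'w = 1 gives w = R⁻¹c / (c'R⁻¹c). The sketch below assumes a diagonal R (a hypothetical per-element out-of-boundary energy, not the paper's full PSF-derived matrix) so the inverse is trivial:

    ```python
    # Minimize w'Rw (energy outside a boundary) subject to c'w = 1.
    # Eliminating the Lagrange multiplier yields w = R^-1 c / (c' R^-1 c).
    # R is taken diagonal here; the general case needs a full linear solve.
    R_diag = [4.0, 2.0, 1.0, 1.0, 2.0, 4.0]   # edge elements assumed to leak more
    c = [1.0] * 6                              # unity-gain constraint on the sum

    u = [ci / r for ci, r in zip(c, R_diag)]   # R^-1 c
    scale = sum(ci * ui for ci, ui in zip(c, u))
    w = [ui / scale for ui in u]               # aperture weights, tapered at edges
    ```

    The resulting weighting tapers toward the aperture edges, like a classical window function, while provably spending less out-of-boundary energy than uniform weighting under the same gain constraint.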

  3. Weighted graph cuts without eigenvectors: a multilevel approach.

    PubMed

    Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian

    2007-11-01

    A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
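    The weighted kernel k-means objective at the heart of the equivalence can be sketched on toy data. The points, weights, and kernel below are invented (a linear kernel plus a small ridge); in the graph-cut reading, the per-point weights play the role of vertex degrees:

    ```python
    # Toy 1-D points and per-point weights.
    pts = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
    wts = [1.0, 2.0, 1.0, 1.0, 2.0, 1.0]

    # Linear kernel plus a ridge to keep the Gram matrix positive definite.
    K = [[x * y + (1.0 if i == j else 0.0) for j, y in enumerate(pts)]
         for i, x in enumerate(pts)]

    def dist2(i, cluster):
        # Squared kernel-space distance from point i to the weighted cluster mean.
        s = sum(wts[j] for j in cluster)
        cross = sum(wts[j] * K[i][j] for j in cluster) / s
        within = sum(wts[j] * wts[l] * K[j][l]
                     for j in cluster for l in cluster) / (s * s)
        return K[i][i] - 2.0 * cross + within

    assign = [0, 1, 1, 0, 0, 1]          # deliberately scrambled start, k = 2
    for _ in range(10):
        clusters = [[i for i, a in enumerate(assign) if a == c] for c in range(2)]
        if any(not cl for cl in clusters):
            break                        # guard against an emptied cluster
        assign = [min(range(2), key=lambda c: dist2(i, clusters[c]))
                  for i in range(len(pts))]
    ```

    Note that the distance is evaluated entirely from kernel values and weights; no eigenvector computation appears, which is exactly the property the multilevel algorithm exploits on large graphs.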

  4. Reflecting metallic metasurfaces designed with stochastic optimization as waveplates for manipulating light polarization

    NASA Astrophysics Data System (ADS)

    Haberko, Jakub; Wasylczyk, Piotr

    2018-03-01

    We demonstrate that a stochastic optimization algorithm with a properly chosen, weighted fitness function, following a global variation of parameters upon each step can be used to effectively design reflective polarizing optical elements. Two sub-wavelength metallic metasurfaces, corresponding to broadband half- and quarter-waveplates are demonstrated with simple structure topology, a uniform metallic coating and with the design suited for the currently available microfabrication techniques, such as ion milling or 3D printing.

  5. The use of locally optimal trajectory management for base reaction control of robots in a microgravity environment

    NASA Technical Reports Server (NTRS)

    Lin, N. J.; Quinn, R. D.

    1991-01-01

    A locally optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified cost function are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, the joint velocities reach their objectives at the end of the manipulation, and the CPU time is slightly more than twice the manipulation time.

  6. Issues and Strategies in Solving Multidisciplinary Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya

    2013-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adapted for their resolution are discussed. This is followed by a discussion on analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. The optimum solutions obtained for aircraft and airbreathing propulsion engine problems were infeasible. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues. Cited references provide detailed discussion on the topics. A structural design can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (the traditional method, the optimization method, and the stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. 
In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest, though some variation was noticed in the designs they produced; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure; weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  7. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure has been developed which couples formal multiobjective techniques and complex analysis procedures, such as computational fluid dynamics (CFD) codes. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained problem with multiple objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are attached to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure, in order to make it suitable for design applications in an industrial setting.
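    The K-S aggregation itself is compact enough to sketch. The objective values and the draw-down factor rho below are hypothetical; in the enhanced procedure each objective would additionally carry its weight factor before aggregation:

    ```python
    import math

    def ks(objectives, rho=50.0):
        # Kreisselmeier-Steinhauser envelope: a smooth, differentiable surrogate
        # for max(f_1, ..., f_n); larger rho hugs the max more tightly.
        m = max(objectives)              # factor out the max for numerical safety
        return m + math.log(sum(math.exp(rho * (f - m)) for f in objectives)) / rho

    # Hypothetical normalized objectives, e.g. drag and sonic-boom measures.
    f = [0.8, 0.3, 0.75]
    ks_value = ks(f)                     # a little above max(f) = 0.8
    ```

    The envelope satisfies max(f) ≤ KS(f) ≤ max(f) + ln(n)/rho, so the constrained multiobjective problem collapses to one smooth unconstrained function that a quasi-Newton method such as BFGS can minimize.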

  8. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    PubMed

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  9. Optimization of a GO2/GH2 Impinging Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    2001-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. 
Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
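    A hedged sketch of how desirability functions combine into a composite response (a Derringer-Suich-style weighted geometric mean; the ranges, response values, and weights below are invented for illustration, not method i's actual settings):

    ```python
    import math

    def desirability(value, low, high, maximize=True):
        # Linear desirability clipped to [0, 1]: 0 at the unacceptable end of
        # the range, 1 at the target end.
        d = (value - low) / (high - low)
        d = min(1.0, max(0.0, d))
        return d if maximize else 1.0 - d

    # Hypothetical responses for one candidate injector design.
    d_ere  = desirability(0.97, 0.90, 0.99)                  # energy release efficiency
    d_qw   = desirability(35.0, 20.0, 60.0, maximize=False)  # wall heat flux
    d_wrel = desirability(1.05, 0.95, 1.30, maximize=False)  # relative weight

    ds = [d_ere, d_qw, d_wrel]
    weights = [2.0, 1.0, 1.0]            # emphasize performance over weight
    composite = math.exp(sum(w * math.log(d) for w, d in zip(weights, ds))
                         / sum(weights))
    ```

    The geometric mean makes the composite collapse to zero if any single desirability hits zero, so no unacceptable response can be traded away entirely; raising a weight emphasizes that variable, mirroring the unequal-weight trade studies described above.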

  10. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time in the course of parameter optimization; its identification accuracy also depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to realize rapid global optimization of parameters. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Through statistical analysis of the results averaged over 20 runs with different inertia weights, it can be seen that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, making it better than the linear inertia weight. Besides, a better nonlinear inertia weight is verified. PMID:29853983


  12. Weights and topology: a study of the effects of graph construction on 3D image segmentation.

    PubMed

    Grady, Leo; Jolly, Marie-Pierre

    2008-01-01

    Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
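    The weighting functions studied here map intensity differences to edge weights. A minimal sketch of the common Gaussian weighting w_ij = exp(-beta * (I_i - I_j)^2), built on a 4-connected lattice as used by both Graph Cuts and Random Walker (the beta value and the small eps floor are illustrative choices):

```python
import math

def edge_weights(image, beta=5.0, eps=1e-6):
    """Gaussian intensity weighting on a 4-connected 2D lattice:
    w_ij = exp(-beta * (I_i - I_j)^2) + eps.
    Returns a dict mapping (pixel_index, neighbour_index) to a weight."""
    rows, cols = len(image), len(image[0])
    w = {}
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 1), (1, 0)):      # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    w[(i, rr * cols + cc)] = math.exp(
                        -beta * (image[r][c] - image[rr][cc]) ** 2) + eps
    return w

# a 2x3 image with a vertical intensity edge between columns 1 and 2
w = edge_weights([[0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0]])
```

    Edges within homogeneous regions get weight near 1, while edges crossing the intensity boundary get weight near 0, which is exactly what both partitioning algorithms exploit.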

  13. Threshold-driven optimization for reference-based auto-planning

    NASA Astrophysics Data System (ADS)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
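    The voxel-based quadratic penalty described above, with separate under- and over-dose thresholds and weights, can be sketched as follows (generic symbols and example dose values, not TORA's exact formulation):

```python
def quadratic_penalty(dose, t_under, t_over, w_under, w_over):
    """Voxel-based quadratic penalty with under-/over-dose thresholds:
        f(d) = w_under * sum_i max(t_under - d_i, 0)^2
             + w_over  * sum_i max(d_i - t_over, 0)^2
    Threshold-driven planning adjusts t_under/t_over between iterations,
    while conventional planning adjusts w_under/w_over."""
    under = sum(max(t_under - d, 0.0) ** 2 for d in dose)
    over = sum(max(d - t_over, 0.0) ** 2 for d in dose)
    return w_under * under + w_over * over

# doses inside [60, 70] Gy cost nothing; violations are penalized quadratically
cost = quadratic_penalty([58.0, 65.0, 72.0], t_under=60.0, t_over=70.0,
                         w_under=1.0, w_over=1.0)
```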

  14. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503

  15. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
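    For contrast with the analytic decoders above, the conventional NEF route solves for decoders via a regularized least-squares problem over sampled rate curves (the "large matrix inversion" the paper avoids). A minimal numerical sketch, with rectified tuning curves standing in for type-I responses; all curve parameters and the regularizer are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, pts = 50, 200
x = np.linspace(-1.0, 1.0, pts)

# heterogeneous rectified tuning curves standing in for type-I firing rates:
# random gains, biases, and preferred directions (encoders)
gain = rng.uniform(0.5, 2.0, n)
bias = rng.uniform(-1.0, 1.0, n)
enc = rng.choice([-1.0, 1.0], n)
rates = np.maximum(gain[:, None] * (enc[:, None] * x[None, :] - bias[:, None]), 0.0)

target = x ** 2                 # desired function of the represented variable

# decoders d solve rates.T @ d ~= target by regularized least squares
G = rates @ rates.T + 1e-3 * np.eye(n)
d = np.linalg.solve(G, rates @ target)
err = float(np.mean((rates.T @ d - target) ** 2))
```

    The decoded estimate converges to the target as more neurons are added, mirroring the 1/N mean-squared-error scaling discussed above.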

  16. Direct aperture optimization using an inverse form of back-projection.

    PubMed

    Zhu, Xiaofeng; Cullip, Timothy; Tracton, Gregg; Tang, Xiaoli; Lian, Jun; Dooley, John; Chang, Sha X

    2014-03-06

    Direct aperture optimization (DAO) has been used to produce high dosimetric quality intensity-modulated radiotherapy (IMRT) treatment plans with fast treatment delivery by directly modeling the multileaf collimator segment shapes and weights. To improve plan quality and reduce treatment time for our in-house treatment planning system, we implemented a new DAO approach without using a global objective function (GFO). An index concept is introduced as an inverse form of back-projection used in the CT multiplicative algebraic reconstruction technique (MART). The index, introduced for IMRT optimization in this work, is analogous to the multiplicand in MART. The index is defined as the ratio of the optima over the current. It is assigned to each voxel and beamlet to optimize the fluence map. The indices for beamlets and segments are used to optimize multileaf collimator (MLC) segment shapes and segment weights, respectively. Preliminary data show that without sacrificing dosimetric quality, the implementation of the DAO reduced average IMRT treatment time from 13 min to 8 min for the prostate, and from 15 min to 9 min for the head and neck using our in-house treatment planning system PlanUNC. The DAO approach has also shown promise in optimizing rotational IMRT with burst mode in a head and neck test case.
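    The index concept, the ratio of the optimum over the current value applied multiplicatively as in MART, can be illustrated on a toy fluence-map problem. The contribution matrix and the contribution-weighted averaging rule here are a simplified stand-in for the paper's beamlet and segment updates:

```python
def index_update(fluence, contrib, dose_goal, n_iter=50):
    """MART-style multiplicative update: each voxel's 'index' is the ratio of
    its goal dose over its current dose, and each beamlet is scaled by a
    contribution-weighted average of the indices of the voxels it affects.
    contrib[v][b] = dose to voxel v per unit fluence of beamlet b."""
    n_vox, n_beam = len(contrib), len(contrib[0])
    x = list(fluence)
    for _ in range(n_iter):
        dose = [sum(contrib[v][b] * x[b] for b in range(n_beam))
                for v in range(n_vox)]
        idx = [dose_goal[v] / dose[v] if dose[v] > 0 else 1.0
               for v in range(n_vox)]
        for b in range(n_beam):
            den = sum(contrib[v][b] for v in range(n_vox))
            if den > 0:
                x[b] *= sum(contrib[v][b] * idx[v] for v in range(n_vox)) / den
    return x

# two voxels hit by two independent beamlets: the goal doses are reached exactly
x = index_update([1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]], dose_goal=[2.0, 3.0])
```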

  17. Comparison of two schemes for automatic keyword extraction from MEDLINE for functional gene clustering.

    PubMed

    Liu, Ying; Ciliax, Brian J; Borges, Karin; Dasigi, Venu; Ram, Ashwin; Navathe, Shamkant B; Dingledine, Ray

    2004-01-01

    One of the key challenges of microarray studies is to derive biological insights from the unprecedented quantities of data on gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the nature of the functional links among genes within the derived clusters. However, the quality of the keyword lists extracted from biomedical literature for each gene significantly affects the clustering results. We extracted keywords from MEDLINE that describe the most prominent functions of the genes, and used the resulting weights of the keywords as feature vectors for gene clustering. By analyzing the resulting cluster quality, we compared two keyword weighting schemes: normalized z-score and term frequency-inverse document frequency (TFIDF). The best combination of background comparison set, stop list and stemming algorithm was selected based on precision and recall metrics. In a test set of four known gene groups, a hierarchical algorithm correctly assigned 25 of 26 genes to the appropriate clusters based on keywords extracted by the TFIDF weighting scheme, but only 23 of 26 with the z-score method. To evaluate the effectiveness of the weighting schemes for keyword extraction for gene clusters from microarray profiles, 44 yeast genes that are differentially expressed during the cell cycle were used as a second test set. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords had higher purity, lower entropy, and higher mutual information than those produced from normalized z-score weighted keywords. The optimized algorithms should be useful for sorting genes from microarray lists into functionally discrete clusters.
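    A standard TFIDF computation of the kind compared against the normalized z-score can be sketched as follows (the toy gene-document keyword lists are invented; the paper's exact variant may normalize differently):

```python
import math

def tfidf(docs):
    """Term frequency-inverse document frequency weights:
    weight(t, d) = (count(t in d) / len(d)) * log(N / doc_freq(t)).
    Terms frequent in one document but rare across the corpus score highest."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {}
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# toy per-gene keyword lists (illustrative, not from MEDLINE)
docs = [["kinase", "signaling", "kinase"],
        ["signaling", "receptor"],
        ["receptor", "membrane"]]
w = tfidf(docs)
```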

  18. Adaptive near-optimal neuro controller for continuous-time nonaffine nonlinear systems with constrained input.

    PubMed

    Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali

    2017-09-01

    In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems having saturated control signal. By employing two Neural Networks (NNs), the solution of Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of NNs' weights, identification error, and system states is guaranteed using Lyapunov's direct method. Finally, simulation results are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Filtering genetic variants and placing informative priors based on putative biological function.

    PubMed

    Friedrichs, Stefanie; Malzahn, Dörthe; Pugh, Elizabeth W; Almeida, Marcio; Liu, Xiao Qing; Bailey, Julia N

    2016-02-03

    High-density genetic marker data, especially sequence data, imply an immense multiple testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and by incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses the diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 significantly associated with hypertension; additionally, rare variants in SNUPN significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics or to informatively weight p values while controlling false discovery rate (FDR). Indeed, power improved when gene expression data were used for FDR-controlled informative weighting of gene association test p values. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for jointly testing rare and common variants, which was observed to depend on linkage disequilibrium structure.
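    One common way to weight p values while controlling FDR, in the spirit of the informative weighting described above, is a weighted Benjamini-Hochberg step-up in which each p value is divided by its mean-normalized prior weight. This is a generic sketch with invented numbers, not GAW19's exact procedure:

```python
def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg step-up: divide each p value by its
    mean-normalized prior weight, then apply the usual BH comparison
    q_(k) <= alpha * k / m. Returns indices of rejected hypotheses."""
    m = len(pvals)
    mean_w = sum(weights) / m
    adj = sorted((p / (w / mean_w), i)
                 for i, (p, w) in enumerate(zip(pvals, weights)))
    k_max = 0
    for rank, (q, _) in enumerate(adj, start=1):
        if q <= alpha * rank / m:
            k_max = rank
    return sorted(i for _, i in adj[:k_max])

pvals = [0.001, 0.02, 0.04, 0.5]
flat = weighted_bh(pvals, [1.0, 1.0, 1.0, 1.0])    # unweighted BH
upw = weighted_bh(pvals, [0.4, 0.4, 2.8, 0.4])     # prior favours the third test
```

    With flat weights the third test narrowly misses the BH threshold, but an informative prior weight rescues it at the cost of a marginal unweighted hit, illustrating how priors redistribute power.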

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  1. Tensor distribution function

    NASA Astrophysics Data System (ADS)

    Leow, Alex D.; Zhu, Siwei

    2008-03-01

    Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Distribution Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.

  2. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction.

    PubMed

    Chen, C P; Wan, J Z

    1999-01-01

    A fast learning algorithm is proposed to find the optimal weights of flat neural networks (especially the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easier to update the weights instantly for both a newly added pattern and a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on-the-fly. The model is tested on several time-series data sets including an infrared laser data set, a chaotic time-series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models in which more complex architectures and more costly training are needed. The results indicate that the proposed model is very attractive for real-time processes.
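    Because the functional-link network is linear in its weights, training reduces to a single least-squares solve. A minimal sketch with assumed sine/cosine enhancement features (the target function and features are illustrative):

```python
import numpy as np

def expand(x):
    """Functional-link expansion: the raw input plus fixed nonlinear
    enhancement features (sine/cosine chosen here for illustration)."""
    return np.column_stack([x, np.sin(np.pi * x), np.cos(np.pi * x)])

x = np.linspace(-1.0, 1.0, 200)
y = 0.5 * x + np.sin(np.pi * x)      # nonlinear target to approximate

H = expand(x)
# the network is linear in its weights, so one least-squares solve trains it
w, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = float(np.mean((H @ w - y) ** 2))
```

    Because the solution is a closed-form linear fit, adding a new pattern or enhancement node only requires a rank-one update of the least-squares solution rather than retraining, which is the basis of the stepwise updating scheme.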

  3. A novel beamformer design method for medical ultrasound. Part I: Theory.

    PubMed

    Ranganathan, Karthik; Walker, William F

    2003-01-01

    The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
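    The MSSE idea, minimizing the sum squared error between the psf predicted by a linear system matrix and a goal psf, reduces to a least-squares solve for the aperture weights. A toy sketch in which the system matrix is random and purely illustrative, standing in for the true diffraction physics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_elems = 32, 8
# system matrix mapping aperture weights to psf samples; a random
# nonnegative matrix stands in for the diffraction physics here
A = np.abs(rng.normal(size=(n_pts, n_elems)))

# goal psf: a narrow Gaussian mainlobe centred in the field
psf_goal = np.exp(-0.5 * ((np.arange(n_pts) - n_pts / 2) / 3.0) ** 2)

# MSSE weights: w = argmin ||A w - psf_goal||^2
w, *_ = np.linalg.lstsq(A, psf_goal, rcond=None)
sse = float(np.sum((A @ w - psf_goal) ** 2))
sse_zero = float(np.sum(psf_goal ** 2))   # error of the all-zero weighting
```

    Any achievable goal psf (within diffraction limits) can be targeted this way simply by changing psf_goal, which is what makes the formulation general.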

  4. Meal-based enhancement of protein quality and quantity during weight loss in obese older adults with mobility limitations: rationale and design for the MEASUR-UP trial.

    PubMed

    McDonald, Shelley R; Porter Starr, Kathryn N; Mauceri, Luisa; Orenduff, Melissa; Granville, Esther; Ocampo, Christine; Payne, Martha E; Pieper, Carl F; Bales, Connie W

    2015-01-01

    Obese older adults with even modest functional limitations are at a disadvantage for maintaining their independence into late life. However, there is no established intervention for obesity in older individuals. The Measuring Eating, Activity, and Strength: Understanding the Response - Using Protein (MEASUR-UP) trial is a randomized controlled pilot study of obese women and men aged ≥60 years with mild to moderate functional impairments. Changes in body composition (lean and fat mass) and function (Short Physical Performance Battery) in an enhanced protein weight reduction (Protein) arm will be compared to those in a traditional weight loss (Control) arm. The Protein intervention is based on evidence that older adults achieve optimal rates of muscle protein synthesis when consuming about 25-30 g of high quality protein per meal; these participants will consume ~30 g of animal protein at each meal via a combination of provided protein (beef) servings and diet counseling. This trial will provide information on the feasibility and efficacy of enhancing protein quantity and quality in the context of a weight reduction regimen and determine the impact of this intervention on body weight, functional status, and lean muscle mass. We hypothesize that the enhancement of protein quantity and quality in the Protein arm will result in better outcomes for function and/or lean muscle mass than in the Control arm. Ultimately, we hope our findings will help identify a safe weight loss approach that can delay or prevent late life disability by changing the trajectory of age-associated functional impairment associated with obesity. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
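    The rule "penalize a state or input variable with excessively large magnitude by increasing its weight" can be caricatured in a finite-control-set step as follows. The membership function, candidate input set, and one-step plant model are all illustrative assumptions, not the paper's DSP implementation:

```python
def fcs_mpc_step(predict, candidates, base_w):
    """One finite-control-set MPC step with a crude fuzzy-style weight rule:
    the weight on a variable grows with its magnitude, then the candidate
    input with the lowest weighted cost is selected."""
    def fuzzy_weight(w, v, v_max=1.0):
        mu = min(abs(v) / v_max, 1.0)   # membership grade: how "large" is v
        return w * (1.0 + mu)           # larger magnitude -> larger weight
    best_u, best_cost = None, float("inf")
    for u in candidates:
        x = predict(u)                  # predicted state for this input
        cost = (fuzzy_weight(base_w[0], x) * x * x
                + fuzzy_weight(base_w[1], u) * u * u)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u, best_cost

# toy one-step plant: applying input u leaves state 0.5 - u; drive state to zero
best_u, best_cost = fcs_mpc_step(lambda u: 0.5 - u,
                                 [0.0, 0.25, 0.5, 0.75, 1.0], (1.0, 0.1))
```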

  6. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    PubMed

    Shimansky, Yury P

    2009-12-01

    Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
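    The RMP idea, regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function, can be sketched as a run-and-tumble minimizer. The tumble probabilities, step size, and toy objective are illustrative choices:

```python
import random

def rmp_minimize(f, x, step=0.05, p_worse=0.9, p_better=0.05, iters=4000, seed=0):
    """Run-and-tumble (RMP-style) minimization: keep the current direction of
    weight modification while the objective improves; raise the probability
    of an abrupt direction change ("tumble") when it worsens."""
    rng = random.Random(seed)
    d = [rng.choice((-1.0, 1.0)) for _ in x]   # current modification direction
    fx = f(x)
    best_x, best_f = list(x), fx
    for _ in range(iters):
        x = [xi + step * di for xi, di in zip(x, d)]
        f_new = f(x)
        # the temporal gradient of the objective regulates tumble probability
        p = p_better if f_new < fx else p_worse
        fx = f_new
        if fx < best_f:
            best_x, best_f = list(x), fx
        d = [rng.choice((-1.0, 1.0)) if rng.random() < p else di for di in d]
    return best_x, best_f

# toy objective over two "synaptic weights", minimum at (1, -2)
w, fw = rmp_minimize(lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2, [0.0, 0.0])
```

    No gradient of the objective with respect to individual weights is ever computed, only its change over time, which is what makes the scheme biologically plausible.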

  7. The effects of weight change on glomerular filtration rate.

    PubMed

    Chang, Alex; Greene, Tom H; Wang, Xuelei; Kendrick, Cynthia; Kramer, Holly; Wright, Jackson; Astor, Brad; Shafi, Tariq; Toto, Robert; Lewis, Julia; Appel, Lawrence J; Grams, Morgan

    2015-11-01

    Little is known about the effect of weight loss/gain on kidney function. Analyses are complicated by uncertainty about optimal body surface indexing strategies for measured glomerular filtration rate (mGFR). Using data from the African-American Study of Kidney Disease and Hypertension (AASK), we determined the association of change in weight with three different estimates of change in kidney function: (i) unindexed mGFR estimated by renal clearance of iodine-125-iothalamate, (ii) mGFR indexed to concurrently measured BSA and (iii) GFR estimated from serum creatinine (eGFR). All models were adjusted for baseline weight, time, randomization group and time-varying diuretic use. We also examined whether these relationships were consistent across a number of subgroups, including tertiles of baseline 24-h urine sodium excretion. In 1094 participants followed over an average of 3.6 years, a 5-kg weight gain was associated with a 1.10 mL/min/1.73 m(2) (95% CI: 0.87 to 1.33; P < 0.001) increase in unindexed mGFR. There was no association between weight change and mGFR indexed for concurrent BSA (per 5 kg weight gain, 0.21; 95% CI: -0.02 to 0.44; P = 0.1) or between weight change and eGFR (-0.09; 95% CI: -0.32 to 0.14; P = 0.4). The effect of weight change on unindexed mGFR was less pronounced in individuals with higher baseline sodium excretion (P = 0.08 for interaction). The association between weight change and kidney function varies depending on the method of assessment. Future clinical trials should examine the effect of intentional weight change on measured GFR or filtration markers robust to changes in muscle mass. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  8. The effects of weight change on glomerular filtration rate

    PubMed Central

    Chang, Alex; Greene, Tom H.; Wang, Xuelei; Kendrick, Cynthia; Kramer, Holly; Wright, Jackson; Astor, Brad; Shafi, Tariq; Toto, Robert; Lewis, Julia; Appel, Lawrence J.; Grams, Morgan

    2015-01-01

    Background Little is known about the effect of weight loss/gain on kidney function. Analyses are complicated by uncertainty about optimal body surface indexing strategies for measured glomerular filtration rate (mGFR). Methods Using data from the African-American Study of Kidney Disease and Hypertension (AASK), we determined the association of change in weight with three different estimates of change in kidney function: (i) unindexed mGFR estimated by renal clearance of iodine-125-iothalamate, (ii) mGFR indexed to concurrently measured BSA and (iii) GFR estimated from serum creatinine (eGFR). All models were adjusted for baseline weight, time, randomization group and time-varying diuretic use. We also examined whether these relationships were consistent across a number of subgroups, including tertiles of baseline 24-h urine sodium excretion. Results In 1094 participants followed over an average of 3.6 years, a 5-kg weight gain was associated with a 1.10 mL/min/1.73 m2 (95% CI: 0.87 to 1.33; P < 0.001) increase in unindexed mGFR. There was no association between weight change and mGFR indexed for concurrent BSA (per 5 kg weight gain, 0.21; 95% CI: −0.02 to 0.44; P = 0.1) or between weight change and eGFR (−0.09; 95% CI: −0.32 to 0.14; P = 0.4). The effect of weight change on unindexed mGFR was less pronounced in individuals with higher baseline sodium excretion (P = 0.08 for interaction). Conclusion The association between weight change and kidney function varies depending on the method of assessment. Future clinical trials should examine the effect of intentional weight change on measured GFR or filtration markers robust to changes in muscle mass. PMID:26085555

  9. Stripe nonuniformity correction for infrared imaging system based on single image optimization

    NASA Astrophysics Data System (ADS)

    Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai

    2018-06-01

    Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. Firstly, the gray distribution of stripe nonuniformity is analyzed and the penalty function is constructed from the difference between the horizontal gradient and the vertical gradient. With the weight function, the penalty function is optimized to obtain the corrected image. Experiments show that, compared with other single-frame approaches, the proposed method performs better in both subjective and objective analyses and does less damage to edges and details. The proposed method also runs faster. We have also discussed the differences between the proposed idea and multi-frame methods. Our method is finally well applied in a hardware system.
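    A much-reduced stand-in for the differential-constraint idea: vertical stripes corrupt horizontal gradients but not vertical ones, so each column's stripe offset can be estimated from the median horizontal difference and subtracted. This illustrates the gradient asymmetry the paper exploits, not its penalty-function optimization:

```python
def destripe(img):
    """Column-offset stripe removal sketch: estimate each column's additive
    stripe bias as the accumulated median horizontal difference to the
    previous column, then subtract it (keeping the mean offset so overall
    brightness is preserved)."""
    rows, cols = len(img), len(img[0])
    offsets = [0.0] * cols
    for c in range(1, cols):
        diffs = sorted(img[r][c] - img[r][c - 1] for r in range(rows))
        offsets[c] = offsets[c - 1] + diffs[rows // 2]   # median difference
    mean_off = sum(offsets) / cols
    return [[img[r][c] - (offsets[c] - mean_off) for c in range(cols)]
            for r in range(rows)]

# flat scene with a +2 stripe on the middle column
img = [[10.0, 12.0, 10.0]] * 3
corr = destripe(img)
```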

  10. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  11. Balancing Power Absorption Against Structural Loads With Viscous Drag and Power-Takeoff Efficiency Considerations

    DOE PAGES

    Tom, Nathan; Yu, Yi-Hsiang; Wright, Alan; ...

    2017-11-17

    The focus of this paper is to balance power absorption against structural loading for a novel fixed-bottom oscillating surge wave energy converter in both regular and irregular wave environments. The power-to-load ratio will be evaluated using pseudospectral control (PSC) to determine the optimum power-takeoff (PTO) torque based on a multiterm objective function. This paper extends the pseudospectral optimal control problem to not just maximize the time-averaged absorbed power but also include measures for the surge-foundation force and PTO torque in the optimization. The objective function may now potentially include three competing terms that the optimizer must balance. Separate weighting factors are attached to the surge-foundation force and PTO control torque that can be used to tune the optimizer performance to emphasize either power absorption or load shedding. To correct the pitch equation of motion, derived from linear hydrodynamic theory, a quadratic-viscous-drag torque has been included in the system dynamics; however, to continue the use of quadratic programming solvers, an iteratively obtained linearized drag coefficient was utilized that provided good accuracy in the predicted pitch motion. Furthermore, the analysis considers the use of a nonideal PTO unit to more accurately evaluate controller performance. The PTO efficiency is not directly included in the objective function but rather the weighting factors are utilized to limit the PTO torque amplitudes, thereby reducing the losses resulting from the bidirectional energy flow through a nonideal PTO. Results from PSC show that shedding a portion of the available wave energy can lead to greater reductions in structural loads, peak-to-average power ratio, and reactive power requirement.

  12. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of the weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm embedded in a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.

  13. Study on multimodal transport route under low carbon background

    NASA Astrophysics Data System (ADS)

    Liu, Lele; Liu, Jie

    2018-06-01

    Low-carbon environmental protection is a worldwide focus of attention, and scientists are continually researching carbon emissions from both production and daily life. However, there is little literature at home or abroad on multimodal transportation based on carbon emissions. Firstly, this paper introduces the theory of multimodal transportation and analyzes multimodal transport models both with and without carbon-emission considerations. On this basis, a multi-objective 0-1 programming model minimizing total transportation cost and total carbon emission is proposed. The idea of weighting is applied within the ideal point method, transforming the multi-objective program into a single objective function. The optimal trade-off between carbon emission and transportation cost under different weights is then determined by this single objective function with variable weights. Based on the model and algorithm, an example is given and the results are analyzed.
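    The weighted ideal-point idea can be sketched as follows. The routes, costs, and emissions are invented for illustration, and the scoring function is a minimal stand-in for the paper's model, not its exact formulation.

```python
# Bi-objective route choice (cost, emission) collapsed into a single
# objective via weighted deviation from the ideal point. Data invented.

routes = {"rail-road": (120.0, 30.0),   # (total cost, total emission)
          "road-only": (100.0, 60.0)}
ideal = (100.0, 30.0)                    # best cost and best emission

def ideal_point_score(cost, emission, w):
    """Weighted deviation from the ideal point; lower is better.
    w weights cost, (1 - w) weights emission."""
    return w * (cost - ideal[0]) + (1.0 - w) * (emission - ideal[1])

def best_route(w):
    return min(routes, key=lambda r: ideal_point_score(*routes[r], w))

cost_driven = best_route(0.9)      # cost dominates -> cheapest route
carbon_driven = best_route(0.1)    # emission dominates -> low-carbon route
```

    Sweeping `w` traces how the optimal solution shifts between transportation cost and carbon emission, as in the paper's variable-weight analysis.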

  14. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
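    The first strategy, an RBF network standing in for repeated time history analyses, can be sketched as a tiny Gaussian-basis interpolator. The basis width, training data, and helper names are illustrative assumptions, not the paper's network.

```python
# Minimal Gaussian RBF surrogate: fit weights so the network
# interpolates sampled responses, then predict without rerunning the
# expensive analysis. Data and width are illustrative.
import math

def rbf_fit(xs, ys, width=1.0):
    """Solve the interpolation system phi(|xi - xj|) w = y by
    Gauss-Jordan elimination with partial pivoting."""
    n = len(xs)
    a = [[math.exp(-((xs[i] - xs[j]) / width) ** 2) for j in range(n)]
         + [ys[i]] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def rbf_predict(xs, w, x, width=1.0):
    return sum(wi * math.exp(-((x - xi) / width) ** 2)
               for wi, xi in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]   # sampled responses (made up)
w = rbf_fit(xs, ys)
```

    By construction the surrogate reproduces the training responses exactly; in the paper's flow, such a network replaces the structural analysis inside the BPSO loop.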

  15. Weighted optimization of irradiance for photodynamic therapy of port wine stains

    NASA Astrophysics Data System (ADS)

    He, Linhuan; Zhou, Ya; Hu, Xiaoming

    2016-10-01

    Planning of irradiance distribution (PID) is one of the foremost factors in on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID was proposed according to the grading of PWS with a three-dimensional digital illumination instrument. Firstly, the point clouds of the lesions were filtered to remove erroneous or redundant points, and triangulation was carried out to divide each lesion into small triangular patches. Secondly, the parameters of each patch needed for optimization, such as area, normal vector, and orthocenter, were calculated, and the weighting coefficients were determined from the erythema indexes and areas of the patches. Then, the initial point of the optimization was computed from the normal vectors and orthocenters to optimize the light direction. Finally, the irradiance was optimized according to the cosines of the irradiance angles and the weighting coefficients. Comparing the irradiance distributions before and after optimization, the proposed weighted optimization method matches the irradiance distribution better to the characteristics of the lesions, and has the potential to improve therapeutic efficacy.
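    The final weighting step can be sketched as a weighted cosine score over patches. The patch data, weights, and the scoring helper are hypothetical, not the paper's implementation.

```python
# Score an irradiation direction by the weighted cosine of the
# irradiance angle on each triangular patch. Data are illustrative.
import math

def weighted_irradiance_score(patches, light_dir):
    """patches: list of (unit normal, area, weight). Returns the sum of
    weight * area * cos(angle), clamping back-facing patches to zero."""
    score = 0.0
    for normal, area, weight in patches:
        cos_theta = -sum(n * l for n, l in zip(normal, light_dir))
        score += weight * area * max(cos_theta, 0.0)
    return score

patches = [((0.0, 0.0, 1.0), 2.0, 1.5),   # heavily weighted lesion patch
           ((1.0, 0.0, 0.0), 1.0, 1.0)]
head_on = weighted_irradiance_score(patches, (0.0, 0.0, -1.0))
oblique = weighted_irradiance_score(
    patches, (-math.sqrt(0.5), 0.0, -math.sqrt(0.5)))
```

    Patches with higher erythema-derived weights dominate the score, so the optimized direction favors the most severely graded lesion areas.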

  16. Uncertainty reduction in intensity modulated proton therapy by inverse Monte Carlo treatment planning

    NASA Astrophysics Data System (ADS)

    Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig

    2009-08-01

    Treatment plans for intensity-modulated proton therapy may be sensitive to several sources of uncertainty. One is correlated with approximations in the algorithms applied in the treatment planning system, and another depends on how robust the optimization is with regard to intra-fractional tissue movements. The delivered dose distribution may deteriorate substantially from the planned one when systematic errors occur in the dose algorithm. These can affect proton ranges and lead to improper modeling of the Bragg peak degradation in heterogeneous structures, of particle scatter, or of the nuclear interaction part. Additionally, systematic errors influence the optimization process, leading to convergence error. Uncertainties with regard to organ movements relate to the robustness of a chosen beam setup to tissue movements during irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which minimizes the errors described above to a large extent. Additionally, robust planning is introduced by beam angle optimization according to an objective function penalizing paths through strongly longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning by selecting a robust set of spots. As spots can be positioned on different energy grids or on geometric grids with different space-filling factors, a variety of grids were used to investigate the influence on the spot-weight distribution resulting from optimization. A tighter distribution of spot weights was assumed to yield a plan more robust to movements. IKO-P is described in detail and demonstrated on a test case as well as a lung cancer case. Different options of spot planning and grid types are evaluated, yielding superior plan quality when dose is delivered to the spots from all beam directions rather than from optimized beam directions.
This option shows a tighter spot-weight distribution and should therefore be less sensitive to movements compared to optimized directions. But accepting a slight loss in plan quality, the latter choice could potentially improve robustness even further by accepting only spots from the most proper direction. The choice of a geometric grid instead of an energy grid for spot positioning has only a minor influence on the plan quality, at least for the investigated lung case.

  17. Position specific interaction dependent scoring technique for virtual screening based on weighted protein-ligand interaction fingerprint profiles.

    PubMed

    Nandigam, Ravi K; Kim, Sangtae; Singh, Juswinder; Chuaqui, Claudio

    2009-05-01

    The desire to exploit structural information to aid structure based design and virtual screening led to the development of the interaction fingerprint for analyzing, mining, and filtering the binding patterns underlying the complex 3D data. In this paper we introduce a new approach, weighted SIFt (or w-SIFt), extending the concept of SIFt to capture the relative importance of different binding interactions. The methodology presented here for determining the weights in w-SIFt involves utilizing a dimensionality reduction technique for eliminating linear redundancies in the data followed by a stochastic optimization. We find that the relative weights of the fingerprint bits provide insight into what interactions are critical in determining inhibitor potency. Moreover, the weighted interaction fingerprint can serve as an interpretable position dependent scoring function for ligand protein interactions.
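    The core scoring idea behind a weighted interaction fingerprint can be sketched as a dot product of learned per-bit weights with a pose's fingerprint bits. The bits and weights below are made up for illustration; they are not the published w-SIFt weights.

```python
# Minimal sketch of weighted-fingerprint scoring: each bit marks a
# protein-ligand interaction, each weight its learned importance.

def wsift_score(weights, fingerprint):
    """Weighted interaction-fingerprint score; larger weights mark
    interactions more critical for potency."""
    return sum(w * b for w, b in zip(weights, fingerprint))

weights = [0.9, 0.1, 0.5, 0.0]           # relative bit importances (made up)
strong_binder = [1, 0, 1, 1]             # hypothetical fingerprints
weak_binder = [0, 1, 0, 1]
```

    Because the score is linear in the bits, the weights are directly interpretable: a high-weight bit identifies an interaction that drives potency, which is the insight the abstract highlights.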

  18. Aerodynamic shape optimization of Airfoils in 2-D incompressible flow

    NASA Astrophysics Data System (ADS)

    Rangasamy, Srinivethan; Upadhyay, Harshal; Somasekaran, Sandeep; Raghunath, Sreekanth

    2010-11-01

    An optimization framework was developed for maximizing the region of a 2-D airfoil immersed in laminar flow while enhancing aerodynamic performance. It uses a genetic algorithm with a population of 125, run for 1000 generations, to optimize the airfoil. On a stand-alone computer, a run takes about an hour to obtain a converged solution. The airfoil geometry was generated using two Bezier curves, one representing the thickness and the other the camber of the airfoil; the airfoil profile was obtained by adding and subtracting the thickness curve from the camber curve. The coefficients of lift and drag were computed using the potential velocity distribution obtained from a panel code, and a boundary-layer transition prediction code was used to predict the location of the onset of transition. The objective function of a particular design is evaluated as the weighted average of its aerodynamic characteristics at various angles of attack. Optimization was carried out for several objective functions and the resulting airfoil designs were analyzed.
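    The weighted-average objective over angles of attack can be sketched as below. The lift and drag coefficients and the weights are invented for illustration; the study's actual characteristics and weightings are not given in the abstract.

```python
# Weighted average of lift-to-drag ratio across angles of attack, a
# simple stand-in for the study's objective. Numbers are illustrative.

def airfoil_objective(cl, cd, weights):
    """cl, cd, weights are parallel lists indexed by angle of attack;
    returns the weighted average of L/D."""
    return sum(w * (l / d) for w, l, d in zip(weights, cl, cd)) / sum(weights)

cl = [0.4, 0.8, 1.1]          # at 0, 4, 8 degrees (made up)
cd = [0.008, 0.010, 0.016]
uniform = airfoil_objective(cl, cd, [1.0, 1.0, 1.0])
cruise_biased = airfoil_objective(cl, cd, [1.0, 4.0, 1.0])
```

    Emphasizing the cruise angle of attack with a larger weight shifts the optimum toward designs that perform best there, which is how different objective functions yield different airfoils.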

  19. Optimization of the development of reproductive organs celepuk jawa (otus angelinae) owl which supplemented by turmeric powder

    NASA Astrophysics Data System (ADS)

    Rini Saraswati, Tyas; Yuniwarti, Enny Yusuf W.; Tana, Silvan

    2018-03-01

    Otus angelinae is a protected animal because of its endangered status, yet it has considerable value, for example in controlling mice pests. This research therefore aims to optimize the reproductive function of Otus angelinae by administering turmeric powder mixed into its feed. The study was conducted on a laboratory scale with two male and two female Otus angelinae of three months of age. The owls were divided into two groups: a control group and a treatment group given 108 mg of turmeric powder per owl per day, mixed into 30 g of catfish per day, for one month. The parameters observed were the development of the follicle hierarchy and the ovary weight in females, and the testes and testis weight in males. Body weight, liver weight, and the length of the reproductive duct were also observed in both sexes. The data were analyzed descriptively. The results showed that the administration of turmeric powder can induce the development of the ovarian follicle hierarchy and lengthen the reproductive duct in female Otus angelinae, and likewise induce development of the testes and lengthen the reproductive duct in males. The addition of turmeric powder increased the liver weight of the females but did not affect body weight.

  20. Flat-Top Sector Beams Using Only Array Element Phase Weighting: A Metaheuristic Optimization Approach

    DTIC Science & Technology

    2012-10-10

    Irwin D. Olin (Sotera Defense Solutions, Inc.; Naval…). Flat-Top Sector Beams Using Only Array Element Phase Weighting: A Metaheuristic Optimization Approach. Formal Report; manuscript approved June 30, 2012.

  1. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  2. Computational Intelligence‐Assisted Understanding of Nature‐Inspired Superhydrophobic Behavior

    PubMed Central

    Zhang, Xia; Ding, Bei; Dixon, Sebastian C.

    2017-01-01

    In recent years, state‐of‐the‐art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature‐inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials. PMID:29375975

  3. Computational Intelligence-Assisted Understanding of Nature-Inspired Superhydrophobic Behavior.

    PubMed

    Zhang, Xia; Ding, Bei; Cheng, Ran; Dixon, Sebastian C; Lu, Yao

    2018-01-01

    In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials.

  4. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
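    The Pareto-optimality notion underlying those trade-off curves can be sketched with a non-dominated filter. The misfit values below are invented, and the helpers are a generic illustration rather than the genetic algorithm's internals.

```python
# Keep only models not dominated in any misfit objective (all
# objectives minimized). Misfit tuples are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Each tuple: (misfit on data set 1, misfit on data set 2)
models = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(models)
```

    Treating each data set as its own objective avoids the forced weighting of a scalarized joint misfit; the shape of the resulting front is what the authors analyze for data-set compatibility.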

  5. [Gestational weight gain and optimal ranges in Chinese mothers giving singleton and full-term births in 2013].

    PubMed

    Wang, J; Duan, Y F; Pang, X H; Jiang, S; Yin, S A; Yang, Z Y; Lai, J Q

    2018-01-06

    Objective: To analyze the status of gestational weight gain (GWG) among Chinese mothers who gave singleton and full-term births, and to look at optimal GWG ranges. Methods: In 2013, using the multi-stage stratified and population proportional cluster sampling method, we investigated 8 323 mother-child pairs at their 0-24 months postpartum from 55 counties (cities/districts) of 30 provinces (except Tibet) in mainland China. Questionnaire was used to collect data on body weight before pregnancy and delivery, diseases during gestation, hemorrhage or not at postpartum, child birth weight and length, and other information about pregnant outcomes. We measured mother's body weight and height, and child's body weight and length. Based on 'Chinese Adult Body Weight Standard', we divided mothers into four groups according to their body weight before pregnancy: low weight (BMI<18.5 kg/m(2)), normal weight (BMI 18.5-23.9 kg/m(2)), overweight (BMI 24.0-27.9 kg/m(2)) and obesity (BMI≥28.0 kg/m(2)). The status of GWG was assessed by IOM optimal GWG guidelines. Chinese optimal GWG ranges were calculated according to the association of GWG with pregnant outcomes and anthropometry of mothers and children, and according to P25-P75 of GWG among mothers who had good pregnant outcomes and good anthropometry, and whose children had good anthropometry. The status of GWG was assessed by the new optimal ranges. Results: P50 (P25-P75) of GWG among the 8 323 mothers was 15.0 (10.0-19.0) kg. According to the proposed optimal GWG ranges of IOM, the proportions of inadequate, optimal and excessive GWG accounted for 27.2% (2 263 mothers), 36.2% (3 016 mothers) and 36.6% (3 044 mothers). The optimal GWG ranges for low weight, normal weight, overweight and obesity were 11.5-18.0, 10.0-15.0, 8.0-14.0 and 5.0-11.5 kg. 
Based on these optimal GWG ranges established in this study, the rates of inadequate, optimal and excessive GWG were 15.7% (1 303 mothers), 45.0% (3 744 mothers) and 39.3% (3 276 mothers), and these rates were significantly different from that defined by the IOM standards (χ2=345.36, P<0.001). Conclusion: The median of GWG among Chinese mothers is 15.0 kg, which is at a relatively higher level. This study suggests the optimal GWG ranges for Chinese women who give singleton and full-term babies, which appears lower than IOM's.
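    The assessment step can be sketched as a classifier over the ranges reported above (11.5-18.0, 10.0-15.0, 8.0-14.0, and 5.0-11.5 kg by pre-pregnancy BMI group); the BMI cut-offs are those quoted in the abstract, while the helper functions themselves are hypothetical.

```python
# Classify gestational weight gain (GWG) against the study's optimal
# ranges, keyed by pre-pregnancy BMI group. Helper names are invented.

RANGES = {"low": (11.5, 18.0), "normal": (10.0, 15.0),
          "overweight": (8.0, 14.0), "obesity": (5.0, 11.5)}

def bmi_group(bmi):
    """Pre-pregnancy BMI group per the Chinese adult standard cited."""
    if bmi < 18.5:
        return "low"
    if bmi < 24.0:
        return "normal"
    if bmi < 28.0:
        return "overweight"
    return "obesity"

def classify_gwg(bmi, gain_kg):
    lo, hi = RANGES[bmi_group(bmi)]
    if gain_kg < lo:
        return "inadequate"
    return "optimal" if gain_kg <= hi else "excessive"
```

    Applying such a rule over the cohort is what yields the 15.7% / 45.0% / 39.3% split reported for the new ranges.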

  6. Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks.

    PubMed

    Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang

    2017-11-01

    The identification of nonlinearity and coupling is crucial in nonlinear target tracking problems in collaborative sensor networks. In the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be treated as model noise covariance and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve a reliable covariance measurement, making it impractical for rapidly changing nonlinear systems. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges each sensor's data set with the measurements and state estimates received from its connected sensors, instead of lengthening the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state, from which the "best" estimate of the model noise covariance is obtained. The bias and oscillation of the state of each sensor are estimated by polynomial fitting over a time window of state estimates and measurements from the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computational load of the filter and increase the scalability of the sensor network. Existence, suboptimality, and stability analyses of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF over other filtering algorithms for a large class of systems.
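    The bias/oscillation decomposition can be sketched with a low-order polynomial fit: the fitted trend stands in for the bias term, and the residual about it measures oscillation. The window, data, and helper names are illustrative, not the paper's exact cost.

```python
# Fit a straight line to a window of state estimates; the RMS residual
# about the fit is a simple oscillation measure. Data are invented.

def polyfit_line(ts, xs):
    """Closed-form least-squares line x = a + b*t."""
    n = len(ts)
    tbar, xbar = sum(ts) / n, sum(xs) / n
    b = (sum((t - tbar) * (x - xbar) for t, x in zip(ts, xs))
         / sum((t - tbar) ** 2 for t in ts))
    return xbar - b * tbar, b

def oscillation(ts, xs):
    """RMS residual about the fitted trend, one term of a weighted
    bias/oscillation cost."""
    a, b = polyfit_line(ts, xs)
    return (sum((x - (a + b * t)) ** 2
                for t, x in zip(ts, xs)) / len(ts)) ** 0.5

ts = [0.0, 1.0, 2.0, 3.0]
smooth = [0.0, 1.0, 2.0, 3.0]        # perfectly linear track
noisy = [0.0, 1.5, 1.5, 3.0]         # same trend, oscillating
```

    A covariance candidate that produces the smooth track scores better on the oscillation term, which is the intuition behind minimizing the weighted cost.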

  7. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing working-process parameters in air conditioning systems is demonstrated in this work. The research uses the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are discussed. In the multi-objective optimization of the system, the search for the optimal solution is performed by minimizing a two-objective vector, formed by the Pareto method of linear weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems outside the areas of the optimum solutions deviate considerably from the minimum values, and these deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. Producing and operating conditioners with parameters that deviate considerably from the optimal values will increase material and power costs. The research allows one to establish the borders of the area of optimal values for the technical and economic parameters in the design of air conditioning systems.

  8. Structural optimization of a motorcycle chassis by pattern search algorithm

    NASA Astrophysics Data System (ADS)

    Scappaticci, Lorenzo; Bartolini, Nicola; Guglielmino, Eugenio; Risitano, Giacomo

    2017-08-01

    Changes to the technical regulations of the motorcycle racing world classes introduced the new Moto2 category. The vehicles are prototypes that use single-brand tyres and engines derived from series production, supplied by a single manufacturer. The stability and handling of the vehicle are highly dependent on the geometric properties of the chassis, and the performance of a racing motorcycle chassis can be primarily evaluated in terms of weight and stiffness. The aim of this work is to maximize the performance of a tubular frame designed for a racing motorcycle in the Moto2 category. The goal is the implementation of an optimization algorithm that acts on the dimensions of the single pipes of the frame, together with the design of an objective function that minimizes the weight of the frame while controlling its stiffnesses.

  9. Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Xia-zhu; Xu, Ya-wei

    2017-11-01

    On the basis of dual-tree complex wavelet transform (DT-CWT) theory, an approach based on a multi-objective particle swarm optimization algorithm (MOPSO) was proposed to objectively choose the fusion weights of the low-frequency sub-bands. High- and low-frequency sub-bands were produced by the DT-CWT. The absolute value of the coefficients was adopted as the fusion rule for the high-frequency sub-bands. The fusion weights of the low-frequency sub-bands were used as the particles in MOPSO, with spatial frequency and average gradient adopted as the two fitness functions. The experimental results show that the proposed approach performs better than average fusion, and than fusion methods based on local variance and local energy, in brightness, clarity, and quantitative evaluation by entropy, spatial frequency, average gradient, and QAB/F.
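    The low-frequency fusion rule and one of the fitness functions can be sketched as follows; the tiny images and the blend weight are illustrative, and real MOPSO would search over such weights rather than fix one.

```python
# Convex-weight fusion of two low-frequency sub-bands plus an
# average-gradient sharpness fitness. Arrays are illustrative.

def fuse_lowband(a, b, w):
    """Pixel-wise weighted fusion of two low-frequency sub-bands."""
    return [[w * x + (1.0 - w) * y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def average_gradient(img):
    """Mean magnitude of horizontal/vertical differences, a common
    sharpness measure in fusion quality assessment."""
    total, count = 0.0, 0
    for i in range(len(img) - 1):
        for j in range(len(img[0]) - 1):
            dx = img[i][j + 1] - img[i][j]
            dy = img[i + 1][j] - img[i][j]
            total += ((dx * dx + dy * dy) / 2.0) ** 0.5
            count += 1
    return total / count

a = [[0.0, 1.0], [0.0, 1.0]]     # sub-band with an edge
b = [[0.0, 0.0], [0.0, 0.0]]     # flat sub-band
fused = fuse_lowband(a, b, 0.5)
```

    In the MOPSO setting, candidate weights are scored by fitness functions like this one, and the swarm converges toward weights that preserve detail from both inputs.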

  10. Optimization of a GO2/GH2 Swirl Coaxial Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    1999-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full cone swirl angle, θ, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.

  11. Dual-energy contrast-enhanced digital mammography (DE-CEDM): optimization on digital subtraction with practical x-ray low/high-energy spectra

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Jing, Zhenxue; Smith, Andrew P.; Parikh, Samir; Parisky, Yuri

    2006-03-01

    Dual-energy contrast-enhanced digital mammography (DE-CEDM), based upon the digital subtraction of low/high-energy image pairs acquired before/after the administration of contrast agents, may provide physicians with physiologic and morphologic information on breast lesions and help characterize their probability of malignancy. This paper proposes to use only one pair of post-contrast low/high-energy images to obtain digitally subtracted dual-energy contrast-enhanced images, with an optimal weighting factor deduced from simulated characteristics of the imaging chain. Building upon our previous CEDM framework, quantitative characteristics of the materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filters, breast tissues and lesions, contrast agent (non-ionized iodine solution), and selenium detector, were systematically modeled. Using the base-material (polyethylene-PMMA) decomposition method based on the entrance low/high-energy x-ray spectra and breast thickness, the optimal weighting factor was calculated to cancel the contrast between fatty and glandular tissues while enhancing the contrast of iodized lesions. By contrast, previous work determined the optimal weighting factor through either a calibration step or the acquisition of a pre-contrast low/high-energy image pair. Computer simulations were conducted to determine weighting factors, lesion contrast signal values, and dose levels as functions of x-ray techniques and breast thicknesses. Phantom and clinical feasibility studies were performed on a modified Selenia full-field digital mammography system to verify the proposed method and the computer-simulated results. The conclusions from the computer simulations and phantom/clinical feasibility studies will be used in the upcoming clinical study.
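    The weighted subtraction at the heart of the method can be sketched with a log-subtraction toy model. The attenuation numbers are invented, and a real system derives the weighting factor from modeled spectra and breast thickness rather than from two tissue samples.

```python
# Weighted dual-energy log subtraction: choose w so the fatty-vs-
# glandular contrast cancels while iodine contrast survives.
# All signal values are invented for illustration.
import math

def de_subtract(i_low, i_high, w):
    """Weighted log subtraction of a low/high-energy intensity pair."""
    return math.log(i_high) - w * math.log(i_low)

fat_low, fat_high = 0.50, 0.70
gland_low, gland_high = 0.40, 0.62
# w = ratio of the tissue log-signal differences at high vs low energy,
# which makes the fat/gland step vanish in the subtracted image.
w = ((math.log(fat_high) - math.log(gland_high))
     / (math.log(fat_low) - math.log(gland_low)))

tissue_contrast = (de_subtract(fat_low, fat_high, w)
                   - de_subtract(gland_low, gland_high, w))
# An iodized lesion does not share the tissue ratio, so its contrast
# survives the subtraction.
lesion_low, lesion_high = 0.30, 0.50
iodine_contrast = (de_subtract(lesion_low, lesion_high, w)
                   - de_subtract(gland_low, gland_high, w))
```

    By construction the tissue contrast cancels exactly while the iodine signal remains, which is the behavior the optimal weighting factor is designed to achieve.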

  12. Weight optimization of plane truss using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay

    2017-11-01

    Optimization of structures on the basis of weight has many practical benefits in every engineering field, since efficiency is closely related to weight. In civil engineering in particular, weight-optimized structural elements are economical and easier to transport to site. In this study, a genetic optimization algorithm for the weight optimization of steel trusses, considering shape, size, and topology, has been developed in MATLAB. Material strength and buckling stability requirements have been adopted from the IS 800-2007 code for structural steel. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. A genetic algorithm is a natural-selection search technique that combines good solutions to a problem over many generations to improve the results. All solutions are generated randomly, and each is represented by a binary string analogous to a natural chromosome. The outcome of the study is a MATLAB program which can optimize a steel truss and display the optimized topology along with element shapes, deflections, and stress results.
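    The binary encoding and penalized weight objective can be sketched as follows. The section areas, member lengths, and penalty scheme are illustrative, and the IS 800-2007 strength/buckling checks are reduced to a boolean flag for brevity.

```python
# Decode a binary chromosome into member section choices and evaluate
# a penalized truss-weight objective. All data are illustrative.

SECTIONS = [4.0e-4, 6.0e-4, 8.0e-4, 1.0e-3]   # candidate areas, m^2
DENSITY = 7850.0                               # steel, kg/m^3

def decode(bits, n_bits=2):
    """Split a chromosome into per-member section indices."""
    return [int(bits[i:i + n_bits], 2) for i in range(0, len(bits), n_bits)]

def truss_weight(bits, lengths):
    return sum(DENSITY * SECTIONS[k] * length
               for k, length in zip(decode(bits), lengths))

def fitness(bits, lengths, constraints_ok):
    """Penalized objective: infeasible designs get a large weight added
    so selection steers the population back to feasibility."""
    w = truss_weight(bits, lengths)
    return w if constraints_ok else w + 1.0e6

lengths = [2.0, 2.0, 2.828]                    # member lengths, m
light = truss_weight("000000", lengths)        # all smallest sections
```

    Crossover and mutation then act directly on the bit strings, while the penalty keeps strength- or displacement-violating designs from winning selection.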

  13. Study on light weight design of truss structures of spacecrafts

    NASA Astrophysics Data System (ADS)

    Zeng, Fuming; Yang, Jianzhong; Wang, Jian

    2015-08-01

    Truss structures are usually adopted as the main structural form for spacecraft owing to their high efficiency in supporting concentrated loads, and light-weight design is now the primary concern during conceptual design of spacecraft. Implementation of light-weight design for a truss structure typically goes through three processes: topology optimization, size optimization, and composites optimization. For each process, an appropriate algorithm is selected, such as the traditional optimality criterion method, mathematical programming methods, or intelligent algorithms that simulate growth and evolution processes in nature. Based on these processes and algorithms, combined with engineering practice and commercial software, a summary is given of the implementation of light-weight design for spacecraft truss structures.

  14. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural-network-based state-feedback optimal control synthesis approach is presented in this paper. First, for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Then, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. NN3 is trained by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev-norm-based Lyapunov function. This helps NN2 to successfully capture the required optimal relationship. The overall architecture is named 'dynamically re-optimised single network adaptive critic' (DR-SNAC). Numerical results for two motivating illustrative problems are presented, including a comparison with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.

  15. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model is used in portfolio optimization to minimize investment risk: its objective is to minimize portfolio risk while achieving a target rate of return, with variance as the risk measure. The purpose of this study is to compare the composition and performance of the optimal portfolio of the mean-variance model with those of the equally weighted portfolio, in which equal proportions are invested in each asset. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different, and that the mean-variance optimal portfolio performs better, achieving a higher performance ratio than the equally weighted portfolio.
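
    A compact numeric sketch of the comparison (with illustrative, made-up return and covariance figures): the global minimum-variance weights w ∝ Σ⁻¹1 are contrasted with equal weights using a simple return/risk performance ratio:

```python
import numpy as np

# Illustrative (invented) annualized returns and covariance for three assets.
mu = np.array([0.08, 0.12, 0.05])
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.003],
                [0.002, 0.003, 0.010]])

# Global minimum-variance portfolio: w proportional to inv(cov) @ 1,
# normalized so the weights sum to one.
ones = np.ones(3)
w_mv = np.linalg.solve(cov, ones)
w_mv /= w_mv.sum()

w_eq = ones / 3  # equally weighted benchmark

def perf_ratio(w):
    """Return / risk ratio (a simple Sharpe-style performance measure)."""
    ret = w @ mu
    vol = np.sqrt(w @ cov @ w)
    return ret / vol

print(w_mv.round(3), round(perf_ratio(w_mv), 3), round(perf_ratio(w_eq), 3))
```

    For these numbers the minimum-variance portfolio tilts heavily toward the low-volatility asset and achieves a higher ratio than the 1/3-1/3-1/3 allocation, mirroring the study's conclusion; with other inputs the ranking need not go the same way.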

  16. Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.

    2013-09-01

    In this paper, two satellite images of Tehran, the capital of Iran, acquired by TM and ETM+ in 1988 and 2010, are used as the base information layers to study changes in the urban patterns of this metropolis. The patterns of urban growth for the city of Tehran are extracted for a period of twelve years using cellular automata, with logistic regression functions as the transition functions. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, are selected using PSO. In order to evaluate the prediction results, the percent-correct-match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%.

  17. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    This paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to test three quasi-linear expressions as replacements for the original linear expression in the MSC method: quadratic, cubic, and growth-curve expressions. The local weighted function is then constructed by introducing four kernel functions: the Gaussian, Epanechnikov, Biweight, and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and in finer detail at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and a PCA-BP neural network, which can be used to assess the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for the different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes; to validate the effectiveness of the method, the experiment therefore analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1, and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information in the spectral peaks. The method significantly enhances the correlation between corrected spectra and coal qualities, and substantially improves the accuracy and stability of the analytical model.
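
    The kernel-weighting idea can be sketched as follows (synthetic spectrum, Gaussian kernel only; this is a stand-in illustration, not the paper's estimation equation): a locally weighted linear fit tracks a wavelength-dependent scatter distortion that a single global, MSC-style linear fit cannot remove.

```python
import math

# Synthetic demo (not the paper's data): 'ideal' is the reference spectrum;
# 'meas' adds a wavelength-dependent multiplicative and additive distortion.
N = 200
x = [i / (N - 1) for i in range(N)]
ideal = [2 * xi + 0.2 * math.sin(8 * xi) for xi in x]
meas = [(1 + 0.4 * xi) * v + 0.3 * xi ** 2 for xi, v in zip(x, ideal)]

def linfit(ref, ys, ws):
    """Weighted least squares of ys against ref: ys ~ a*ref + b."""
    sw = sum(ws)
    mr = sum(w * r for w, r in zip(ws, ref)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    srr = sum(w * (r - mr) ** 2 for w, r in zip(ws, ref))
    sry = sum(w * (r - mr) * (y - my) for w, r, y in zip(ws, ref, ys))
    a = sry / srr
    return a, my - a * mr

# Global MSC-style correction: one (a, b) for the whole spectrum.
a, b = linfit(ideal, meas, [1.0] * N)
corr_global = [(y - b) / a for y in meas]

# Local correction: Gaussian kernel weights centred on each wavelength.
h = 0.05
corr_local = []
for j in range(N):
    ws = [math.exp(-((xi - x[j]) / h) ** 2) for xi in x]
    aj, bj = linfit(ideal, meas, ws)
    corr_local.append((meas[j] - bj) / aj)

def rms(c):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(c, ideal)) / N)

print(round(rms(corr_global), 4), round(rms(corr_local), 4))
```

    Because the distortion drifts with wavelength, the locally fitted (a, b) follow it while the single global pair cannot, so the local correction leaves a smaller residual against the reference spectrum.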

  18. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have previously been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed-effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed-effect model were investigated. The model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. The optimized design variables were the dose and the number of subjects in each dose group. The designs were validated using stochastic simulation and re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs, and the improvement was more pronounced in the SSE results than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased parameter precision in nonlinear mixed-effects modeling.
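
    The weighting-and-summing step can be illustrated with toy numbers (two 2x2 information matrices standing in for the two Markov components; all values are invented): the total FIM is a weighted sum, and the D-criterion and predicted parameter standard errors follow from its determinant and inverse.

```python
import numpy as np

# Invented per-component information matrices (awake / asleep components).
fim_awake  = np.array([[40.0, 5.0], [5.0, 10.0]])
fim_asleep = np.array([[15.0, 2.0], [2.0, 30.0]])

def fim_total(w_awake):
    """Weighted sum of the two component FIMs."""
    return w_awake * fim_awake + (1.0 - w_awake) * fim_asleep

# D-optimality maximizes det(FIM); by the Cramer-Rao bound, the inverse
# of the FIM lower-bounds the covariance of the parameter estimates.
for w in (0.5, 0.7):   # equal weighting vs a probability-based weighting
    f = fim_total(w)
    se = np.sqrt(np.diag(np.linalg.inv(f)))   # predicted parameter SEs
    print(w, round(np.linalg.det(f), 1), se.round(3))
```

    Comparing the determinant and standard errors across weightings shows how the choice of weights changes which designs the D-criterion prefers.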

  19. Rotor design optimization using a free wake analysis

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Boschitsch, Alexander H.; Wachspress, Daniel A.; Chua, Kiat

    1993-01-01

    The aim of this effort was to develop a comprehensive performance optimization capability for tiltrotor and helicopter blades. The analysis incorporates the validated EHPIC (Evaluation of Hover Performance using Influence Coefficients) model of helicopter rotor aerodynamics within a general linear/quadratic programming algorithm that allows optimization using a variety of performance-based objective functions. The resulting computer code, EHPIC/HERO (HElicopter Rotor Optimization), improves upon several features of the previous EHPIC performance model and allows optimization over a wide spectrum of design variables, including twist, chord, anhedral, and sweep. The new analysis supports optimization of a variety of objective functions, including weighted measures of rotor thrust, power, and propulsive efficiency. The fundamental strength of the approach is that an efficient search for improved versions of the baseline design can be carried out while retaining the demonstrated accuracy of the EHPIC free wake/vortex lattice performance analysis. Sample problems are described that demonstrate the success of this approach for several representative rotor configurations in hover and axial flight. Features that were introduced to convert earlier demonstration versions of this analysis into a generally applicable tool for researchers and designers are also discussed.

  20. The Development of Lightweight Commercial Vehicle Wheels Using Microalloying Steel

    NASA Astrophysics Data System (ADS)

    Lu, Hongzhou; Zhang, Lilong; Wang, Jiegong; Xuan, Zhaozhi; Liu, Xiandong; Guo, Aimin; Wang, Wenjun; Lu, Guimin

    Lightweight wheels can reduce the weight of a commercial vehicle by about 100 kg, saving energy, reducing emissions, and increasing profits for logistics companies. The development of lightweight commercial vehicle wheels reported here combines a new rim steel, process optimization of flash butt welding, and structural optimization by finite element methods. Niobium microalloying technology improves the hole expansion rate, weldability, and fatigue performance of wheel steel. On this basis, a special wheel steel with a ferrite-bainite microstructure was developed, offering high formability, high fatigue performance, and stable mechanical properties; its Nb content is 0.025% and its hole expansion rate is ≥ 100%. At the same time, welding parameters including electric upsetting time, upset allowance, upsetting pressure, and flash allowance were optimized, and an optimized structure was obtained by CAE analysis. As a result, the weight of the 22.5 in × 8.25 in wheel was reduced to 31.5 kg, the lightest among wheels of the same size, and its bending fatigue and radial fatigue performance meet the application requirements of truck makers and logistics companies.

  1. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
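
    The fourth conclusion, keeping cost and constraint expressions smooth, can be illustrated with a toy sizing problem (all coefficients invented, much simpler than the paper's vehicle simulation): a quadratic penalty for unmet power demand keeps the objective differentiable, so a plain finite-difference gradient descent behaves well.

```python
# Toy smooth design problem (invented coefficients): choose battery weight
# wb [kg] and heat engine rating pe [kW] to cover a peak power demand.
P_DEMAND = 60.0

def cost(wb, pe):
    base = 0.5 * wb + 0.8 * pe            # made-up cost of the two sources
    supplied = 0.1 * wb + pe              # battery power scales with weight
    shortfall = max(0.0, P_DEMAND - supplied)
    return base + 5.0 * shortfall ** 2    # smooth penalty, not a hard wall

def grad_descent(wb=0.0, pe=0.0, lr=0.05, steps=2000):
    h = 1e-6
    for _ in range(steps):
        gw = (cost(wb + h, pe) - cost(wb - h, pe)) / (2 * h)
        gp = (cost(wb, pe + h) - cost(wb, pe - h)) / (2 * h)
        wb = max(0.0, wb - lr * gw)       # keep design variables physical
        pe = max(0.0, pe - lr * gp)
    return wb, pe

wb, pe = grad_descent()
print(round(wb, 2), round(pe, 2), round(cost(wb, pe), 2))
```

    With these coefficients engine power is the cheaper source, so the search drives the battery weight to its bound and sizes the engine to just under the demand, paying a small residual penalty; a hard (non-smooth) constraint at exactly 60 kW would give the gradient no such guidance.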

  2. Temporal variation of optimal UV exposure time over Korea: risks and benefits of surface UV radiation

    NASA Astrophysics Data System (ADS)

    Lee, Y. G.; Koo, J. H.

    2015-12-01

    Solar UV radiation in the wavelength range 280 to 400 nm has both positive and negative effects on the human body. Surface UV radiation is the main natural source of vitamin D, promoting bone and musculoskeletal health and reducing the risk of a number of cancers and other medical conditions. However, overexposure to surface UV radiation is strongly associated with the majority of skin cancers, as well as other negative health effects such as sunburn, skin aging, and some forms of eye cataract. Therefore, it is important to estimate the optimal UV exposure time, representing a balance between reducing negative health effects and maintaining sufficient vitamin D production. Previous studies calculated erythemal UV and vitamin-D UV from measured and modelled spectral irradiances by weighting with the CIE erythema and vitamin D3 generation functions, respectively (Kazantzidis et al., 2009; Fioletov et al., 2010). In particular, McKenzie et al. (2009) suggested an algorithm to estimate vitamin-D-production UV from erythemal UV (or the UV index) and determined optimum UV exposure conditions for skin type II according to Fitzpatrick (1988). Recently, there have been various demands regarding the risks and benefits of surface UV radiation for public health over Korea, so it is necessary to estimate optimal UV exposure times suited to the skin types of East Asians. This study examined the relationship between erythemally weighted UV (UVEry) and vitamin-D-weighted UV (UVVitD) over Korea during 2004-2012. The temporal variations of the ratio (UVVitD/UVEry) were also analyzed, and the ratio as a function of UV index was applied to estimate the optimal UV exposure time. In summer, with high surface UV radiation, a short exposure time was sufficient for vitamin D production while limiting erythema, whereas in winter a longer exposure time was needed to maximize UV benefits while minimizing UV risks.

  3. Modern Optimization Methods in Minimum Weight Design of Elastic Annular Rotating Disk with Variable Thickness

    NASA Astrophysics Data System (ADS)

    Jafari, S.; Hojjati, M. H.

    2011-12-01

    Rotating disks mostly operate at high angular velocity, which produces large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks brings benefits such as lower dead weight and lower cost. This paper aims at finding an optimal disk thickness profile for minimum-weight design using simulated annealing (SA) and particle swarm optimization (PSO), two modern optimization techniques. In the semi-analytical approach used here, the radial domain of the disk is divided into a number of virtual sub-domains (rings), and the weight of the rings is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress is always less than the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weights obtained by the two methods are almost identical; the PSO method gives a profile with slightly less weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility compared with classical methods.
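
    A bare-bones PSO of this kind can be sketched as follows (ring areas, loads, and limits are invented; the study itself uses von Mises stresses from a semi-analytical disk model): each particle is a vector of ring thicknesses, and the yield constraint enters through a quadratic penalty.

```python
import random

random.seed(1)

# Hypothetical ring discretization of an annular disk: choose a thickness
# t_i for each of 4 rings to minimize weight, with a stress limit enforced
# through a quadratic penalty (all numbers are illustrative).
RHO_A = [2.0, 1.8, 1.5, 1.2]      # density x ring area: weight per thickness
F = [500.0, 420.0, 330.0, 240.0]  # per-ring load intensity
SIGMA_Y = 200.0                   # allowable stress
LO, HI = 0.5, 5.0                 # thickness bounds

def fitness(t):
    weight = sum(c * ti for c, ti in zip(RHO_A, t))
    over = sum(max(0.0, f / ti - SIGMA_Y) ** 2 for f, ti in zip(F, t))
    return weight + 1e3 * over

def pso(n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(RHO_A)
    pos = [[random.uniform(LO, HI) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(HI, max(LO, pos[i][d] + vel[i][d]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

t_opt = pso()
print([round(t, 2) for t in t_opt], round(fitness(t_opt), 2))
```

    The swarm settles near the thinnest rings that still keep every stress under the allowable value; SA would attack the same penalized objective with a cooling schedule instead of particle dynamics.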

  4. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of the sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. An ADP algorithm based on a single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to keep the sliding mode dynamics stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  5. Adaptive Path Control of Surface Ships in Restricted Waters.

    DTIC Science & Technology

    1980-08-01

    (Fragmentary OCR of the report's front matter: a list of figures giving optimal gains for the Tokyo Maru at H/T = ∞ and Fn = 0.116 under a random-walk disturbance model, an RMS cost J, and nomenclature entries for the yaw mass moment of inertia and its nondimensional form, the optimal control / weighted least-squares cost function J, the yaw added mass moment of inertia, and the Kalman-Bucy state estimator.)

  6. Leaf position optimization for step-and-shoot IMRT.

    PubMed

    De Gersem, W; Claus, F; De Wagter, C; Van Duyse, B; De Neve, W

    2001-12-01

    To describe the theoretical basis, the algorithm, and implementation of a tool that optimizes segment shapes and weights for step-and-shoot intensity-modulated radiation therapy delivered by multileaf collimators. The tool, called SOWAT (Segment Outline and Weight Adapting Tool), is applied to a set of segments, segment weights, and the corresponding dose distribution, computed by an external dose computation engine. SOWAT evaluates the effects of changing the position of each collimating leaf of each segment on an objective function, as follows. Changing a leaf position causes a change in the segment-specific dose matrix, which is calculated by a fast dose computation algorithm. A weighted sum of all segment-specific dose matrices provides the dose distribution and allows computation of the value of the objective function. Only leaf position changes that comply with the multileaf collimator constraints are evaluated. Leaf position changes that tend to decrease the value of the objective function are retained. After several possible positions have been evaluated for all collimating leaves of all segments, an external dose engine recomputes the dose distribution, based on the adapted leaf positions and weights, and the plan is evaluated. If the plan is accepted, a segment sequencer is used to make the prescription files for the treatment machine; otherwise, the user can restart SOWAT using the new set of segments, segment weights, and corresponding dose distribution. The implementation is illustrated using two example cases. The first example is a T1N0M0 supraglottic cancer case that was distributed as a multicenter planning exercise by investigators from Rotterdam, The Netherlands. The exercise involved a two-phase plan. Phase 1 involved the delivery of 46 Gy to a concave-shaped planning target volume (PTV) consisting of the primary tumor volume and the elective lymph nodal regions II-IV on both sides of the neck.
    Phase 2 involved a boost of 24 Gy to the primary tumor region only. SOWAT was applied to the Phase 1 plan, with parotid sparing as a planning goal. The second example is an ethmoid sinus cancer case, planned with the intent of sparing vision bilaterally. The median PTV prescription dose was 70 Gy, with a maximum dose constraint of 60 Gy to the optic pathway structures. The initial set of segments, segment weights, and corresponding dose distribution were obtained, respectively, by an anatomy-based segmentation tool, a segment weight optimization tool, and a differential scatter-air ratio dose computation algorithm as the external dose engine. For the supraglottic case, this resulted in a plan comparable to the plans obtained at the other institutes by forward or inverse planning techniques. After using SOWAT, the minimum PTV dose and PTV dose homogeneity increased, and the maximum dose to the spinal cord decreased from 38 Gy to 32 Gy. The left parotid mean dose decreased from 22 Gy to 19 Gy and the right parotid mean dose from 20 Gy to 18 Gy. For the ethmoid sinus case, leaf position optimization increased target homogeneity and improved sparing of the optic tracts. By using SOWAT, the plans improved with respect to all plan evaluation end points, compliance with the multileaf collimator constraints is guaranteed, and the treatment delivery time remains almost unchanged because no additional segments are created.
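
    The core loop, changing one leaf position at a time and keeping the move only if the objective improves, can be sketched in one dimension (toy prescription and unit-fluence segments; the real SOWAT evaluates 3-D dose matrices and full MLC constraints):

```python
# Minimal 1-D sketch of the SOWAT idea (hypothetical numbers): dose is a
# weighted sum of per-segment dose profiles; a leaf move changes only that
# segment's profile, so the objective can be re-evaluated cheaply.
N = 10
target = [0, 0, 2, 2, 2, 2, 2, 0, 0, 0]          # prescription per voxel

segments = [[1, 6], [3, 8]]                      # [left, right) leaf pair
weights = [1.0, 1.0]

def profile(seg):
    l, r = seg
    return [1.0 if l <= i < r else 0.0 for i in range(N)]

def objective(segs):
    dose = [sum(w * p for w, p in zip(weights, col))
            for col in zip(*(profile(s) for s in segs))]
    return sum((d - t) ** 2 for d, t in zip(dose, target))

def sowat_pass(segs):
    best = objective(segs)
    for s in segs:                                # each segment
        for j in (0, 1):                          # each leaf of the pair
            for step in (-1, 1):                  # candidate leaf moves
                s[j] += step
                ok = 0 <= s[0] < s[1] <= N        # collimator constraint
                val = objective(segs) if ok else float("inf")
                if val < best:
                    best = val                    # keep the improving move
                else:
                    s[j] -= step                  # revert
    return best

before = objective(segments)
after = sowat_pass(segments)
print(before, after, segments)
```

    A single pass here drives the toy objective to zero; in practice several passes alternate with full dose recomputation by the external engine.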

  7. Optimizing energy functions for protein-protein interface design.

    PubMed

    Sharabi, Oz; Yanover, Chen; Dekel, Ayelet; Shifman, Julia M

    2011-01-15

    Protein design methods have been originally developed for the design of monomeric proteins. When applied to the more challenging task of protein–protein complex design, these methods yield suboptimal results. In particular, they often fail to recapitulate favorable hydrogen bonds and electrostatic interactions across the interface. In this work, we aim to improve the energy function of the protein design program ORBIT to better account for binding interactions between proteins. By using the advanced machine learning framework of conditional random fields, we optimize the relative importance of all the terms in the energy function, attempting to reproduce the native side-chain conformations in protein–protein interfaces. We evaluate the performance of several optimized energy functions, each describes the van der Waals interactions using a different potential. In comparison with the original energy function, our best energy function (a) incorporates a much “softer” repulsive van der Waals potential, suitable for the discrete rotameric representation of amino acid side chains; (b) does not penalize burial of polar atoms, reflecting the frequent occurrence of polar buried residues in protein–protein interfaces; and (c) significantly up-weights the electrostatic term, attesting to the high importance of these interactions for protein–protein complex formation. Using this energy function considerably improves side chain placement accuracy for interface residues in a large test set of protein–protein complexes. Moreover, the optimized energy function recovers the native sequences of protein–protein interface at a higher rate than the default function and performs substantially better in predicting changes in free energy of binding due to mutations.

  8. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one; such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
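
    The constrained minimization behind the weighted-average lagged ensemble has a closed form, and a two-lead example reproduces the negative-weight phenomenon (covariance numbers invented: fast error growth with correlation 0.9):

```python
import numpy as np

# Error covariance across two lead times (illustrative numbers): the
# variance grows quickly with lead and the errors are highly correlated
# (r = 1.8 / (1 * 2) = 0.9).
C = np.array([[1.0, 1.8],
              [1.8, 4.0]])

# Unbiased weighted-average ensemble: minimize w^T C w subject to
# sum(w) = 1.  Lagrange solution: w = C^{-1} 1 / (1^T C^{-1} 1).
ones = np.ones(2)
w = np.linalg.solve(C, ones)
w /= w.sum()

mse_opt = w @ C @ w
mse_eq = 0.25 * C.sum()          # equal weights (0.5, 0.5)
print(w.round(3), round(mse_opt, 3), round(mse_eq, 3))
```

    Here the optimal weight on the longer lead is negative: the long-lead forecast is useful mainly for cancelling the correlated part of the short-lead error, and the weighted mean beats both the best single forecast and the equal-weight average.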

  9. Evidence-based recommendations for optimal dietary protein intake in older people: a position paper from the PROT-AGE Study Group.

    PubMed

    Bauer, Jürgen; Biolo, Gianni; Cederholm, Tommy; Cesari, Matteo; Cruz-Jentoft, Alfonso J; Morley, John E; Phillips, Stuart; Sieber, Cornel; Stehle, Peter; Teta, Daniel; Visvanathan, Renuka; Volpi, Elena; Boirie, Yves

    2013-08-01

    New evidence shows that older adults need more dietary protein than do younger adults to support good health, promote recovery from illness, and maintain functionality. Older people need to make up for age-related changes in protein metabolism, such as high splanchnic extraction and declining anabolic responses to ingested protein. They also need more protein to offset inflammatory and catabolic conditions associated with chronic and acute diseases that occur commonly with aging. With the goal of developing updated, evidence-based recommendations for optimal protein intake by older people, the European Union Geriatric Medicine Society (EUGMS), in cooperation with other scientific organizations, appointed an international study group to review dietary protein needs with aging (PROT-AGE Study Group). To help older people (>65 years) maintain and regain lean body mass and function, the PROT-AGE study group recommends average daily intake at least in the range of 1.0 to 1.2 g protein per kilogram of body weight per day. Both endurance- and resistance-type exercises are recommended at individualized levels that are safe and tolerated, and higher protein intake (ie, ≥ 1.2 g/kg body weight/d) is advised for those who are exercising and otherwise active. Most older adults who have acute or chronic diseases need even more dietary protein (ie, 1.2-1.5 g/kg body weight/d). Older people with severe kidney disease (ie, estimated GFR <30 mL/min/1.73 m(2)), but who are not on dialysis, are an exception to this rule; these individuals may need to limit protein intake. Protein quality, timing of ingestion, and intake of other nutritional supplements may be relevant, but evidence is not yet sufficient to support specific recommendations. Older people are vulnerable to losses in physical function capacity, and such losses predict loss of independence, falls, and even mortality. 
Thus, future studies aimed at pinpointing optimal protein intake in specific populations of older people need to include measures of physical function. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  10. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference-value standard for QT dispersion in ECGs of Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish an optimal weighted combination model, which is then verified and compared against the single models. The optimal weighted combination model reduces the prediction risk of any single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was then mapped precisely using kriging methods. Once the geographical factors of a particular area are known, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combination model, and the reference value anywhere in China can be obtained from the geographical distribution map.
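
    A minimal stand-in for the combination step (synthetic data; the actual study combines regression, principal-component, and neural-network predictions of QT dispersion from geographical factors): weight each model by its inverse mean-squared error, normalized so the weights sum to one.

```python
import random

random.seed(7)
n = 2000
truth = [random.uniform(20, 60) for _ in range(n)]   # e.g. QT dispersion, ms

# Three hypothetical predictors (stand-ins for the regression, principal
# component, and neural network models) with independent errors of
# different sizes.
sigmas = [1.0, 2.0, 3.0]
preds = [[t + random.gauss(0, s) for t in truth] for s in sigmas]

def mse(p):
    return sum((a - b) ** 2 for a, b in zip(p, truth)) / n

# Inverse-MSE combination weights, normalized to sum to one.
inv = [1.0 / mse(p) for p in preds]
w = [v / sum(inv) for v in inv]

combined = [sum(wi * p[i] for wi, p in zip(w, preds)) for i in range(n)]
print([round(wi, 3) for wi in w], round(mse(combined), 3))
```

    With independent errors the combined prediction has a lower mean-squared error than even the best single model, which is the sense in which the weighted combination "reduces the prediction risk of any single model."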

  11. On the nature of motor planning variables during arm pointing movement: Compositeness and speed dependence.

    PubMed

    Vu, Van Hoan; Isableu, Brice; Berret, Bastien

    2016-07-22

    The purpose of this study was to investigate the nature of the variables and rules underlying the planning of unrestrained 3D arm reaching. To identify whether the brain uses kinematic, dynamic and energetic values in an isolated manner or combines them in a flexible way, we examined the effects of speed variations upon the chosen arm trajectories during free arm movements. Within the optimal control framework, we identified which (possibly composite) optimality criterion best accounted for the empirical data. Fifteen participants were asked to perform free-endpoint reaching movements from a specific arm configuration at slow, normal and fast speeds. Experimental results revealed that prominent features of the observed motor behaviors, such as the chosen reach endpoint and the final arm posture, were significantly speed-dependent. Nevertheless, participants exhibited different arm trajectories and various degrees of speed dependence in their reaching behavior. These inter-individual differences were addressed using a numerical inverse optimal control methodology. Simulation results revealed that a weighted combination of kinematic, energetic and dynamic cost functions was required to account for all the critical features of the participants' behavior. Furthermore, no evidence was found for a speed-dependent tuning of these weights, suggesting subject-specific but speed-invariant weightings of kinematic, energetic and dynamic variables during the motor planning of free arm movements. This suggests that the inter-individual differences in arm trajectories and speed dependence were due not only to anthropometric singularities but also to critical differences in the composition of the subjective cost function. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Role of leptin in energy homeostasis in humans

    PubMed Central

    Rosenbaum, Michael; Leibel, Rudolph L

    2015-01-01

    The hyperphagia, low sympathetic nervous system tone, and decreased circulating concentrations of bioactive thyroid hormones that are common to states of congenital leptin deficiency and hypoleptinemia following and during weight loss suggest that the major physiological function of leptin is to signal states of negative energy balance and decreased energy stores. In weight-reduced humans, these phenotypes together with pronounced hypometabolism and increased parasympathetic nervous system tone create the optimal circumstance for weight regain. Based on the weight loss induced by leptin administration in states of leptin deficiency (obese) and observed similarity of phenotypes in states of congenital and dietary-induced states of hypoleptinemia (reduced obese), it has been suggested that exogenous leptin could potentially be useful in initiating, promoting, and sustaining weight reduction. However, the responses of human beings to exogenous leptin administration are dependent not only on extant energy stores but also on energy balance. Leptin administration to humans at usual weight has little, if any, effect on body weight while leptin administration during weight loss mitigates hunger, especially if given in supraphysiological doses during severe caloric restriction. Leptin repletion is most effective following weight loss by dietary restriction. In this state of weight stability but reduced energy stores, leptin at least partially reverses many of the metabolic, autonomic, neuroendocrine, and behavioral adaptations that favor weight regain. The major physiological function of leptin is to signal states of negative energy balance and decreased energy stores. Leptin, and pharmacotherapies affecting leptin signaling pathways, is likely to be most useful in sustaining weight loss. PMID:25063755

  13. Quantification of soil water retention parameters using multi-section TDR-waveform analysis

    NASA Astrophysics Data System (ADS)

    Baviskar, S. M.; Heimovaara, T. J.

    2017-06-01

Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining the water content of soil samples. In this study, we present an approach to estimate the water retention parameters of a sample that is initially saturated and then subjected to incremental decreases in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at daily intervals, under assumed hydrostatic conditions. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained by volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function with the apparent dielectric permittivity is fitted to a TDR wave-propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weights of water inside the sample at the first and final boundary heads of the multi-step drainage are fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for tall samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.

  14. Preparation of poly-L-lysine functionalized magnetic nanoparticles and their influence on viability of cancer cells

    NASA Astrophysics Data System (ADS)

    Khmara, I.; Koneracka, M.; Kubovcikova, M.; Zavisova, V.; Antal, I.; Csach, K.; Kopcansky, P.; Vidlickova, I.; Csaderova, L.; Pastorekova, S.; Zatovicova, M.

    2017-04-01

This study was aimed at the development of biocompatible amino-functionalized magnetic nanoparticles as carriers of specific antibodies able to detect and/or target cancer cells. Poly-L-lysine (PLL)-modified magnetic nanoparticle samples with different PLL/Fe3O4 content were prepared and tested to define the optimal PLL/Fe3O4 weight ratio. The samples were characterized for particle size and morphology (SEM, TEM and DLS) and surface properties (zeta potential measurements). The optimal PLL/Fe3O4 weight ratio of 1.0, based on both zeta potential and DLS measurements, was in agreement with the UV/VIS measurements. Magnetic nanoparticles with the optimal PLL content were conjugated with an antibody specific for the cancer biomarker carbonic anhydrase IX (CA IX), which is induced by hypoxia, a physiologic stress present in solid tumors and linked with aggressive tumor behavior. CA IX is localized on the cell surface with the antibody-binding epitope facing the extracellular space and is therefore suitable for antibody-based targeting of tumor cells. Here we showed that PLL/Fe3O4 magnetic nanoparticles exhibit cytotoxic activities in a cell type-dependent manner and bind to cells expressing CA IX when conjugated with the CA IX-specific antibody. These data support further investigations of the CA IX antibody-conjugated, magnetic field-guided/activated nanoparticles as tools in anticancer strategies.

  15. Optimization of process parameters for the production of collagen peptides from fish skin (Epinephelus malabaricus) using response surface methodology and its characterization.

    PubMed

    Hema, G S; Joshy, C G; Shyni, K; Chatterjee, Niladri S; Ninan, George; Mathew, Suseela

    2017-02-01

The study optimized the hydrolysis conditions for the production of fish collagen peptides from the skin of Malabar grouper (Epinephelus malabaricus) using response surface methodology. Hydrolysis was carried out with the enzymes pepsin, papain and a protease from bovine pancreas. The effects of process parameters, viz. pH, temperature, enzyme-substrate ratio and hydrolysis time, of the three different enzymes on the degree of hydrolysis were investigated. The optimum response of the degree of hydrolysis was estimated to be 10, 20 and 28% for pepsin, papain and protease, respectively. The functional properties of the developed product were analysed and showed changes in the properties from proteins to peptides. SDS-PAGE combined with MALDI-TOF was successfully applied to determine the molecular weight distribution of the hydrolysate. The electrophoretic pattern indicated that the molecular weights of the peptides formed by hydrolysis were nearly 2 kDa. MALDI-TOF spectral analysis showed that the developed hydrolysate contains peptides with molecular weights below 2 kDa.

  16. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm.

    PubMed

    Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian

    2016-01-01

With the continuous expansion of the cloud computing platform scale and the rapid growth of users and applications, how to efficiently use system resources to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out by using a hybrid hierarchical genetic algorithm (HHGA) for optimizing a radial basis function neural network (RBFNN). The AHPGD produces an aggregate load indicator for each virtual machine in the cloud, which serves as an input parameter of the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm: it uses the periodically predicted load values of the nodes, obtained from the AHPGD and the HHGA-optimized RBFNN, to calculate the corresponding node weights and update them continually. This retains the advantages, and avoids the shortcomings, of the static weighted round-robin algorithm.
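A dynamic weighted round-robin of the kind described can be sketched as follows. The weighting rule (weight inversely related to predicted load) and the smooth interleaving dispatcher are illustrative assumptions, not the paper's exact algorithm.

```python
def make_weights(predicted_loads, base=100):
    """Illustrative rule: a node with lower predicted load (in [0, 1])
    gets a larger scheduling weight. Periodically recomputing these
    weights from fresh load predictions makes the scheme dynamic."""
    return {n: max(1, round(base * (1.0 - load)))
            for n, load in predicted_loads.items()}

def weighted_round_robin(weights):
    """Smooth weighted round-robin: each step every node gains its weight
    in credit; the node with the most credit is dispatched and pays back
    the total weight, so dispatches are proportional yet interleaved."""
    credit = {n: 0.0 for n in weights}
    total = sum(weights.values())
    while True:
        for n in weights:
            credit[n] += weights[n]
        best = max(credit, key=credit.get)
        credit[best] -= total
        yield best
```

With weights {"a": 2, "b": 1} the dispatcher yields a, b, a, a, b, a, ... rather than bursts of the same node.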

  17. Optimization of flexible wing structures subject to strength and induced drag constraints

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1977-01-01

    An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
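The idea of converting the constrained problem into a series of unconstrained ones can be sketched with a plain exterior quadratic penalty; note this is a generic stand-in, not the extended penalty formulation the paper actually uses (which additionally stays well-behaved near the constraint boundary).

```python
def penalized(f, constraints, r):
    """Build an unconstrained objective f(x) + r * sum(max(0, g(x))^2),
    where each constraint g(x) <= 0 defines feasibility. Minimizing for
    an increasing sequence of r drives the minimizer toward the
    constrained optimum."""
    def phi(x):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in constraints)
    return phi
```

For example, minimizing a weight-like objective f(x) = x subject to a stress-like constraint 1 - x <= 0 leaves feasible points unpenalized and penalizes infeasible ones quadratically.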

  18. Optimal landing of a helicopter in autorotation

    NASA Technical Reports Server (NTRS)

    Lee, A. Y. N.

    1985-01-01

Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem. The OH-58A helicopter was used. Helicopter vertical and horizontal velocities, vertical and horizontal displacements, and the rotor angular speed were modeled. An empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resultant two-point boundary value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.
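The touchdown cost described above is simply a weighted sum of squared velocity components; a sketch with illustrative weights:

```python
def touchdown_cost(v_horizontal, v_vertical, w_h=1.0, w_v=1.0):
    """Weighted sum of the squared horizontal and vertical components of
    the helicopter velocity at touchdown (weight values illustrative)."""
    return w_h * v_horizontal ** 2 + w_v * v_vertical ** 2
```

The cost vanishes only for a zero-velocity touchdown, and the weights let the designer penalize vertical sink rate more heavily than forward speed, or vice versa.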

  19. Optimizing global liver function in radiation therapy treatment planning

    NASA Astrophysics Data System (ADS)

    Wu, Victor W.; Epelman, Marina A.; Wang, Hesheng; Romeijn, H. Edwin; Feng, Mary; Cao, Yue; Ten Haken, Randall K.; Matuszak, Martha M.

    2016-09-01

Liver stereotactic body radiation therapy (SBRT) patients differ in both pre-treatment liver function (e.g. due to degree of cirrhosis and/or prior treatment) and radiosensitivity, leading to high variability in potential liver toxicity with similar doses. This work investigates three treatment planning optimization models that minimize risk of toxicity: two consider both voxel-based pre-treatment liver function and local-function-based radiosensitivity with dose; one considers only dose. Each model optimizes a different objective function (varying in complexity of capturing the influence of dose on liver function) subject to the same dose constraints, and all are tested on 2D synthesized and 3D clinical cases. The normal-liver-based objective functions are the linearized equivalent uniform dose (ℓEUD) (conventional 'ℓEUD model'), the so-called perfusion-weighted ℓEUD (fEUD) (proposed 'fEUD model'), and post-treatment global liver function (GLF) (proposed 'GLF model'), predicted by a new liver-perfusion-based dose-response model. The resulting ℓEUD, fEUD, and GLF plans delivering the same target ℓEUD are compared with respect to their post-treatment function and various dose-based metrics. Voxel-based portal venous liver perfusion, used as a measure of local function, is computed using DCE-MRI. In the cases used in our experiments, the GLF plan preserves up to 4.6% (7.5%) more liver function than the fEUD (ℓEUD) plan does in 2D cases, and up to 4.5% (5.6%) in 3D cases. The GLF and fEUD plans worsen the ℓEUD of functional liver on average by 1.0 Gy and 0.5 Gy in 2D and 3D cases, respectively. Liver perfusion information can be used during treatment planning to minimize the risk of toxicity by improving expected GLF; the degree of benefit varies with perfusion pattern. Although fEUD model optimization is computationally inexpensive and often achieves better GLF than ℓEUD model optimization does, the GLF model directly optimizes a more clinically relevant metric and can further improve fEUD plan quality.
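As a rough illustration of a perfusion-weighted objective, the following computes a perfusion-weighted mean dose over normal-liver voxels. This simplification is an assumption for illustration only, not the paper's exact fEUD definition.

```python
def perfusion_weighted_dose(doses, perfusion):
    """Perfusion-weighted mean dose over normal-liver voxels: voxels with
    higher perfusion (better local function) contribute more, so sparing
    them lowers the objective. An illustrative stand-in for fEUD."""
    total = sum(perfusion)
    return sum(w * d for w, d in zip(perfusion, doses)) / total
```

With uniform perfusion this reduces to the plain mean dose; shifting perfusion weight onto high-dose voxels raises the objective, which is exactly the behavior that steers dose away from well-functioning tissue.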

  20. Tracking the Reorganization of Module Structure in Time-Varying Weighted Brain Functional Connectivity Networks.

    PubMed

    Schmidt, Christoph; Piper, Diana; Pester, Britta; Mierau, Andreas; Witte, Herbert

    2018-05-01

    Identification of module structure in brain functional networks is a promising way to obtain novel insights into neural information processing, as modules correspond to delineated brain regions in which interactions are strongly increased. Tracking of network modules in time-varying brain functional networks is not yet commonly considered in neuroscience despite its potential for gaining an understanding of the time evolution of functional interaction patterns and associated changing degrees of functional segregation and integration. We introduce a general computational framework for extracting consensus partitions from defined time windows in sequences of weighted directed edge-complete networks and show how the temporal reorganization of the module structure can be tracked and visualized. Part of the framework is a new approach for computing edge weight thresholds for individual networks based on multiobjective optimization of module structure quality criteria as well as an approach for matching modules across time steps. By testing our framework using synthetic network sequences and applying it to brain functional networks computed from electroencephalographic recordings of healthy subjects that were exposed to a major balance perturbation, we demonstrate the framework's potential for gaining meaningful insights into dynamic brain function in the form of evolving network modules. The precise chronology of the neural processing inferred with our framework and its interpretation helps to improve the currently incomplete understanding of the cortical contribution for the compensation of such balance perturbations.

  1. Two-spoke placement optimization under explicit specific absorption rate and power constraints in parallel transmission at ultra-high field.

    PubMed

    Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas

    2015-06-01

The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date, however, spokes placement optimization combined with magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and, in parallel for all of them, optimizes the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2(∗)-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. A New Analysis of the Two Classical ZZ Ceti White Dwarfs GD 165 and Ross 548. II. Seismic Modeling

    NASA Astrophysics Data System (ADS)

    Giammichele, N.; Fontaine, G.; Brassard, P.; Charpinet, S.

    2016-03-01

We present the second of a two-part seismic analysis of the bright, hot ZZ Ceti stars GD 165 and Ross 548. In this second part, we report the results of detailed searches in parameter space for identifying an optimal model for each star that can account well for the observed periods, while being consistent with the spectroscopic constraints derived in our first paper. We find optimal models for each target that reproduce the six observed periods well within ∼0.3% on average. We also find that there is a sensitivity on the core composition for Ross 548, while there is practically none for GD 165. Our optimal model of Ross 548, with its thin envelope, indeed shows weight functions for some confined modes that extend relatively deep into the interior, thus explaining the sensitivity of the period spectrum on the core composition in that star. In contrast, our optimal seismic model of its spectroscopic sibling, GD 165 with its thick envelope, does not trap/confine modes very efficiently, and we find weight functions for all six observed modes that do not extend into the deep core, hence accounting for the lack of sensitivity in that case. Furthermore, we exploit a posteriori the observed multiplet structure, which we ascribe to rotation. We are able to map the rotation profile in GD 165 (Ross 548) over the outermost ∼20% (∼5%) of its radius, and we find that the profile is consistent with solid-body rotation.

  3. Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms

    PubMed Central

    Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2014-01-01

This study presents a fault detection of roller bearings through signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of the bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. By examining the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity towards a bearing defect could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of the IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system. PMID:24368701
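The kurtosis indicator tracked in this study is straightforward to compute; a minimal sketch:

```python
def kurtosis(signal):
    """Pearson kurtosis m4 / m2^2: around 3 for Gaussian noise and larger
    when the signal contains impulsive, defect-induced peaks, which is
    why it serves as a bearing-fault indicator."""
    n = len(signal)
    mu = sum(signal) / n
    m2 = sum((x - mu) ** 2 for x in signal) / n   # second central moment
    m4 = sum((x - mu) ** 4 for x in signal) / n   # fourth central moment
    return m4 / m2 ** 2
```

A single large spike injected into an otherwise smooth signal sharply raises the fourth-moment numerator, and hence the kurtosis, relative to the clean signal.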

  4. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.

  5. Optimal network modification for spectral radius dependent phase transitions

    NASA Astrophysics Data System (ADS)

    Rosen, Yonatan; Kirsch, Lior; Louzoun, Yoram

    2016-09-01

The dynamics of contact processes on networks are often determined by the spectral radius of the networks' adjacency matrices. A decrease of the spectral radius can prevent the outbreak of an epidemic, or impact the synchronization among systems of coupled oscillators. The spectral radius is thus tightly linked to network dynamics and function. As such, finding the minimal change in network structure necessary to reach the intended spectral radius is important theoretically and practically. Given contemporary big data resources such as large-scale communication or social networks, this problem should be solved with a low runtime complexity. We introduce a novel method for the minimal decrease in edge weights required to reach a given spectral radius. The problem is formulated as a convex optimization problem, where a global optimum is guaranteed. The method can be easily adjusted to an efficient discrete removal of edges. We introduce a variant of the method that finds the optimal decrease with a focus on vertex weights. The proposed algorithm is exceptionally scalable, solving the problem for real networks of tens of millions of edges in a short time.
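The spectral radius being targeted can be estimated with plain power iteration; a sketch assuming a nonnegative adjacency matrix given as a list of lists (this illustrates the quantity, not the paper's convex optimization method):

```python
def spectral_radius(adj, iters=200):
    """Power-iteration estimate of the spectral radius of a nonnegative
    adjacency matrix; for such matrices the Perron-Frobenius dominant
    eigenvalue equals the spectral radius."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # max-norm growth factor
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam
```

Uniformly scaling all edge weights by a factor c scales the spectral radius by c, the simplest instance of the weight-decrease/radius-decrease relation the paper optimizes.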

  6. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  7. Integrated optimization of planetary rover layout and exploration routes

    NASA Astrophysics Data System (ADS)

    Lee, Dongoo; Ahn, Jaemyung

    2018-01-01

    This article introduces an optimization framework for the integrated design of a planetary surface rover and its exploration route that is applicable to the initial phase of a planetary exploration campaign composed of multiple surface missions. The scientific capability and the mobility of a rover are modelled as functions of the science weight fraction, a key parameter characterizing the rover. The proposed problem is formulated as a mixed-integer nonlinear program that maximizes the sum of profits obtained through a planetary surface exploration mission by simultaneously determining the science weight fraction of the rover, the sites to visit and their visiting sequences under resource consumption constraints imposed on each route and collectively on a mission. A solution procedure for the proposed problem composed of two loops (the outer loop and the inner loop) is developed. The results of test cases demonstrating the effectiveness of the proposed framework are presented.

  8. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
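The shared preconditioned-gradient-descent form of SIRT and SQS can be illustrated on a toy RWLS problem. The plain diagonal preconditioner below is a simplified stand-in for the paper's SIRT/SQS scalings, not either algorithm's exact form.

```python
def rwls_pgd(A, W, b, beta=0.0, iters=500):
    """Diagonally preconditioned gradient descent for the RWLS criterion
    0.5*(Ax-b)'W(Ax-b) + 0.5*beta*||x||^2, with A a list-of-lists matrix
    and W a per-measurement weight vector."""
    m, n = len(A), len(A[0])
    # Diagonal of the quadratic criterion's Hessian, used as preconditioner.
    D = [sum(W[i] * A[i][j] ** 2 for i in range(m)) + beta for j in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * W[i] * r[i] for i in range(m)) + beta * x[j]
             for j in range(n)]
        x = [x[j] - g[j] / D[j] for j in range(n)]
    return x
```

On a consistent, well-conditioned system this iteration converges to the weighted least squares solution; the difference between SIRT- and SQS-style methods lies mainly in how the diagonal scaling is constructed.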

  9. Added Value of Assessing Adnexal Masses with Advanced MRI Techniques

    PubMed Central

    Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.

    2015-01-01

    This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542

  10. Tunneling and speedup in quantum optimization for permutation-symmetric problems

    DOE PAGES

    Muthukrishnan, Siddharth; Albash, Tameem; Lidar, Daniel A.

    2016-07-21

Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Lastly, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.
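A minimal sketch of a perturbed Hamming weight cost of the kind analyzed, with a barrier whose location and height are hypothetical parameters chosen for illustration:

```python
def hamming_weight(x):
    """Number of 1 bits in the computational-basis bitstring x."""
    return bin(x).count("1")

def perturbed_cost(x, barrier_at=None, barrier_height=2):
    """Base cost |x| (the Hamming weight) plus a barrier of the given
    height at one chosen weight; barrier parameters are hypothetical."""
    w = hamming_weight(x)
    if w == barrier_at:
        return w + barrier_height
    return w
```

Because the cost depends on x only through its Hamming weight, it is invariant under any permutation of the bits, which is the qubit permutation symmetry referred to above; the global minimum sits at the all-zeros string.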

  12. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly on the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost when compared to the weights determined by the Gross Liftoff Weight minimization case. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost determined by the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that gives maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. Both values are close to the 3000 pounds per square inch of the Space Shuttle Main Engine.

  13. Birth weight and infant growth: optimal infant weight gain versus optimal infant weight.

    PubMed

    Xiong, Xu; Wightkin, Joan; Magnus, Jeanette H; Pridjian, Gabriella; Acuna, Juan M; Buekens, Pierre

    2007-01-01

Infant growth assessment often focuses on "optimal" infant weights and lengths at specific ages, while de-emphasizing infant weight gain. The objective of this study was to examine infant growth patterns by measuring infant weight gain relative to birth weight. We conducted this study based on data collected in a prospective cohort study including 3,302 births with follow-up examinations of infants between the ages of 8 and 18 months. All infants were participants in the Louisiana State Women, Infant and Children Supplemental Food Program between 1999 and 2001. Growth was assessed by the infant weight gain percentage (IWG%, defined as infant weight gain divided by birth weight) as well as by mean z-scores and percentiles for weight-for-age, length-for-age, and weight-for-length calculated from growth charts published by the U.S. Centers for Disease Control (CDC). An inverse relationship was noted between birth weight category and IWG% (from 613.9% for infants with birth weights <1500 g to 151.3% for infants with birth weights of 4000 g or more). In contrast, low birth weight infants had lower weight-for-age and weight-for-length z-scores and percentiles compared to normal birth weight infants according to CDC growth charts. Although low birth weight infants had lower anthropometric measures compared to a national reference population, they showed significant catch-up growth; high birth weight infants showed significant growth slow-down. We suggest that growth assessments should compare infants' anthropometric data to their own previous growth measures as well as to a reference population. Further studies are needed to identify optimal ranges of infant weight gain.
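The IWG% measure defined above is a one-line computation:

```python
def iwg_percent(attained_weight_g, birth_weight_g):
    """Infant weight gain percentage: weight gain divided by birth weight,
    expressed as a percentage."""
    return 100.0 * (attained_weight_g - birth_weight_g) / birth_weight_g
```

Because the denominator is the infant's own birth weight, two infants reaching the same attained weight from different birth weights get very different IWG% values, which is what produces the inverse relationship with birth weight category reported above.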

  14. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  15. An approximation function for frequency constrained structural optimization

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.

    1989-01-01

    The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations for frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft's or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.

  16. Fault detection for piecewise affine systems with application to ship propulsion systems.

    PubMed

    Yang, Ying; Linlin, Li; Ding, Steven X; Qiu, Jianbin; Peng, Kaixiang

    2017-09-09

    In this paper, the design approach of non-synchronized diagnostic observer-based fault detection (FD) systems is investigated for piecewise affine processes via continuous piecewise Lyapunov functions. Considering that the dynamics of piecewise affine systems can differ considerably from region to region, weighting matrices are used to weight the residual of each region, so as to optimize fault detectability. A numerical example and a case study on a ship propulsion system are presented to demonstrate the effectiveness of the proposed results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Structural Mass Saving Potential of a 5-MW Direct-Drive Generator Designed for Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L

    As wind turbine blade diameters and tower heights increase to capture more energy in the wind, higher structural loads result in more structural support material, increasing the cost of scaling. Weight reductions in the generator transfer to overall cost savings for the system. Additive manufacturing facilitates a design-for-functionality approach, thereby removing traditional manufacturing constraints and labor costs. The most feasible additive manufacturing technology identified for large, direct-drive generators in this study is powder-binder jetting of a sand cast mold. A parametric finite element analysis optimization study is performed, optimizing for mass and deformation. Also, topology optimization is employed for each parameter-optimized design. The optimized U-beam spoked web design results in a 24 percent reduction in structural mass of the rotor and a 60 percent reduction in radial deflection.

  18. Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.

    PubMed

    Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei

    2015-01-01

    Utilization of high energy photons (>10 MV) with an optimal weight using a mixed energy technique is a practical way to generate a homogenous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study presents a model for estimating this optimal weight for day-to-day clinical usage. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placement of opposed wedged-pair isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; these calculations were then repeated ten times with incorporation of 18 MV photons (ten percent increase in weight per step) for each individual patient. For each calculation, two indexes, the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV,95%IDL), were measured from the DVH data, and the normalized values were plotted in a graph. The optimal weight of 18 MV photons was defined as the intersection point of the Dmax and VCTV,95%IDL graphs. To create a model predicting this optimal weight, multiple linear regression analysis was used based on breast and tangential field parameters. The best-fitting model for prediction of the 18 MV photon optimal weight in breast radiotherapy using the mixed energy technique incorporated chest wall separation plus central lung distance (adjusted R2=0.776). In conclusion, this study presents a model for the estimation of optimal beam weighting in breast radiotherapy using the mixed photon energy technique for routine day-to-day clinical usage.
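The intersection-point definition of the optimal 18 MV weight can be found by linear interpolation between the sampled weight steps. The sketch below assumes hypothetical normalized Dmax and VCTV,95%IDL curves rather than measured DVH data:

```python
import numpy as np

def optimal_weight(weights, dmax_norm, vctv_norm):
    """Locate the 18 MV weight where the normalized Dmax and VCTV,95%IDL
    curves cross, interpolating linearly between sampled weight steps."""
    diff = np.asarray(dmax_norm, dtype=float) - np.asarray(vctv_norm, dtype=float)
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    if crossings.size == 0:
        raise ValueError("curves do not intersect on the sampled grid")
    i = int(crossings[0])
    t = diff[i] / (diff[i] - diff[i + 1])  # fraction of the step to the crossing
    return weights[i] + t * (weights[i + 1] - weights[i])
```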

  19. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link (RVFL) networks without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, theoretical analysis and justification on how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638
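The minimum-norm least-squares initialization of the ensemble weights mentioned above can be sketched with the Moore-Penrose pseudoinverse; the data here is illustrative, not from the paper:

```python
import numpy as np

def init_ensemble_weights(base_outputs: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Minimum-norm least-squares initialization: among all weight vectors
    minimizing ||P w - y||, the pseudoinverse picks the smallest-norm one."""
    return np.linalg.pinv(base_outputs) @ targets

# illustrative base-model outputs (rows: samples, columns: base models)
P = np.array([[1.0, 0.9, 0.0],
              [2.0, 2.2, 0.1],
              [3.0, 2.9, -0.1]])
y = np.array([1.0, 2.0, 3.0])
w = init_ensemble_weights(P, y)  # starting point for the ARPSO refinement
```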

  20. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    PubMed

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link (RVFL) networks without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, theoretical analysis and justification on how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system.

  1. Adaptive nearly optimal control for a class of continuous-time nonaffine nonlinear systems with inequality constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong

    2017-01-01

    State inequality constraints have hardly been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without the requirement for a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Optimization lighting layout based on gene density improved genetic algorithm for indoor visible light communications

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Wang, Xin; Chen, Yong; Kong, Deqian; Xia, Peijie

    2017-05-01

    For an indoor visible light communication system, the layout of the LED lamps affects the uniformity of the received power on the communication plane. In order to find an optimized lighting layout that meets both lighting and communication needs, a gene density genetic algorithm (GDGA) is proposed. In the GDGA, a gene indicates a pair of abscissa and ordinate coordinates of an LED, and an individual represents an LED layout in the room. A segmented crossover operation and a gene mutation strategy based on gene density are put forward to make the received power on the communication plane more uniform and to increase the population's diversity. A weighted-differences function between individuals is designed as the fitness function of the GDGA for preserving populations carrying useful LED layout genetic information and for ensuring the global convergence of the GDGA. Compared with the square layout and the circular layout, the optimized layout achieved by the GDGA increases power uniformity by 83.3%, 83.1% and 55.4%, respectively. Furthermore, the convergence of the GDGA is verified against an evolutionary algorithm (EA). Experimental results show that the GDGA can quickly find an approximation of the optimal layout.
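The received-power uniformity on which a layout is judged can be sketched with a line-of-sight Lambertian model. The room dimensions, LED mounting height, Lambertian order, and the min/mean figure of merit below are all assumptions for illustration, not the paper's exact fitness function (which additionally weights differences between individuals):

```python
import numpy as np

def received_power_grid(led_xy, room=(5.0, 5.0), h=2.15, m=1, pt=1.0, n=25):
    """Line-of-sight Lambertian received power over an n x n grid of the
    communication plane for LEDs mounted a height h above it (receiver
    area and gains folded into pt for simplicity)."""
    xs = np.linspace(0.0, room[0], n)
    ys = np.linspace(0.0, room[1], n)
    gx, gy = np.meshgrid(xs, ys)
    power = np.zeros_like(gx)
    for lx, ly in led_xy:
        d2 = (gx - lx) ** 2 + (gy - ly) ** 2 + h ** 2
        cos_angle = h / np.sqrt(d2)  # irradiance = incidence angle for a level receiver
        power += pt * (m + 1) / (2.0 * np.pi * d2) * cos_angle ** (m + 1)
    return power

def uniformity(power):
    """One common figure of merit: minimum over mean received power."""
    return float(power.min() / power.mean())
```

A GA individual is then just the list of `(x, y)` pairs, scored by `uniformity(received_power_grid(individual))`.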

  3. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  4. Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Tang, Zhenmin; Liu, Qing

    2017-05-01

    Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves low rank property via the new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to be the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
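The weighted Schatten-p norm used in WSPQ generalizes the nuclear norm by raising the singular values to the power p and weighting them individually; a minimal sketch:

```python
import numpy as np

def weighted_schatten_p(X, weights, p):
    """Weighted Schatten-p norm: (sum_i w_i * sigma_i**p) ** (1/p),
    where sigma_i are the singular values of X (largest first)."""
    sigma = np.linalg.svd(X, compute_uv=False)
    w = np.asarray(weights, dtype=float)
    return float((w * sigma ** p).sum() ** (1.0 / p))

# with unit weights, p=1 recovers the nuclear norm used by standard LRR
X = np.diag([3.0, 2.0, 1.0])
nuclear = weighted_schatten_p(X, [1, 1, 1], 1.0)  # 6.0
```

Assigning smaller weights to the large leading singular values penalizes them less, which is why this approximates the rank function more closely than the nuclear norm.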

  5. Optimization techniques for integrating spatial data

    USGS Publications Warehouse

    Herzfeld, U.C.; Merriam, D.F.

    1995-01-01

    Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
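The first approach, tuning map-combination weights with the Nelder-Mead downhill simplex, can be sketched as follows. The synthetic maps, target, and squared-error misfit are assumed stand-ins for the paper's map-comparison function; as the abstract notes, the result depends on the first guess because the solution is not unique:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical predictor maps (flattened grids) and a target variable
rng = np.random.default_rng(0)
maps = rng.normal(size=(3, 100))            # e.g. structure, isopach, petrophysics
target = 0.5 * maps[0] + 0.3 * maps[1] + 0.2 * maps[2]

def misfit(w):
    """Squared error between the weighted map combination and the target."""
    return float(np.sum((w @ maps - target) ** 2))

# a geologic first guess starts the downhill simplex
w0 = np.array([0.4, 0.4, 0.2])
res = minimize(misfit, w0, method="Nelder-Mead")
```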

  6. Thermal management in closed incubators: New software for assessing the impact of humidity on the optimal incubator air temperature.

    PubMed

    Delanaud, Stéphane; Decima, Pauline; Pelletier, Amandine; Libert, Jean-Pierre; Durand, Estelle; Stephan-Blanchard, Erwan; Bach, Véronique; Tourneux, Pierre

    2017-08-01

    Low-birth-weight (LBW) neonates are nursed in closed incubators to prevent transcutaneous water loss. The impact of relative humidity (RH) on the optimal incubator air temperature setting has not been studied. On the basis of a clinical cohort study, we modelled all the ambient parameters influencing body heat losses and gains. The algorithm quantifies how a change in RH affects the air temperature required to maintain optimal thermal conditions in the incubator. Twenty-three neonates (gestational age (GA): 30.0 [28.9-31.6] weeks) were included. A 20% increase and a 20% decrease in the RH induced a change in air temperature of between -1.51 and +1.85°C for a simulated 650g neonate (GA: 26 weeks), between -1.66 and +1.87°C for a 1000g neonate (GA: 31 weeks), and between -1.77 and +1.97°C for a 2000g neonate (GA: 33 weeks) (p<0.001). According to regression analyses, the optimal incubator air temperature = a + b·(relative humidity) + c·(age) + d·(weight) (p<0.001). We have developed new mathematical equations for calculating the optimal temperature for the incubator air as a function of the latter's relative humidity. The software constitutes a decision support tool for improving patient care in routine clinical practice. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
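The regression form reported above is straightforward to evaluate once fitted. In the sketch below the coefficient values are illustrative placeholders, not the study's fitted values:

```python
def optimal_air_temperature(rh_pct, age_days, weight_g,
                            a=36.0, b=-0.08, c=-0.05, d=-0.001):
    """Optimal incubator air temperature (deg C) in the paper's linear form
    T = a + b*RH + c*age + d*weight. The coefficients here are illustrative
    placeholders, NOT the fitted values from the study."""
    return a + b * rh_pct + c * age_days + d * weight_g

# with these placeholder coefficients, raising RH by 20 percentage points
# shifts the required air temperature by 1.6 deg C, comparable in magnitude
# to the roughly -1.5 to +2.0 deg C shifts reported in the abstract
```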

  7. Online probabilistic learning with an ensemble of forecasts

    NASA Astrophysics Data System (ADS)

    Thorey, Jean; Mallet, Vivien; Chaussin, Christophe

    2016-04-01

    Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (derived from the weighted ensemble) may introduce a bias, so that minimizing the CRPS does not produce the optimal weights. We therefore propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated with the members in the forecast empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, either for the application or for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
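The plain CRPS of a weighted empirical distribution — the form the authors note can be biased, motivating their cluster-based unbiased version — can be sketched via the standard kernel (energy) representation:

```python
import numpy as np

def crps_weighted(members, weights, obs):
    """CRPS of a weighted ensemble via the kernel form
    E|X - y| - 0.5 * E|X - X'|, with X drawn from the weighted members."""
    x = np.asarray(members, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize to a probability distribution
    term1 = np.sum(w * np.abs(x - obs))
    term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return float(term1 - term2)

# a degenerate ensemble collapses to the absolute error
err = crps_weighted([2.0, 2.0], [0.5, 0.5], 3.0)  # 1.0
```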

  8. Optimal trajectories for hypersonic launch vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas

    1992-01-01

    In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.

  9. Multidisciplinary design optimization of aircraft wing structures with aeroelastic and aeroservoelastic constraints

    NASA Astrophysics Data System (ADS)

    Jung, Sang-Young

    Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.

  10. A comparative study and application of continuously variable transmission to a single main rotor heavy lift helicopter

    NASA Astrophysics Data System (ADS)

    Hameer, Sameer

    Rotorcraft transmission design is limited by empirical weight trends that are proportional to the power/torque raised to the two-thirds power, coupled with the relative inexperience industry has with the employment of variable speed transmissions in heavy lift helicopters of the order of 100,000 lbs gross weight and 30,000 installed horsepower. The advanced rotorcraft transmission program objectives are to reduce transmission weight by at least 25%, reduce sound pressure levels by at least 10 dB, achieve a 5000 hr mean time between removals, and incorporate split torque technology in the rotorcraft drivetrains of the future. The major obstacle that challenges rotorcraft drivetrain design is the selection, design, and optimization of a variable speed transmission with the goal of achieving a 50% reduction in rotor speed and the ability to handle high torque with lightweight gears, as opposed to using a two-speed transmission, which has inherent structural problems and is highly unreliable due to the embodiment of a traction-type transmission and a complex clutch and brake system. This thesis selects a non-traction pericyclic continuously variable transmission (P-CVT) as the best approach for a single main rotor heavy lift helicopter. The objective is to target and overcome the above-mentioned obstacle in drivetrain design. Overcoming this obstacle provides advancement in the state of the art of drivetrain design over the existing planetary and split torque transmissions currently used in helicopters. The goal of the optimization process was to decrease weight, decrease noise, increase efficiency, and increase safety and reliability. The objective function minimized the weight, and the major constraint is the tooth bending stress of the facegears.
The most important parameters of the optimization process are weight, maintainability, and reliability, which are cross-functionally related to each other and to the torques and operating speeds. The analysis of the split torque type P-CVT achieved weight reductions of 42.5% and 40.7% over planetary and split torque transmissions, respectively. In addition, a 19.5 dB sound pressure level reduction was achieved using active gear struts, and the use of a fabricated steel truss-like housing provided higher maintainability and reliability, lower cost, and lower weight than the cast magnesium housing currently employed in helicopters. The static finite element analysis of the split torque type P-CVT, both 2-D and 3-D, yielded stresses below the allowable bending stress of the material. The goal of the finite element analysis is to verify that the designed product has met its functional requirements. The safety assessment of the split torque type P-CVT yielded a 99% probability of mission success based on a Monte Carlo simulation using stochastic Petri net analysis and a failure hazard analysis. This was followed by an FTA/RBD analysis, which yielded an overall system failure rate of 140.35 failures per million hours, and a preliminary certification plan and certification timeline were developed. The use of spherical facegears and pericyclic kinematics has advanced the state of the art in drivetrain design, primarily in the reduction of weight and noise coupled with high safety, reliability, and efficiency.

  11. Undermining and Strengthening Social Networks through Network Modification

    PubMed Central

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-01-01

    Social networks have well documented effects at the individual and aggregate level. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and consequently achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper helps both to situate the existing work on network interventions and to open up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as to show how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention. PMID:27703198

  12. Multi-objective and Perishable Fuzzy Inventory Models Having Weibull Life-time With Time Dependent Demand, Demand Dependent Production and Time Varying Holding Cost: A Possibility/Necessity Approach

    NASA Astrophysics Data System (ADS)

    Pathak, Savita; Mondal, Seema Sarkar

    2010-10-01

    A multi-objective inventory model of a deteriorating item has been developed with a Weibull rate of decay, time dependent demand, demand dependent production, and time varying holding cost, allowing shortages in fuzzy environments for non-integrated and integrated businesses. Here, the objective is to maximize the profit from different deteriorating items under a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. The compromise solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights have been assigned. The models are illustrated with numerical examples, and the results of models with crisp and fuzzy weights are compared. The result for the model assuming an integrated business is obtained by using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory cost is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function by using the credibility measure of a fuzzy event and taking the fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples and the results are compared.

  13. Undermining and Strengthening Social Networks through Network Modification.

    PubMed

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-10-05

    Social networks have well documented effects at the individual and aggregate level. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and consequently achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper helps both to situate the existing work on network interventions and to open up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as to show how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention.

  14. Undermining and Strengthening Social Networks through Network Modification

    NASA Astrophysics Data System (ADS)

    Mellon, Jonathan; Yoder, Jordan; Evans, Daniel

    2016-10-01

    Social networks have well-documented effects at the individual and aggregate levels. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and thereby achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g., edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e., how a network responds to interventions). The framework outlined in this paper both situates the existing work on network interventions and opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as showing how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention.
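
    As a concrete, highly simplified illustration of objective-driven intervention, the sketch below greedily removes nodes to minimize the size of the largest connected component of a toy network. The adjacency list, the fragmentation objective, and the greedy heuristic are all stand-ins for illustration, not the authors' ERGM-based simulation:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after removing a set of nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:  # breadth-first search of one component
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def greedy_node_removal(adj, k):
    """Remove k nodes, each time the one whose removal most reduces the
    largest connected component (the intervention objective here)."""
    removed = set()
    for _ in range(k):
        candidates = [n for n in adj if n not in removed]
        best = min(candidates,
                   key=lambda n: largest_component(adj, removed | {n}))
        removed.add(best)
    return removed

# Toy 'collaboration' network: two clusters bridged by node 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4],
       4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
picked = greedy_node_removal(adj, 1)
print(picked, largest_component(adj, picked))
```

    The framework in the abstract generalizes this in every direction: arbitrary objectives instead of fragmentation, edge-weight changes instead of node removal, and an empirically estimated recovery mechanism instead of a static graph.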

  15. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. 
In previous work, parallel processing was applied to the subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that this gives a large increase in the efficiency of the DMSO algorithm.

  16. Protons, osmolytes, and fitness of internal milieu for protein function.

    PubMed

    Somero, G N

    1986-08-01

    The composition of the intracellular milieu shows striking similarities among widely different species. Only certain values of intracellular pH, values that generally reflect alphastat regulation, and only narrow ranges of inorganic ion concentrations are found in the cytoplasm of the cells of most animals, plants, and microorganisms. In water-stressed organisms only a few types of low-molecular-weight organic molecules (osmolytes) are accumulated. These highly conserved characteristics of the intracellular fluids reflect the need to maintain critical features of macromolecules within narrow ranges optimal for life. For proteins these features include maintaining adequate rates of catalysis, a high level of regulatory responsiveness, and a precise balance between stability and lability of structure (tertiary conformation, subunit assembly, and multiprotein complexes). The optimal values for these functional and structural features of proteins often lie near the midrange of possible values for these properties, and only under specific conditions of intracellular pH, ionic strength, and osmolyte composition are these optimal midrange values conserved. In dormant cells the departure of solution conditions from values that are optimal for protein function and structure may be instrumental in reducing or shutting down metabolic functions. Seen from a broad evolutionary perspective, the evolution of the intracellular milieu is an important complement to macromolecular evolution. In certain instances appropriate modifications of the internal milieu may reduce the need for adaptive amino acid replacements in proteins.

  17. A Semi-linear Backward Parabolic Cauchy Problem with Unbounded Coefficients of Hamilton–Jacobi–Bellman Type and Applications to Optimal Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addona, Davide, E-mail: d.addona@campus.unimib.it

    2015-08-15

    We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.

  18. Optimal Control-Based Adaptive NN Design for a Class of Nonlinear Discrete-Time Block-Triangular Systems.

    PubMed

    Liu, Yan-Jun; Tong, Shaocheng

    2016-11-01

    In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems are in a block-triangular multi-input-multi-output pure-feedback structure, i.e., state and input couplings as well as nonaffine functions appear in every equation of each subsystem. The design objective is to provide a control scheme that not only guarantees the stability of the systems but also achieves optimal control performance. The main contribution of this paper is that it achieves, for the first time, optimal performance for such a class of systems. Owing to the interactions among subsystems, designing an optimal control signal is a difficult task. The design ideas are as follows: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function are approximated by an action network and a critic network, respectively; and 3) an optimal control signal is constructed with weight update rules designed based on a gradient descent method. The stability of the systems is proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.
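
    The gradient-descent weight update mentioned above can be illustrated on a drastically simplified, linear "critic": the weights follow the negative gradient of a squared approximation error. The data, learning rate, and linear form are assumptions for illustration only, not the paper's neural architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a one-layer 'critic' with weights w approximates a
# utility signal u(x); the weights follow the gradient of the squared error,
# mirroring the gradient-descent weight update rule described in the abstract.
def train_critic(X, u, alpha=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        err = X @ w - u                   # prediction error on the batch
        w -= alpha * X.T @ err / len(u)   # gradient-descent weight update
    return w

X = rng.normal(size=(100, 3))             # made-up feature data
true_w = np.array([1.0, -2.0, 0.5])
u = X @ true_w                            # target utility values
w = train_critic(X, u)
print(np.round(w, 2))
```

    In the actual scheme both the action and critic networks are nonlinear approximators and the target is a strategic utility function, but the update direction is the same negative-gradient step.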

  19. Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen

    2013-08-01

    Optical satellite communication, with the advantages of broad bandwidth, large capacity, and low power consumption, breaks the bottleneck of traditional microwave satellite communication. The formation of a space-based information system built on high-performance optical inter-satellite communication, together with the realization of global seamless coverage and mobile terminal access, is the necessary trend in the development of optical satellite communication. Considering the resources, missions, and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial intelligent optimization is put forward. For multiple relay satellites, user satellites, and optical antennas, and multiple missions with different priority weights, resources are scheduled reasonably by two operations: "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight serves as a parameter of the fitness function, and the scheduling plan is optimized by a genetic algorithm. In a simulated scenario including 3 relay satellites with 6 optical antennas, 12 user satellites, and 30 missions, the results reveal that the algorithm obtains satisfactory efficiency and performance, and that the resource scheduling model and the optimization algorithm are suitable for the multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problem.

  20. Network anomaly detection system with optimized DS evidence theory.

    PubMed

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and without considering the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Through four kinds of experiments, we find that our network anomaly detection model achieves a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly.
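
    A minimal sketch of the weighting idea, assuming Shafer's classical discounting as the weight mechanism (the paper's exact ODS construction is not reproduced): each sensor's basic probability assignment is discounted by a weight reflecting its past accuracy, and the results are fused with Dempster's rule over a two-element frame {N (normal), A (anomaly)}:

```python
def discount(bpa, w):
    """Shafer discounting: scale committed mass by w, move the rest to Θ."""
    out = {k: w * v for k, v in bpa.items() if k != 'Θ'}
    out['Θ'] = 1.0 - w + w * bpa.get('Θ', 0.0)
    return out

def combine(m1, m2):
    """Dempster's rule over the frame {N, A}, with Θ = {N, A}."""
    inter = lambda a, b: a if a == b or b == 'Θ' else (b if a == 'Θ' else None)
    raw, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            c = inter(a, b)
            if c is None:
                conflict += va * vb           # contradictory evidence
            else:
                raw[c] = raw.get(c, 0.0) + va * vb
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

# Hypothetical sensor readings and weights (weight = past prediction accuracy).
s1 = discount({'N': 0.2, 'A': 0.7, 'Θ': 0.1}, w=0.9)  # historically accurate sensor
s2 = discount({'N': 0.6, 'A': 0.3, 'Θ': 0.1}, w=0.5)  # less reliable sensor
fused = combine(s1, s2)
print({k: round(v, 3) for k, v in fused.items()})
```

    The fused belief leans toward the anomaly hypothesis favored by the more reliable sensor, which is the effect the per-sensor weighting is meant to produce.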

  1. Network Anomaly Detection System with Optimized DS Evidence Theory

    PubMed Central

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and without considering the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Through four kinds of experiments, we find that our network anomaly detection model achieves a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly. PMID:25254258

  2. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    PubMed

    Li, Lian-Hui; Mo, Rong

    2015-01-01

    The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization performs poorly and is difficult to apply, so a production task queue optimization method based on multi-attribute evaluation is proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weight, a BP neural network is used to determine the judge importance degree, and a trapezoid fuzzy scale-rough AHP considering the judge importance degree is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate the method's correctness and feasibility.
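
    The ranking step can be sketched as follows, with an illustrative 3-task, 3-indicator matrix and assumed weights (the CRITIC/AHP weight derivation is omitted); relative entropy replaces Euclidean distance when measuring separation from the ideal and anti-ideal solutions, as the abstract describes:

```python
import numpy as np

def topsis_entropy(matrix, weights):
    """TOPSIS variant using a generalized relative entropy as the
    separation measure instead of Euclidean distance (illustrative form)."""
    m = matrix / matrix.sum(axis=0)          # column-normalize (benefit criteria)
    v = m * weights                          # weighted normalized values
    ideal, anti = v.max(axis=0), v.min(axis=0)
    eps = 1e-12

    def rel_entropy(p, q):
        # Extended KL divergence, valid for non-normalized vectors.
        return np.sum(p * np.log((p + eps) / (q + eps)) + (q - p))

    d_plus = np.array([rel_entropy(ideal, row) for row in v])
    d_minus = np.array([rel_entropy(anti, row) for row in v])
    return d_minus / (d_plus + d_minus)      # closeness: higher = better

# Hypothetical task-by-indicator matrix (rows: tasks, columns: indicators).
tasks = np.array([[7.0, 5.0, 9.0],
                  [4.0, 8.0, 6.0],
                  [9.0, 6.0, 7.0]])
weights = np.array([0.5, 0.2, 0.3])          # assumed balanced weights
score = topsis_entropy(tasks, weights)
print(np.argsort(-score))                    # queue order, best task first
```

    In the full method the weight vector would come from the CRITIC/BP-AHP balance model rather than being assumed, and cost-type indicators would be normalized in the opposite direction.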

  3. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop

    PubMed Central

    Li, Lian-hui; Mo, Rong

    2015-01-01

    The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization performs poorly and is difficult to apply, so a production task queue optimization method based on multi-attribute evaluation is proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weight, a BP neural network is used to determine the judge importance degree, and a trapezoid fuzzy scale-rough AHP considering the judge importance degree is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate the method's correctness and feasibility. PMID:26414758

  4. Protein intake and exercise for optimal muscle function with aging: Recommendations from the ESPEN Expert Group

    PubMed Central

    Deutz, Nicolaas E. P.; Bauer, Jurgen M.; Barazzoni, Rocco; Biolo, Gianni; Boirie, Yves; Bosy-Westphal, Anja; Cederholm, Tommy; Cruz-Jentoft, Alfonso; Krznaric, Zeljko; Nair, K. Sreekumaran; Singer, Pierre; Teta, Daniel; Tipton, Kevin; Calder, Philip C.

    2014-01-01

    The aging process is associated with gradual and progressive loss of muscle mass along with lowered strength and physical endurance. This condition, sarcopenia, has been widely observed with aging in sedentary adults. Regular aerobic and resistance exercise programs have been shown to counteract most aspects of sarcopenia. In addition, good nutrition, especially adequate protein and energy intake, can help limit and treat age-related declines in muscle mass, strength, and functional abilities. Protein nutrition in combination with exercise is considered optimal for maintaining muscle function. With the goal of providing recommendations for health care professionals to help older adults sustain muscle strength and function into older age, the European Society for Clinical Nutrition and Metabolism (ESPEN) hosted a Workshop on Protein Requirements in the Elderly, held in Dubrovnik on November 24 and 25, 2013. Based on the evidence presented and discussed, the following recommendations are made: (1) for healthy older people, the diet should provide at least 1.0 to 1.2 g protein/kg body weight/day; (2) for older people who are malnourished or at risk of malnutrition because of acute or chronic illness, the diet should provide 1.2 to 1.5 g protein/kg body weight/day, with even higher intake for individuals with severe illness or injury; and (3) daily physical activity or exercise (resistance training, aerobic exercise) should be undertaken by all older people, for as long as possible. PMID:24814383
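
    The quoted intake ranges reduce to a trivial per-person calculation; the helper below simply multiplies the recommended g/kg bounds from the abstract by body weight:

```python
def protein_range_g_per_day(weight_kg, at_risk=False):
    """Daily protein range from the ESPEN recommendations quoted above:
    1.0-1.2 g/kg/day for healthy older adults, 1.2-1.5 g/kg/day for those
    who are malnourished or at risk because of acute or chronic illness."""
    low, high = (1.2, 1.5) if at_risk else (1.0, 1.2)
    return low * weight_kg, high * weight_kg

print(protein_range_g_per_day(70))                 # healthy 70 kg adult
print(protein_range_g_per_day(70, at_risk=True))   # same adult, at risk
```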

  5. Optimal guidance for the space shuttle transition

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.

    1972-01-01

    A guidance method for the space shuttle's transition from hypersonic entry to subsonic cruising flight is presented. The method evolves from a numerical trajectory optimization technique in which kinetic energy and total energy (per unit weight) replace velocity and time in the dynamic equations. This allows the open end-time problem to be transformed to one of fixed terminal energy. In its ultimate form, E-Guidance obtains energy balance (including dynamic-pressure-rate damping) and path length control by angle-of-attack modulation and cross-range control by roll angle modulation. The guidance functions also form the basis for a pilot display of instantaneous maneuver limits and destination. Numerical results illustrate the E-Guidance concept and the optimal trajectories on which it is based.

  6. Basal buffer systems for a newly glycosylated recombinant human interferon-β with biophysical stability and DoE approaches.

    PubMed

    Kim, Nam Ah; Song, Kyoung; Lim, Dae Gon; Hada, Shavron; Shin, Young Kee; Shin, Sangmun; Jeong, Seong Hoon

    2015-10-12

    The purpose of this study was to develop a basal buffer system for a biobetter version of recombinant human interferon-β 1a (rhIFN-β 1a), termed R27T, to optimize its biophysical stability. The protein was pre-screened in solution as a function of pH (2-11) using differential scanning calorimetry (DSC) and dynamic light scattering (DLS). According to the results, its experimental pI and optimal pH range were 5.8 and 3.6-4.4, respectively. A design of experiments (DoE) approach was developed as a practical tool to aid formulation studies as a function of pH (2.9-5.7), buffer (phosphate, acetate, citrate, and histidine), and buffer concentration (20 mM and 50 mM). This method employed a weight-based procedure to interpret complex data sets and to identify the critical factors representing protein stability. The factors used were Tm, enthalpy, and relative helix content, obtained by DSC and Fourier transform infrared spectroscopy (FT-IR). Although the weights varied across the three responses, the objective functions from a set of experimental designs based on four buffers were highest in 20 mM acetate buffer at pH 3.6 among all 19 scenarios tested. Size exclusion chromatography (SEC) was adopted to investigate accelerated storage stability and refine the optimal pH, since the low pH was not patient-compliant. Interestingly, relative helix content and storage stability (monomer remaining) increased with pH and were highest at pH 4.0. On the other hand, relative helix content and thermodynamic stability decreased at pH 4.2 and 4.4, suggesting protein aggregation issues. Therefore, the optimized basal buffer system for the novel biobetter was proposed to be 20 mM acetate buffer at pH 3.8±0.2. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
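
    The maximum likelihood estimation rule underlying these behavioral predictions is standard: each cue is weighted by its reliability (inverse variance), and the fused estimate has lower variance than either unisensory estimate. A small numeric sketch with made-up auditory and visual location estimates:

```python
import numpy as np

def mle_fuse(x_a, var_a, x_v, var_v):
    """Reliability-weighted (MLE) fusion of two independent cues."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v   # reliabilities = inverse variances
    w_a = r_a / (r_a + r_v)               # weight on the auditory estimate
    x_av = w_a * x_a + (1.0 - w_a) * x_v  # fused location estimate
    var_av = 1.0 / (r_a + r_v)            # fused variance (always smaller)
    return x_av, var_av

# Auditory location estimate: 10 deg with variance 4; visual: 6 deg with variance 1.
x_av, var_av = mle_fuse(10.0, 4.0, 6.0, 1.0)
print(round(x_av, 2), round(var_av, 2))
```

    The fused estimate sits closer to the more reliable visual cue, and its variance is below both unisensory variances; the study's point is that attention and report demands shift the empirical weights away from these purely bottom-up predictions.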

  8. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. The root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
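
    A minimal sketch (not the authors' Matlab/GPU code) of the analytical step: solve for beamlet weights by least squares, drop beamlets whose weight is non-positive, and re-solve, iterating as in the multi-level scheme described. The beamlet matrix here is random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(3)
beamlets = rng.random((50, 8))        # columns: pre-calculated beamlet doses
true_w = np.array([0.9, 0.0, 0.4, 0.0, 1.2, 0.0, 0.7, 0.0])
target = beamlets @ true_w            # target dose vector (exactly realizable)

active = np.arange(beamlets.shape[1])
while True:
    # Analytical least-squares solve over the currently active beamlets.
    w, *_ = np.linalg.lstsq(beamlets[:, active], target, rcond=None)
    if (w > 0).all() or len(active) == 0:
        break
    active = active[w > 0]            # keep only positively weighted beamlets

fluence = np.zeros(beamlets.shape[1])
fluence[active] = w
print(np.round(fluence, 2))
```

    Because physical fluence cannot be negative, the positivity filter plus re-solve is what turns the unconstrained least-squares answer into a deliverable weighting; the paper runs 7 such levels over a much larger basis.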

  9. Biphasic Finite Element Modeling Reconciles Mechanical Properties of Tissue-Engineered Cartilage Constructs Across Testing Platforms.

    PubMed

    Meloni, Gregory R; Fisher, Matthew B; Stoeckl, Brendan D; Dodge, George R; Mauck, Robert L

    2017-07-01

    Cartilage tissue engineering is emerging as a promising treatment for osteoarthritis, and the field has progressed toward utilizing large animal models for proof of concept and preclinical studies. Mechanical testing of the regenerative tissue is an essential outcome for functional evaluation. However, testing modalities and constitutive frameworks used to evaluate in vitro grown samples differ substantially from those used to evaluate in vivo derived samples. To address this, we developed finite element (FE) models (using FEBio) of unconfined compression and indentation testing, modalities commonly used for such samples. We determined the model sensitivity to tissue radius and subchondral bone modulus, as well as its ability to estimate material parameters using the built-in parameter optimization tool in FEBio. We then sequentially tested agarose gels of 4%, 6%, 8%, and 10% weight/weight using a custom indentation platform, followed by unconfined compression. Similarly, we evaluated the ability of the model to generate material parameters for living constructs by evaluating engineered cartilage. Juvenile bovine mesenchymal stem cells were seeded (2 × 10^7 cells/mL) in 1% weight/volume hyaluronic acid hydrogels and cultured in a chondrogenic medium for 3, 6, and 9 weeks. Samples were planed and tested sequentially in indentation and unconfined compression. The model successfully completed parameter optimization routines for each testing modality for both acellular and cell-based constructs. Traditional outcome measures and the FE-derived outcomes showed significant changes in material properties during the maturation of engineered cartilage tissue, capturing dynamic changes in functional tissue mechanics. These outcomes were significantly correlated with one another, establishing this FE modeling approach as a singular method for the evaluation of functional engineered and native tissue regeneration, both in vitro and in vivo.

  10. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.
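
    The restart idea generalizes beyond MSC/NASTRAN: a gradient-based search on a non-convex objective can stall in a local minimum, and re-starting from a point constructed from an earlier result can reach a lower weight. A one-variable toy illustration (the quartic "weight function" and the restart rule are invented for the demo):

```python
def weight(x):
    """Toy non-convex weight function with a local and a global minimum."""
    return 0.05 * x**4 - 0.1 * x**3 - 0.3 * x**2 + 1.0

def grad(x):
    return 0.2 * x**3 - 0.3 * x**2 - 0.6 * x

def descend(x, lr=0.05, steps=2000):
    """Plain gradient descent, standing in for a gradient-based optimizer."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x1 = descend(-2.0)               # first run stalls in the nearby local minimum
x2 = descend(0.5 * (x1 + 4.0))   # restart from a point built from the previous design
print(round(weight(x1), 3), round(weight(x2), 3))
assert weight(x2) < weight(x1)   # the restarted run finds a lighter design
```

    The paper's procedure is analogous in many variables: the new starting point is assembled from design variable values of previous optimization analyses rather than chosen at random.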

  11. Reconstruction of Atmospheric Tracer Releases with Optimal Resolution Features: Concentration Data Assimilation

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali

    2015-04-01

    The fast-growing urbanization, industrialization, and military developments increase the risk to the human environment and ecology. This has been realized in several past mortality incidents, for instance the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India), and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat and exposure to hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability, and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and the dissipative nature of the concentration. Thus, assimilation techniques are desired that can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within a Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. 
This is based on an adjoint representation of the source-receptor relationship and the use of a weight function that expresses a priori information about the unknown releases as apparent to the monitoring network. The properties of the weight function provide optimal data resolution and model resolution to the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994), and the Fusion Field Trials (2007). In the case of point-release experiments, the source parameters are mostly retrieved close to the true source parameters with least error. Primarily, the proposed technique overcomes two major difficulties incurred in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge about the measurement and background errors required by Bayesian inference-based techniques, which must otherwise be assumed hypothetically when no prior knowledge exists.
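
    One common form of weighted minimum-norm inversion consistent with this description (the paper's exact weight-function construction is not reproduced) is s = W Aᵀ (A W Aᵀ)⁻¹ μ, where A collects the adjoint source-receptor sensitivities, W is the diagonal weight function over the source grid, and μ the measurements. With only a few observations the maximum of the estimate need not coincide with the true cell, so the sketch below only checks that the estimate reproduces the measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_grid = 4, 30
A = rng.random((n_obs, n_grid))           # adjoint sensitivities (stand-in values)
true_s = np.zeros(n_grid)
true_s[12] = 5.0                          # point release at grid cell 12
mu = A @ true_s                           # exact (noise-free) measurements

w = np.ones(n_grid)                       # a priori weight function (uniform here)
W = np.diag(w)
# Weighted minimum-norm estimate: s = W A^T (A W A^T)^{-1} mu
s_hat = W @ A.T @ np.linalg.solve(A @ W @ A.T, mu)

print(int(np.argmax(s_hat)))              # candidate point-source location
print(np.allclose(A @ s_hat, mu))         # estimate reproduces the measurements
```

    In the paper's technique the weight function is not uniform: it encodes what the monitoring network can actually see, which is what gives the retrieval its resolution and stability properties.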

  12. CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Bevill, M.

    1995-01-01

    Cylinder Optimization of Rings, Skin, and Stringers with Tolerance (CORSSTOL) sensitivity is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need and a sound basis for choosing the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL first optimizes a stringer-stiffened cylinder for weight without tolerances; the skin and stringer geometry are varied, subject to stress and buckling constraints. The same analysis and optimization routines are then used to minimize the maximum-material-condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided with the weight and constraint sensitivities of each design variable, so the designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. This program and methodology provide designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.
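
The least-favorable-tolerance search can be sketched by enumerating the corners of the tolerance box. The panel weight model and all coefficients below are invented for illustration; they are not CORSSTOL's equations.

```python
import itertools

# Illustrative weight model for a stiffened panel: weight grows with skin
# thickness t and stringer area a. Coefficients are invented, not CORSSTOL's.
def panel_weight(t, a):
    return 1000.0 * t + 50.0 * a

# Maximum-material-condition weight: search every corner of the tolerance
# box for the least favorable combination, as CORSSTOL's second pass does.
def worst_case_weight(nominal, tol):
    corners = itertools.product(*[(v - d, v + d) for v, d in zip(nominal, tol)])
    return max(panel_weight(*c) for c in corners)

nominal, tol = (0.1, 2.0), (0.01, 0.1)
print(worst_case_weight(nominal, tol))  # worst-case weight over all corners
```

For a monotone weight model the worst corner is obvious, but corner enumeration also covers non-monotone responses such as buckling margins.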

  13. An optimal structure for a 34-meter millimeter-wave center-fed BWG antenna: The Cross-Box concept

    NASA Technical Reports Server (NTRS)

    Chuang, K. L.

    1988-01-01

    An approach to the design of the planned NASA/JPL 34 m elevation-over-azimuth (Az-El) antenna structure at the Venus site (DSS-13) is presented. The antenna structural configuration accommodates a large (2.44 m) beam waveguide (BWG) tube centrally routed through the reflector-alidade structure, an elevation wheel design, and an optimal structural geometry. The design encompasses a cross-box elevation wheel-reflector base substructure that preserves homology while satisfying many constraints, such as structure weight, surface tolerance, stresses, natural frequency, and various functional constraints. The functional requirements are set to ensure that microwave performance at millimeter wavelengths is adequate. The cross-box configuration was modeled, optimized, and found to satisfy all DSN HEF baseline antenna specifications. In addition, the structure design was conceptualized and analyzed with an emphasis on preserving the structure envelope and keeping modifications relative to the HEF antennas to a minimum, thus enabling the transferability of the BWG technology for future retrofitting. Good performance results were obtained.

  14. Optimization of Feature Weights in the Voting Feature Intervals 5 Algorithm Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril

    2017-12-01

    The Particle Swarm Optimization (PSO) algorithm is used in this research to optimize the feature weights of the Voting Feature Intervals 5 (VFI5) algorithm, yielding a combined PSO-VFI5 model. Optimizing the feature weights on the diabetes and dyspepsia data is considered important because the data are closely tied to patients' lives; inaccuracy in determining the most dominant feature weights could have fatal consequences. With the PSO algorithm, accuracy in fold 1 increased from 92.31% to 96.15%, a gain of 3.8%; in fold 2, accuracy remained at 92.52%, the same as with VFI5 alone; and in fold 3, accuracy increased from 85.19% to 96.29%, a gain of 11%. Over the three trials, total accuracy increased by 14%. In general, the Particle Swarm Optimization algorithm succeeded in increasing the accuracy in several folds; it can therefore be concluded that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.
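
A minimal PSO sketch of feature-weight optimization follows. The fitness function here is a placeholder quadratic (distance to a hypothetical ideal weighting), standing in for the VFI5 classification accuracy that the paper actually optimizes; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_features, n_iters = 20, 4, 60
w_inertia, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

# Placeholder fitness: negative squared distance to a hypothetical "ideal"
# feature weighting. In the paper's setting this would instead be the VFI5
# classification accuracy on the diabetes/dyspepsia data.
ideal = np.array([0.9, 0.1, 0.5, 0.3])
def fitness(wv):
    return -np.sum((wv - ideal) ** 2)

pos = rng.random((n_particles, n_features))  # feature-weight vectors in [0, 1]
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print(np.round(gbest, 2))  # best feature-weight vector found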

  15. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  16. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  17. Multidisciplinary Shape Optimization of a Composite Blended Wing Body Aircraft

    NASA Astrophysics Data System (ADS)

    Boozer, Charles Maxwell

    A multidisciplinary shape optimization tool coupling aerodynamics, structures, and performance was developed for battery-powered aircraft. Utilizing high-fidelity computational fluid dynamics analysis tools and a structural wing weight tool, coupled under the multidisciplinary feasible optimization architecture, the tool modifies aircraft geometry to optimize the aircraft's range or endurance. The developed tool is applied to three geometries: a hybrid blended wing body delta-wing UAS, the ONERA M6 wing, and a modified ONERA M6 wing. First, the optimization problem is presented with the objective function, constraints, and design vector. Next, the tool's architecture and the analysis tools that are utilized are described. Finally, various optimizations are described and their results analyzed for all test geometries. Results show that less computationally expensive inviscid optimizations yield positive performance improvements using planform, airfoil, and three-dimensional degrees of freedom. From the results obtained through a series of optimizations, it is concluded that the newly developed tool is both effective at improving performance and serves as a platform ready to receive additional performance modules, further improving its computational design support potential.

  18. An initiative in multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    Described is a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The committee, which has been named IRASC (Integrated Rotorcraft Analysis Steering Committee), has defined two principal foci for the activity: a white paper which sets forth the goals and plans of the effort; and a rotor design project which will validate the basic constituents, as well as the overall design methodology for multidisciplinary optimization. The optimization formulation is described in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, some significant progress has been made, principally in the areas of single discipline optimization. Results are given which represent accomplishments in rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.

  19. An initiative in multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1988-01-01

    Described is a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The committee, which has been named IRASC (Integrated Rotorcraft Analysis Steering Committee), has defined two principal foci for the activity: a white paper which sets forth the goals and plans of the effort; and a rotor design project which will validate the basic constituents, as well as the overall design methodology for multidisciplinary optimization. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, some significant progress has been made, principally in the areas of single discipline optimization. Results are given which represent accomplishments in rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.

  20. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) sizing in a photovoltaic-generation-integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) on fifteen benchmark functions, it is adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first obtains the optimal battery output power on an hourly basis, and the second obtains the optimal BESS capacity while considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best at mitigating the voltage rise problem. PMID:25054184
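
The opposition-based learning step used to enhance the firefly algorithm can be sketched as follows, here applied to population initialization on the sphere benchmark function; all bounds and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
lb, ub, n, dim = -5.0, 5.0, 10, 3

# Sphere function, a standard entry in benchmark suites like the paper's.
def f(x):
    return np.sum(x ** 2, axis=-1)

# Opposition-based initialization: evaluate each random candidate and its
# "opposite" point lb + ub - x, and keep the better of the two. (EOFA also
# adds an inertia weight to the firefly move, omitted in this sketch.)
x = rng.uniform(lb, ub, (n, dim))
x_opp = lb + ub - x
keep_x = f(x) <= f(x_opp)
pop = np.where(keep_x[:, None], x, x_opp)

# Every kept individual is at least as fit as both candidates it came from.
print(bool(np.all(f(pop) <= np.minimum(f(x), f(x_opp)))))  # True
```

The same opposite-point trick can be reapplied during iterations (generation jumping), which is where much of the reported speed-up typically comes from.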

  1. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) sizing in a photovoltaic-generation-integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) on fifteen benchmark functions, it is adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first obtains the optimal battery output power on an hourly basis, and the second obtains the optimal BESS capacity while considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best at mitigating the voltage rise problem.

  2. Alternative derivation of an exchange-only density-functional optimized effective potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, D. P.

    2007-10-15

    An alternative derivation of the exchange-only density-functional optimized effective potential equation is given. It is shown that the localized Hartree-Fock-common energy denominator Green's function approximation (LHF-CEDA) for the density-functional exchange potential proposed independently by Della Sala and Goerling [J. Chem. Phys. 115, 5718 (2001)] and Gritsenko and Baerends [Phys. Rev. A 64, 42506 (2001)] can be derived as an approximation to the OEP exchange potential in a similar way that the KLI approximation [Phys. Rev. A 45, 5453 (1992)] was derived. An exact expression for the correction term to the LHF-CEDA approximation can thus be found. The correction term can be expressed in terms of the first-order perturbation-theory many-electron wave function shift when the Kohn-Sham Hamiltonian is subjected to a perturbation equal to the difference between the density-functional exchange potential and the Hartree-Fock nonlocal potential, expressed in terms of the Kohn-Sham orbitals. An explicit calculation shows that the density weighted mean of the correction term is zero, confirming that the LHF-CEDA approximation can be interpreted as a mean-field approximation. The corrected LHF-CEDA equation and the optimized effective potential equation are shown to be identical, with information distributed differently between terms in the equations. For a finite system the correction term falls off at least as fast as 1/r^4 for large r.

  3. Alternative derivation of an exchange-only density-functional optimized effective potential

    NASA Astrophysics Data System (ADS)

    Joubert, D. P.

    2007-10-01

    An alternative derivation of the exchange-only density-functional optimized effective potential equation is given. It is shown that the localized Hartree-Fock common energy denominator Green’s function approximation (LHF-CEDA) for the density-functional exchange potential proposed independently by Della Sala and Görling [J. Chem. Phys. 115, 5718 (2001)] and Gritsenko and Baerends [Phys. Rev. A 64, 42506 (2001)] can be derived as an approximation to the OEP exchange potential in a similar way that the KLI approximation [Phys. Rev. A 45, 5453 (1992)] was derived. An exact expression for the correction term to the LHF-CEDA approximation can thus be found. The correction term can be expressed in terms of the first-order perturbation-theory many-electron wave function shift when the Kohn-Sham Hamiltonian is subjected to a perturbation equal to the difference between the density-functional exchange potential and the Hartree-Fock nonlocal potential, expressed in terms of the Kohn-Sham orbitals. An explicit calculation shows that the density weighted mean of the correction term is zero, confirming that the LHF-CEDA approximation can be interpreted as a mean-field approximation. The corrected LHF-CEDA equation and the optimized effective potential equation are shown to be identical, with information distributed differently between terms in the equations. For a finite system the correction term falls off at least as fast as 1/r^4 for large r.

  4. An Optimization-Based Approach to Injector Element Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, deltaP(sub f), oxidizer pressure drop, deltaP(sub o), combustor length, L(sub comb), and full cone swirl angle, theta (for the swirl element), or impingement half-angle, alpha (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. 
Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. Finally, combining results from both elements to simulate a trade study, thrust-to-weight trends are illustrated and examined in detail.
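
The desirability-function approach described above can be sketched as individual ramps combined by a weighted geometric mean. The response values, limits, and weights below are invented for illustration and are not method i's actual correlations.

```python
import math

# Larger-is-better desirability ramp: 0 below lo, 1 above hi, linear between.
def desirability_max(y, lo, hi):
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

# Smaller-is-better ramp (e.g. for heat flux, relative weight, or cost).
def desirability_min(y, lo, hi):
    return min(1.0, max(0.0, (hi - y) / (hi - lo)))

# Weighted geometric mean combines the individual desirabilities; unequal
# weights emphasize some responses over others, as in the trade studies.
def composite(ds, weights):
    s = sum(weights)
    return math.prod(d ** (wt / s) for d, wt in zip(ds, weights))

# Invented response values and limits: ERE up, wall heat flux down, weight down.
d = [desirability_max(0.97, 0.90, 1.00),
     desirability_min(30.0, 10.0, 50.0),
     desirability_min(1.2, 1.0, 2.0)]
print(round(composite(d, [1.0, 1.0, 1.0]), 3))  # 0.654
```

Because the geometric mean is zero whenever any single desirability is zero, a design that violates one hard constraint is rejected outright, which is the main reason this combination rule is preferred over an arithmetic average.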

  5. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
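
A minimal sketch of the shrinkage behavior such a Bayesian weighting produces, assuming a stated probability is pooled with a Beta prior as if it summarized n hypothetical observations; the parameterization is an illustrative simplification, not the paper's model.

```python
# A stated probability p is pooled with a Beta(a, b) prior as if it were the
# observed frequency from n hypothetical trials; the posterior mean shrinks
# extreme probabilities toward the prior mean, giving the inverse-S pattern
# (small p overweighted, large p underweighted). Parameters are illustrative.
def weighted_probability(p, a=1.0, b=1.0, n=10.0):
    return (a + n * p) / (a + b + n)

for p in (0.01, 0.5, 0.99):
    print(p, round(weighted_probability(p), 3))
```

With the uniform Beta(1, 1) prior, 0.01 is pulled up toward 0.09 and 0.99 down toward 0.91, while 0.5 is left untouched, reproducing the qualitative over/underweighting effects the abstract describes.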

  6. Cancer Cachexia: Beyond Weight Loss.

    PubMed

    Bruggeman, Andrew R; Kamal, Arif H; LeBlanc, Thomas W; Ma, Joseph D; Baracos, Vickie E; Roeland, Eric J

    2016-11-01

    Cancer cachexia is a multifactorial syndrome characterized by skeletal muscle loss leading to progressive functional impairment. Despite the ubiquity of cachexia in clinical practice, prevention, early identification, and intervention remain challenging. The impact of cancer cachexia on quality of life, treatment-related toxicity, physical function, and mortality are well established; however, establishing a clinically meaningful definition has proven challenging because of the focus on weight loss alone. Attempts to more comprehensively define cachexia through body composition, physical functioning, and molecular biomarkers, while promising, are yet to be routinely incorporated into clinical practice. Pharmacologic agents that have not been approved by the US Food and Drug Administration but that are currently used in cancer cachexia (ie, megestrol, dronabinol) may improve weight but not outcomes of interest such as muscle mass, physical activity, or mortality. Their routine use is limited by adverse effects. For the practicing oncologist, early identification and management of cachexia is critical. Oncologists must recognize cachexia beyond weight loss alone, focusing instead on body composition and physical functioning. In fact, becoming emaciated is a late sign of cachexia that characterizes its refractory stage. Given that cachexia is a multifactorial syndrome, it requires early identification and polymodal intervention, including optimal cancer therapy, symptom management, nutrition, exercise, and psychosocial support. Consequently, oncologists have a role in ensuring that these resources are available to their patients. In addition, in light of the promising investigational agents, it remains imperative to refer patients with cachexia to clinical trials so that available options can be expanded to effectively treat this pervasive problem.

  7. Simulation of fruit-set and trophic competition and optimization of yield advantages in six Capsicum cultivars using functional-structural plant modelling.

    PubMed

    Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P

    2011-04-01

    Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment with six Capsicum cultivars characterized by different fruit weight and fruit-set was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. Source and sink strength of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to plant topological structure established from the measured data as inputs. Parameter optimization was performed using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the size of the fruit, the larger the variation in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means a higher demand for assimilates. Temporal heterogeneity of fruit-set affected both the number and the yield of fruit. The simulation study showed that heterogeneity of fruit-set could be reduced by different approaches: for example, increasing source strength; decreasing vegetative sink strength, the source-sink ratio for fruit-set, or flower appearance rate; and harvesting individual fruits earlier, before full ripeness. Simulation results showed that, when source strength was increased or vegetative sink strength decreased, fruit-set and fruit weight increased. 
However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source and vegetative sink strength on fruit-set and fruit weight. When the source-sink ratio at fruit-set decreased, the larger number of fruit retained on the plant increased competition for assimilates with the vegetative organs. Therefore, total plant and vegetative dry weights decreased, especially for large-fruited cultivars. The optimization study showed that temporal heterogeneity of fruit-set and ripening was predicted to be reduced when fruits were harvested earlier. Furthermore, there was a 20% increase in the number of extra fruit set.
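
The source-sink fruit-set rule can be caricatured with a toy sequential simulation in which each fruit that sets adds its own sink demand; the thresholds and demands are invented numbers, not fitted GREENLAB parameters.

```python
# Toy sequential fruit-set: each step one new flower appears and sets fruit
# only if the current source/sink ratio exceeds a cultivar threshold; a set
# fruit then adds its own sink demand. Numbers are invented, not fitted
# GREENLAB parameters.
def simulate_fruit_set(source, veg_sink, fruit_sink, threshold, steps=20):
    n_fruit = 0
    for _ in range(steps):
        ratio = source / (veg_sink + n_fruit * fruit_sink)
        if ratio >= threshold:
            n_fruit += 1
    return n_fruit

# Large-fruited cultivar: larger fruit sink and a higher source-sink
# threshold for fruit-set, hence fewer fruits retained.
small = simulate_fruit_set(source=10.0, veg_sink=2.0, fruit_sink=0.5, threshold=1.0)
large = simulate_fruit_set(source=10.0, veg_sink=2.0, fruit_sink=2.0, threshold=2.0)
print(small, large)  # prints 17 2
```

Even this caricature reproduces the abstract's qualitative finding: a higher source-sink threshold and larger per-fruit demand sharply reduce the number of fruit retained.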

  8. The feasibility of manual parameter tuning for deformable breast MR image registration from a multi-objective optimization perspective.

    PubMed

    Pirpinia, Kleopatra; Bosman, Peter A N; Loo, Claudette E; Winter-Warnars, Gonneke; Janssen, Natasja N Y; Scholten, Astrid N; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2017-06-23

    Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. 
Here, patient-specific, multi-objective weight optimization is needed; it obtains a mean TRE of 13.6 mm without guidance information, reduced to 7.3 mm with guidance information, while also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
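
The Pareto front underlying the weight-tuning study can be illustrated with a minimal non-dominated filter over two minimization objectives (e.g. image dissimilarity and deformation magnitude); the candidate outcomes are made-up stand-ins for registration runs.

```python
# Minimal non-dominated (Pareto) filter for two minimization objectives,
# e.g. image dissimilarity vs. deformation magnitude. The candidate
# outcomes are made-up stand-ins for registration runs.
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

outcomes = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.9), (0.6, 0.6), (0.7, 0.2)]
print(pareto_front(outcomes))  # (0.6, 0.6) is dominated by (0.5, 0.5)
```

The surviving points are exactly the trade-offs a user would navigate when selecting a registration outcome; a multi-objective evolutionary algorithm approximates this set without exhaustively sampling weight combinations.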

  9. The feasibility of manual parameter tuning for deformable breast MR image registration from a multi-objective optimization perspective

    NASA Astrophysics Data System (ADS)

    Pirpinia, Kleopatra; Bosman, Peter A. N.; Loo, Claudette E.; Winter-Warnars, Gonneke; Janssen, Natasja N. Y.; Scholten, Astrid N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2017-07-01

    Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. 
Here, patient-specific, multi-objective weight optimization is needed; it obtains a mean TRE of 13.6 mm without guidance information, reduced to 7.3 mm with guidance information, while also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.

  10. An analytic-numerical method for the construction of the reference law of operation for a class of mechanical controlled systems

    NASA Astrophysics Data System (ADS)

    Mizhidon, A. D.; Mizhidon, K. A.

    2017-04-01

    An analytic-numerical method for the construction of a reference law of operation for a class of dynamic systems describing vibrations in controlled mechanical systems is proposed. By the reference law of operation of a system, we mean a law of the system motion that satisfies all the requirements for the quality and design features of the system under permanent external disturbances. As disturbances, we consider polyharmonic functions with known amplitudes and frequencies of the harmonics but unknown initial phases. For constructing the reference law of motion, an auxiliary optimal control problem is solved in which the cost function depends on a weighting coefficient. The choice of the weighting coefficient ensures the design of the reference law. Theoretical foundations of the proposed method are given.

  11. Do Knee Bracing and Delayed Weight Bearing Affect Mid-Term Functional Outcome after Anterior Cruciate Ligament Reconstruction?

    PubMed

    Di Miceli, Riccardo; Marambio, Carlotta Bustos; Zati, Alessandro; Monesi, Roberta; Benedetti, Maria Grazia

    2017-12-01

    Purpose  The aim of this study was to assess the effect of knee bracing and timing of full weight bearing after anterior cruciate ligament reconstruction (ACLR) on functional outcomes at mid-term follow-up. Methods  We performed a retrospective study on 41 patients with ACLR. Patients were divided into two groups: the ACLR group, who received isolated ACL reconstruction, and the ACLR-OI group, who received ACL reconstruction and adjunctive surgery. Information about age at surgery, bracing, and full or progressive weight bearing permission after surgery was collected for the two groups. The subjective IKDC score was obtained at follow-up. Statistical analysis was performed to compare the two groups for IKDC score. Subgroup analysis was performed to assess the effect of the postoperative regimen (knee bracing and weight bearing) on functional outcomes. Results  The mean age of patients was 30.8 ± 10.6 years. Mean IKDC score was 87.4 ± 13.9. The mean follow-up was 3.5 ± 1.8 years. Twenty-two (53.7%) patients underwent ACLR only, while 19 (46.3%) also received other interventions, such as meniscal repair and/or collateral ligament suture. Analysis of overall data showed no differences between the groups for IKDC score. Patients in the ACLR group exhibited a significantly better IKDC score when no brace was prescribed and full weight bearing was allowed 4 weeks after surgery, in comparison with patients who wore a brace and had delayed full weight bearing. No differences were found with respect to the use of a brace and the postoperative weight bearing regimen in the ACLR-OI group. Conclusion  Bracing and delayed weight bearing after ACLR have a negative influence on mid-term functional outcomes. Further research is required to explore possible differences between patients who underwent isolated ACLR and those with additional interventions, with respect to the use of a brace and the timing of full weight bearing, in order to identify optimal recovery strategies. Level of Evidence  Level III, retrospective observational study.

  12. Signature Tracking for Optimized Nutrition and Training (STRONG)

    DTIC Science & Technology

    2014-08-01

    Opportunities for performance augmentation include continuous performance feedback for self-improvement and individualized training regimens in the... the application of advanced physical training methods used by elite athletes to improve fitness in Special Operators, such as functional weight...

  13. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm.
This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
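Under the assumption of a linear forward operator (a synthetic stand-in for the groundwater model, which in the paper requires a full model run per evaluation), the circle-interpolation idea can be sketched as follows; the operator G, the two conditional fields, and the observations are all made-up data.

```python
import numpy as np

# Sketch of the unit-circle trick: a mixture cos(t)*f1 + sin(t)*f2 of two
# conditional fields keeps the mixing weights on the unit circle.  The
# forward response is evaluated only at n coarse angles; responses at
# intermediate angles are interpolated per observation location, and the
# objective is then minimized over a fine grid at negligible cost.

rng = np.random.default_rng(0)
G = rng.normal(size=(5, 40))          # hypothetical linear forward operator
f1, f2 = rng.normal(size=40), rng.normal(size=40)
h_obs = rng.normal(size=5)            # synthetic "observed heads"

n = 8
coarse = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
resp = np.array([G @ (np.cos(t) * f1 + np.sin(t) * f2) for t in coarse])

fine = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
xp = np.append(coarse, 2 * np.pi)     # wrap around for periodic interpolation
fp = np.vstack([resp, resp[:1]])
interp = np.array([np.interp(fine, xp, fp[:, j]) for j in range(5)]).T

obj = ((interp - h_obs) ** 2).sum(axis=1)
t_best = fine[np.argmin(obj)]         # candidate mixing angle, no extra model runs
```

Interpolating the per-location solutions (rather than the objective values, as the abstract contrasts) is what allows the squared misfit to be evaluated exactly at every interpolated angle.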

  14. Design optimization of rear uprights for UniMAP Automotive Racing Team Formula SAE racing car

    NASA Astrophysics Data System (ADS)

    Azmeer, M.; Basha, M. H.; Hamid, M. F.; Rahman, M. T. A.; Hashim, M. S. M.

    2017-10-01

    In an automobile, the rear upright provides a physical mounting and links the suspension arms to the hub and wheel assembly. In this work, static structural and shape optimization analyses of the rear upright for UniMAP's Formula SAE racing car were performed using ANSYS software, with the objective of reducing weight while maintaining the structural strength of the vehicle upright. During the shape optimization process, the component underwent 25%, 50% and 75% weight reduction in order to find the best optimal shape of the upright. The final design of the upright was developed considering weight reduction, structural integrity and manufacturability. The final design achieved a 21% weight reduction and is able to withstand several load cases.

  15. The optimal UV exposure time for vitamin D3 synthesis and erythema estimated by UV observations in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Y. G.; Koo, J. H.

    2016-12-01

    Solar UV radiation in the wavelength range between 280 and 400 nm has both positive and negative influences on the human body. Surface UV radiation is the main natural source of vitamin D, promoting bone and musculoskeletal health and reducing the risk of a number of cancers and other medical conditions. However, overexposure to surface UV radiation is significantly related to the majority of skin cancers, in addition to other negative health effects such as sunburn, skin aging, and some forms of eye cataracts. Therefore, it is important to estimate the optimal UV exposure time, representing a balance between reducing negative health effects and ensuring sufficient vitamin D production. Previous studies calculated erythemal UV and vitamin-D UV from measured and modelled spectral irradiances, respectively, by weighting with the CIE erythema and vitamin D3 generation functions (Kazantzidis et al., 2009; Fioletov et al., 2010). In particular, McKenzie et al. (2009) suggested an algorithm to estimate vitamin-D-production UV from erythemal UV (or the UV index) and determined the optimum conditions of UV exposure for skin type II according to Fitzpatrick (1988). Recently, there have been various demands regarding the risks and benefits of surface UV radiation for public health in Korea; thus, it is necessary to estimate the optimal UV exposure time suitable for the skin types of East Asians. This study examined the relationship between erythemally weighted UV (UVEry) and vitamin-D weighted UV (UVVitD) from spectral UV measurements during 2006-2010. The temporal variations of the ratio (UVVitD/UVEry) were also analyzed, and the ratio as a function of UV index was applied to broadband UV measured by UV-Biometers at 6 sites in Korea. Thus, the optimal UV exposure time for vitamin D3 synthesis and erythema was estimated on diurnal, seasonal, and annual scales over Korea.
In summer, with high surface UV radiation, a short exposure time led to sufficient vitamin D production but also to erythema, and vice versa in winter. Thus, the balancing time in winter was still long enough to maximize UV benefits while minimizing UV risks.

  16. Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes

    NASA Astrophysics Data System (ADS)

    Don, Wai-Sun; Borges, Rafael

    2013-10-01

    In the reconstruction step of (2r-1) order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights αk and ωk, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower order local smoothness indicators βk in each substencil. These nonlinear weight formulations share two important free parameters: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to βk to avoid a division by zero in the denominator of αk. However, ε also plays a role in the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx^2) (Ω(Δx^m) means that ε ⩾ CΔx^m for some C independent of Δx, as Δx → 0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of βk. Moreover, it was unknown whether the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of βk, we prove that the optimal order of the WENO-Z scheme can in fact be guaranteed with a much weaker condition ε = Ω(Δx^m), where m(r,p) ⩾ 2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with arbitrary order of critical points. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, for a smaller ε allows better essentially non-oscillatory shock capturing, as it does not over-dominate the size of βk.
We also show that numerical oscillations can be further attenuated by increasing the power parameter 2 ⩽ p ⩽ r-1, at the cost of increased numerical dissipation. Compact formulas of βk for WENO schemes are also presented.
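The nonlinear weights discussed above can be sketched for the fifth-order (r = 3) case. This is an illustrative implementation of the standard WENO-JS and WENO-Z weight formulas, not the authors' code; the ideal weights and Jiang-Shu smoothness indicators below are the classical published ones.

```python
import numpy as np

# Sketch of the nonlinear weights for r = 3 (5th-order WENO).  beta_k are
# the Jiang-Shu smoothness indicators; 'js' uses alpha_k = d_k/(eps + beta_k)^p
# and 'z' uses the global indicator tau5 = |beta_0 - beta_2| of Borges et al.

D = np.array([0.1, 0.6, 0.3])  # ideal (optimal) weights d_k

def weno_weights(v, eps=1e-6, p=2, variant="js"):
    """Nonlinear weights for a 5-point stencil v = (v0, ..., v4)."""
    v0, v1, v2, v3, v4 = v
    b = np.array([
        13/12 * (v0 - 2*v1 + v2)**2 + 1/4 * (v0 - 4*v1 + 3*v2)**2,
        13/12 * (v1 - 2*v2 + v3)**2 + 1/4 * (v1 - v3)**2,
        13/12 * (v2 - 2*v3 + v4)**2 + 1/4 * (3*v2 - 4*v3 + v4)**2,
    ])
    if variant == "js":
        alpha = D / (eps + b)**p
    else:  # WENO-Z
        tau5 = abs(b[0] - b[2])
        alpha = D * (1.0 + (tau5 / (b + eps))**p)
    return alpha / alpha.sum()

# On smooth (here linear) data all beta_k coincide, so both variants
# recover the ideal weights and hence the full 5th-order stencil.
w_js = weno_weights(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), variant="js")
w_z = weno_weights(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), variant="z")
```

Near a discontinuity the β_k differ by orders of magnitude, the corresponding α_k collapse, and the weights drop the crossing substencil, which is where the size of ε relative to β_k matters.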

  17. Due-Window Assignment Scheduling with Variable Job Processing Times

    PubMed Central

    Wu, Yu-Bin

    2015-01-01

    We consider a common due-window assignment scheduling problem for jobs with variable processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., learning effect) or of its starting time (i.e., deteriorating effect). The problem is to determine the optimal due-windows and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745

  18. Renal Function Descriptors in Neonates: Which Creatinine-Based Formula Best Describes Vancomycin Clearance?

    PubMed

    Bhongsatiern, Jiraganya; Stockmann, Chris; Yu, Tian; Constance, Jonathan E; Moorthy, Ganesh; Spigarelli, Michael G; Desai, Pankaj B; Sherwin, Catherine M T

    2016-05-01

    Growth and maturational changes have been identified as significant covariates in describing variability in clearance of renally excreted drugs such as vancomycin. Because of immaturity of clearance mechanisms, quantification of renal function in neonates is of importance. Several serum creatinine (SCr)-based renal function descriptors have been developed in adults and children, but none are selectively derived for neonates. This review summarizes development of the neonatal kidney and discusses assessment of renal function regarding estimation of glomerular filtration rate using renal function descriptors. Furthermore, identification of the renal function descriptors that best describe the variability of vancomycin clearance was performed in a sample study of a septic neonatal cohort. Population pharmacokinetic models were developed applying a combination of age-weight, renal function descriptors, or SCr alone. In addition to age and weight, SCr or renal function descriptors significantly reduced variability of vancomycin clearance. The population pharmacokinetic models with the Léger and modified Schwartz formulas were selected as the optimal final models, although the other renal function descriptors and SCr provided a reasonably good fit to the data, suggesting further evaluation of the final models using external data sets and cross validation. The present study supports incorporation of renal function descriptors in the estimation of vancomycin clearance in neonates. © 2015, The American College of Clinical Pharmacology.

  19. Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2012-01-01

    This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.

  20. Prioritizing the Components of Vulnerability: A Genetic Algorithm Minimization of Flood Risk

    NASA Astrophysics Data System (ADS)

    Bongolan, Vena Pearl; Ballesteros, Florencio; Baritua, Karessa Alexandra; Junne Santos, Marie

    2013-04-01

    We define a flood-resistant city as an optimal arrangement of communities according to their traits, with the goal of minimizing flooding vulnerability via a genetic algorithm. We prioritize the different components of flooding vulnerability by giving each component a weight, thus expressing vulnerability as a weighted sum. This serves as the fitness function for the genetic algorithm. We also allowed non-linear interactions among related but independent components, viz., poverty and mortality rate, and literacy and radio/TV penetration. The designs produced reflect the relative importance of the components, and we observed a synchronicity between the interacting components, giving a more consistent design.
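The weighted-sum fitness idea can be sketched with a toy evolutionary search. This is not the authors' genetic algorithm: the component weights, community scores, and zone exposure levels are assumed data, and a simple elitist swap-mutation search stands in for a full GA with crossover.

```python
import random

# Toy sketch: communities have weighted-sum vulnerability scores, zones
# have flood exposure levels, and we search for the arrangement of
# communities over zones minimizing total exposure-weighted vulnerability.

WEIGHTS = {"poverty": 0.4, "mortality": 0.3, "literacy_gap": 0.3}  # assumed weights

communities = [  # illustrative component scores in [0, 1]
    {"poverty": 0.9, "mortality": 0.7, "literacy_gap": 0.8},
    {"poverty": 0.2, "mortality": 0.1, "literacy_gap": 0.3},
    {"poverty": 0.5, "mortality": 0.6, "literacy_gap": 0.4},
    {"poverty": 0.7, "mortality": 0.3, "literacy_gap": 0.9},
    {"poverty": 0.1, "mortality": 0.2, "literacy_gap": 0.1},
    {"poverty": 0.4, "mortality": 0.8, "literacy_gap": 0.5},
]
exposure = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1]  # flood exposure per zone

def vulnerability(c):
    """Weighted-sum vulnerability of one community."""
    return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)

def fitness(perm):
    """Total exposure-weighted vulnerability of an arrangement (lower is better)."""
    return sum(e * vulnerability(communities[i]) for e, i in zip(exposure, perm))

random.seed(42)
best = list(range(len(communities)))
random.shuffle(best)
init_fit = best_fit = fitness(best)
for _ in range(2000):  # elitist search with swap mutation
    cand = best[:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    if fitness(cand) < best_fit:
        best, best_fit = cand, fitness(cand)
# the most vulnerable communities tend to end up in the least exposed zones
```

The non-linear interaction terms mentioned in the abstract would simply add cross-products of paired components inside `vulnerability`.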

  1. Experimental verification of Space Platform battery discharger design optimization

    NASA Astrophysics Data System (ADS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  2. Programs for analysis and resizing of complex structures. [computerized minimum weight design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Prasad, B.

    1978-01-01

    The paper describes the PARS (Programs for Analysis and Resizing of Structures) system. PARS is a user oriented system of programs for the minimum weight design of structures modeled by finite elements and subject to stress, displacement, flutter and thermal constraints. The system is built around SPAR - an efficient and modular general purpose finite element program, and consists of a series of processors that communicate through the use of a data base. An efficient optimizer based on the Sequence of Unconstrained Minimization Technique (SUMT) with an extended interior penalty function and Newton's method is used. Several problems are presented for demonstration of the system capabilities.
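The SUMT idea used in PARS can be illustrated on a one-dimensional toy problem. This is a schematic sketch, not the PARS implementation: the "weight" objective, the single "stress" constraint, and the bisection search replacing Newton's method are all choices made for the example.

```python
import math

# Toy SUMT illustration: minimize the structural "weight" f(x) = x subject
# to a "stress" constraint g(x) = x - 1 >= 0 via the interior penalty
# phi_r(x) = x + r/(x - 1).  As the penalty parameter r decreases, the
# unconstrained minimizers x_r = 1 + sqrt(r) approach the constrained
# optimum x* = 1 from inside the feasible region.

def minimize_phi(r, lo=1.0 + 1e-12, hi=None, iters=200):
    """Bisection on phi'(x) = 1 - r/(x-1)^2, which is increasing for x > 1."""
    if hi is None:
        hi = 2.0 + math.sqrt(r)            # phi'(hi) > 0 here
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 1.0 - r / (mid - 1.0) ** 2 < 0.0:
            lo = mid                        # still left of the minimizer
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sequence of unconstrained minimizations with decreasing penalty parameter
path = [minimize_phi(r) for r in (1.0, 0.1, 1e-2, 1e-4, 1e-6)]
```

The extended interior penalty of PARS additionally smooths the barrier near the constraint boundary so that slightly infeasible iterates remain usable; the converging-from-inside behavior is the same.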

  3. Experimental verification of Space Platform battery discharger design optimization

    NASA Technical Reports Server (NTRS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    1991-01-01

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  4. SU-FF-T-668: A Simple Algorithm for Range Modulation Wheel Design in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, X; Nazaryan, Vahagn; Gueye, Paul

    2009-06-01

    Purpose: To develop a simple algorithm for designing the range modulation wheel to generate a very smooth spread-out Bragg peak (SOBP) for proton therapy. Method and Materials: A simple algorithm has been developed to generate the weight factors of the pristine Bragg peaks that compose a smooth SOBP in proton therapy. We used a modified analytical Bragg peak function, based on the Geant4 Monte Carlo simulation toolkit, as the pristine Bragg peak input to our algorithm. A simple MATLAB quadratic programming routine was introduced to optimize the cost function in our algorithm. Results: We found that the existing analytical Bragg peak function cannot be used directly as the pristine Bragg peak depth-dose input in the optimization of the weight factors, since this model does not take into account the scattering introduced by the range shifts used to modify the proton beam energies. We performed Geant4 simulations for a proton energy of 63.4 MeV with a 1.08 cm SOBP for the set of pristine Bragg peaks composing this SOBP, and modified the existing analytical Bragg peak functions for their peak heights, ranges R_0, and Gaussian energy spreads σ_E. We found that 19 pristine Bragg peaks are enough to achieve a SOBP flatness of 1.5%, the best flatness reported in the literature. Conclusion: This work develops a simple algorithm to generate the weight factors used to design a range modulation wheel that produces a smooth SOBP in proton radiation therapy. We found that a moderate number of pristine Bragg peaks is enough to generate a SOBP with flatness below 2%. The algorithm can potentially be used to generate a database, stored in the treatment planning system, to produce a clinically acceptable SOBP.
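The weight-factor computation can be sketched in schematic form. The Gaussian profiles below are hypothetical stand-ins for the Geant4-derived pristine Bragg peaks (a real depth-dose curve is asymmetric), and the flatness condition is imposed only at the peak depths, which reduces the problem to a linear solve rather than the quadratic program used in the abstract.

```python
import numpy as np

# Schematic sketch: choose weights w_k so that the summed depth-dose of n
# pristine peaks is flat (= 1) at the n peak depths, i.e. solve A w = 1.

n = 10                                  # number of pristine peaks (illustrative)
ranges = np.linspace(5.0, 9.5, n)       # peak depths R_k in cm (illustrative)
sigma = 0.35                            # peak width in cm (illustrative)

def pristine(x, R):
    """Hypothetical pristine-peak proxy; a real profile would come from Geant4."""
    return np.exp(-((x - R) ** 2) / (2.0 * sigma ** 2))

A = np.array([[pristine(x, R) for R in ranges] for x in ranges])
w = np.linalg.solve(A, np.ones(n))      # modulation-wheel weight factors
sobp = A @ w                            # dose at the matching depths: all 1
```

Evaluating flatness between the matching depths, and constraining the weights to be non-negative, is what motivates the quadratic-programming formulation in practice.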

  5. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, and combinations of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time.
The use of bulk EC a gradient as an exhaustive variable, known at any node of an interpolation grid, has allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
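The first two criteria can be sketched directly. This is an illustrative computation, not the MSANOS code: the evaluation grid, the candidate sampling scheme, and the stand-in for the ECa gradient map are all synthetic.

```python
import numpy as np

# Sketch of the MMSD and MWMSD sampling-design criteria: MMSD is the mean,
# over a fine evaluation grid, of the distance from each grid node to its
# nearest sample point; MWMSD weights that mean by an auxiliary field such
# as the ECa gradient magnitude, pulling samples toward high-gradient areas.

rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
weights = rng.random(grid.shape[0])     # stand-in for the ECa gradient map

def mmsd(samples, w=None):
    """(Weighted) mean of nearest-sample distances over the grid."""
    d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2).min(axis=1)
    return np.average(d, weights=w)

samples = rng.random((8, 2))            # candidate sampling scheme
crit_plain = mmsd(samples)
crit_weighted = mmsd(samples, weights)
```

Spatial simulated annealing then perturbs one sample location at a time and accepts moves that reduce the chosen criterion (or, occasionally, worsen it, per the cooling law).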

  6. Identification of mutated driver pathways in cancer using a multi-objective optimization model.

    PubMed

    Zheng, Chun-Hou; Yang, Wu; Chong, Yan-Wen; Xia, Jun-Feng

    2016-05-01

    New-generation high-throughput technologies, including next-generation sequencing technology, have been extensively applied to solve biological problems. As a result, large cancer genomics projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium are producing large amounts of rich and diverse data across multiple cancer types. The identification of mutated driver genes and driver pathways from these data is a significant challenge. Genome aberrations in cancer cells can be divided into two types: random 'passenger mutations' and functional 'driver mutations'. In this paper, we introduce a Multi-objective Optimization model based on a Genetic Algorithm (MOGA) to solve the maximum weight submatrix problem, which can be employed to identify driver genes and driver pathways promoting cancer proliferation. The maximum weight submatrix problem, defined to find mutated driver pathways, is based on two specific properties, i.e., high coverage and high exclusivity. The multi-objective optimization model can adjust the trade-off between high coverage and high exclusivity. We propose an integrative model combining gene expression data and mutation data to improve the performance of the MOGA algorithm in a biological context. Copyright © 2016 Elsevier Ltd. All rights reserved.
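The coverage/exclusivity objective can be sketched on toy data. The paper treats coverage and exclusivity as two objectives; here they are collapsed into the single Dendrix-style weight W = 2·coverage − total mutations for a brute-force illustration, and the mutation matrix is made up.

```python
from itertools import combinations

# Toy sketch of the maximum-weight-submatrix objective: a gene set scores
# highly when it covers many patients (high coverage) while each covered
# patient carries few mutations within the set (high exclusivity).

# mutations[g] = set of patients in which gene g is mutated (toy data)
mutations = {0: {0, 1}, 1: {2}, 2: {0, 2}}
n_genes, k = 3, 2

def weight(genes):
    covered = set().union(*(mutations[g] for g in genes))
    total = sum(len(mutations[g]) for g in genes)
    return 2 * len(covered) - total   # high coverage, low overlap

best = max(combinations(range(n_genes), k), key=weight)
# best == (0, 1): covers patients {0, 1, 2} with no overlap, weight 3
```

A multi-objective formulation keeps coverage and the overlap penalty as separate axes and returns the trade-off front instead of a single scalarized optimum.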

  7. Sensitivity optimization in whispering gallery mode optical cylindrical biosensors

    NASA Astrophysics Data System (ADS)

    Khozeymeh, F.; Razaghi, M.

    2018-01-01

    Whispering-gallery-mode (WGM) resonances propagating in cylindrical resonators are characterized by an angular order l and a radial order i. In this work, the higher radial order whispering-gallery-mode resonances (i = 1-4) at a fixed l are examined. The sensitivity of these resonances is analysed as a function of the structural parameters of the cylindrical resonator, such as the radius and the refractive index of the resonator material. A practical application in which cylindrical resonators are used for the measurement of glucose concentration in water is presented as a biosensor demonstrator. We calculate the wavelength shifts of WG1-4 in several glucose/water solutions, with concentrations spanning from 0.0% to 9.0% (weight/weight). Improved sensitivity can be achieved using multi-WGM cylindrical resonators with a radius of R = 100 μm and a resonator material of MgF2 with refractive index nc = 1.38. The effect of polarization on sensitivity is also considered for all four WGMs. The best sensitivity, 83.07 nm/RIU, is reported for the fourth WGM with transverse magnetic polarization. These results provide optimized parameters for the fast design of cylindrical resonators as optical biosensors, where both the sensitivity and the geometry can be optimized.

  8. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
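The WSPT rule at the heart of the abstract can be illustrated in its classical single-machine form. This is a deliberately simplified analogue: the paper treats the permutation flow shop with a quadratic criterion, whereas the sketch below uses one machine and the linear total weighted completion time, for which sorting by p_j/w_j (Smith's rule) is exactly optimal.

```python
from itertools import permutations

# Single-machine sketch of the WSPT rule: sequence jobs by non-decreasing
# p_j / w_j to minimize the total weighted completion time sum w_j * C_j.

jobs = [(3, 1.0), (2, 2.0), (5, 1.5), (1, 0.5), (4, 3.0)]  # (p_j, w_j), toy data

def twct(seq):
    """Total weighted completion time of a sequence of (p, w) jobs."""
    t = obj = 0.0
    for p, w in seq:
        t += p
        obj += w * t
    return obj

wspt = sorted(jobs, key=lambda j: j[0] / j[1])   # Smith's rule
best = min(permutations(jobs), key=twct)          # brute-force check, 5! orders
```

For the quadratic criterion of the paper, the analogous WSPT-CC schedule is only asymptotically optimal, which is exactly what the probabilistic analysis in the abstract establishes.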

  9. Scoring of Side-Chain Packings: An Analysis of Weight Factors and Molecular Dynamics Structures.

    PubMed

    Colbes, Jose; Aguila, Sergio A; Brizuela, Carlos A

    2018-02-26

    The protein side-chain packing problem (PSCPP) is a central task in computational protein design. The problem is usually modeled as a combinatorial optimization problem, which consists of searching for a set of rotamers, from a given rotamer library, that minimizes a scoring function (SF). The SF is a weighted sum of terms that can be decomposed into physics-based and knowledge-based terms. Although there are many methods to obtain approximate solutions for this problem, all of them have similar performance, and there has not been a significant improvement in recent years. Studies on protein structure prediction and protein design have revealed the limitations of current SFs for achieving further improvements on these two problems. Along the same lines, a recent work reported a similar result for the PSCPP. In this work, we ask whether this negative result regarding further improvements in performance is due to (i) an incorrect weighting of the SF terms or (ii) the constrained conformation resulting from the protein crystallization process. To analyze these questions, we (i) model the PSCPP as a bi-objective combinatorial optimization problem, optimizing at the same time the two most important terms of the SFs of two state-of-the-art algorithms, and (ii) perform a preprocessing relaxation of the crystal structure through molecular dynamics, to simulate the protein in solvent, and evaluate the performance of these two state-of-the-art SFs under these conditions. Our results indicate that (i) no matter what combination of weight factors is used, the current SFs will not lead to better performance, and (ii) the evaluated SFs are not able to improve performance on relaxed structures. Furthermore, the experiments revealed that the SFs and the methods are biased toward crystallized structures.

  10. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

    Deriving a unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. The hourly streamflow measurements needed to derive a unit hydrograph are not always available, so methods are needed for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
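The gamma-pdf approximation can be sketched directly; the shape and scale parameters below are illustrative, not the fitted Citarum values, and the translation/scaling step of the paper is omitted.

```python
import math

# Gamma-pdf unit hydrograph sketch:
#   q(t) = t^(n-1) * exp(-t/k) / (k^n * Gamma(n))
# It integrates to one, so the hydrograph conserves unit effective
# rainfall volume, and it peaks at t_p = (n - 1) * k.

def gamma_uh(t, n=3.0, k=2.0):
    """Gamma-pdf unit hydrograph ordinate at time t (n, k illustrative)."""
    if t <= 0.0:
        return 0.0
    return t ** (n - 1.0) * math.exp(-t / k) / (k ** n * math.gamma(n))

ts = [i * 0.01 for i in range(12000)]
volume = sum(gamma_uh(t) for t in ts) * 0.01   # numerical check: ~1.0
t_peak = max(ts, key=gamma_uh)                  # ~ (n - 1) * k = 4.0
```

Fitting then amounts to searching over (n, k) plus translation and scale so the modeled hydrograph matches observed peak discharge and timing, which is where the weighted-objective Particle Swarm Optimization enters.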

  11. Establishing a composite endpoint for measuring the effectiveness of geriatric interventions based on older persons' and informal caregivers' preference weights: a vignette study.

    PubMed

    Hofman, Cynthia S; Makai, Peter; Boter, Han; Buurman, Bianca M; de Craen, Anton J M; Olde Rikkert, Marcel G M; Donders, Rogier A R T; Melis, René J F

    2014-04-18

    The Older Persons and Informal Caregivers Survey Minimal Dataset (TOPICS-MDS) questionnaire, which measures relevant outcomes for elderly people, was successfully incorporated into over 60 research projects of the Dutch National Care for the Elderly Programme. A composite endpoint (CEP) for this instrument would be helpful for comparing the effectiveness of the various intervention projects. Therefore, our aim was to establish a CEP for the TOPICS-MDS questionnaire based on the preferences of elderly persons and informal caregivers. A vignette study was conducted with 200 persons (124 elderly persons and 76 informal caregivers) as raters. The vignettes described eight TOPICS-MDS outcomes of older persons (morbidity, functional limitations, emotional well-being, pain experience, cognitive functioning, social functioning, self-perceived health and self-perceived quality of life), and the raters assessed the general well-being (GWB) of these vignette cases on a numeric rating scale (0-10). Mixed linear regression analyses were used to derive the preference weights of the TOPICS-MDS outcomes (dependent variable: GWB scores; fixed factors: the eight outcomes; unstandardized coefficients: preference weights). The mixed regression model that combined the eight outcomes showed that the weights varied from 0.01 for social functioning to 0.16 for self-perceived health. A model that included "informal caregiver" showed that the interactions between this variable and each of the eight outcomes were not significant (p > 0.05). A preference-weighted CEP for the TOPICS-MDS questionnaire was thus established based on the preferences of older persons and informal caregivers. With this CEP, optimal comparison of the effectiveness of interventions in older persons can be realized.
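The weight-derivation step can be sketched with a noiseless toy version. The weights and simulated ratings below are assumptions for the example (though the largest weight is set on self-perceived health, echoing the abstract's finding), and ordinary least squares stands in for the mixed linear regression used on the real clustered rater data.

```python
import numpy as np

# Toy sketch: GWB ratings are a linear combination of the eight outcome
# scores, and least squares recovers the coefficients that serve as the
# preference weights of the composite endpoint.

rng = np.random.default_rng(7)
outcomes = ["morbidity", "functional", "emotional", "pain",
            "cognitive", "social", "health", "quality_of_life"]
true_w = np.array([0.05, 0.12, 0.10, 0.08, 0.14, 0.01, 0.16, 0.11])  # assumed

X = rng.random((200, 8))            # 200 vignette ratings of 8 outcome scores
gwb = X @ true_w                    # simulated GWB scores (no noise)

est_w, *_ = np.linalg.lstsq(X, gwb, rcond=None)
# est_w reproduces true_w; the CEP is then the est_w-weighted outcome sum
```

With real raters, a mixed model additionally carries random effects per rater, but the fixed-effect coefficients play the same preference-weight role.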

  12. A neural network construction method for surrogate modeling of physics-based analysis

    NASA Astrophysics Data System (ADS)

    Sung, Woong Je

In this thesis, existing methodologies for the development of neural networks have been surveyed, with careful attention to their approaches to network sizing and structuring. This literature review covers the constructive, pruning, and evolutionary methods, and questions the basic assumption intrinsic to the conventional neural network learning paradigm, which is primarily devoted to optimizing connection weights (or synaptic strengths) for a pre-determined connection structure. The main research hypothesis governing this thesis is that, without breaking the prevailing dichotomy between the weights and the connectivity of the network during the learning phase, the efficient design of a task-specific neural network is hard to achieve, because as long as connectivity and weights are searched by separate means, structural optimization of the neural network requires either repetitive re-training procedures or computationally expensive topological meta-search cycles. The main contribution of this thesis is the design and testing of a novel learning mechanism that efficiently learns not only weight parameters but also connection structure from a given training data set, and the positioning of this mechanism within surrogate modeling practice. In this work, a simple and straightforward extension of the conventional error Back-Propagation (BP) algorithm has been formulated to enable simultaneous learning of both the connectivity and the weights of the Generalized Multilayer Perceptron (GMLP) in supervised learning tasks. A particular objective is to achieve a task-specific network with reasonable generalization performance in minimal training time. The dichotomy between architectural design and weight optimization is reconciled by a mechanism that establishes a new connection for a neuron pair with a potentially higher error gradient than one of the existing connections.
Interpreting the absence of a connection as a zero-weight connection, the potential contribution of any present or absent connection to training-error reduction can readily be evaluated using the BP algorithm. Instead of being broken, the connections that contribute less remain frozen with the constant weight values optimized up to that point, and are excluded from further weight optimization until reselected. In this way, selective weight optimization is executed only over a dynamically maintained pool of high-gradient connections. By tracking the rapidly changing weights and concentrating optimization resources on them, the learning process is accelerated without a significant increase in computational cost or a need for re-training, and the result is a more task-adapted network connection structure. Combined with another important criterion for the division of a neuron, which adds a new computational unit to the network, a highly fitted network can be grown out of a minimal random structure. This learning strategy belongs to a broader class of variable-connectivity learning schemes, and the devised algorithm has been named Optimal Brain Growth (OBG). The OBG algorithm has been tested on two canonical problems: a regression analysis using the Complicated Interaction Regression Function and classification of the Two-Spiral Problem. A comparative study with conventional Multilayer Perceptrons (MLPs) with single and double hidden layers shows that OBG is less sensitive to random initial conditions and generalizes better, with only a minimal increase in computational time. This partially demonstrates that a variable-connectivity learning scheme has great potential to enhance computational efficiency and to reduce the effort of selecting a proper network architecture.
To investigate the applicability of OBG to more practical surrogate modeling tasks, the geometry-to-pressure mapping of a particular class of airfoils in the transonic flow regime has been sought using both conventional MLP networks with pre-defined architectures and OBG-developed networks grown from the same initial MLP networks. Considering the wide variety of airfoil geometries and the diversity of flow conditions distributed over a range of Mach numbers and angles of attack, the new method shows great potential to capture fundamentally nonlinear flow phenomena, especially those related to the occurrence of shock waves on airfoil surfaces in the transonic flow regime. (Abstract shortened by UMI.).
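The core selection rule described above (treat absent connections as zero-weight connections, evaluate the error gradient for every present and absent connection, and update only a small high-gradient pool while the rest stay frozen) can be sketched on a toy linear layer. The real algorithm operates on a GMLP and also divides neurons; none of that is reproduced in this simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the variable-connectivity idea on a linear layer y = W x.
# Absent connections are zero weights; gradients are computed for every
# possible connection, and only the k largest-gradient connections are
# updated each step.  Pool size and learning rate are arbitrary choices.
n_in, n_out, k = 8, 3, 6
W = np.zeros((n_out, n_in))              # minimal (empty) starting structure

x = rng.normal(size=(n_in, 100))
y_true = rng.normal(size=(n_out, n_in)) @ x

mse0 = np.mean((W @ x - y_true) ** 2)
for _ in range(200):
    err = W @ x - y_true
    grad = err @ x.T / x.shape[1]        # dE/dW for every possible connection
    pool = np.argsort(np.abs(grad).ravel())[-k:]   # high-gradient pool
    mask = np.zeros(W.size, dtype=bool)
    mask[pool] = True
    W.ravel()[mask] -= 0.1 * grad.ravel()[mask]    # frozen weights untouched
mse1 = np.mean((W @ x - y_true) ** 2)
```

Even though only a few of the 24 possible connections move per step, the rotating high-gradient pool steadily reduces the training error.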

  13. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs.

    PubMed

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-21

Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. In the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures.
The re-optimization process takes about 30 s using our in-house optimization engine.
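The outer-loop idea (raise the weighting factor of voxels whose dose deviates from the reference plan's DVH) can be sketched as follows. The DVH here is simply the empirical survival function of voxel doses; the multiplicative update and step size are illustrative choices, not the authors' exact scheme:

```python
import numpy as np

def dvh(doses, dose_axis):
    """Fraction of voxels receiving at least each dose level."""
    return (doses[None, :] >= dose_axis[:, None]).mean(axis=1)

def update_voxel_weights(weights, doses, ref_doses, step=0.5):
    """Outer-loop sketch: boost weights where the DVH deviates from the
    reference plan's DVH (illustrative update rule, not the paper's)."""
    axis = np.linspace(0.0, max(doses.max(), ref_doses.max()), 50)
    deviation = dvh(doses, axis) - dvh(ref_doses, axis)
    # Map the DVH deviation at each voxel's own dose level back to the voxel:
    dev_at_voxel = np.interp(doses, axis, deviation)
    return weights * (1.0 + step * np.abs(dev_at_voxel))

# A structure uniformly receiving more dose than in the reference plan:
w_new = update_voxel_weights(np.ones(3), np.full(3, 2.0), np.full(3, 1.0))
```

Feeding the boosted weights back into the inner fluence-map optimization then pushes the next iterate's DVH toward the reference curves.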

  14. Behavior and neural basis of near-optimal visual search

    PubMed Central

    Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre

    2013-01-01

    The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
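The reliability-weighted nonlinear integration rule can be sketched for a simple detection setting: each item yields a noisy measurement with a known noise level, and the ideal observer combines per-item likelihood ratios rather than averaging the measurements. The target and distractor feature values below are illustrative assumptions:

```python
import numpy as np

def target_present_llr(x, sigma, s_target=1.0, s_distractor=0.0):
    """Log likelihood ratio of 'target present' vs 'target absent' for
    Gaussian measurement noise with per-item sd sigma (uniform prior over
    target location).  Illustrative sketch, not the paper's exact model."""
    per_item_lr = np.exp((x - s_distractor) ** 2 / (2 * sigma ** 2)
                         - (x - s_target) ** 2 / (2 * sigma ** 2))
    return np.log(np.mean(per_item_lr))   # marginalize over target location

# One high-reliability item (sigma = 0.1) at the target value dominates:
llr_present = target_present_llr(np.array([1.0, 0.0, 0.0]),
                                 np.array([0.1, 1.0, 1.0]))
# All measurements at the distractor value argue against a target:
llr_absent = target_present_llr(np.zeros(3), np.array([0.1, 1.0, 1.0]))
```

Note how reliability enters through sigma: a low-noise item moves the decision variable far more than the same measurement from a high-noise item, which is the sense in which observations are weighted by trial-to-trial reliability.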

  15. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications (searching through the best experiences and the use of a dynamic gravitational damping coefficient, α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the best trajectories found are retained and the search restarts from them, which allows the algorithm to avoid local optima. The agents can also move faster in the search space to obtain better exploration during the first stage of the search, and they converge rapidly to the optimal solution in the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods and verify the proposed method both in reaching optimal solutions and in robustness.
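A minimal gravitational-search sketch showing the two ECGSA ingredients named above: an elitist record of the best fitness evaluations, and a dynamic damping coefficient α(t) that grows over the run so gravity decays slowly at first (exploration) and quickly at the end (convergence). The α schedule, constants, and the omission of Kbest scheduling are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Standard benchmark: sum of squares per agent (rows of x)."""
    return np.sum(x ** 2, axis=1)

def gsa(f, dim=5, n=20, iters=300, G0=100.0):
    X = rng.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    best = np.inf
    for t in range(iters):
        fit = f(X)
        best = min(best, fit.min())               # elitist record
        alpha = 5.0 + 15.0 * t / iters            # dynamic damping (sketch)
        G = G0 * np.exp(-alpha * t / iters)       # gravitational constant
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)                 # normalized agent masses
        A = np.zeros_like(X)
        for j in range(n):                        # attraction toward agent j
            diff = X[j] - X
            R = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
            A += rng.random((n, 1)) * G * M[j] * diff / R
        V = rng.random((n, 1)) * V + A
        X = np.clip(X + V, -5.0, 5.0)             # keep agents in bounds
    return best
```

Early iterations keep G large so agents roam; the growing α then collapses G and lets the swarm settle near the best mass concentration.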

  16. A Decision Support System for Solving Multiple Criteria Optimization Problems

    ERIC Educational Resources Information Center

    Filatovas, Ernestas; Kurasova, Olga

    2011-01-01

    In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
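The weighted-sum approach the DSS implements turns each choice of criterion weights into a single-objective problem whose minimizer is one Pareto-optimal solution. A sketch on an invented bi-criteria toy problem (grid search keeps it dependency-free; the DSS itself would use a proper optimizer):

```python
import numpy as np

# Invented bi-criteria toy problem: f1 prefers x = 1, f2 prefers x = -1.
def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

def solve_weighted_sum(w1, w2, grid=np.linspace(-2.0, 2.0, 4001)):
    """Minimize w1*f1 + w2*f2 by dense grid search (weighted-sum sketch)."""
    return grid[np.argmin(w1 * f1(grid) + w2 * f2(grid))]

x_balanced = solve_weighted_sum(0.5, 0.5)   # compromise between criteria
x_first = solve_weighted_sum(1.0, 0.0)      # only the first criterion counts
```

Interactively re-running with different weight vectors, as the DSS does, traces out different points of the Pareto front (here, analytically, x* = (w1 - w2)/(w1 + w2)).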

  17. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem for the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  18. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, so that the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  19. SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smyth, G; Bamber, JC; Bedford, JL

    2015-06-15

Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric: the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased: a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
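The composite plan-quality metric described above is just an importance-weighted sum of the individual optimization metrics. A sketch using a subset of the importance values listed in the abstract (the metric values themselves are invented for illustration):

```python
# Importance values from the abstract (subset); metric values are invented.
IMPORTANCE = {"ptv_rms_dev": 100, "brainstem_max": 10,
              "chiasm_max": 5, "lens_mean": 5}

def composite_metric(metrics: dict) -> float:
    """Sum of optimization metrics weighted by importance (lower = better)."""
    return sum(IMPORTANCE[k] * metrics[k] for k in IMPORTANCE)

plan = {"ptv_rms_dev": 0.028,    # PTV RMS deviation (fractional)
        "brainstem_max": 30.0,   # Gy
        "chiasm_max": 20.0,      # Gy
        "lens_mean": 4.0}        # Gy
value = composite_metric(plan)
```

Comparing two plans then reduces to comparing their composite values, which is how the abstract's trade-off (a 4 Gy OAR increase offset by a PTV RMS improvement) can still yield a net decrease.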

  20. General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED

    NASA Astrophysics Data System (ADS)

    Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.

    2018-02-01

    We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.

  1. Protein and Older Persons.

    PubMed

    Bauer, Juergen M; Diekmann, Rebecca

    2015-08-01

    An optimal protein intake is important for the preservation of muscle mass, functionality, and quality of life in older persons. In recent years, new recommendations regarding the optimal intake of protein in this population have been published. Based on the available scientific literature, 1.0 to 1.2 g protein/kg body weight (BW)/d are recommended in healthy older adults. In certain disease states, a daily protein intake of more than 1.2 g/kg BW may be required. The distribution of protein intake over the day, the amount per meal, and the amino acid profile of proteins are also discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Material Distribution Optimization for the Shell Aircraft Composite Structure

    NASA Astrophysics Data System (ADS)

    Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.

    2016-09-01

One of the main goals in aircraft structure design is weight reduction and stiffness increase. Composite structures have recently become popular in aircraft because of their mechanical properties and wide range of optimization possibilities. Weight distribution and lay-up are keys to creating lightweight, stiff structures. In this paper we discuss the optimization of a specific structure that undergoes non-uniform air pressure at different flight conditions, and reduce the level of noise caused by airflow-induced vibrations at a constrained weight of the part. The initial model was created with the CAD tool Siemens NX; finite element analysis and post-processing were performed with COMSOL Multiphysics and MATLAB. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations supplemented by the k-ω turbulence model provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. Wall thickness was changed using a parametric approach by introducing an auxiliary sphere with varied radius and center coordinates, which were the design variables. To avoid local stress concentration, the wall thickness increment was defined as a smooth function on the shell surface dependent on the auxiliary sphere's position and size. Our study consists of multiple steps: CAD/CAE transformation of the model, determining wind pressure for different flow angles, optimizing wall thickness distribution for specific flow angles, and designing a lay-up for the optimal material distribution. The studied structure was improved in terms of maximum and average strain energy at a constrained expense of weight growth. The developed methods and tools can be applied to a wide range of shell-like structures made of multilayered quasi-isotropic laminates.

  3. Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems With Disturbances.

    PubMed

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai; Zhang, Huaguang

    2016-05-01

An optimal control method is developed for unknown continuous-time systems with unknown disturbances in this paper. The integral reinforcement learning (IRL) algorithm is presented to obtain the iterative control. Off-policy learning is used to allow the dynamics to be completely unknown. Neural networks are used to construct the critic and action networks. It is shown that if there are unknown disturbances, off-policy IRL may not converge or may be biased. To reduce the influence of unknown disturbances, a disturbance compensation controller is added. It is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques. Convergence of the Hamiltonian function is also proven. The simulation study demonstrates the effectiveness of the proposed optimal control method for unknown systems with disturbances.

  4. Exercise and nutritional approaches to prevent frail bones, falls and fractures: an update.

    PubMed

    Daly, R M

    2017-04-01

    Osteoporosis (low bone strength) and sarcopenia (low muscle mass, strength and/or impaired function) often co-exist (hence the term 'sarco-osteoporosis') and have similar health consequences with regard to disability, falls, frailty and fractures. Exercise and adequate nutrition, particularly with regard to vitamin D, calcium and protein, are key lifestyle approaches that can simultaneously optimize bone, muscle and functional outcomes in older people, if they are individually tailored and appropriately prescribed in terms of the type and dose. Not all forms of exercise are equally effective for optimizing musculoskeletal health. Regular walking alone has little or no effect on bone or muscle. Traditional progressive resistance training (PRT) is effective for improving muscle mass, size and strength, but it has mixed effects on muscle function and falls which may be due to the common prescription of slow and controlled movement patterns. At present, targeted multi-modal programs incorporating traditional and high-velocity PRT, weight-bearing impact exercises and challenging balance/mobility activities appear to be most effective for optimizing musculoskeletal health and function. Reducing and breaking up sitting time may also help attenuate muscle loss. There is also evidence to support an interaction between exercise and various nutritional factors, particularly protein and some multi-nutrient supplements, on muscle and bone health in the elderly. This review summary provides an overview of the latest evidence with regard to the optimal type and dose of exercise and the role of various nutritional factors for preventing bone and muscle loss and improving functional capacity in older people.

  5. OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE

    NASA Technical Reports Server (NTRS)

    Lee, H.

    1994-01-01

    For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. 
If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
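The short-range iteration described above (vary the optimum cruise cost until the computed range converges to the specified range) can be sketched as a one-dimensional root search. The `range_of` function below stands in for the full climb/cruise/descent synthesis; its monotone form is invented purely for illustration:

```python
def range_of(cruise_cost):
    """Placeholder for trajectory synthesis: achieved range (n.mi.) shrinks
    as the unit cruise cost rises.  Purely illustrative, not the program's
    aerodynamic/engine model."""
    return 4000.0 / cruise_cost

def match_range(target_nmi, lo=0.1, hi=100.0, tol=1e-9):
    """Bisect on the optimum cruise cost until the range converges."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if range_of(mid) > target_nmi:
            lo = mid          # range too long: raise the cruise cost
        else:
            hi = mid
    return 0.5 * (lo + hi)

cost = match_range(2000.0)
```

Any monotone relation between cruise cost and range admits this kind of iteration; the actual program evaluates the full synthesized trajectory at each trial cost.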

  6. Principal Eigenvalue Minimization for an Elliptic Problem with Indefinite Weight and Robin Boundary Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hintermueller, M., E-mail: hint@math.hu-berlin.de; Kao, C.-Y., E-mail: Ckao@claremontmckenna.edu; Laurain, A., E-mail: laurain@math.hu-berlin.de

    2012-02-15

This paper focuses on the study of a linear eigenvalue problem with indefinite weight and Robin type boundary conditions. We investigate the minimization of the positive principal eigenvalue under the constraint that the absolute value of the weight is bounded and the total weight is a fixed negative constant. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for a species to survive. For rectangular domains with Neumann boundary condition, it is known that there exists a threshold value such that if the total weight is below this threshold value then the optimal favorable region is like a section of a disk at one of the four corners; otherwise, the optimal favorable region is a strip attached to the shorter side of the rectangle. Here, we investigate the same problem with mixed Robin-Neumann type boundary conditions and study how this boundary condition affects the optimal spatial arrangement.
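In standard notation for this class of problems, the minimization reads roughly as follows (the sign conventions, the Robin coefficient β, and the unit bound on the weight are assumptions based on the abstract, not the paper's exact statement):

```latex
\min_{m}\; \lambda_1(m)
\qquad \text{subject to} \qquad
\begin{cases}
-\Delta u = \lambda\, m(x)\, u & \text{in } \Omega,\\
\partial_\nu u + \beta u = 0 & \text{on } \partial\Omega,
\end{cases}
\qquad
|m| \le 1 \ \text{a.e. in } \Omega,
\qquad
\int_\Omega m \, dx = -c < 0,
```

where λ₁(m) denotes the positive principal eigenvalue and the set {m > 0} plays the role of the favorable region.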

  7. Weight optimization of large span steel truss structures with genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojolic, Cristian; Hulea, Radu; Pârv, Bianca Roxana

    2015-03-10

The paper presents the weight optimization process for the main steel truss that supports the Slatina Sport Hall roof. The structure was loaded with self-weight, dead loads, live loads, snow, wind and temperature, grouped in eleven load cases. The optimization of the structure was made using genetic algorithms implemented in a Matlab code. A total of four different cases were taken into consideration when trying to determine the lowest weight of the structure, depending on the types of connections with the concrete structure (types of supports, bearing modes) and on whether the lower truss chord nodes were allowed to change their vertical position. A number of restrictions on tension, maximum displacement and buckling were enforced on the elements, and the cross sections are chosen by the program from a user database. The results in each of the four cases were analyzed in terms of weight, element tension, element section and displacement. The paper presents the optimization process and the conclusions drawn.
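A toy version of the GA set-up described above: each gene selects a cross section from a user database for one truss member, and fitness is the weight-proportional volume plus a penalty when a member's stress exceeds the limit. Member forces, lengths, sections and units are invented; a real run would evaluate the full roof-truss model and the additional displacement and buckling restrictions:

```python
import numpy as np

rng = np.random.default_rng(3)

SECTIONS = np.array([5.0, 10.0, 20.0, 40.0])         # areas, cm^2 (database)
LENGTHS = np.array([4.0, 4.0, 6.0, 6.0, 8.0])        # member lengths, m
FORCES = np.array([50.0, 80.0, 120.0, 60.0, 200.0])  # member forces, kN
SIGMA_MAX = 10.0                                     # stress limit, kN/cm^2

def fitness(genes):
    """Weight-proportional volume plus a penalty for overstressed members."""
    area = SECTIONS[genes]
    volume = np.sum(area * LENGTHS)
    overstress = np.maximum(FORCES / area - SIGMA_MAX, 0.0)
    return volume + 1e3 * overstress.sum()

pop = rng.integers(0, len(SECTIONS), (30, 5))
best = np.inf
for _ in range(100):
    fit = np.array([fitness(g) for g in pop])
    best = min(best, fit.min())                      # elitist record
    parents = pop[np.argsort(fit)[:10]]              # truncation selection
    idx = rng.integers(0, 10, (30, 2))               # two parents per child
    cut = rng.integers(1, 5, 30)                     # one-point crossover
    pop = np.array([np.concatenate([parents[a][:c], parents[b][c:]])
                    for (a, b), c in zip(idx, cut)])
    mut = rng.random((30, 5)) < 0.1                  # mutation
    pop[mut] = rng.integers(0, len(SECTIONS), mut.sum())
```

In this toy problem the lightest feasible design has volume 400 (areas 5, 10, 20, 10, 20), and the penalty keeps every overstressed design far above any feasible one.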

  8. Cost analysis of biologic drugs in rheumatoid arthritis first line treatment after methotrexate failure according to patients' body weight.

    PubMed

    Román Ivorra, José Andrés; Ivorra, José; Monte-Boquet, Emilio; Canal, Cristina; Oyagüez, Itziar; Gómez-Barrera, Manuel

    2016-01-01

The objective was to assess the influence of patients' weight on the cost of rheumatoid arthritis treatment with biologic drugs used in first line after an inadequate response to methotrexate. Pharmaceutical and administration costs were calculated in two scenarios: with and without optimization of intravenous (IV) vials. A retrospective analysis of 66 patients from the Rheumatology Clinic Service of a Spanish 1,000-bed hospital was used to obtain posology and weight data. The study time horizon was two years. Costs were expressed in 2013 euros. For a patient of average weight (69 kg), the lowest cost corresponded to subcutaneous abatacept (SC ABA) (€21,028.09) in the scenario without IV vial optimization and to infliximab (IFX) (€20,779.29) with optimization. Considering patients' weight, in the scenario without IV vial optimization IFX was the least expensive drug for patients weighing 45-49 kg, IV ABA for 50-59 kg and SC ABA for patients over 60 kg. With IV vial optimization, IFX was the least expensive drug for patients under 69 kg and SC ABA over 70 kg. Assuming comparable effectiveness of biologic drugs, patient weight is a variable to consider; potential savings could reach €20,000 in two years. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  9. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    PubMed

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.

  10. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  11. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  12. Use and optimization of a dual-flowrate loading strategy to maximize throughput in protein-a affinity chromatography.

    PubMed

    Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M

    2004-01-01

The effect of an alternate strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The effect of such a loading strategy was evaluated using a chromatographic model that was able to accurately predict experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine was carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (using a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with a single flowrate. Again, the dual-flowrate strategy was found to be superior.
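The two-step loading idea can be sketched as a small numerical optimization over the initial flowrate, the final flowrate, and the switch point. The capture kinetics below are a deliberately crude, invented stand-in for the paper's validated breakthrough model, and plain Nelder-Mead replaces the gradient-based routine:

```python
import numpy as np
from scipy.optimize import minimize

V_TOTAL = 3.0   # feed volume to process (arbitrary units)

def productivity(x, dt=0.01):
    """Bound mass per unit time for a two-step loading strategy (toy kinetics)."""
    u1, u2, v_switch = np.clip(x, [0.1, 0.1, 0.0], [3.0, 3.0, V_TOTAL])
    q, v, t = 0.0, 0.0, 0.0
    while v < V_TOTAL:
        u = u1 if v < v_switch else u2
        eff = max(1.0 - q * (0.4 + 0.4 * u), 0.0)  # capture efficiency falls with
        q += u * eff * dt                          # saturation, faster at high flow
        v += u * dt
        t += dt
    return q / t

res = minimize(lambda x: -productivity(x), x0=[1.0, 0.5, 1.0],
               method="Nelder-Mead")
u1_opt, u2_opt, v_opt = res.x
```

Because capture efficiency degrades faster at high flowrates only once the column is partially saturated, a fast-then-slow profile tends to emerge, mirroring the strategy described in the abstract.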

  13. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization

  14. Post space mission lumbo-pelvic neuromuscular reconditioning: a European perspective.

    PubMed

    Evetts, Simon N; Caplan, Nick; Debuse, Dorothée; Lambrecht, Gunda; Damann, Volker; Petersen, Nora; Hides, Julie

    2014-07-01

Long-duration exposure to the space environment causes physical adaptations that are deleterious to optimal functioning on Earth. Post-mission rehabilitation traditionally concentrates on regaining general muscle strength, neuromuscular control, and lumbo-pelvic stability. A particular problem is muscle imbalance caused by the hypertrophy of the flexor and atrophy of the extensor and local lumbo-pelvic muscles, increasing the risk of post-mission injury. A method currently used in European human spaceflight to aid post-mission recovery involves a motor control approach, focusing initially on teaching voluntary contraction of specific lumbo-pelvic muscles and optimizing spinal position, progressing to functional retraining in weight-bearing positions. An alternative approach would be to use a Functional Readaptive Exercise Device to appropriately recruit this musculature, thus complementing current rehabilitation programs. Advances in post-mission recovery of this nature may both improve astronaut healthcare and aid terrestrial healthcare through more effective treatment of low back pain and accelerated post-bed-rest rehabilitation.

  15. Enlisted Personnel Allocation System

    DTIC Science & Technology

    1989-03-01

hierarchy is further subdivided into two characteristic groupings: intelligence qualifications and physical qualifications. ...weighted as 30% of the applicant's Intelligence Qualifications score). As shown in Figure 6, a step function generates a score based on the... "There is no artificial time window imposed on any MOS. Any open training date within the full DEP horizon may be recommended by the optimization

  16. Algorithm for optimizing bipolar interconnection weights with applications in associative memories and multitarget classification.

    PubMed

    Chang, S; Wong, K W; Zhang, W; Zhang, Y

    1999-08-10

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
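The associative-memory side of this can be sketched with a plain Hopfield network storing bipolar patterns via the Hebbian rule. The weight-matrix optimization algorithm, the nonnegative biasing, and the optical threshold subchannel from the abstract are not reproduced here, and the patterns are arbitrary:

```python
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])  # two bipolar memories

# Hebbian interconnection weight matrix with zeroed self-connections
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0)

def recall(x, steps=10):
    """Synchronous bipolar update until the state (usually) settles."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

noisy = patterns[0].copy()
noisy[0] = -noisy[0]            # corrupt one element
restored = recall(noisy)        # recovers the first stored pattern
```

With orthogonal patterns like these, a single corrupted bit is corrected in one update step; the paper's optimized bipolar weights improve on this plain Hebbian storage.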

  17. Algorithm for Optimizing Bipolar Interconnection Weights with Applications in Associative Memories and Multitarget Classification

    NASA Astrophysics Data System (ADS)

    Chang, Shengjiang; Wong, Kwok-Wo; Zhang, Wenwei; Zhang, Yanxin

    1999-08-01

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.

  18. Conformal Cryogenic Tank Trade Study for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Rivers, H. Kevin

    1999-01-01

Future reusable launch vehicles may be lifting bodies with non-circular cross section like the proposed Lockheed-Martin VentureStar(tm). Current designs for the cryogenic tanks of these vehicles are dual-lobed and quad-lobed tanks, which are packaged more efficiently than circular tanks but still have low packaging efficiencies, with large gaps between the vehicle outer mold line and the outer surfaces of the tanks. In this study, tanks that conform to the outer mold line of a non-circular vehicle were investigated. Four structural concepts for conformal cryogenic tanks and a quad-lobed tank concept were optimized for minimum-weight designs. The conformal tank concepts included a sandwich tank stiffened with axial tension webs, a sandwich tank stiffened with transverse tension webs, a sandwich tank stiffened with rings and tension ties, and a sandwich tank stiffened with orthogrid stiffeners and tension ties. For each concept, geometric parameters (such as ring frame spacing, the number and spacing of tension ties or webs, and tank corner radius) and internal pressure loads were varied, and the structure was optimized using a finite-element-based optimization procedure. Theoretical volumetric weights were calculated by dividing the weight of the barrel section of the tank concept and its associated frames, webs and tension ties by the volume it circumscribes. This paper describes the four conformal tank concepts and the design assumptions utilized in their optimization. The conformal tank optimization results, including theoretical weights, trends, and comparisons between the concepts, are also presented, along with results from the optimization of a quad-lobed tank. The effects of minimum gauge values and non-optimum weights on the weight of the optimized structure are also described.

  19. National survey on management of weight reduction in PCOS women in the United Kingdom.

    PubMed

    Sharma, Aarti; Walker, Dawn-Marie; Atiomo, William

    2010-10-01

    To identify the most commonly used methods for weight reduction in women with polycystic ovarian syndrome (PCOS) utilized by obstetricians and gynaecologists in the United Kingdom (UK). Permission was sought from the Royal College of Obstetricians and Gynaecologists (RCOG) to conduct an electronic survey of all consultants practising in the UK. The questionnaire was anonymous and an electronic link was sent via email to the 1140 consultants whose details were provided by the RCOG. A 27-item questionnaire was developed. The variables evaluated were first-line methods of weight reduction used, proportion of women with PCOS seen that were obese, whether the patients had tried other weight reduction methods before seeking help, the optimal dietary advice and optimal composition, the optimal duration and frequency of exercise suggested, BMI used for suggesting weight loss, percentage of women in whom weight loss worked, length of time allowed prior to suggesting another method, methods considered most effective by patients, use of metformin for weight loss, criteria used for prescribing metformin, first-line anti-obesity drugs preferred if any, second- and third-line methods used, referral to other specialists and criteria for referral for bariatric surgery. Responses were categorical and are reported as proportions. One hundred and seven (9.4%) consultants responded to the questionnaire. One hundred and four (97%) provided advice on diet and 101 (94%) advice on exercise as their first-line strategy for weight management. Fifty-one (47.7%) stated that they provided specific information on an optimal dietary intake, 53 (49.9%) on the optimal dietary composition and 61 (57%) on the optimal duration and frequency of exercise per week. The commonest second-line methods used were anti-obesity drugs and metformin and the most popular third-line management options were anti-obesity drugs and bariatric surgery. 
The results suggest that the information provided to women with PCOS on weight management is variable, and highlight the need for specific guidelines and further research on weight management in women with this condition. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  20. C1,1 regularity for degenerate elliptic obstacle problems

    NASA Astrophysics Data System (ADS)

    Daskalopoulos, Panagiota; Feehan, Paul M. N.

    2016-03-01

The Heston stochastic volatility process is a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order, degenerate-elliptic partial differential operator, where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. In mathematical finance, solutions to the obstacle problem for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset. With the aid of weighted Sobolev spaces and weighted Hölder spaces, we establish the optimal C^{1,1} regularity (up to the boundary of the half-plane) for solutions to obstacle problems for the elliptic Heston operator when the obstacle functions are sufficiently smooth.

  1. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters were selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) was developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is driven by an equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function with a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
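The equally weighted mean/covariance objective can be illustrated with a toy one-parameter model, using common random numbers so the Monte Carlo objective is deterministic. Plain Nelder-Mead stands in for the paper's hybrid particle-swarm/simplex method, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z = rng.standard_normal(2000)            # common random numbers for the MCS

def response(k):
    """Toy surrogate model (stands in for the RSM): response ~ 2*sqrt(k)."""
    return 2.0 * np.sqrt(np.clip(k, 1e-9, None))

target_mean, target_std = 5.0, 0.4       # "measured" response statistics

def objective(theta):
    mu, sigma = theta
    samples = response(mu + abs(sigma) * z)       # propagate parameter scatter
    return ((samples.mean() - target_mean) ** 2   # equally weighted sum of
            + (samples.std() - target_std) ** 2)  # mean and covariance errors

res = minimize(objective, x0=[5.0, 0.5], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

Minimizing the weighted objective recovers both the mean and the spread of the uncertain parameter, which is the essence of the inverse uncertainty propagation described above.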

  2. Some single-machine scheduling problems with learning effects and two competing agents.

    PubMed

    Li, Hongjie; Li, Zeyuan; Yin, Yunqiang

    2014-01-01

This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and whose actual processing time is defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The goal is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed, and solution algorithms are presented where possible.
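On a tiny instance the objectives are easy to evaluate directly. The brute-force search below is for illustration only (the paper develops polynomial algorithms where possible), and the job data, the choice of agent A's total weighted completion time as the objective, and the cap on agent B's objective are all invented:

```python
from itertools import permutations

# (agent, normal processing time a_j, learning rate b_j, weight w_j)
jobs = [("A", 5.0, 0.10, 2), ("A", 3.0, 0.20, 1),
        ("B", 4.0, 0.10, 3), ("B", 2.0, 0.15, 1)]

def completion_times(order):
    t, times = 0.0, []
    for (_, a, b, _) in order:
        t += max(a - b * t, 0.0)        # actual time decreases with start time
        times.append(t)
    return times

def weighted_completion(order, agent):
    return sum(w * c for (ag, _, _, w), c in zip(order, completion_times(order))
               if ag == agent)

# Minimize agent A's objective subject to a cap on agent B's objective
best = min((o for o in permutations(jobs) if weighted_completion(o, "B") <= 25.0),
           key=lambda o: weighted_completion(o, "A"))
```

The `max(a - b*t, 0)` clamp keeps the learning-effect processing times nonnegative; the constrained-minimization form mirrors how two-agent problems are usually posed.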

  3. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    PubMed

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation-of-error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and the differences between the mean values of the 3 ROIs were statistically significant. The optimal TE for arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
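The intuition that the optimal TE tracks tissue T2* can be checked with a toy error-propagation calculation: for a monoexponential signal with additive noise, the uncertainty in the estimated ΔR2* scales as 1/(TE·S(TE)), which is minimized at TE = T2*. The numbers are illustrative only, not taken from the study:

```python
import numpy as np

T2star = 30.0                         # ms, assumed tissue value during bolus passage
te = np.linspace(1.0, 100.0, 1000)    # candidate echo times (ms)
signal = np.exp(-te / T2star)         # normalized gradient-echo signal S(TE)

# DeltaR2* = -ln(S/S0)/TE; for additive noise of fixed sigma, its standard
# deviation propagates as sigma / (TE * S(TE))
rel_error = 1.0 / (te * signal)
te_opt = te[np.argmin(rel_error)]     # minimum lands at TE = T2*
```

This single-tissue picture is why the optimal TE reported above differs so strongly between tissue (T2* ~ 30 ms) and the arterial input function, where the bolus drives T2* much lower.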

  4. The effects of exercise training in a weight loss lifestyle intervention on asthma control, quality of life and psychosocial symptoms in adult obese asthmatics: protocol of a randomized controlled trial.

    PubMed

    Freitas, Patricia D; Ferreira, Palmira G; da Silva, Analuci; Trecco, Sonia; Stelmach, Rafael; Cukier, Alberto; Carvalho-Pinto, Regina; Salge, João Marcos; Fernandes, Frederico L A; Mancini, Marcio C; Martins, Milton A; Carvalho, Celso R F

    2015-10-21

Asthma and obesity are public health problems with increasing prevalence worldwide. Clinical and epidemiologic studies have demonstrated that obese asthmatics have worse clinical control and health-related quality of life (HRQL) despite optimized medical treatment. Bariatric surgery is successful in achieving weight loss and improves asthma control; however, the benefits of nonsurgical interventions remain unknown. This is a randomized controlled trial with two parallel arms. Fifty-five moderate or severe asthmatics with grade II obesity (BMI ≥ 35 kg/m(2)) under optimized medication will be randomly assigned into either weight-loss program + sham (WL + S group) or weight-loss program + exercise (WL + E group). The weight-loss program will be the same for both groups, including nutrition and psychological therapies (every 15 days, 6 sessions in total, 60 min each). The exercise program will include aerobic and resistance muscle training, while the sham treatment will include a breathing and stretching program (both programs twice a week for 3 months, 60 min each session). The primary outcome variable will be asthma clinical control. Secondary outcomes include HRQL, levels of depression and anxiety, lung function, daily-life physical activity, body composition, maximal aerobic capacity, muscle strength and sleep disorders. Potential mechanisms (changes in lung mechanics and airway/systemic inflammation) will also be examined to explain the benefits in both groups. This study will make a significant contribution to the literature evaluating the effects of exercise conditioning in a weight-loss intervention in obese asthmatics, and will evaluate the possible mechanisms involved. NCT02188940.

  5. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
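The minimum-variance (inverse-variance) weighting that these models predict can be written in a few lines; the standard deviations and single-cue slant estimates here are hypothetical:

```python
sigma_texture, sigma_haptic = 4.0, 2.0      # slant-estimate std devs (deg)

# Statistically optimal (minimum-variance) weights are inverse-variance weights
w_texture = sigma_haptic**2 / (sigma_texture**2 + sigma_haptic**2)
w_haptic = 1.0 - w_texture

slant_texture, slant_haptic = 32.0, 28.0    # single-cue slant estimates (deg)
combined = w_texture * slant_texture + w_haptic * slant_haptic

# Variance of the optimal combined estimate: lower than either cue alone
combined_var = (sigma_texture**2 * sigma_haptic**2) / (sigma_texture**2
                                                       + sigma_haptic**2)
```

The study's finding is that observers' empirical weights move in the direction these formulas predict as cue reliability changes, but do not reach the variance reduction that the optimal combination would deliver.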

  6. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum-variance criterion yielded worse task-based performance when applied to MBIR than an unmodulated strategy. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views.
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
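The task-based metric driving the optimization can be sketched as a non-prewhitening detectability index computed from the local MTF, NPS, and task function; the frequency-domain shapes below are arbitrary placeholders rather than the paper's Fourier/analytical estimates:

```python
import numpy as np

f = np.linspace(0.01, 1.5, 300)              # spatial frequency axis (cycles/mm)
df = f[1] - f[0]
mtf = np.exp(-f / 0.5)                       # toy local modulation transfer function
nps = 1e-4 * (0.2 + f) * np.exp(-f / 0.6)    # toy local noise power spectrum
task = np.exp(-((f - 0.3) / 0.1) ** 2)       # mid-frequency discrimination task

# Non-prewhitening detectability index:
# d'^2 = [ integral (T*MTF)^2 df ]^2 / integral (T*MTF)^2 * NPS df
signal_power = (task * mtf) ** 2
num = (signal_power.sum() * df) ** 2
den = (signal_power * nps).sum() * df
d_prime = np.sqrt(num / den)
```

In the task-driven framework, an outer optimizer (CMA-ES in the paper) varies the acquisition and regularization parameters, re-evaluates the MTF and NPS, and maximizes an index of this form.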

  7. Optimization of remotely delivered intensive lifestyle treatment for obesity using the Multiphase Optimization Strategy: Opt-IN study protocol.

    PubMed

    Pellegrini, Christine A; Hoffman, Sara A; Collins, Linda M; Spring, Bonnie

    2014-07-01

    Obesity-attributable medical expenditures remain high, and interventions that are both effective and cost-effective have not been adequately developed. The Opt-IN study is a theory-guided trial using the Multiphase Optimization Strategy (MOST) to develop an optimized, scalable version of a technology-supported weight loss intervention. Opt-IN aims to identify which of 5 treatment components or component levels contribute most meaningfully and cost-efficiently to the improvement of weight loss over a 6 month period. Five hundred and sixty obese adults (BMI 30-40 kg/m(2)) between 18 and 60 years old will be randomized to one of 16 conditions in a fractional factorial design involving five intervention components: treatment intensity (12 vs. 24 coaching calls), reports sent to primary care physician (No vs. Yes), text messaging (No vs. Yes), meal replacement recommendations (No vs. Yes), and training of a participant's self-selected support buddy (No vs. Yes). During the 6-month intervention, participants will monitor weight, diet, and physical activity on the Opt-IN smartphone application downloaded to their personal phone. Weight will be assessed at baseline, 3, and 6 months. The Opt-IN trial is the first study to use the MOST framework to develop a weight loss treatment that will be optimized to yield the best weight loss outcome attainable for $500 or less. Copyright © 2014 Elsevier Inc. All rights reserved.
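Sixteen conditions built from five two-level components corresponds to a 2^(5-1) half-fraction factorial. The protocol does not state its defining relation, so the generator E = ABCD below is an assumption, and the factor names are shorthand for the five Opt-IN components:

```python
from itertools import product

factors = ["calls_24", "md_report", "texting", "meal_replace", "buddy_training"]

# Half fraction of the 2^5 full factorial using the generator E = ABCD
design = [(a, b, c, d, a * b * c * d) for a, b, c, d in product([-1, 1], repeat=4)]
conditions = [dict(zip(factors, row)) for row in design]
```

Every row satisfies ABCDE = +1, so the half fraction keeps main effects estimable while halving the number of experimental cells, which is what lets MOST screen five components with only 16 conditions.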

  8. Impaired visually guided weight-shifting ability in children with cerebral palsy.

    PubMed

    Ballaz, Laurent; Robert, Maxime; Parent, Audrey; Prince, François; Lemay, Martin

    2014-09-01

    The ability to control voluntary weight shifting is crucial in many functional tasks. To our knowledge, weight shifting ability in response to a visual stimulus has never been evaluated in children with cerebral palsy (CP). The aim of the study was (1) to propose a new method to assess visually guided medio-lateral (M/L) weight shifting ability and (2) to compare weight-shifting ability in children with CP and typically developing (TD) children. Ten children with spastic diplegic CP (Gross Motor Function Classification System level I and II; age 7-12 years) and 10 TD age-matched children were tested. Participants played with the skiing game on the Wii Fit game console. Center of pressure (COP) displacements, trunk and lower-limb movements were recorded during the last virtual slalom. Maximal isometric lower limb strength and postural control during quiet standing were also assessed. Lower-limb muscle strength was reduced in children with CP compared to TD children and postural control during quiet standing was impaired in children with CP. As expected, the skiing game mainly resulted in M/L COP displacements. Children with CP showed lower M/L COP range and velocity as compared to TD children but larger trunk movements. Trunk and lower extremity movements were less in phase in children with CP compared to TD children. Commercially available active video games can be used to assess visually guided weight shifting ability. Children with spastic diplegic CP showed impaired visually guided weight shifting which can be explained by non-optimal coordination of postural movement and reduced muscular strength. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Locomotor training with body weight support in SCI: EMG improvement is more optimally expressed at a low testing speed.

    PubMed

    Meyns, P; Van de Crommert, H W A A; Rijken, H; van Kuppevelt, D H J M; Duysens, J

    2014-12-01

Case series. To determine the optimal testing speed at which the recovery of the EMG (electromyographic) activity should be assessed during and after body weight supported (BWS) locomotor training. Tertiary hospital, Sint Maartenskliniek, Nijmegen, The Netherlands. Four participants with incomplete chronic SCI were included for BWS locomotor training; one AIS-C and three AIS-D (according to the ASIA (American Spinal Injury Association) Impairment Scale or AIS). All were at least 5 years after injury. The SCI participants were trained three times a week for a period of 6 weeks. They improved their locomotor function in terms of higher walking speed, less BWS and less assistance needed. To investigate which treadmill speed for EMG assessment reflects the functional improvement most adequately, all participants were assessed weekly using the same two speeds (0.5 and 1.5 km h(-1), referred to as low and high speed, respectively) for 6 weeks. The change in root mean square EMG (RMS EMG) was assessed in four leg muscles: biceps femoris, rectus femoris, gastrocnemius medialis and tibialis anterior. The changes in RMS EMG occurred at similar phases of the step cycle for both walking conditions, but these changes were larger when the treadmill was set at a low speed (0.5 km h(-1)). Improvement in gait is feasible with BWS treadmill training even long after injury. The EMG changes after treadmill training are more optimally expressed using a low rather than a high testing treadmill speed.
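RMS EMG is simply a windowed root-mean-square envelope of the raw signal. A minimal sketch on synthetic data follows; the sampling rate and the amplitude-modulated noise standing in for muscle bursts are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                    # Hz, hypothetical sampling rate
t = np.linspace(0.0, 2.0, 2 * fs, endpoint=False)
emg = rng.normal(0.0, 1.0, t.size) * np.sin(np.pi * t) ** 2  # activity bursts

def rms_emg(x, win=100):
    """Non-overlapping windowed RMS envelope."""
    n = x.size // win * win                  # trim to a whole number of windows
    return np.sqrt((x[:n].reshape(-1, win) ** 2).mean(axis=1))

envelope = rms_emg(emg)                      # one RMS value per 100-ms window
```

Comparing such envelopes phase-by-phase across the step cycle, at a fixed testing speed, is what allows the training-induced changes described above to be quantified.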

  10. Optimal design of a magnetorheological damper used in smart prosthetic knees

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Liu, Yan-Nan; Liao, Wei-Hsin

    2017-03-01

In this paper, a magnetorheological (MR) damper is optimally designed for use in smart prosthetic knees. The objective of the optimization is to minimize both the total energy consumption during one gait cycle and the weight of the MR damper. First, a smart prosthetic knee employing a DC motor, MR damper and springs is developed based on the kinetic characteristics of the human knee during walking. Then the function of the MR damper is analyzed. In the initial stance phase and swing phase, the MR damper is powered off (off-state), while during the late stance phase, the MR damper is powered on to work as a clutch (on-state). Based on the MR damper model as well as the prosthetic knee model, the instantaneous energy consumption of the MR damper is derived for the two working states. Integrating over one gait cycle then gives the total energy consumption. A particle swarm optimization algorithm is used to optimize the geometric dimensions of the MR damper. Finally, a prototype of the optimized MR damper is fabricated and tested, and the results are compared with simulation.
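Particle swarm optimization over the damper's geometric variables can be sketched generically. The two design variables and the weighted cost (an energy term plus a mass penalty) below are invented stand-ins for the paper's damper model, not its actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Hypothetical weighted objective: energy use plus a damper-mass penalty,
    both toy quadratics in two geometric variables (gap, coil radius)."""
    gap, radius = np.asarray(x).T
    energy = (gap - 1.2) ** 2 + 0.5 * (radius - 3.0) ** 2
    mass = 0.1 * radius ** 2
    return energy + 0.3 * mass

n, dim = 30, 2
pos = rng.uniform(0.5, 5.0, (n, dim))        # particle positions
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), cost(pos)     # personal bests
gbest = pbest[pbest_val.argmin()].copy()     # global best

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()
```

The mass penalty pulls the optimal radius below the energy-optimal value, illustrating the energy/weight trade-off the paper's objective encodes.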

  11. Data-Driven Zero-Sum Neuro-Optimal Control for a Class of Continuous-Time Unknown Nonlinear Systems With Disturbance Using ADP.

    PubMed

    Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei

    2016-02-01

    This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, facilitating the implementation of the ADP method. Convergence properties of the ADP method are established to show that the system state converges to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks also converge to finite neighborhoods of their optimal ones. Finally, simulation results demonstrate the effectiveness of the developed data-driven ADP method.

  12. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories that provides a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon in each of the eleven years.
In an alternative run, errors in the last year (at the field survey) were weighted for priority. We compared several global optimization methods of Dakota, starting with the default parameters of Biome-BGC. The sensitivity analysis showed that the carbon allocation parameters between coarse root and leaf and between stem and leaf, as well as SLA, contributed most to both leaf and woody biomass changes; these parameters were selected for optimization. The measured leaf, above- and below-ground woody biomass carbon densities in the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2, respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2, respectively. The Coliny global optimization method gave better fitness than the efficient global and NCSU DIRECT methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and lower SLA than the defaults, consistent with the general water-physiological response in a dry climate. The simulation using the weighted objective function yielded results closer to the measurements in the last year, at the cost of lower fitness in the previous years.
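
The objective function described above, a sum of relative errors between simulated and measured carbon pools over the years with optional up-weighting of the final year, can be sketched as follows; the pool values below are invented for illustration, not the paper's data:

```python
import numpy as np

def calibration_cost(sim, obs, last_year_weight=1.0):
    """Sum of relative errors |sim - obs| / obs over carbon pools (rows)
    and years (columns); the final year can be up-weighted for priority."""
    rel_err = np.abs(sim - obs) / obs
    w = np.ones(rel_err.shape[1])
    w[-1] = last_year_weight
    return float((rel_err * w).sum())

# Hypothetical 3 pools (leaf, wood, root) x 4 years, kgC m-2.
obs = np.array([[0.10, 0.15, 0.20, 0.22],
                [0.90, 1.20, 1.50, 1.81],
                [0.40, 0.55, 0.70, 0.86]])
sim = obs * 1.1  # a run with a uniform +10% bias

cost = calibration_cost(sim, obs)             # every entry contributes 0.1
cost_weighted = calibration_cost(sim, obs, last_year_weight=5.0)
```

An optimizer such as Dakota would repeatedly run the model, evaluate this scalar, and adjust the selected eco-physiological parameters to drive it down.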

  13. A two‐point scheme for optimal breast IMRT treatment planning

    PubMed Central

    2013-01-01

    We propose an approach to determining optimal beam weights in breast/chest wall IMRT treatment plans. The goal is to decrease breathing effect and to maximize skin dose if the skin is included in the target or, otherwise, to minimize the skin dose. Two points in the target are utilized to calculate the optimal weights. The optimal plan (i.e., the plan with optimal beam weights) consists of high energy unblocked beams, low energy unblocked beams, and IMRT beams. Six breast and five chest wall cases were retrospectively planned with this scheme in Eclipse, including one breast case where CTV was contoured by the physician. Compared with 3D CRT plans composed of unblocked and field‐in‐field beams, the optimal plans demonstrated comparable or better dose uniformity, homogeneity, and conformity to the target, especially at beam junction when supraclavicular nodes are involved. Compared with nonoptimal plans (i.e., plans with nonoptimized weights), the optimal plans had better dose distributions at shallow depths close to the skin, especially in cases where breathing effect was taken into account. This was verified with experiments using a MapCHECK device attached to a motion simulation table (to mimic motion caused by breathing). PACS number: 87.55 de PMID:24257291
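
A minimal sketch of the two-point idea, assuming each beam group's dose contribution per unit weight at the two calculation points is known: the optimal weights then follow from a 2x2 linear system. The dose matrix below is invented for illustration, not taken from the paper:

```python
import numpy as np

# Dose per unit weight at the two calculation points from two beam groups
# (e.g., high-energy and low-energy open beams); hypothetical values.
D = np.array([[1.8, 0.9],   # point 1: shallow, near the skin
              [1.2, 1.5]])  # point 2: deep
prescription = np.array([50.0, 50.0])  # desired dose (Gy) at both points

# Solve D @ weights = prescription for the beam weights.
weights = np.linalg.solve(D, prescription)
delivered = D @ weights
```

In a clinical plan the per-beam dose contributions come from the treatment planning system, and the resulting weights are checked for positivity and deliverability before use.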

  14. Technology Assessment in Support of the Presidential Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Weisbin, Charles R.; Lincoln, William; Mrozinski, Joe; Hua, Hook; Merida, Sofia; Shelton, Kacie; Adumitroaie, Virgil; Derleth, Jason; Silberg, Robert

    2006-01-01

    This paper discusses the process and results of technology assessment in support of the United States Vision for Space Exploration of the Moon, Mars and Beyond. The paper begins by reviewing the Presidential Vision: a major endeavor in building systems of systems. It discusses why we wish to return to the Moon, and the exploration architecture for getting there safely, sustaining a presence, and safely returning. Next, a methodology for optimal technology investment is proposed with discussion of inputs including a capability hierarchy, mission importance weightings, available resource profiles as a function of time, likelihoods of development success, and an objective function. A temporal optimization formulation is offered, and the investment recommendations presented along with sensitivity analyses. Key questions addressed are sensitivity of budget allocations to cost uncertainties, reduction in available budget levels, and shifting funding within constraints imposed by mission timeline.

  15. Heterologous expression of Trametes versicolor laccase in Saccharomyces cerevisiae.

    PubMed

    Iimura, Yosuke; Sonoki, Tomonori; Habe, Hiroshi

    2018-01-01

    Laccase is used in various industrial fields, and it has been the subject of numerous studies. Trametes versicolor laccase has one of the highest redox potentials among the various forms of this enzyme. In this study, we optimized the expression of laccase in Saccharomyces cerevisiae. Optimizing the culture conditions improved the expression level, and approximately 45 U/L of laccase was functionally secreted into the culture. The recombinant laccase was found to be a heavily hypermannosylated glycoprotein, and the molecular weight of the carbohydrate chain was approximately 60 kDa. The hypermannosylated glycans lowered the substrate affinity, but the optimum pH and thermostability were unchanged. The functional expression system described here will aid in molecular evolutionary studies conducted to generate new variants of laccase. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Maximizing algebraic connectivity in interconnected networks.

    PubMed

    Shakeri, Heman; Albin, Nathan; Darabi Sahneh, Faryad; Poggi-Corradini, Pietro; Scoglio, Caterina

    2016-03-01

    Algebraic connectivity, the second eigenvalue of the Laplacian matrix, is a measure of node and link connectivity on networks. When studying interconnected networks it is useful to consider a multiplex model, where the component networks operate together with interlayer links among them. In order to have a well-connected multilayer structure, it is necessary to optimally design these interlayer links considering realistic constraints. In this work, we solve the problem of finding an optimal weight distribution for one-to-one interlayer links under budget constraint. We show that for the special multiplex configurations with identical layers, the uniform weight distribution is always optimal. On the other hand, when the two layers are arbitrary, increasing the budget reveals the existence of two different regimes. Up to a certain threshold budget, the second eigenvalue of the supra-Laplacian is simple, the optimal weight distribution is uniform, and the Fiedler vector is constant on each layer. Increasing the budget past the threshold, the optimal weight distribution can be nonuniform. The interesting consequence of this result is that there is no need to solve the optimization problem when the available budget is less than the threshold, which can be easily found analytically.
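
The quantities discussed above can be reproduced on a toy multiplex. The sketch below builds the supra-Laplacian of two small four-node layers coupled by one-to-one interlayer links of uniform weight p and computes its second eigenvalue; the layers are illustrative, not taken from the paper:

```python
import numpy as np

def laplacian(A):
    # Graph Laplacian: degree matrix minus adjacency matrix.
    return np.diag(A.sum(axis=1)) - A

def supra_laplacian(L1, L2, p):
    """Supra-Laplacian of a two-layer multiplex whose one-to-one
    interlayer links all carry the same weight p."""
    n = L1.shape[0]
    I = np.eye(n)
    return np.block([[L1 + p * I, -p * I],
                     [-p * I, L2 + p * I]])

# Two illustrative layers on the same 4 nodes: a path and a cycle.
A_path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
A_cycle = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)

def algebraic_connectivity(p):
    Ls = supra_laplacian(laplacian(A_path), laplacian(A_cycle), p)
    return np.sort(np.linalg.eigvalsh(Ls))[1]  # second-smallest eigenvalue

# For small interlayer budgets the connectivity grows like 2p (the Fiedler
# vector is constant on each layer); for large budgets it saturates.
small, large = algebraic_connectivity(0.1), algebraic_connectivity(10.0)
```

The test vector that is +1 on one layer and -1 on the other shows λ2 ≤ 2p for any pair of layers, which is the small-budget regime described in the abstract.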

  17. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, is noisy, and contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy, but shows some artifacts in shaded relief images. (b) Simplicial interpolator - uses the plane through the three points surrounding the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and performs a weighted least squares surface fit.
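
Option (c), convolutional regridding, can be sketched as follows. A plain windowed Gaussian is used here as a stand-in for the prolate-spheroidal-approximating window, and the length scales and sample counts are illustrative:

```python
import numpy as np

def gaussian_regrid(xs, ys, zs, xg, yg, sigma=0.5, radius=1.5):
    """Convolutional regridding: each grid node takes a Gaussian-weighted
    average of the scattered samples that fall within a cutoff radius."""
    out = np.full((len(yg), len(xg)), np.nan)
    for j, gy in enumerate(yg):
        for i, gx in enumerate(xg):
            d2 = (xs - gx) ** 2 + (ys - gy) ** 2
            m = d2 <= radius ** 2          # the window: samples inside the cutoff
            if m.any():
                w = np.exp(-d2[m] / (2 * sigma ** 2))
                out[j, i] = (w * zs[m]).sum() / w.sum()
    return out

# Scattered, slightly noisy samples of the plane z = x + 2y.
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 4.0, 1000)
ys = rng.uniform(0.0, 4.0, 1000)
zs = xs + 2.0 * ys + rng.normal(0.0, 0.01, 1000)

xg = yg = np.linspace(1.0, 3.0, 5)
grid = gaussian_regrid(xs, ys, zs, xg, yg)
```

Gaps larger than the cutoff radius are left as NaN rather than extrapolated, which is the usual behavior wanted for interferometric data with holes.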

  18. Electrospinning bioactive supramolecular polymers from water.

    PubMed

    Tayi, Alok S; Pashuck, E Thomas; Newcomb, Christina J; McClendon, Mark T; Stupp, Samuel I

    2014-04-14

    Electrospinning is a high-throughput, low-cost technique for manufacturing long fibers from solution. Conventionally, this technique is used with covalent polymers with large molecular weights. We report here the electrospinning of functional peptide-based supramolecular polymers from water at very low concentrations (<4 wt %). Molecules with low molecular weights (<1 kDa) could be electrospun because they self-assembled into one-dimensional supramolecular polymers upon solvation and the critical parameters of viscosity, solution conductivity, and surface tension were optimized for this technique. The supramolecular structure of the electrospun fibers could ensure that certain residues, like bioepitopes, are displayed on the surface even after processing. This system provides an opportunity to electrospin bioactive supramolecular materials from water for biomedical applications.

  19. Extraction Optimization, Purification and Physicochemical Properties of Polysaccharides from Gynura medica.

    PubMed

    Li, Fengwei; Gao, Jian; Xue, Feng; Yu, Xiaohong; Shao, Tao

    2016-03-23

    Extraction of polysaccharides from Gynura medica (GMPs) was optimized by response surface methodology (RSM). A central composite design including three parameters, namely extraction temperature (X₁), ratio of water to raw material (X₂) and extraction time (X₃), was used. The best conditions were an extraction temperature of 91.7 °C, an extraction time of 4.06 h and a ratio of water to raw material of 29.1 mL/g. Under the optimized conditions, the yield of GMPs was 5.56%, close to the predicted polysaccharide yield of 5.66%. A fraction named GMP-1 was obtained after isolation and purification by DEAE-52 and Sephadex G-100 gel chromatography, respectively. GMP-1, with a molecular weight of 401 kDa, mainly consisted of galacturonic acid (GalA), xylose (Xyl), and glucose (Glu). Infrared spectroscopy was used to characterize the major functional groups of GMP-1 and the results indicated that it was an acidic polysaccharide. The antioxidant and α-glucosidase inhibitory activities of GMPs and GMP-1 were determined in vitro. The results indicated that GMPs and GMP-1 show potential for use in functional foods or medicines.
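
The RSM step can be sketched generically: fit a full second-order model to responses measured on a central composite design and solve for the stationary point. For brevity the sketch uses two coded factors and an assumed quadratic response, not the paper's three-factor extraction data:

```python
import numpy as np

# Assumed "true" yield surface in coded units (a known quadratic, so the
# fitted optimum can be checked exactly).
def true_yield(x1, x2):
    return 5.5 - 0.3 * (x1 - 0.4) ** 2 - 0.2 * (x2 - 0.6) ** 2

# Two-factor central composite design: factorial, axial, and center points.
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414], [0, 0]])
y = true_yield(pts[:, 0], pts[:, 1])

# Least-squares fit of the full second-order model:
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones(len(pts)), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set the gradient to zero,
# [[2*b3, b5], [b5, 2*b4]] @ x = -[b1, b2].
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])
```

With real data the responses carry noise, so one also checks the model's lack-of-fit and confirms the stationary point is a maximum (negative-definite H) before reporting optimal conditions.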

  20. Optimization of Multiple and Multipurpose Reservoir System Operations by Using Matrix Structure (Case Study: Karun and Dez Reservoir Dams)

    PubMed Central

    Othman, Faridah; Taghieh, Mahmood

    2016-01-01

    Optimal operation of water resources in multiple and multipurpose reservoirs is very complicated. This is because of the number of dams, each dam’s location (series and parallel), conflicts among objectives and the stochastic nature of the inflow of water in the system. In this paper, performance optimization of the system of Karun and Dez reservoir dams has been studied and investigated, with the purposes of hydroelectric energy generation and meeting water demand across 6 dams. On the Karun River, 5 dams have been built in a series arrangement, and the Dez dam has been built parallel to those 5 dams. One of the main achievements of this research is the matrix-based implementation of hydroelectric energy production in MATLAB. The results show that the structure of the objective function for generating hydroelectric energy plays a more important role in the weighting-method algorithm than water supply. Nonetheless, by implementing the ε-constraint method, we can both increase hydroelectric power generation and supply around 85% of agricultural and industrial demands. PMID:27248152

  1. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when multiple metrics are under consideration: a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance.
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
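
The conventional single-metric weighting mentioned above, with weights inversely proportional to each model's error, can be sketched as follows; the model outputs and error scores are invented for illustration:

```python
import numpy as np

def skill_weights(errors):
    """Weights inversely proportional to each model's error, normalized."""
    w = 1.0 / np.asarray(errors, float)
    return w / w.sum()

# Hypothetical: three models' simulated values and their error scores
# (e.g., RMSE against observations over a reference period).
preds = np.array([[1.05, 1.95, 3.05],   # model A: small error
                  [1.40, 2.40, 3.40],   # model B: biased high
                  [1.30, 2.30, 3.30]])  # model C: biased high
errors = np.array([0.05, 0.40, 0.30])
obs = np.array([1.0, 2.0, 3.0])

w = skill_weights(errors)
weighted = w @ preds               # skill-weighted ensemble
arithmetic = preds.mean(axis=0)    # plain multi-model mean
```

When several models share a bias, the skill-weighted combination outperforms the arithmetic mean; the multi-objective extension in the study replaces the single error score with a vector of metrics and searches the resulting trade-off surface.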

  2. The influence of the free space environment on the superlight-weight thermal protection system: conception, methods, and risk analysis

    NASA Astrophysics Data System (ADS)

    Yatsenko, Vitaliy; Falchenko, Iurii; Fedorchuk, Viktor; Petrushynets, Lidiia

    2016-07-01

    This report focuses on the results of the EU project "Superlight-weight thermal protection system for space application (LIGHT-TPS)", the bottom line of which is an analysis of the influence of the free space environment on the superlight-weight thermal protection system (TPS). New methods based on synergetic, physical, and computational models are presented, organized into four approaches. The first is the synergetic approach: the self-controlled synthesis of structures and the creation of self-organizing technologies are considered in connection with the larger problem of creating materials with new functional properties, and synergetics methods and mathematical design are related to current problems of materials science. The second approach describes how optimization methods can be used to determine material microstructures with optimized or targeted properties; this technique enables one to find unexpected microstructures with exotic behavior (e.g., negative thermal expansion coefficients). The third approach concerns the dynamic probabilistic risk analysis of TPS elements with complex damage characterizations, using a physical model of the TPS and predicted levels of ionizing radiation and space weather. The focus here is on the TPS model, mathematical models for dynamic probabilistic risk assessment, and software for modeling and predicting the influence of the free space environment; the probabilistic risk assessment method for the TPS considers both deterministic and stochastic factors. The last approach presents experimental results on the temperature distribution over the surface of a 150 x 150 x 20 mm honeycomb sandwich panel during diffusion welding in vacuum, together with equipment that aligns the temperature fields in the product to produce welded joints of equal strength. More broadly, many tasks in computational materials science can be posed as optimization problems, including the generation of realizations of materials with specified but limited microstructural information: an intriguing inverse problem of both fundamental and practical importance. Computational models based upon the theories of molecular dynamics or quantum mechanics would enable the prediction and modification of fundamental materials properties; such problems are solved using deterministic and stochastic optimization techniques. The main optimization approaches in the frame of the project are discussed, including an optimization approach to alloys for obtaining materials with required properties using modeling techniques and experimental data. This report is supported by the EU project "Superlight-weight thermal protection system for space application (LIGHT-TPS)".

  3. A heuristic approach to optimization of structural topology including self-weight

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization of structures under a design-dependent self-weight load is investigated in this paper. The problem deserves attention because of its significant importance in engineering practice, especially now that topology optimization is more often applied to the design of large engineering structures, for example, bridges or the carrying systems of tall buildings. It is worth noting that well-known approaches of topology optimization which have been successfully applied to structures under fixed loads cannot be directly adapted to the case of design-dependent loads, so topology generation can be a challenge also for numerical algorithms. The paper presents the application of a simple but efficient non-gradient method to topology optimization of elastic structures under self-weight loading. The algorithm is based on the Cellular Automata concept, the application of which can produce effective solutions with low computational cost.

  4. Replication fidelity improvement of PMMA microlens array based on weight evaluation and optimization

    NASA Astrophysics Data System (ADS)

    Jiang, Bing-yan; Shen, Long-jiang; Peng, Hua-jiang; Yin, Xiang-lin

    2007-12-01

    High replication fidelity is a prerequisite of high-quality plastic microlens arrays in injection molding. Until now, however, there has been no economical and practical method to evaluate and improve replication fidelity. Based on part weight evaluation and optimization, this paper presents a new method for improving replication fidelity. Firstly, a simplified analysis model of PMMA micro column arrays (5×16) with 200 μm diameter was set up. Then, the Flow (3D) module of Moldflow MPI 6.0, based on the Navier-Stokes equations, was used to calculate the weight of the micro column arrays in injection molding. The effects of processing parameters (melt temperature, mold temperature, injection time, packing pressure and packing time) on the part weight were investigated in the simulations. The simulation results showed that the mold temperature and the injection time have important effects on the filling of the micro columns; the optimal mold temperature and injection time for better replication fidelity could be determined from the curves of mold temperature vs. part weight and injection time vs. part weight. Finally, the effects of processing parameters on the part weight of the micro column arrays were studied experimentally. The experimental results showed that increasing the melt temperature and mold temperature makes the packing pressure transfer to the micro cavity more effectively through the runner system, increasing the part weight. Observations with the image measuring apparatus showed that the higher the part weight, the better the filling of the microstructures. In conclusion, part weight can serve as a first-order measure of the replication fidelity of micro-feature structured parts, providing an economical and practical way to improve the replication fidelity of microlens arrays through weight evaluation and optimization.

  5. Geometrical optics and optimal transport.

    PubMed

    Rubinstein, Jacob; Wolansky, Gershon

    2017-10-01

    The Fermat principle is generalized to a system of rays. It is shown that all the ray mappings that are compatible with two given intensities of a monochromatic wave, measured at two planes, are stationary points of a canonical functional, which is the weighted average of the actions of all the rays. It is further shown that there exist at least two stationary points for this functional, implying that in the geometrical optics regime the phase from intensity problem has inherently more than one solution. The caustic structures of all the possible ray mappings are analyzed. A number of simulations illustrate the theoretical considerations.

  6. Scientific Benchmarks for Guiding Macromolecular Energy Function Improvement

    PubMed Central

    Leaver-Fay, Andrew; O’Meara, Matthew J.; Tyka, Mike; Jacak, Ron; Song, Yifan; Kellogg, Elizabeth H.; Thompson, James; Davis, Ian W.; Pache, Roland A.; Lyskov, Sergey; Gray, Jeffrey J.; Kortemme, Tanja; Richardson, Jane S.; Havranek, James J.; Snoeyink, Jack; Baker, David; Kuhlman, Brian

    2013-01-01

    Accurate energy functions are critical to macromolecular modeling and design. We describe new tools for identifying inaccuracies in energy functions and guiding their improvement, and illustrate the application of these tools to improvement of the Rosetta energy function. The feature analysis tool identifies discrepancies between structures deposited in the PDB and low energy structures generated by Rosetta; these likely arise from inaccuracies in the energy function. The optE tool optimizes the weights on the different components of the energy function by maximizing the recapitulation of a wide range of experimental observations. We use the tools to examine three proposed modifications to the Rosetta energy function: improving the unfolded state energy model (reference energies), using bicubic spline interpolation to generate knowledge-based torsional potentials, and incorporating the recently developed Dunbrack 2010 rotamer library (Shapovalov and Dunbrack, 2011). PMID:23422428

  7. Optimal transport and the placenta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Simon; Xia, Qinglan; Salafia, Carolym

    2010-01-01

    The goal of this paper is to investigate the expected effects of (i) placental size, (ii) placental shape and (iii) the position of insertion of the umbilical cord on the work done by the foetus heart in pumping blood across the placenta. We use optimal transport theory and modeling to quantify the expected effects of these factors. Total transport cost and the shape factor contribution to cost are given by the optimal transport model. Total placental transport cost is highly correlated with birth weight, placenta weight, FPR and the metabolic scaling factor beta. The shape factor is also highly correlated with birth weight, and after adjustment for placental weight, is highly correlated with the metabolic scaling factor beta.

  8. Variational optimization analysis of temperature and moisture advection in a severe storm environment

    NASA Technical Reports Server (NTRS)

    Mcfarland, M. J.

    1975-01-01

    Horizontal wind components, potential temperature, and mixing ratio fields associated with a severe storm environment in the south central U.S. were analyzed from synoptic upper air observations with a nonhomogeneous, anisotropic weighting function. Each data field was filtered with variational optimization analysis techniques. Variational optimization analysis was also performed on the vertical motion field and was used to produce advective forecasts of the potential temperature and mixing ratio fields. Results show that the dry intrusion is characterized by warm air, the advection of which produces a well-defined upward motion pattern. A corresponding downward motion pattern comprising a deep vertical circulation in the warm air sector of the low pressure system was detected. The axes of maximum dry and warm advection were also found to align with the axis of the tornado-producing squall line.
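
A nonhomogeneous, anisotropic weighting function of the kind used above can be sketched as an elongated Gaussian whose axis of influence is rotated to follow a preferred direction (for example, the local wind). The length scales and angle below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def anisotropic_weight(dx, dy, L_along=2.0, L_across=1.0, angle=0.0):
    """Anisotropic Gaussian weight: influence is stretched along one axis
    (rotated by `angle`) and compressed across it, so observations along
    a preferred direction count for more at the same distance."""
    c, s = np.cos(angle), np.sin(angle)
    a = c * dx + s * dy       # separation along the preferred axis
    b = -s * dx + c * dy      # separation across it
    return np.exp(-(a / L_along) ** 2 - (b / L_across) ** 2)

# A weighted analysis value at a grid point is then a normalized sum over
# nearby observations obs_k at offsets (dx_k, dy_k):
dx = np.array([2.0, 0.0, -1.0])
dy = np.array([0.0, 2.0, 0.5])
obs = np.array([280.0, 284.0, 282.0])
w = anisotropic_weight(dx, dy)
analysis = (w * obs).sum() / w.sum()
```

Making L_along, L_across, and angle vary with position gives the nonhomogeneous character: data-dense or strongly sheared regions can use different influence regions than quiet ones.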

  9. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772

  10. ACT Payload Shroud Structural Concept Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Zalewski, Bart B.; Bednarcyk, Brett A.

    2010-01-01

    Aerospace structural applications demand a weight efficient design to perform in a cost effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat stiffened/corrugated panel and a fiber reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs versus multiple strength and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. Iteration of analysis/optimization is performed to ensure a converged design. Sizing results for the hat stiffened panel concept and the fiber reinforced foam sandwich concept are presented.

  11. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term of the steepest descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε²·Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε²·Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
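
A minimal sketch of the Parallel Comparison style of update on synthetic data, assuming a known common-mode gain of 0.8 so convergence can be checked; the step sizes are illustrative choices, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic primary signal: a slow "cardiac" waveform plus common-mode
# noise that is a scaled copy of a measured reference channel. The
# optimal subtraction weight is 0.8 by construction.
n = 4000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 200)
reference = rng.normal(0.0, 1.0, n)
primary = signal + 0.8 * reference

# Parallel Comparison idea: probe the squared error at w + dw and w - dw
# in parallel at each sample, then step down the finite-difference
# gradient estimate (no variable step in the denominator).
w, mu, dw = 0.0, 0.02, 0.01
history = []
for k in range(n):
    e_plus = (primary[k] - (w + dw) * reference[k]) ** 2
    e_minus = (primary[k] - (w - dw) * reference[k]) ** 2
    w -= mu * (e_plus - e_minus) / (2 * dw)
    history.append(w)

# Average the tail of the weight trajectory to smooth adaptation noise.
w_hat = float(np.mean(history[-1000:]))
```

Because the squared error is quadratic in w, the central difference recovers the exact instantaneous gradient, and the weight trajectory fluctuates around the optimal 0.8 with a spread set by the step size.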

  12. Piston flow in a two-dimensional channel

    NASA Astrophysics Data System (ADS)

    Katopodes, Fotini V.; Davis, A. M. J.; Stone, H. A.

    2000-05-01

    A solution using biorthogonal eigenfunctions is presented for viscous flow caused by a piston in a two-dimensional channel. The resulting infinite set of linear equations is solved using Spence's optimal weighting function method [IMA J. Appl. Math. 30, 107 (1983)]. The solution is compared to that with a shear-free piston surface; in the latter configuration the fluid more rapidly approaches the Poiseuille flow profile established away from the face of the piston.

  13. Energy weighting improves dose efficiency in clinical practice: implementation on a spectral photon-counting mammography system

    PubMed Central

    Berglund, Johan; Johansson, Henrik; Lundqvist, Mats; Cederström, Björn; Fredenberg, Erik

    2014-01-01

    In x-ray imaging, contrast information content varies with photon energy. It is, therefore, possible to improve image quality by weighting photons according to energy. We have implemented and evaluated so-called energy weighting on a commercially available spectral photon-counting mammography system. The technique was evaluated using computer simulations, phantom experiments, and analysis of screening mammograms. The contrast-to-noise ratio (CNR) benefit of energy weighting for a number of relevant target-background combinations measured by the three methods fell in the range of 2.2 to 5.2% when using optimal weight factors. This translates to a potential dose reduction at constant CNR in the range of 4.5 to 11%. We expect the choice of weight factor in practical implementations to be straightforward because (1) the CNR improvement was not very sensitive to the weight, (2) the optimal weight was similar for all investigated target-background combinations, (3) aluminum/PMMA phantoms were found to represent clinically relevant tasks well, and (4) the optimal weight could be calculated directly from pixel values in phantom images. Reasonable agreement was found between the simulations and phantom measurements. Manual measurements on microcalcifications and automatic image analysis confirmed that the CNR improvement was detectable in energy-weighted screening mammograms. PMID:26158045
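    The idea that an optimal weight exists, and that the CNR curve is flat around it, can be illustrated with a toy two-bin counting model. All mean counts below are hypothetical; the real system's spectra and weights are not reproduced here:

```python
import numpy as np

# Hypothetical mean counts per pixel (Poisson statistics) in two energy bins:
bg = np.array([100.0, 80.0])   # background: low-energy bin, high-energy bin
tg = np.array([90.0, 76.0])    # target attenuates the low-energy bin more

def cnr(w):
    """CNR of the weighted image I = I_low + w * I_high."""
    weights = np.array([1.0, w])
    contrast = abs(weights @ (bg - tg))
    noise = np.sqrt(weights**2 @ bg)   # variances add for independent bins
    return contrast / noise

ws = np.linspace(0.0, 3.0, 301)
best = ws[np.argmax([cnr(w) for w in ws])]
print(best, cnr(best) / cnr(1.0))      # optimal weight and CNR gain vs w = 1
```

    With these toy numbers the gain over equal weighting is a few percent and the curve is flat near the optimum, qualitatively consistent with the insensitivity the authors report.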

  14. Optimizing Barrier Removal to Restore Connectivity in Utah's Weber Basin

    NASA Astrophysics Data System (ADS)

    Kraft, M.; Null, S. E.

    2016-12-01

    Instream barriers, such as dams, culverts, and diversions, are economically important for water supply but negatively affect river ecosystems and disrupt hydrologic processes. Removal of uneconomical and aging in-stream barriers is increasingly used to restore river connectivity and improve habitat. Most past barrier removal projects focused on individual barriers using a score-and-rank technique, ignoring the cumulative change from multiple, spatially-connected barrier removals. Similarly, most water supply models optimize either human water use or aquatic connectivity, failing to represent human and environmental benefits holistically. In this study, a dual-objective optimization model identified in-stream barriers that impede aquatic habitat connectivity for trout, using streamflow, temperature, and channel gradient as indicators of aquatic habitat suitability. Water scarcity costs are minimized using agricultural and urban economic penalty functions to incorporate water supply benefits, and a budget constraint monetizes the costs of removing small barriers like culverts and road crossings. The optimization model is applied to a case study in Utah's Weber basin to prioritize removal of the most environmentally harmful barriers while maintaining human water uses. Tradeoffs between connected quality-weighted habitat for Bonneville cutthroat trout and economic water uses are quantified and visualized graphically. Modeled results include a spectrum of barrier removal alternatives, based on budget and quality-weighted reconnected habitat, that can be communicated to local stakeholders. This research will help prioritize barrier removals and future restoration decisions. The modeling approach expands current barrier removal optimization methods by explicitly including economic and environmental water uses.

  15. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex, while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and use inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form which makes it possible to apply the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.

  16. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions in the randomization method: randomization is performed at the stage of projecting a subgradient of the function being optimized onto the unit simplex, rather than at the stage of computing a subgradient, as is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and to propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
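    The connection to weighting expert decisions can be illustrated with the standard (non-randomized) entropic mirror-descent update on the unit simplex, i.e. multiplicative weights over per-round losses. The losses below are hypothetical, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, T, eta = 3, 500, 0.1
w = np.ones(n_experts) / n_experts        # start at the simplex center
for t in range(T):
    # hypothetical per-round losses; expert 0 is consistently the best
    losses = rng.random(n_experts) * np.array([0.2, 0.5, 0.9])
    w *= np.exp(-eta * losses)            # entropic mirror-descent step
    w /= w.sum()                          # renormalize onto the simplex
print(np.argmax(w))  # → 0
```

    The weight vector concentrates on the lowest-loss expert; the paper's randomized variant instead samples a component at the simplex-projection stage.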

  17. Hologram interferometry in automotive component vibration testing

    NASA Astrophysics Data System (ADS)

    Brown, Gordon M.; Forbes, Jamie W.; Marchi, Mitchell M.; Wales, Raymond R.

    1993-02-01

    An ever-increasing variety of automotive component vibration testing is being pursued at Ford Motor Company, U.S.A. The driving force for the use of hologram interferometry in these tests is the continuing need to design component structures to meet more stringent functional performance criteria. Parameters such as noise and vibration, sound quality, and reliability must be optimized for the lightest-weight component possible. Continually increasing customer expectations and regulatory pressures on fuel economy and safety mandate that vehicles be built from highly optimized components. This paper includes applications of holographic interferometry for powertrain support structure tuning, body panel noise reduction, wiper system noise and vibration path analysis, and other vehicle component studies.

  18. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events, i.e., the extreme losses that occur at the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement: the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi-Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion in the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that covers approximately all regions of the WDS. Optimal values of the CVaR of affected population and detection time, as well as the probability of undetected events, for the best solution are 17,055 persons, 31 min, and 0.045%, respectively. The obtained results for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme losses in a WDS.
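    CVaR itself is straightforward to estimate from sampled losses. A generic sketch with a hypothetical heavy-tailed loss distribution (not the paper's affected-population model):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: mean loss over the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)      # Value at Risk at level alpha
    return losses[losses >= var].mean()   # average of the tail beyond VaR

rng = np.random.default_rng(0)
# hypothetical heavy-tailed losses (e.g. affected population per event)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=100_000)
print(cvar(losses) > losses.mean())  # → True: CVaR captures extreme events
```

    Unlike the plain mean, CVaR focuses the objective on the tail, which is why it suits low-probability, high-consequence contamination scenarios.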

  19. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead-zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of the output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method, which offers the desired tradeoff among several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent integrator wind-up and to compensate for the dead-zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee robust stability with respect to the process parametric variations and controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems, focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. The tuning method is proposed as an efficient approach to the design of resilient control systems, and the tuning method and the PI controllers are experimentally validated through the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
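    The first term of such objective functions, the integral of time multiplied by absolute error (ITAE), is easy to compute from a sampled error signal. A sketch with a hypothetical exponentially decaying error (not the paper's servo responses):

```python
import numpy as np

def itae(t, e):
    """Integral of time multiplied by absolute error, via a Riemann sum."""
    dt = t[1] - t[0]                     # assumes uniform sampling
    return float(np.sum(t * np.abs(e)) * dt)

t = np.linspace(0.0, 10.0, 10001)
e = np.exp(-t)                           # hypothetical closed-loop error decay
print(itae(t, e))                        # analytically 1 - 11*exp(-10), ~0.9995
```

    The time weighting penalizes errors that persist late in the response, which is why ITAE-based tuning favors fast, well-damped settling.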

  20. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to the design of CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict the local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond those achievable with conventional acquisition and reconstruction.

  1. Filter Function for Wavefront Sensing Over a Field of View

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When data from wavefront sensing, and more specifically image-based phase retrieval, are used to control the shape of a deformable mirror or other wavefront-correcting optic, the control law obtained by use of the filter function gives more balanced optical performance over the field of view than does a wavefront-control law based on a wavefront estimate from a single point in the field of view.

  2. Design optimization of high-speed proprotor aircraft

    NASA Technical Reports Server (NTRS)

    Schleicher, David R.; Phillips, James D.; Carbajal, Kevin B.

    1993-01-01

    NASA's high-speed rotorcraft (HSRC) studies have the objective of investigating technology for vehicles that have both low downwash velocities and a forward flight speed capability of up to 450 knots. This paper investigates a tilt rotor, a tilt wing, and a folding tilt rotor designed for a civil transport mission. Baseline aircraft models using current technology are developed for each configuration using a vertical/short takeoff and landing (V/STOL) aircraft design synthesis computer program to generate converged vehicle designs. Sensitivity studies and numerical optimization are used to illustrate each configuration's key design tradeoffs and constraints. Minimization of gross takeoff weight is used as the optimization objective function. Several advanced technologies are chosen, and their relative impact on future configuration development is discussed. Finally, the impact of maximum cruise speed on vehicle figures of merit (gross weight, productivity, and direct operating cost) is analyzed. The three most important conclusions from the study are: (1) payload ratios for these aircraft will be commensurate with those of current fixed-wing commuter aircraft; (2) future tilt rotors and tilt wings will be significantly lighter, more productive, and cheaper than competing folding tilt rotors; and (3) the most promising technologies are an advanced-technology proprotor for both the tilt rotor and tilt wing, and advanced structural materials for the folding tilt rotor.

  3. Dual-Energy Contrast-Enhanced Breast Tomosynthesis: Optimization of Beam Quality for Dose and Image Quality

    PubMed Central

    Samei, Ehsan; Saunders, Robert S.

    2014-01-01

    Dual-energy contrast-enhanced breast tomosynthesis is a promising technique for obtaining three-dimensional functional information from the breast with high resolution and speed. To optimize this new method, this study searched for the beam quality that maximized image quality in terms of mass detection performance. A digital tomosynthesis system was modeled using a fast ray-tracing algorithm, which created simulated projection images by tracking photons through a voxelized anatomical breast phantom containing iodinated lesions. The single-energy images were combined into dual-energy images through a weighted log-subtraction process. The weighting factor was optimized to minimize anatomical noise, while the dose distribution was chosen to minimize quantum noise. The dual-energy images were analyzed for the signal difference to noise ratio (SdNR) of iodinated masses. The fast ray-tracing explored 523,776 dual-energy combinations to identify the one yielding the optimum mass SdNR. The ray-tracing results were verified using a Monte Carlo model of a breast tomosynthesis system with a selenium-based flat-panel detector. The projection images from our voxelized breast phantom were obtained at a constant total glandular dose, combined using weighted log subtraction, and reconstructed using commercial reconstruction software. The lesion SdNR was measured in the central reconstructed slice. The SdNR performance varied markedly across the kVp and filtration space. Ray-tracing results indicated that the mass SdNR was maximized with a high-energy tungsten beam at 49 kVp with 92.5 μm of copper filtration and a low-energy tungsten beam at 49 kVp with 95 μm of tin filtration, a result consistent with the Monte Carlo findings. This technique led to a mass SdNR of 0.92 ± 0.03 in the projections and 3.68 ± 0.19 in the reconstructed slices, markedly higher than for non-optimized techniques. Our findings indicate that dual-energy breast tomosynthesis can be performed optimally at 49 kVp, with copper and tin filters for the high- and low-energy beams, respectively, and reconstruction following weighted subtraction. The optimum technique provides the best visibility of iodine against a structured breast background in dual-energy contrast-enhanced breast tomosynthesis. PMID:21908902
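    The weighted log-subtraction step can be sketched with hypothetical attenuation coefficients (values invented for illustration): the weight that cancels tissue contrast leaves the iodine signal intact because iodine's attenuation behaves differently across its K-edge.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm); values invented:
# rows = [adipose, glandular, iodine], cols = [low-energy, high-energy beam]
mu = np.array([[0.46, 0.27],
               [0.56, 0.32],
               [8.00, 12.0]])   # iodine attenuation rises across its K-edge

# Log signals are linear in thickness (ln I = const - sum of mu_i * t_i), so a
# weight w cancels glandular/adipose contrast in DE = ln I_hi - w * ln I_lo:
w = (mu[1, 1] - mu[0, 1]) / (mu[1, 0] - mu[0, 0])

tissue_residual = (mu[1, 1] - mu[0, 1]) - w * (mu[1, 0] - mu[0, 0])  # ~ 0
iodine_signal = mu[2, 1] - w * mu[2, 0]   # per cm of iodinated lesion, nonzero
print(w, tissue_residual, iodine_signal)
```

    Suppressing the tissue (anatomical) background this way is what the abstract means by choosing the weighting factor to minimize anatomical noise.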

  4. Variables controlling the recovery of ignitable liquid residues from simulated fire debris samples using solid-phase microextraction/gas chromatography

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Almirall, Jose R.; Wang, Jing

    1999-02-01

    In this paper, we present data comparing a variety of different conditions for extracting ignitable liquid residues from simulated fire debris samples in order to optimize the conditions for solid-phase microextraction (SPME). A simulated accelerant mixture containing 30 components, drawn from light, medium, and heavy petroleum distillates, was used to study the important variables controlling SPME recoveries. SPME is an inexpensive, rapid, and sensitive method for the analysis of volatile residues from the headspace over solid debris samples in a container, or directly from aqueous samples, followed by GC. The relative effects of controllable variables, including fiber chemistry, adsorption and desorption temperature, extraction time, and desorption time, have been optimized. The addition of water and ethanol to simulated debris samples in a can was shown to increase sensitivity when using headspace SPME extraction. The relative enhancement of sensitivity has been compared as a function of hydrocarbon chain length, sample temperature, time, and added ethanol concentration. The technique has also been optimized for the extraction of accelerants directly from water added to the fire debris samples. The optimum adsorption time for the low molecular weight components was found to be approximately 25 minutes. The high molecular weight components were found at higher concentrations the longer the fiber was exposed to the headspace (up to 1 hr), and they were also found at higher concentrations in the headspace when water and/or ethanol was added to the debris.

  5. Energy management of three-dimensional minimum-time intercept. [for aircraft flight optimization

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Cliff, E. M.; Visser, H. G.

    1985-01-01

    A real-time computer algorithm to control and optimize aircraft flight profiles is described and applied to a three-dimensional minimum-time intercept mission. The proposed scheme has roots in two well known techniques: singular perturbations and neighboring-optimal guidance. Use of singular-perturbation ideas is made in terms of the assumed trajectory-family structure. A heading/energy family of prestored point-mass-model state-Euler solutions is used as the baseline in this scheme. The next step is to generate a near-optimal guidance law that will transfer the aircraft to the vicinity of this reference family. The control commands fed to the autopilot (bank angle and load factor) consist of the reference controls plus correction terms which are linear combinations of the altitude and path-angle deviations from reference values, weighted by a set of precalculated gains. In this respect the proposed scheme resembles neighboring-optimal guidance. However, in contrast to the neighboring-optimal guidance scheme, the reference control and state variables as well as the feedback gains are stored as functions of energy and heading in the present approach. Some numerical results comparing open-loop optimal and approximate feedback solutions are presented.

  6. Optimized method for manufacturing large aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view, and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there is a pressing requirement for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has drawn much attention from researchers. Aiming at the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward, in which subsurface damage (SSD), full-aperture errors, and the full band of frequency errors are all under control. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control across the full frequency band, low-frequency errors can be corrected with the optimized material removal function, while medium-high-frequency errors are handled using a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

  7. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but not a promising solution from a resilience point of view. Resilience of the water distribution network is a popular surrogate measure of the network's ability to withstand failure scenarios. To improve the resilience of the network, pipe network optimization can be performed with two objectives: minimizing the capital cost and maximizing a resilience measure of the configuration. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by a differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for the optimal design of the network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resilience indices.
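    A generic way to normalize objectives with distinct metrics into a single weighted objective, in the spirit the abstract describes, might look like the following (the bounds, designs, and weight alpha are hypothetical, not the paper's data):

```python
def combined_objective(cost, resilience, cost_bounds, res_bounds, alpha=0.5):
    """Single weighted objective (0 = best) from two metrics with distinct units.

    cost is minimized and resilience is maximized; both are rescaled to [0, 1]
    using bounds (e.g. from single-objective runs), then blended with alpha.
    """
    c_min, c_max = cost_bounds
    r_min, r_max = res_bounds
    c_norm = (cost - c_min) / (c_max - c_min)        # 0 at the cheapest design
    r_norm = (r_max - resilience) / (r_max - r_min)  # 0 at the most resilient
    return alpha * c_norm + (1 - alpha) * r_norm

# hypothetical candidate designs: (capital cost in $M, resilience index)
designs = [(6.1, 0.35), (7.4, 0.52), (9.8, 0.58)]
scores = [combined_objective(c, r, (6.1, 9.8), (0.35, 0.58)) for c, r in designs]
print(min(range(3), key=scores.__getitem__))  # index of the preferred compromise
```

    Normalization prevents the objective with the larger numeric range (here, cost in $M) from dominating the weighted sum.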

  8. Blocking reduction of Landsat Thematic Mapper JPEG browse images using optimal PSNR estimated spectra adaptive postfiltering

    NASA Technical Reports Server (NTRS)

    Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.

    1994-01-01

    Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1, and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block spectral information already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest-ascent search. Convergence is obtained in a few iterations for integer parameters, and the optimal log-sigmoid parameters are transmitted to the decoder as a negligible byte of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions and is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. Objective PSNR improvements of 0.4 to 0.8 dB are shown, together with the corresponding optimal PSNR adaptive postfiltered images.

  9. Optimal design of isotope labeling experiments.

    PubMed

    Yang, Hong; Mandy, Dominic E; Libourel, Igor G L

    2014-01-01

    Stable isotope labeling experiments (ILE) constitute a powerful methodology for estimating metabolic fluxes. An optimal label design for such an experiment is necessary to maximize the precision with which fluxes can be determined. But often, precision gained in the determination of one flux comes at the expense of the precision of other fluxes, and an appropriate label design therefore foremost depends on the question the investigator wants to address. One could liken ILE to shadows that metabolism casts on products, and optimal label design to the placement of the lamp: creating clear shadows for some parts of metabolism and obscuring others. An optimal isotope label design is influenced by: (1) the network structure; (2) the true flux values; (3) the available label measurements; and (4) commercially available substrates. The first two aspects are dictated by nature and constrain any optimal design; the second two are the design parameters. To create an optimal label design, an explicit optimization criterion needs to be formulated. This is usually a property of the flux covariance matrix, which can be augmented by weighting the label substrate cost. An optimal design is found by using such a criterion as an objective function for an optimizer. This chapter uses a simple elementary metabolite units (EMU) representation of the TCA cycle to illustrate the process of experimental design of isotope-labeled substrates.
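    One common choice of criterion on the flux covariance matrix is D-optimality. A hypothetical sketch comparing two candidate label designs (the sensitivity matrices are invented for illustration, not from the chapter's EMU model):

```python
import numpy as np

def d_criterion(J, sigma=1.0):
    """D-optimality: determinant of the flux covariance implied by a design.

    J maps flux changes to label-measurement changes; a smaller determinant
    means a tighter joint confidence region for the fluxes.
    """
    fisher = J.T @ J / sigma**2
    return np.linalg.det(np.linalg.inv(fisher))

# Two hypothetical label designs (sensitivity matrices invented):
J_a = np.array([[1.0, 0.2],
                [0.1, 0.9]])   # measurements separate the two fluxes well
J_b = np.array([[1.0, 0.9],
                [0.9, 1.0]])   # nearly collinear rows: fluxes confounded
print(d_criterion(J_a) < d_criterion(J_b))  # → True
```

    An optimizer would search over substrate choices (and their cost weighting) to minimize such a criterion.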

  10. Enhanced index tracking modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. S.; Hj. Jaaman, Saiful Hafizah; Ismail, Hamizun bin

    2013-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It is a dual-objective optimization problem: a trade-off between maximizing the mean return and minimizing the risk. Enhanced index tracking aims to generate excess return over that achieved by the index, without purchasing all of the stocks that make up the index, by establishing an optimal portfolio. The objective of this study is to determine the optimal portfolio composition and performance by using a weighted model in enhanced index tracking, where the weighted model focuses on the trade-off between the excess return and the risk. The results of this study show that the optimal portfolio for the weighted model is able to outperform the Malaysian market index, the Kuala Lumpur Composite Index, with a higher mean return and a lower risk, without purchasing all the stocks in the market index.
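    A weighted model of this kind can be sketched as a single objective blending mean excess return against tracking risk. The return data and crude simplex grid search below are toys for illustration, not the study's model or the KLCI data:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy weekly returns: a market index and 3 stocks correlated with it
index = rng.normal(0.002, 0.02, 200)
stocks = index[:, None] + rng.normal(0.0005, 0.01, (200, 3))

def objective(x, lam=0.5):
    """Weighted model: trade off mean excess return against tracking risk."""
    excess = stocks @ x - index
    return lam * excess.mean() - (1 - lam) * excess.std()

# crude grid search over the simplex of long-only portfolio weights
best, best_x = -np.inf, None
grid = np.linspace(0.0, 1.0, 21)
for a in grid:
    for b in grid:
        if a + b <= 1.0:
            x = np.array([a, b, 1.0 - a - b])
            if (v := objective(x)) > best:
                best, best_x = v, x
print(best_x, best)
```

    Varying lam traces out the trade-off between excess return and tracking risk that the weighted model is built around.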

  11. Some optimal considerations in attitude control systems. [evaluation of value of relative weighting between time and fuel for relay control law

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

    The conventional six-engine reaction control jet relay attitude control law with deadband is shown to be a good linear approximation to a weighted time-fuel optimal control law. Techniques for evaluating the relative weighting between time and fuel for a particular relay control law are studied, along with techniques to interrelate other parameters of the two control laws. Vehicle attitude control laws employing control moment gyros are then investigated. Steering laws obtained from the expression for the reaction torque of the gyro configuration are compared to a total optimal attitude control law derived from optimal linear regulator theory. This total optimal attitude control law has computational disadvantages in the solving of the matrix Riccati equation. Several computational algorithms for solving the matrix Riccati equation are investigated with respect to accuracy, computational storage requirements, and computational speed.

  12. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents’ positions in the search process. In this way, the best trajectories found are retained and the search starts from them, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the search process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem, as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the implementation of the proposed algorithm are compared with some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
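In standard GSA the gravitational constant decays as G(t) = G0 * exp(-alpha * t / T); the ECGSA modification makes the damping coefficient alpha dynamic. The linear ramp below is purely an illustrative assumption of how a growing alpha accelerates late-stage convergence, not the paper's actual schedule.

```python
import math

def G(t, T, G0=100.0, a_start=10.0, a_end=30.0):
    # Dynamic damping: alpha grows linearly over the run (illustrative values)
    alpha = a_start + (a_end - a_start) * t / T
    return G0 * math.exp(-alpha * t / T)

# The gravitational constant shrinks monotonically, fastest near the end,
# which damps agent accelerations and promotes convergence.
print(G(0, 100), G(50, 100), G(100, 100))
```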

  13. Plasma-mediated grafting of poly(ethylene glycol) on polyamide and polyester surfaces and evaluation of antifouling ability of modified substrates.

    PubMed

    Dong, Baiyan; Jiang, Hongquan; Manolache, Sorin; Wong, Amy C Lee; Denes, Ferencz S

    2007-06-19

    A simple cold plasma technique was developed to functionalize the surfaces of polyamide (PA) and polyester (PET) for the grafting of polyethylene glycol (PEG) with the aim of reducing biofilm formation. The surfaces of PA and PET were treated with silicon tetrachloride (SiCl4) plasma, and PEG was grafted onto plasma-functionalized substrates (PA-PEG, PET-PEG). Different molecular weights of PEG and grafting times were tested to obtain optimal surface coverage by PEG as monitored by electron spectroscopy for chemical analysis (ESCA). The presence of a predominant C-O peak on the PEG-modified substrates indicated that the grafting was successful. Data from hydroxyl group derivatization and water contact angle measurement also indicated the presence of PEG after grafting. The PEG-grafted PA and PET under optimal conditions had similar chemical composition and hydrophilicity; however, different morphology changes were observed after grafting. Both PA-PEG and PET-PEG surfaces developed under optimal plasma conditions showed about 96% reduction in biofilm formation by Listeria monocytogenes compared with that of the corresponding unmodified substrates. This plasma functionalization method provided an efficient way to graft PEG onto PA and PET surfaces. Because of the high reactivity of Si-Cl species, this method could potentially be applied to other polymeric materials.

  14. Implementation of a framework for multi-species, multi-objective adaptive management in Delaware Bay

    USGS Publications Warehouse

    McGowan, Conor P.; Smith, David R.; Nichols, James D.; Lyons, James E.; Sweka, John A.; Kalasz, Kevin; Niles, Lawrence J.; Wong, Richard; Brust, Jeffrey; Davis, Michelle C.; Spear, Braddock

    2015-01-01

    Decision analytic approaches have been widely recommended as well suited to solving disputed and ecologically complex natural resource management problems with multiple objectives and high uncertainty. However, the difference between theory and practice is substantial, as there are very few actual resource management programs that represent formal applications of decision analysis. We applied the process of structured decision making to Atlantic horseshoe crab harvest decisions in the Delaware Bay region to develop a multispecies adaptive management (AM) plan, which is currently being implemented. Horseshoe crab harvest has been a controversial management issue since the late 1990s. A largely unregulated horseshoe crab harvest caused a decline in crab spawning abundance. That decline coincided with a major decline in migratory shorebird populations that consume horseshoe crab eggs on the sandy beaches of Delaware Bay during spring migration. Our approach incorporated multiple stakeholders, including fishery and shorebird conservation advocates, to account for diverse management objectives and varied opinions on ecosystem function. Through consensus building, we devised an objective statement and quantitative objective function to evaluate alternative crab harvest policies. We developed a set of competing ecological models accounting for the leading hypotheses on the interaction between shorebirds and horseshoe crabs. The models were initially weighted based on stakeholder confidence in these hypotheses, but weights will be adjusted based on monitoring and Bayesian model weight updating. These models were used together to predict the effects of management actions on the crab and shorebird populations. Finally, we used a dynamic optimization routine to identify the state dependent optimal harvest policy for horseshoe crabs, given the possible actions, the stated objectives and our competing hypotheses about system function. 
The AM plan was reviewed, accepted, and implemented by the Atlantic States Marine Fisheries Commission in 2012 and 2013. While disagreements among stakeholders persist, structured decision making enabled unprecedented progress toward a transparent and consensus-driven management plan for crabs and shorebirds in Delaware Bay.
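The Bayesian model-weight updating step mentioned above can be sketched as follows; the initial weights and likelihood values are hypothetical, not those of the Delaware Bay models.

```python
def update_weights(weights, likelihoods):
    """Multiply each model's weight by the likelihood of the new monitoring
    data under that model, then renormalize (Bayes' rule over models)."""
    posterior = [w * L for w, L in zip(weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

weights = [0.5, 0.3, 0.2]          # initial stakeholder confidence in each model
likelihoods = [0.10, 0.40, 0.25]   # illustrative fit to one year's monitoring data
weights = update_weights(weights, likelihoods)
print([round(w, 3) for w in weights])   # the best-fitting model gains weight
```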

  15. Megasupramolecules for safer, cleaner fuel by end association of long telechelic polymers

    NASA Astrophysics Data System (ADS)

    Wei, Ming-Hsin; Li, Boyu; David, R. L. Ameri; Jones, Simon C.; Sarohia, Virendra; Schmitigal, Joel A.; Kornfield, Julia A.

    2015-10-01

    We used statistical mechanics to design polymers that defy conventional wisdom by self-assembling into “megasupramolecules” (≥5000 kg/mol) at low concentration (≤0.3 weight percent). Theoretical treatment of the distribution of individual subunits—end-functional polymers—among cyclic and linear supramolecules (ring-chain equilibrium) predicts that megasupramolecules can form at low total polymer concentration if, and only if, the backbones are long (>400 kg/mol) and end-association strength is optimal. Viscometry and scattering measurements of long telechelic polymers having polycyclooctadiene backbones and acid or amine end groups verify the formation of megasupramolecules. They control misting and reduce drag in the same manner as ultralong covalent polymers. With individual building blocks short enough to avoid hydrodynamic chain scission (weight-average molecular weights of 400 to 1000 kg/mol) and reversible linkages that protect covalent bonds, these megasupramolecules overcome the obstacles of shear degradation and engine incompatibility.

  16. A novel dynamical community detection algorithm based on weighting scheme

    NASA Astrophysics Data System (ADS)

    Li, Ju; Yu, Kai; Hu, Ke

    2015-12-01

    Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.

  17. Gain weighted eigenspace assignment

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Andrisani, Dominick, II

    1994-01-01

    This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
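The trade-off cost described above can be sketched with hypothetical numbers: an eigenvector-placement error term plus a weighted sum of the squares of the individual feedback-gain elements. Increasing the gain weights rho penalizes large gains more heavily; all matrices below are illustrative, not from the report.

```python
import numpy as np

def cost(V_des, V_ach, K, rho):
    """J(K) = ||V_des - V_ach||_F^2 + sum_ij rho_ij * K_ij^2 (sketch)."""
    vec_err = np.linalg.norm(V_des - V_ach, 'fro') ** 2
    gain_term = float((rho * K ** 2).sum())
    return vec_err + gain_term

V_des = np.eye(2)                               # desired eigenvectors (columns)
V_ach = np.array([[0.9, 0.1], [0.0, 1.0]])      # achievable eigenvectors
K = np.array([[5.0, -2.0], [1.0, 3.0]])         # feedback gain matrix
rho_small = 0.01 * np.ones((2, 2))
rho_big = 0.10 * np.ones((2, 2))
# Heavier gain weighting raises the cost of the same gain matrix
print(cost(V_des, V_ach, K, rho_small), cost(V_des, V_ach, K, rho_big))
```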

  18. Minimum Weight Design of a Leaf Spring Tapered in Thickness and Width for the Hubble Space Telescope-Space Support Equipment

    NASA Technical Reports Server (NTRS)

    Rodriguez, P. I.

    1990-01-01

    A linear elastic solution to the problem of minimum weight design of cantilever beams with variable width and depth is presented. The solution shown is for the specific application of the Hubble Space Telescope maintenance mission hardware. During these maintenance missions, delicate instruments must be isolated from the potentially damaging vibration environment of the space shuttle cargo bay during the ascent and descent phases. The leaf springs are designed to maintain the isolation system natural frequency at a level where load transmission to the instruments is a minimum. Nonlinear programming is used for the optimization process. The weight of the beams is the objective function, with the deflection and allowable bending stress as the constraint equations. The design variables are the width and depth of the beams at both the free and the fixed ends.

  19. Proteolytic enzyme engineering: a tool for wool.

    PubMed

    Araújo, Rita; Silva, Carla; Machado, Raul; Casal, Margarida; Cunha, António M; Rodriguez-Cabello, José Carlos; Cavaco-Paulo, Artur

    2009-06-08

    One of the goals of protein engineering is to tailor the structure of enzymes to optimize industrial bioprocesses. In the present work, we describe the construction of a novel high molecular weight subtilisin, based on the fusion of the DNA sequences coding for Bacillus subtilis prosubtilisin E and for an elastin-like polymer (ELP). The resulting fusion protein was biologically produced in Escherichia coli, purified and used for wool finishing assays. When compared to the commercial protease Esperase, the recombinant subtilisinE-VPAVG(220) activity was restricted to the cuticle of wool, allowing a significant reduction of pilling, weight loss and tensile strength loss of wool fibers. Here we report, for the first time, the microbial production of a functionalized high molecular weight protease for controlled enzymatic hydrolysis of the wool surface. This original process overcomes the unrestrained diffusion and extended fiber damage which are the major obstacles to the use of proteases in wool finishing applications.

  20. Optimized bio-inspired stiffening design for an engine nacelle.

    PubMed

    Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark

    2015-11-04

    Structural efficiency is a common engineering goal in which an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study aims to address this goal in developing high performance lightweight, stiff mechanical components by creating an optimized design from a biologically-inspired template. The approach is implemented on the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Displacement was given higher emphasis in optimization, so the GA tended towards individual designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections having a large height-to-width ratio. Displacement reduction was 73% compared to an unreinforced nacelle, and is attributed to the geometric features and layout of the stiffeners, while mass is maintained within the constraint.

  1. Optimization of wall thickness and lay-up for the shell-like composite structure loaded by non-uniform pressure field

    NASA Astrophysics Data System (ADS)

    Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.

    2017-01-01

    Glass/carbon fiber composites are widely used in the design of various aircraft and rotorcraft components, such as fairings and cowlings, which have a predominantly shell-like geometry and are made of quasi-isotropic laminates. The main requirements for such composite parts are a specified mechanical stiffness, to withstand the non-uniform air pressure at different flight conditions and to reduce the level of noise caused by airflow-induced vibrations, at a constrained part weight. The main objective of the present study is the optimization of the wall thickness and lay-up of a composite shell-like cowling. The present approach converts the CAD model of the cowling surface to a finite element (FE) representation and then simulates wind tunnel testing at different airflow orientations to find the most stressed flight mode. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations, supplemented by the k-ω turbulence model, provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. The wall thickness of the shell was allowed to vary over its surface to minimize the objective at a constrained weight. We used a parameterization of the problem that introduces an auxiliary sphere whose radius and center coordinates are the design variables. The curve formed by the intersection of the shell with the sphere defined the boundary of the area to be reinforced by locally thickening the shell wall. To eliminate local stress concentration, this thickness increment was defined as a smooth function on the shell surface. As a result of the structural optimization, we obtained the wall-thickness distribution of the shell, which was then used to design the draping and lay-up of the composite prepreg layers.
The global strain energy in the optimized cowling was reduced by a factor of 2.5 at a weight increase of up to 15%, whereas the eigenfrequencies of the first 6 natural vibration modes increased by 5-15%. The present approach and the developed programming tools, which demonstrated good efficiency and stability at acceptable computational cost, can be used to optimize a wide range of shell-like structures made of quasi-isotropic laminates.

  2. Weight optimization of an aerobrake structural concept for a lunar transfer vehicle

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.; Unal, Resit; Rowell, Lawrence F.; Rehder, John J.

    1992-01-01

    An aerobrake structural concept for a lunar transfer vehicle was weight optimized through the use of the Taguchi design method, finite element analyses, and element sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The aerobrake structural configuration with the minimum weight was 44 percent less than the average weight of all the remaining satisfactory experimental configurations. In addition, the results of this study have served to bolster the advocacy of the Taguchi method for aerospace vehicle design. Both reduced analysis time and an optimized design demonstrated the applicability of the Taguchi method to aerospace vehicle design.

  3. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
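The Cimmino algorithm mentioned above can be sketched as a simultaneous-projection method for the linear feasibility problem A x <= b (in IPDT, x would be diffuser powers and the rows dose constraints). The constraint matrix below is a toy example, not a real dose kernel.

```python
import numpy as np

def cimmino(A, b, x0, iters=500, relax=1.0):
    """Cimmino iteration: step along the weighted average of projections
    onto the violated halfspaces {x : a_i . x <= b_i}."""
    m, _ = A.shape
    x = x0.astype(float).copy()
    w = np.full(m, 1.0 / m)                 # equal constraint weights
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(iters):
        resid = np.maximum(A @ x - b, 0.0)  # violation of each halfspace
        x = x - relax * ((w * resid / row_norm2) @ A)
    return x

A = np.array([[1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])  # last two rows: x >= 0
b = np.array([4.0, 0.0, 0.0])
x = cimmino(A, b, x0=np.array([5.0, 5.0]))            # start infeasible
print(x, np.all(A @ x <= b + 1e-6))
```

With equal weights the relaxation parameter can be taken in (0, 2); in the infeasible case the iterates minimize a weighted sum-of-squared-violations cost, which is the property compared against WGD in the paper.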

  4. On the use of ANN interconnection weights in optimal structural design

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Szewczyk, Z.

    1992-01-01

    The present paper describes the use of the interconnection weights of a multilayer, feedforward network to extract information pertinent to the mapping space that the network is assumed to represent. In particular, these weights can be used to determine an appropriate network architecture and to assess whether an adequate number of training patterns (input-output pairs) has been used for network training. The weight analysis also provides an approach to assess the influence of each input parameter on a selected output component. The paper shows the significance of this information in decomposition-driven optimal design.

  5. SU-E-T-109: An Investigation of Including Variable Relative Biological Effectiveness in Intensity Modulated Proton Therapy Planning Optimization for Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Zaghian, M; Lim, G

    2015-06-15

    Purpose: The current practice of considering the relative biological effectiveness (RBE) of protons in intensity modulated proton therapy (IMPT) planning is to use a generic RBE value of 1.1. However, RBE is in fact a variable depending on the dose per fraction, the linear energy transfer, tissue parameters, etc. In this study, we investigate the impact of using variable RBE based optimization (vRBE-OPT) on IMPT dose distributions compared with conventional fixed RBE based optimization (fRBE-OPT). Methods: Proton plans of three head and neck cancer patients were included in our study. In order to calculate variable RBE, tissue specific parameters were obtained from the literature and dose averaged LET values were calculated by Monte Carlo simulations. Biological effects were calculated using the linear quadratic model and utilized in the variable RBE based optimization. We used a Polak-Ribiere conjugate gradient algorithm to solve the model. In fixed RBE based optimization, we used conventional physical dose optimization to optimize doses weighted by 1.1. IMPT plans for each patient were optimized by both methods (vRBE-OPT and fRBE-OPT). Both variable and fixed RBE weighted dose distributions were calculated for both methods and compared by dosimetric measures. Results: The variable RBE weighted dose distributions were more homogeneous within the targets, compared with the fixed RBE weighted dose distributions, for the plans created by vRBE-OPT. We observed noticeable deviations between variable and fixed RBE weighted dose distributions when the plan was optimized by fRBE-OPT. For organ-at-risk sparing, dose distributions from both methods were comparable. Conclusion: Biological dose based optimization, rather than conventional physical dose based optimization, in IMPT planning may improve tumor control when evaluating biologically equivalent dose, without sacrificing OAR sparing, for head and neck cancer patients.
The research is supported in part by National Institutes of Health Grant No. 2U19CA021239-35.

  6. Optimization of selection for growth in Menz Sheep while minimizing inbreeding depression in fitness traits

    PubMed Central

    2013-01-01

    The genetic trends in fitness (inbreeding, fertility and survival) of a closed nucleus flock of Menz sheep under selection during ten years for increased body weight were investigated to evaluate the consequences of selection for body weight on fitness. A mate selection tool was used to optimize in retrospect the actual selection and matings conducted over the project period to assess if the observed genetic gains in body weight could have been achieved with a reduced level of inbreeding. In the actual selection, the genetic trends for yearling weight, fertility of ewes and survival of lambs were 0.81 kg, –0.00026% and 0.016% per generation. The average inbreeding coefficient remained zero for the first few generations and then tended to increase over generations. The genetic gains achieved with the optimized retrospective selection and matings were highly comparable with the observed values, the correlation between the average breeding values of lambs born from the actual and optimized matings over the years being 0.99. However, the level of inbreeding with the optimized mate selections remained zero until late in the years of selection. Our results suggest that an optimal selection strategy that considers both genetic merits and coancestry of mates should be adopted to sustain the Menz sheep breeding program. PMID:23783076

  7. On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

    PubMed Central

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    Linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the shortcoming of premature convergence in solving complex (multipeak) optimization problems due to lack of enough momentum for particles to do exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish the fact that LDIW-PSO is very much efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO with the latter outperforming both in the simulation experiments conducted. PMID:24324383
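The LDIW strategy discussed above can be sketched directly: the inertia weight falls linearly from w_max to w_min over the run, shifting the swarm from exploration to exploitation. The benchmark (sphere function), parameter values, and velocity-clamp percentage below are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def ldiw_pso(f, dim=2, n=20, iters=200, w_max=0.9, w_min=0.4,
             c1=2.0, c2=2.0, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    v_max = 0.1 * (hi - lo)   # velocity limit as a fraction of the range
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # linear decrease
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best, best_f = ldiw_pso(lambda z: float((z ** 2).sum()))   # sphere function
print(best_f)
```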

  8. Proof of concept demonstration of optimal composite MRI endpoints for clinical trials.

    PubMed

    Edland, Steven D; Ard, M Colin; Sridhar, Jaiashre; Cobia, Derin; Martersteck, Adam; Mesulam, M Marsel; Rogalski, Emily J

    2016-09-01

    Atrophy measures derived from structural MRI are promising outcome measures for early phase clinical trials, especially for rare diseases such as primary progressive aphasia (PPA), where the small available subject pool limits our ability to perform meaningfully powered trials with traditional cognitive and functional outcome measures. We investigated a composite atrophy index in 26 PPA participants with longitudinal MRIs separated by two years. Rogalski et al. [Neurology 2014;83:1184-1191] previously demonstrated that atrophy of the left perisylvian temporal cortex (PSTC) is a highly sensitive measure of disease progression in this population and a promising endpoint for clinical trials. Using methods described by Ard et al. [Pharmaceutical Statistics 2015;14:418-426], we constructed a composite atrophy index composed of a weighted sum of volumetric measures of 10 regions of interest within the left perisylvian cortex, using weights that maximize signal-to-noise and minimize the sample size required of trials using the resulting score. The sample size required to detect a fixed percentage slowing in atrophy in a two-year clinical trial with equal allocation of subjects across arms and 90% power was calculated for the PSTC and optimal composite surrogate biomarker endpoints. The optimal composite endpoint required 38% fewer subjects to detect the same percent slowing in atrophy than required by the left PSTC endpoint. Optimal composites can increase the power of clinical trials and increase the probability that smaller trials are informative, an observation especially relevant for PPA, but also for related neurodegenerative disorders including Alzheimer's disease.
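In the spirit of the Ard et al. construction, the weights that maximize a composite's mean-to-standard-deviation ratio (and hence minimize the required sample size) are proportional to the inverse covariance times the mean-change vector. The sketch below uses made-up regional numbers, not the PPA data.

```python
import numpy as np

mu = np.array([1.0, 0.8, 0.5])    # mean 2-year atrophy per region (illustrative)
S = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.5, 0.1],
              [0.2, 0.1, 2.0]])   # covariance of the atrophy measures
w = np.linalg.solve(S, mu)        # optimal direction: S^{-1} mu
w /= np.abs(w).sum()              # scale is arbitrary; normalize for readability

def mean_to_sd(w, mu, S):
    """Signal-to-noise of the weighted composite endpoint."""
    return (w @ mu) / np.sqrt(w @ S @ w)

# The optimal weights beat the naive equal-weight composite
print(mean_to_sd(w, mu, S), mean_to_sd(np.ones(3) / 3, mu, S))
```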

  9. Acoustic design of boundary segments in aircraft fuselages using topology optimization and a specialized acoustic pressure function

    NASA Astrophysics Data System (ADS)

    Radestock, Martin; Rose, Michael; Monner, Hans Peter

    2017-04-01

    In most aviation applications, a major cost benefit can be achieved by a reduction of the system weight. However, the acoustic properties of the fuselage structure are often not in the focus of the primary design process. A final correction of poor acoustic properties is usually done using insulation mats in the chamber between the primary and secondary shell. It is plausible that a more sophisticated material distribution in that area can result in a substantially reduced weight. Topology optimization is a well-known approach to reduce the material of compliant structures. In this paper an adaptation of this method to acoustic problems is investigated. The gap filled with insulation mats is suitably parameterized to achieve different material distributions. To find advantageous configurations, the objective in the underlying topology optimization is chosen to obtain good acoustic pressure patterns in the aircraft cabin. An important task in the optimization is an adequate Finite Element model of the system. This can usually not be obtained from commercially available programs due to the lack of special sensitivity data with respect to the design parameters. Therefore an appropriate implementation of the algorithm has been done, exploiting the vector and matrix capabilities of the MATLAB environment. Finally some new aspects of the Finite Element implementation will also be presented, since they are interesting in their own right and can be generalized to efficiently solve other partial differential equations as well.

  10. Effect analysis of design variables on the disc in a double-eccentric butterfly valve.

    PubMed

    Kang, Sangmo; Kim, Da-Eun; Kim, Kuk-Kyeom; Kim, Jun-Oh

    2014-01-01

    We have performed a shape optimization of the disc in an industrial double-eccentric butterfly valve using the effect analysis of design variables to enhance the valve performance. For the optimization, we select three performance quantities such as pressure drop, maximum stress, and mass (weight) as the responses and three dimensions regarding the disc shape as the design variables. Subsequently, we compose a layout of orthogonal array (L16) by performing numerical simulations on the flow and structure using a commercial package, ANSYS v13.0, and then make an effect analysis of the design variables on the responses using the design of experiments. Finally, we formulate a multiobjective function consisting of the three responses and then propose an optimal combination of the design variables to maximize the valve performance. Simulation results show that the disc thickness makes the most significant effect on the performance and the optimal design provides better performance than the initial design.
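The effect analysis described above can be sketched in design-of-experiments style: for each design variable, the main effect is the spread of the mean response across its levels. The 4-run two-level array and responses below are synthetic stand-ins for the paper's L16 simulation data.

```python
import numpy as np

levels = np.array([[0, 0],            # each row: one run; columns: 2 design
                   [0, 1],            # variables at 2 levels (toy stand-in
                   [1, 0],            # for the paper's L16 orthogonal array)
                   [1, 1]])
response = np.array([10.0, 12.0, 20.0, 23.0])   # e.g. simulated pressure drop

def main_effect(var):
    """Spread of the mean response between the levels of one variable."""
    means = [response[levels[:, var] == lv].mean() for lv in (0, 1)]
    return max(means) - min(means)

effects = [main_effect(v) for v in range(levels.shape[1])]
print(effects)   # the variable with the larger effect dominates the response
```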

  11. On the optimization of discrete structures with aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.; Ashley, H.

    1978-01-01

    The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.

  12. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. 
Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  13. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and re-optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. 
Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
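The numerical pattern in this record, compressing the influence matrix with a truncated SVD, optimizing nonnegative beam weights under an l1 sparsity term, then dropping low-weight beams and re-optimizing the survivors, can be sketched as follows. This is a minimal illustration with a random influence matrix and a projected-gradient solver standing in for the paper's LP model; all sizes, the regularization strength, and the beam-reduction threshold are hypothetical.

```python
# Hedged sketch of SVD-compressed, l1-regularized beam-weight optimization.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beams = 200, 60
D = rng.random((n_vox, n_beams))          # influence (dose-deposition) matrix
d_target = np.full(n_vox, 1.0)            # prescribed dose per voxel

# SVD compression: keep only the dominant singular directions of D.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1
Dk = np.diag(s[:k]) @ Vt[:k]              # compressed system matrix
dk = U[:, :k].T @ d_target                # compressed target

def solve_weights(A, b, lam, iters=5000):
    """Projected gradient for min ||A w - b||^2 + lam * ||w||_1, w >= 0."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step from the spectral norm
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ w - b) + lam       # l1 gradient is lam on w >= 0
        w = np.maximum(w - lr * g, 0.0)   # project onto nonnegative orthant
    return w

w = solve_weights(Dk, dk, lam=0.05)
# Beam reduction: drop low-weight beams, re-optimize the survivors.
keep = w > 0.01 * w.max()
w2 = np.zeros_like(w)
w2[keep] = solve_weights(Dk[:, keep], dk, lam=0.0)
```

The compressed system has at most as many rows as singular directions kept, which is where the reported speed-up comes from in the paper's setting.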

  14. Turning Defense into Offense: Defensin Mimetics as Novel Antibiotics Targeting Lipid II

    PubMed Central

    Ateh, Eugene; Oashi, Taiji; Lu, Wuyuan; Huang, Jing; Diepeveen-de Buin, Marlies; Bryant, Joseph; Breukink, Eefjan; MacKerell, Alexander D.; de Leeuw, Erik P. H.

    2013-01-01

    We have previously reported on the functional interaction of Lipid II with human alpha-defensins, a class of antimicrobial peptides. Lipid II is an essential precursor for bacterial cell wall biosynthesis and an ideal and validated target for natural antibiotic compounds. Using a combination of structural, functional and in silico analyses, we present here the molecular basis for defensin-Lipid II binding. Based on the complex of Lipid II with Human Neutrophil peptide-1, we could identify and characterize chemically diverse low-molecular weight compounds that mimic the interactions between HNP-1 and Lipid II. Lead compound BAS00127538 was further characterized structurally and functionally; it specifically interacts with the N-acetyl muramic acid moiety and isoprenyl tail of Lipid II, targets cell wall synthesis and was protective in an in vivo model for sepsis. For the first time, we have identified and characterized low molecular weight synthetic compounds that target Lipid II with high specificity and affinity. Optimization of these compounds may allow for their development as novel, next generation therapeutic agents for the treatment of Gram-positive pathogenic infections. PMID:24244161

  15. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
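The linear-quadratic state-feedback setting of this record can be made concrete with a small worked example: for given weighting matrices Q and R, the LQR gain follows from the algebraic Riccati equation, solved here by the classical Hamiltonian eigenvector method. The two-state model below is a hypothetical short-period-like system, not taken from the dissertation.

```python
# Hedged sketch: continuous-time LQR gain via the Hamiltonian eigenvector method.
import numpy as np

def lqr(A, B, Q, R):
    """Return (K, P) with u = -K x minimizing the quadratic cost."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])                 # Hamiltonian matrix
    evals, evecs = np.linalg.eig(H)
    stable = evecs[:, evals.real < 0]          # n stable eigendirections
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))        # Riccati solution
    return Rinv @ B.T @ P, P

# Toy short-period-like model (hypothetical numbers, not from the thesis):
A = np.array([[-0.5, 1.0], [-4.0, -1.2]])
B = np.array([[0.0], [2.0]])
Q = np.diag([10.0, 1.0])   # state weights: a larger weight speeds regulation
R = np.array([[1.0]])      # control-effort weight
K, P = lqr(A, B, Q, R)
cl_eigs = np.linalg.eigvals(A - B @ K)         # closed-loop eigenvalues
```

Sweeping the entries of Q and observing how `cl_eigs` migrate is exactly the kind of weight-to-eigenvalue relationship the dissertation derives analytically.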

  16. Optimizing Placement of Weather Stations: Exploring Objective Functions of Meaningful Combinations of Multiple Weather Variables

    NASA Astrophysics Data System (ADS)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2017-12-01

    Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data can have consequences on weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how we can create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization of multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best representation of the "true" weather patterns that occur in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between selected locations and calculate the mean average error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in our objective function. These methods include systematic variation of weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted for specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each optimization problem formulation. 
We also compare these errors to those of manually designed weather station networks in West Africa, planned by the respective host-country's meteorological agency.
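A minimal sketch of the site-selection loop described above: a greedy search adds one station at a time, scoring each candidate set by a weighted mean absolute interpolation error over several variables. The 1-D domain, synthetic fields, nearest-station interpolation, and variable weights are all illustrative stand-ins for the reanalysis data and objective formulations the authors explore.

```python
# Hedged sketch: greedy weather-station placement on a synthetic 1-D region.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)                  # candidate site locations
# Two synthetic "reanalysis" variables (stand-ins for e.g. temperature, humidity).
fields = np.stack([np.sin(x),
                   np.cos(0.5 * x) + 0.1 * rng.standard_normal(x.size)])
weights = np.array([1.0, 0.5])                   # per-variable objective weights

def network_error(sites):
    """Weighted mean absolute nearest-station interpolation error."""
    nearest = np.abs(x[:, None] - x[sites][None, :]).argmin(axis=1)
    err = 0.0
    for w, f in zip(weights, fields):
        err += w * np.mean(np.abs(f - f[sites][nearest]))
    return err

def greedy_place(n_stations):
    sites = []
    for _ in range(n_stations):
        cands = [c for c in range(x.size) if c not in sites]
        best = min(cands, key=lambda c: network_error(sites + [c]))
        sites.append(best)
    return sites

sites5 = greedy_place(5)
```

Varying the entries of `weights` (or replacing the objective with a derived quantity such as evapotranspiration) reproduces, in miniature, the sensitivity analysis the abstract describes.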

  17. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises

    PubMed Central

    Grama, Ion; Liu, Quansheng

    2017-01-01

In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667

  18. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.

    PubMed

    Jin, Qiyu; Grama, Ion; Liu, Quansheng

    2017-01-01

In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise.
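The ROAD statistic at the heart of both records can be sketched directly: for each pixel, sum the m smallest absolute differences to its eight neighbours, so impulse-corrupted pixels (which differ from most of their neighbours) score high. The ROADGI refinement for mixed noise is not reproduced here; this is only the plain ROAD baseline, with m = 4 as a common choice.

```python
# Hedged sketch of the Rank-Ordered Absolute Differences (ROAD) statistic.
import numpy as np

def road(img, m=4):
    """Sum of the m smallest absolute differences between each pixel and
    its 8 neighbours; a high value suggests an impulse-corrupted pixel."""
    p = np.pad(img.astype(float), 1, mode='reflect')
    diffs = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = p[1 + di: 1 + di + img.shape[0],
                        1 + dj: 1 + dj + img.shape[1]]
            diffs.append(np.abs(img - shifted))
    diffs = np.sort(np.stack(diffs), axis=0)   # rank-order the differences
    return diffs[:m].sum(axis=0)

# Clean gradient image with one injected impulse at (8, 8):
img = np.tile(np.arange(16.0), (16, 1))
img[8, 8] = 255.0
r = road(img)
```

Thresholding `r` gives an impulse-detection map; the paper's OWMF then restores the flagged pixels via weight optimization while Gaussian noise is handled separately.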

  19. Surface functionalization of solid state ultra-high molecular weight polyethylene through chemical grafting

    NASA Astrophysics Data System (ADS)

    Sherazi, Tauqir A.; Rehman, Tayyiba; Naqvi, Syed Ali Raza; Shaikh, Ahson Jabbar; Shahzad, Sohail Anjum; Abbas, Ghazanfar; Raza, Rizwan; Waseem, Amir

    2015-12-01

The surface of ultra-high molecular weight polyethylene (UHMWPE) powder was functionalized with styrene using a chemical grafting technique. The grafting process was initiated through radical generation on the base polymer matrix in the solid state by sodium thiosulfate, while peroxides formed at radical sites during this process were dissociated by ceric ammonium nitrate. Various factors were optimized and a reasonably high level of monomer grafting was achieved, i.e., 15.6%. The effect of different acids as additives and of divinyl benzene (DVB) as a cross-linking agent was also studied. Post-grafting sulfonation was conducted to introduce ionic moieties to the grafted polymer. The ion-exchange capacity (IEC) was measured experimentally and found to be 1.04 meq g-1, which is in close agreement with the theoretical IEC value. The chemical structure of the grafted and functionalized polymer was characterized by attenuated total reflection infrared spectroscopy (ATR-FTIR), and thermal properties were investigated by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The thermal analysis indicates that the presence of radicals on the polymer chain accelerates the thermal decomposition process. The results signify that chemical grafting is an effective tool for substantial surface modification and subsequent functionalization of polyethylene.

  20. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    NASA Astrophysics Data System (ADS)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

The paper deals with electromagnetic scattering on a perfectly conducting diffractive body of complex shape. The scattering characteristics of the body are calculated through the integral equation method. A Fredholm equation of the second kind was used to calculate the electric current density. While solving the integral equation through the method of moments, the authors properly treated the kernel singularity. Piecewise-constant functions were chosen as basis functions, and the resulting equation was solved by the method of moments. Within the Kirchhoff integral approach it is possible to determine the scattered electromagnetic field from the obtained electric currents. The observation angle sector belongs to the area of the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All the neurons contained a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the resulting optimized dimensions of the diffractive body. The paper also presents the basic steps of the calculation technique for diffractive bodies, based on the combination of integral equation and neural network methods.
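A collocation solution of a Fredholm equation of the second kind with piecewise-constant basis functions, the discretization named in the abstract, can be sketched in a few lines. The smooth exponential kernel below deliberately sidesteps the singularity handling the authors describe; it only illustrates the moment-method structure (expand in basis functions, collocate at midpoints, solve the resulting linear system).

```python
# Hedged sketch: Fredholm equation of the second kind,
#   u(x) - lam * int_0^1 K(x, t) u(t) dt = f(x),
# discretized with piecewise-constant cells and midpoint collocation.
import numpy as np

n, lam = 100, 0.5
t = (np.arange(n) + 0.5) / n                     # cell midpoints on [0, 1]
h = 1.0 / n                                      # cell width
K = np.exp(-np.abs(t[:, None] - t[None, :]))     # smooth illustrative kernel
f = np.ones(n)                                   # right-hand side

A = np.eye(n) - lam * h * K                      # collocation system matrix
u = np.linalg.solve(A, f)                        # nodal values of u
```

Because the kernel is contractive here, the same solution is reachable by Neumann-series iteration, which gives an independent check on the linear solve.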

  1. Feeding rates affect growth, intestinal digestive and absorptive capabilities and endocrine functions of juvenile blunt snout bream Megalobrama amblycephala.

    PubMed

    Xu, Chao; Li, Xiang-Fei; Tian, Hong-Yan; Jiang, Guang-Zhen; Liu, Wen-Bin

    2016-04-01

This study aimed to investigate the optimal feeding rate for juvenile blunt snout bream (average initial weight 23.74 ± 0.09 g) based on growth performance, intestinal digestive and absorptive capabilities and endocrine functions. A total of 840 fish were randomly distributed into 24 cages and fed a commercial feed at six feeding rates ranging from 2.0 to 7.0% body weight (BW)/day. The results indicated that weight gain rate increased significantly (P < 0.05) as feeding rates increased from 2.0 to 5.0% BW/day, but decreased with further increases in feeding rate (P > 0.05). Protein efficiency ratio and nitrogen and energy retention all showed a similar trend. However, feed conversion ratio increased significantly (P < 0.05) with increasing feeding rates. Feeding rates had little effect (P > 0.05) on whole-body moisture, ash and protein contents, but significantly (P < 0.05) affected both lipid and energy contents, with the highest values observed in fish fed 4.0% BW/day. In addition, moderate ration sizes (2.0-4.0% BW/day) resulted in enhanced activities of intestinal enzymes, including lipase, protease, Na(+), K(+)-ATPase, alkaline phosphatase and creatine kinase. Furthermore, the mRNA levels of growth hormone, insulin-like growth factor-I, growth hormone receptor and neuropeptide all increased significantly (P < 0.05) as feeding rates increased from 2.0 to 5.0% and 6.0% BW/day, but decreased significantly (P < 0.05) with further increases in feeding rate, whereas both leptin and cholecystokinin expressions showed the opposite trend. Based on broken-line regression analysis of SGR against feeding rate, the optimal feeding rate for juvenile blunt snout bream was estimated to be 4.57% BW/day.
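The broken-line regression used to estimate the optimal feeding rate can be sketched as a grid search over the breakpoint of a rise-then-plateau model fitted by least squares. The data points below are hypothetical, shaped only to mimic a specific growth rate (SGR) curve that saturates near the reported 4.57% BW/day; they are not the study's measurements.

```python
# Hedged sketch: two-segment broken-line (rise-then-plateau) regression.
import numpy as np

def broken_line_fit(x, y):
    """Fit y = b0 + b1 * min(x, x0): linear rise up to breakpoint x0,
    constant beyond it; x0 chosen by least-squares grid search."""
    best = None
    for x0 in np.linspace(x.min() + 0.1, x.max() - 0.1, 200):
        z = np.minimum(x, x0)                    # slope applies only below x0
        X = np.column_stack([np.ones_like(x), z])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ beta - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, x0, beta)
    return best[1], best[2]

# Hypothetical SGR-vs-feeding-rate data saturating near 4.5 %BW/day:
rates = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
sgr = np.array([1.1, 1.6, 2.0, 2.2, 2.2, 2.2])
x0, beta = broken_line_fit(rates, sgr)
```

The estimated breakpoint `x0` plays the role of the study's optimal feeding rate; reported values come from the same model class fitted to the real SGR data.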

  2. Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.

    2002-01-01

    Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
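The Kalman-estimator half of the LQG design can be sketched by iterating the discrete-time filter Riccati recursion to its steady state, which yields the constant Kalman gain used alongside the optimal state-feedback gain. The plant, measurement model, and noise covariances below are illustrative placeholders, not the magnetic-bearing rig's.

```python
# Hedged sketch: steady-state Kalman gain via the discrete filter Riccati
# recursion (predict covariance, compute gain, update covariance).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy discrete plant (assumed)
C = np.array([[1.0, 0.0]])               # position-only measurement (assumed)
W = np.diag([1e-4, 1e-3])                # process-noise covariance (assumed)
V = np.array([[1e-2]])                   # measurement-noise covariance (assumed)

P = np.eye(2)                            # initial error covariance
for _ in range(500):
    P = A @ P @ A.T + W                  # predict
    S = C @ P @ C.T + V                  # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
    P = (np.eye(2) - K @ C) @ P          # update
```

In the LQG structure, the estimate from this filter feeds the LQR state-feedback gain; by the separation principle the two can be designed independently.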

  3. Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

    DTIC Science & Technology

    2014-09-01

High Fructose Corn Syrup Diluted 1 to 10 percent by weight 50 to 500 mg/l Slow Release Whey (fresh/powdered) Dissolved (powdered form) or injected...the assessment of remedial progress and functioning. This project also addressed several high priority needs from the Navy Environmental Quality...memory high-performance computing systems. For instance, as of March 2012 the code has been successfully executed on 2 cpu's for an inversion problem

  4. Optimization of Aircraft Seat Cushion Fire Blocking Layers.

    DTIC Science & Technology

    1983-03-01

function of cost and weight, and the costs of labor involved in assembling a composite seat cushion. The same classes of high char yield polymers that are...

  5. Aeroelastic tailoring and structural optimization of joined-wing configurations

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hwan

    2002-08-01

Methodology for integrated aero-structural design was developed using formal optimization. ASTROS (Automated STRuctural Optimization System) was used as an analyzer and an optimizer for performing joined-wing weight optimization with stress, displacement, and cantilever or body-freedom flutter constraints. As a pre/post processor, MATLAB was used for generating the ASTROS input file and for displaying the ASTROS results. The effects of the aeroelastic constraints on the isotropic and composite joined-wing weight were examined using this methodology. The aeroelastic features of a joined-wing aircraft were examined using both the Rayleigh-Ritz method and a finite element based aeroelastic stability and weight optimization procedure. Aircraft rigid-body modes are included to analyze body-freedom flutter of the joined-wing aircraft. Several parametric studies were performed to determine the most important parameters that affect the aeroelastic behavior of a joined-wing aircraft. The special feature of a joined-wing aircraft is body-freedom flutter involving frequency interaction of the first elastic mode and the aircraft short-period mode. In most parametric study cases, the body-freedom flutter speed was less than the cantilever flutter speed, which is independent of fuselage inertia. As the fuselage pitching moment of inertia was increased, the body-freedom flutter speed increased. When the pitching moment of inertia reached a critical value, a transition from body-freedom flutter to cantilever flutter occurred. The effects of composite laminate orientation on the front and rear wings of a joined-wing configuration were studied. An aircraft pitch divergence mode, which occurred because of forward movement of the center of pressure due to wing deformation, was found. Body-freedom flutter and cantilever-like flutter were also found, depending on the combination of front and rear wing ply orientations. 
Optimized wing weight behaviors of the planar and non-planar configurations with isotropic and composite materials were investigated. Wing weight optimization of the composite joined-wing resulted in lower weight compared to the metallic wing. Fuselage flexibility affects joined-wing flutter characteristics: elastic mode shapes of the wing were affected by fuselage deformation, changing the flutter speeds compared to the rigid-fuselage case. Body-freedom flutter speeds decrease as fuselage flexibility increases, and optimum wing weights increase as fuselage flexibility increases. Flutter analysis of a box-wing configuration examined the effects of center-of-gravity location and pitch moment of inertia on flutter speed.

  6. Ascent performance feasibility for next-generation spacecraft

    NASA Astrophysics Data System (ADS)

    Mancuso, Salvatore Massimo

    This thesis deals with the optimization of the ascent trajectories for single-stage suborbital (SSSO), single-stage-to-orbit (SSTO), and two-stage-to-orbit (TSTO) rocket-powered spacecraft. The maximum payload weight problem has been solved using the sequential gradient-restoration algorithm. For the TSTO case, some modifications to the original version of the algorithm have been necessary in order to deal with discontinuities due to staging and the fact that the functional being minimized depends on interface conditions. The optimization problem is studied for different values of the initial thrust-to-weight ratio in the range 1.3 to 1.6, engine specific impulse in the range 400 to 500 sec, and spacecraft structural factor in the range 0.08 to 0.12. For the TSTO configuration, two subproblems are studied: uniform structural factor between stages and nonuniform structural factor between stages. Due to the regular behavior of the results obtained, engineering approximations have been developed which connect the maximum payload weight to the engine specific impulse and spacecraft structural factor; in turn, this leads to useful design considerations. Also, performance sensitivity to the scale of the aerodynamic drag is studied, and it is shown that its effect on payload weight is relatively small, even for drag changes approaching ± 50%. The main conclusions are that: the design of a SSSO configuration appears to be feasible; the design of a SSTO configuration might be comfortably feasible, marginally feasible, or unfeasible, depending on the parameter values assumed; the design of a TSTO configuration is not only feasible, but its payload appears to be considerably larger than that of a SSTO configuration. Improvements in engine specific impulse and spacecraft structural factor are desirable and crucial for SSTO feasibility; indeed, it appears that aerodynamic improvements do not yield significant improvements in payload weight.

  7. Biased and greedy random walks on two-dimensional lattices with quenched randomness: The greedy ant within a disordered environment

    NASA Astrophysics Data System (ADS)

    Mitran, T. L.; Melchert, O.; Hartmann, A. K.

    2013-12-01

The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here the disorder allows for negative edge weights. In previous studies, considering the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the presented study, four different types of BGRWs and an algorithm based on the ant colony optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are “greedy” in the sense that the local optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of “energy” for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as a function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability to find such a feasible lattice walk increases from zero to one. This is the key feature of the percolation transition in the NWP model. Here, we address the question of how well the transition point ρc, resulting from numerically exact and “static” simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available, one of the basic questions in the physics of glassy systems.
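One BGRW variant can be sketched directly: a walker on a square lattice with quenched edge weights (negative with probability ρ) greedily takes the cheapest unvisited edge, with the edge cost discounted by a bias B for moves along +x. The move set, termination rule, and parameter values below are simplified, illustrative choices rather than the paper's exact definitions.

```python
# Hedged sketch: greedy biased walker on an L x L lattice with quenched,
# partly negative edge weights (weights drawn lazily and cached).
import random

def greedy_walk(L=40, rho=0.5, B=0.2, seed=0):
    rng = random.Random(seed)
    weights = {}
    def w(a, b):                          # quenched weight of edge {a, b}
        key = (a, b) if a < b else (b, a)
        if key not in weights:
            weights[key] = -rng.random() if rng.random() < rho else rng.random()
        return weights[key]
    pos, visited, total = (0, L // 2), set(), 0.0
    for _ in range(4 * L * L):            # step cap
        visited.add(pos)
        x, y = pos
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < L and 0 <= y + dy < L
                 and (x + dx, y + dy) not in visited]
        if not moves or x == L - 1:       # trapped, or reached the far side
            break
        # Greedy choice: cheapest edge weight, minus a drift bonus along +x.
        nxt = min(moves, key=lambda m: w(pos, m) - B * (m[0] - x))
        total += w(pos, nxt)
        pos = nxt
    return pos, total

end, total = greedy_walk(rho=0.9)
```

Sweeping ρ and recording the fraction of walks with `total < 0` reproduces, qualitatively, the percolation-style observable discussed in the abstract.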

  8. Self-Tuning of Design Variables for Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Lin, Chaung; Juang, Jer-Nan

    2000-01-01

Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and for final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
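Of the three tuning techniques, simulated annealing is the easiest to sketch: a random walk over (model order, control weighting) that always accepts improvements and accepts worsenings with probability exp(-Δ/T) under a cooling schedule. The quadratic objective below is a synthetic stand-in for the GPC performance index, with its optimum placed at n = 4, λ = 0.5 purely for illustration.

```python
# Hedged sketch: simulated annealing over (model order n, control weight lam).
import math
import random

def objective(n, lam):
    """Synthetic stand-in for a GPC performance index; optimum at (4, 0.5)."""
    return (n - 4) ** 2 + 10.0 * (lam - 0.5) ** 2

def anneal(seed=0, steps=3000):
    rng = random.Random(seed)
    n, lam = 1, 2.0                                # deliberately poor start
    best = (objective(n, lam), n, lam)
    for k in range(steps):
        T = 0.999 ** k                             # geometric cooling schedule
        n2 = max(1, n + rng.choice((-1, 0, 1)))    # perturb the integer order
        lam2 = max(0.0, lam + rng.gauss(0, 0.1))   # perturb the weight
        d = objective(n2, lam2) - objective(n, lam)
        if d < 0 or rng.random() < math.exp(-d / max(T, 1e-9)):
            n, lam = n2, lam2                      # Metropolis acceptance
            if objective(n, lam) < best[0]:
                best = (objective(n, lam), n, lam)
    return best

cost, n_opt, lam_opt = anneal()
```

In the paper's setting the objective would be a closed-loop performance index evaluated by simulation, making each candidate far more expensive, which is why SA is recommended for off-line estimation rather than on-line tuning.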

  9. Optimizing Dynamical Network Structure for Pinning Control

    NASA Astrophysics Data System (ADS)

    Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo

    2016-04-01

    Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
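A common proxy for pinning controllability, and one way to score the driver-node placements the paper optimizes, is the smallest eigenvalue of L + diag(g), the graph Laplacian augmented with feedback gains at driver nodes (larger is better). The sketch below compares degree-based and random placements on a random graph; the cat swarm optimizer itself is not reproduced, and this measure is an assumption about a reasonable objective rather than the paper's exact one.

```python
# Hedged sketch: scoring driver-node placements via the smallest eigenvalue
# of the gain-augmented Laplacian L + diag(g).
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # undirected random adjacency
L = np.diag(A.sum(1)) - A                        # graph Laplacian

def pin_measure(drivers, gain=10.0):
    """Smallest eigenvalue of L + diag(g); g holds gains at driver nodes."""
    g = np.zeros(n)
    g[list(drivers)] = gain
    return np.linalg.eigvalsh(L + np.diag(g))[0]

by_degree = list(np.argsort(A.sum(1))[-5:])      # 5 highest-degree nodes
random_set = list(rng.choice(n, 5, replace=False))
```

Optimizing driver locations (and then the gains) amounts to maximizing `pin_measure` over driver sets, which is the discrete search the paper tackles with cat swarm optimization.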

  10. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties

    PubMed Central

    Shi, Qi; Sun, Nanbo; Sun, Tao; Wang, Jing; Tan, Shan

    2016-01-01

    The exposure of normal tissues to high radiation during cone-beam CT (CBCT) imaging increases the risk of cancer and genetic defects. Statistical iterative algorithms with the total variation (TV) penalty have been widely used for low dose CBCT reconstruction, with state-of-the-art performance in suppressing noise and preserving edges. However, TV is a first-order penalty and sometimes leads to the so-called staircase effect, particularly over regions with smooth intensity transition in the reconstruction images. A second-order penalty known as the Hessian penalty was recently used to replace TV to suppress the staircase effect in CBCT reconstruction at the cost of slightly blurring object edges. In this study, we proposed a new penalty, the TV-H, which combines TV and Hessian penalties for CBCT reconstruction in a structure-adaptive way. The TV-H penalty automatically differentiates the edges, gradual transition and uniform local regions within an image using the voxel gradient, and adaptively weights TV and Hessian according to the local image structures in the reconstruction process. Our proposed penalty retains the benefits of TV, including noise suppression and edge preservation. It also maintains the structures in regions with gradual intensity transition more successfully. A majorization-minimization (MM) approach was designed to optimize the objective energy function constructed with the TV-H penalty. The MM approach employed a quadratic upper bound of the original objective function, and the original optimization problem was changed to a series of quadratic optimization problems, which could be efficiently solved using the Gauss-Seidel update strategy. We tested the reconstruction algorithm on two simulated digital phantoms and two physical phantoms. Our experiments indicated that the TV-H penalty visually and quantitatively outperformed both TV and Hessian penalties. PMID:27699100

  11. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties.

    PubMed

    Shi, Qi; Sun, Nanbo; Sun, Tao; Wang, Jing; Tan, Shan

    2016-09-01

    The exposure of normal tissues to high radiation during cone-beam CT (CBCT) imaging increases the risk of cancer and genetic defects. Statistical iterative algorithms with the total variation (TV) penalty have been widely used for low dose CBCT reconstruction, with state-of-the-art performance in suppressing noise and preserving edges. However, TV is a first-order penalty and sometimes leads to the so-called staircase effect, particularly over regions with smooth intensity transition in the reconstruction images. A second-order penalty known as the Hessian penalty was recently used to replace TV to suppress the staircase effect in CBCT reconstruction at the cost of slightly blurring object edges. In this study, we proposed a new penalty, the TV-H, which combines TV and Hessian penalties for CBCT reconstruction in a structure-adaptive way. The TV-H penalty automatically differentiates the edges, gradual transition and uniform local regions within an image using the voxel gradient, and adaptively weights TV and Hessian according to the local image structures in the reconstruction process. Our proposed penalty retains the benefits of TV, including noise suppression and edge preservation. It also maintains the structures in regions with gradual intensity transition more successfully. A majorization-minimization (MM) approach was designed to optimize the objective energy function constructed with the TV-H penalty. The MM approach employed a quadratic upper bound of the original objective function, and the original optimization problem was changed to a series of quadratic optimization problems, which could be efficiently solved using the Gauss-Seidel update strategy. We tested the reconstruction algorithm on two simulated digital phantoms and two physical phantoms. Our experiments indicated that the TV-H penalty visually and quantitatively outperformed both TV and Hessian penalties.
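The structure-adaptive idea in TV-H can be sketched with an explicit blending weight: a gradient-based factor α that approaches 1 at edges (favouring the TV term) and 0 in smooth regions (favouring the Hessian term). The specific blending function and threshold below are illustrative choices, not the paper's exact formulation, and the MM solver is omitted.

```python
# Hedged sketch: structure-adaptive blend of TV and Hessian penalties.
import numpy as np

def tvh_penalty(u, tau=0.1):
    gx, gy = np.gradient(u)                       # first derivatives
    gmag = np.hypot(gx, gy)
    alpha = gmag**2 / (gmag**2 + tau**2)          # ~1 at edges, ~0 when smooth
    tv = gmag                                     # TV integrand
    uxx = np.gradient(gx)[0]
    uyy = np.gradient(gy)[1]
    uxy = np.gradient(gx)[1]
    hess = np.sqrt(uxx**2 + 2 * uxy**2 + uyy**2)  # Frobenius norm of Hessian
    return np.sum(alpha * tv + (1 - alpha) * hess), alpha

# Piecewise test image: a sharp step edge halfway across.
u = np.zeros((32, 32))
u[:, 16:] = 1.0
pen, alpha = tvh_penalty(u)
```

Plugging such a penalty into a reconstruction objective preserves the step (TV active there) while letting the Hessian term smooth ramps without staircasing, which is the behaviour the abstract reports.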

  12. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-01

Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates between the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. 
The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.
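
    The two-loop structure described above can be sketched with a toy linear dose model (the problem sizes, the weighted least-squares inner solve, and the multiplicative weight-update rule are all illustrative assumptions made here, not the authors' GPU implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear dose model: dose = A @ x for a fluence map x.
n_vox, n_beamlet = 200, 40
A = rng.random((n_vox, n_beamlet))
d_presc = np.full(n_vox, 60.0)                    # prescribed voxel doses (Gy)
d_ref = d_presc + rng.normal(0.0, 1.0, n_vox)     # "original plan" doses (reference DVH)

def dvh(dose, levels):
    """Fraction of voxels receiving at least each dose level."""
    return np.array([(dose >= L).mean() for L in levels])

w = np.ones(n_vox)                                # voxel weighting factors
for outer in range(30):
    # Inner loop: fluence map optimization of the weighted quadratic objective
    # F(x) = sum_i w_i (d_i(x) - p_i)^2, solved here as a weighted least-squares
    # problem (fluence non-negativity is ignored in this sketch).
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * d_presc, rcond=None)
    d = A @ x
    # Outer loop: raise the weights of voxels whose dose deviates more than it
    # did in the reference plan, steering the DVH toward the original one.
    w *= 1.0 + 0.1 * np.clip(np.abs(d - d_presc) - np.abs(d_ref - d_presc), 0.0, None)
    w /= w.mean()
```

    In a real planner the inner loop would enforce non-negative fluence and run on structure-specific prescriptions; the sketch only shows how the outer-loop weight adjustment reuses the original plan's per-voxel deviations.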

  13. Teleconnection Paths via Climate Network Direct Link Detection.

    PubMed

    Zhou, Dong; Gozolchiani, Avi; Ashkenazy, Yosef; Havlin, Shlomo

    2015-12-31

    Teleconnections describe remote connections (typically thousands of kilometers) of the climate system. They are of great importance in climate dynamics as they reflect the transport of energy and climate change on global scales (as in the El Niño phenomenon). Yet, the paths along which influence propagates between such remote regions, and the weighting associated with different paths, are only partially known. Here we propose a systematic climate network approach to find and quantify the optimal paths between remotely distant interacting locations. Specifically, we separate the correlations between two grid points into direct and indirect components, and find the optimal path based on a minimal total cost function over the direct links. We demonstrate our method using near-surface air temperature reanalysis data, identifying cross-latitude teleconnections and their corresponding optimal paths. The proposed method may be used to quantify and improve our understanding of the emergence of climate patterns on global scales.
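
    The minimal-total-cost path search over direct links can be sketched with Dijkstra's algorithm. Mapping each direct link's strength to a cost of -log(strength) is an assumption made here for illustration (not necessarily the paper's exact cost definition); under it, the cheapest path is the one maximizing the product of direct-link strengths:

```python
import heapq
from math import log

def optimal_path(links, src, dst):
    """Dijkstra over direct-link costs, where cost(u, v) = -log(strength)."""
    cost = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        c, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, strength in links.get(u, []):
            nc = c - log(strength)
            if nc < cost.get(v, float("inf")):
                cost[v] = nc
                prev[v] = u
                heapq.heappush(pq, (nc, v))
    # Reconstruct the optimal path by walking the predecessor chain backwards.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy network of grid points: the indirect route A -> B -> C (strengths 0.9, 0.9)
# beats the weak direct link A -> C (strength 0.5).
links = {"A": [("B", 0.9), ("C", 0.5)], "B": [("C", 0.9)]}
path = optimal_path(links, "A", "C")   # -> ["A", "B", "C"]
```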

  14. Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-11-01

    In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as a function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.

  15. On advanced configuration enhance adaptive system optimization

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei

    2017-10-01

    The aim is to find an effective method of structuring to enhance adaptive systems with complex functions, and to establish a universally applicable solution for prototyping and optimization. As the most attractive component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, with problems such as polarization dependence and a narrow working waveband. An advanced configuration based on a polarizing beam splitter with an optimized energy-splitting method is used to overcome these problems effectively. With the global algorithm, the bandwidth has been amplified by more than five times compared with that of traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. Therefore, the determining factors are prototype selection and system configuration; results show their effectiveness.

  16. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, Richard D.; Grodsinsky, Carlos M.; Allaire, Paul E.; Lewis, David W.; Knospe, Carl R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platform. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/ω)^4. Low frequency accelerations are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.

  17. Systematic Sensor Selection Strategy (S4) User Guide

    NASA Technical Reports Server (NTRS)

    Sowers, T. Shane

    2012-01-01

    This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
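
    The core idea, scoring candidate sensor suites with a user-defined merit function and searching over combinations, can be sketched as follows. The sensor names, fault sets, and cost trade-off below are invented for illustration; S4's actual merit functions and search strategy are application-specific:

```python
from itertools import combinations

# Hypothetical candidate sensors: (set of detectable faults, cost penalty).
sensors = {
    "EGT": ({"f1", "f2"}, 1.0),
    "N1":  ({"f2"}, 0.2),
    "Wf":  ({"f3"}, 0.5),
}

def merit(suite):
    """Diagnostic coverage minus a cost penalty (an illustrative trade-off)."""
    faults = set().union(*(sensors[s][0] for s in suite))
    cost = sum(sensors[s][1] for s in suite)
    return len(faults) - 0.1 * cost

def select_suite(candidates, merit, k):
    """Exhaustively score every k-sensor suite and keep the best one."""
    return max(combinations(candidates, k), key=merit)

best = select_suite(list(sensors), merit, 2)
```

    For realistic candidate pools, exhaustive search becomes infeasible and a heuristic search (e.g. genetic or greedy) over the same merit function would take its place.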

  18. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  19. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
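
    A standard result behind such MSE-dependent weighting (a textbook simplification, not the report's full derivation, which also involves the EOF decomposition) is that independent, unbiased forecasts combine optimally with weights inversely proportional to their mean square errors:

```python
import numpy as np

def ensemble_weights(mse):
    """Optimal weights for combining independent, unbiased member forecasts:
    w_i proportional to 1 / MSE_i, normalized to sum to one."""
    w = 1.0 / np.asarray(mse, dtype=float)
    return w / w.sum()

# A member with error variance 1 gets four times the weight of one with variance 4.
w = ensemble_weights([1.0, 4.0])   # -> [0.8, 0.2]
```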

  20. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.; Knospe, C. R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platforms. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward (preview) gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/ω)^4. Low frequency accelerations (less than 50 Hz) are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.

  1. Anatomically related gray and white matter alterations in the brains of functional dyspepsia patients.

    PubMed

    Nan, J; Liu, J; Mu, J; Zhang, Y; Zhang, M; Tian, J; Liang, F; Zeng, F

    2015-06-01

    Previous studies summarized altered brain functional patterns in functional dyspepsia (FD) patients, but how the brain structural patterns are related to FD remains largely unclear. The objective of this study was to determine the brain structural characteristics in FD patients. Optimized voxel-based morphometry and tract-based spatial statistics were employed to investigate the changes in gray matter (GM) and white matter (WM) respectively in 34 FD patients with postprandial distress syndrome and 33 healthy controls based on T1-weighted and diffusion-weighted imaging. The Pearson's correlation evaluated the link among GM alterations, WM abnormalities, and clinical variables in FD patients. The optimal brain structural parameters for identifying FD were explored using the receiver operating characteristic curve. Compared to controls, FD patients exhibited a decrease in GM density (GMD) in the right posterior insula/temporal superior cortex (marked as pINS), right inferior frontal cortex (IFC), and left middle cingulate cortex, and an increase in fractional anisotropy (FA) in the posterior limb of the internal capsule, posterior thalamic radiation, and external capsule (EC). Interestingly, the GMD in the pINS was significantly associated with GMD in the IFC and FA in the EC. Moreover, the EC adjacent to the pINS provided the best performance for distinguishing FD patients from controls. Our results showed pINS-related structural abnormalities in FD patients, indicating that GM and WM parameters were not affected independently. These findings would lay the foundation for probing an efficient target in the brain for treating FD. © 2015 John Wiley & Sons Ltd.

  2. A reliability-based cost effective fail-safe design procedure

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1976-01-01

    The authors have developed a methodology for the cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations are used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated by examples. In particular, the optimum design of a stiffened panel is discussed.

  3. Practical alternatives to chronic caloric restriction for optimizing vascular function with ageing

    PubMed Central

    Seals, Douglas R.

    2016-01-01

    Abstract Calorie restriction (CR) in the absence of malnutrition exerts a multitude of physiological benefits with ageing in model organisms and in humans including improvements in vascular function. Despite the well‐known benefits of chronic CR, long‐term energy restriction is not likely to be a feasible healthy lifestyle strategy in humans due to poor sustained adherence, and presents additional concerns if applied to normal weight older adults. This review summarizes what is known about the effects of CR on vascular function with ageing including the underlying molecular ‘energy‐ and nutrient‐sensing’ mechanisms, and discusses the limited but encouraging evidence for alternative pharmacological and lifestyle interventions that may improve vascular function with ageing by mimicking the beneficial effects of long‐term CR. PMID:27641062

  4. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wavenumber. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wavenumber ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wavenumber ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  5. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve the control system performance. PID tuning methods are divided into two categories, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method for the hydraulic positioning system.

  6. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
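
    The composite-kernel KELM at the heart of the model can be sketched as follows. The kernel mixture coefficients and parameters are fixed by hand here, whereas QWMK-ELM tunes them with QPSO; the toy two-class data and the chosen values are illustrative assumptions:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) base kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2, c=1.0):
    """Polynomial base kernel."""
    return (X @ Y.T + c) ** degree

def kelm_fit(X, T, mus, kernels, C=10.0):
    """Kernel ELM with a composite kernel K = sum_m mu_m K_m:
    the output weights solve (I/C + K) beta = T."""
    K = sum(mu * k(X, X) for mu, k in zip(mus, kernels))
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(Xtr, Xte, beta, mus, kernels):
    K = sum(mu * k(Xte, Xtr) for mu, k in zip(mus, kernels))
    return K @ beta

# Two toy "gas classes" as separated point clouds, one-hot targets.
X = np.array([[0.0, 0.0], [0.3, 0.2], [-0.2, 0.1], [0.1, -0.3],
              [3.0, 3.0], [3.2, 2.8], [2.9, 3.1], [3.1, 3.3]])
T = np.repeat(np.eye(2), 4, axis=0)
mus, kernels = [0.7, 0.3], [rbf, poly]

beta = kelm_fit(X, T, mus, kernels)
pred = kelm_predict(X, X, beta, mus, kernels)
```

    In the paper, the mus, each base kernel's parameters, and the regularization parameter C would all be optimized jointly by QPSO before the KELM solve.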

  7. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  8. Distal Triceps Tendon Injuries.

    PubMed

    Keener, Jay D; Sethi, Paul M

    2015-11-01

    Acute triceps ruptures are an uncommon entity, occurring mainly in athletes, weight lifters (especially those taking anabolic steroids), and following elbow trauma. Accurate diagnosis is made clinically, although MRI may aid in confirmation and surgical planning. Acute ruptures are classified on an anatomic basis based on tear location and the degree of tendon involvement. Most complete tears are treated surgically in medically fit patients. Partial-thickness tears are managed according to the tear severity, functional demands, and response to conservative treatment. We favor an anatomic footprint repair of the triceps to provide optimal tendon to bone healing and, ultimately, functional outcome. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
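
    The idea of recovering hybrid weights from (observation-minus-forecast, ensemble-variance) data pairs can be sketched synthetically. The gamma/chi-square error model and the plain linear least-squares fit below are assumptions made here for illustration, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (observation-minus-forecast, ensemble-variance) data pairs.
n = 20000
true_var = rng.gamma(shape=2.0, scale=0.5, size=n)   # flow-dependent true error variance
s2 = true_var * rng.chisquare(9, n) / 9              # noisy ensemble sample variance
omf = rng.normal(0.0, np.sqrt(true_var))             # observation-minus-forecast values

# Fit E[(o - f)^2 | s2] ~ a * s2 + b by least squares: a is the weight on the
# ensemble variance and the intercept b plays the role of the static
# (climatological) contribution in a hybrid weighted average.
a, b = np.linalg.lstsq(np.column_stack([s2, np.ones(n)]), omf**2, rcond=None)[0]
```

    Because the sample variance is a noisy estimate of the true variance, the fitted weight a falls below one and b absorbs the remainder, mirroring why blending with a static covariance outperforms the raw ensemble variance.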

  10. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    NASA Astrophysics Data System (ADS)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo Spectral Time Domain) is an excellent model for the light scattering simulation of nonspherical aerosol particles. However, due to the particularity of its discretization of the Maxwell's equations, the traditional Total Field/Scattering Field (TF/SF) technique for FDTD (Finite Difference Time Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total field and the scattering field region with incident terms, where the incident terms are obtained by weighting the incident field with a window function. To optimally determine the thickness of the connection region and the window function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattering field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD is not sensitive to the choice of window function. The number of connection layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show an excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident wave precisely. The weighted TF/SF technique also shows higher computational efficiency than the pure scattering field technique.

  11. Intraventricular Flow Velocity Vector Visualization Based on the Continuity Equation and Measurements of Vorticity and Wall Shear Stress

    NASA Astrophysics Data System (ADS)

    Itatani, Keiichi; Okada, Takashi; Uejima, Tokuhisa; Tanaka, Tomohiko; Ono, Minoru; Miyaji, Kagami; Takenaka, Katsu

    2013-07-01

    We have developed a system to estimate velocity vector fields inside the cardiac ventricle by echocardiography and to evaluate several flow dynamical parameters to assess the pathophysiology of cardiovascular diseases. A two-dimensional continuity equation was applied to color Doppler data using speckle tracking data as boundary conditions, and the velocity component perpendicular to the echo beam line was obtained. We determined the optimal smoothing method for the color Doppler data: a Gaussian filter with an 8-pixel standard deviation provided vorticity without nonphysical stripe-shaped noise. We also determined the weight function at the bilateral boundaries given by the speckle tracking data of the ventricular or vascular wall motion: a weight function linear in the distance from the boundary provided accurate flow velocities not only inside the vortex flow but also in near-wall regions, based on validation against a digital phantom of a pipe flow model.

  12. Theory of Random Copolymer Fractionation in Columns

    NASA Astrophysics Data System (ADS)

    Enders, Sabine

    Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, and their physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to molecular weight and chemical composition in every obtained fraction, depending on the operative properties, and is able to optimize the fractionation effectively.

  13. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    PubMed

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively finds the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and prospects for future application. Simulations are performed to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. The role of nutrition in the prevention of sarcopenia.

    PubMed

    Volkert, Dorothee

    2011-09-01

    Nutrition is regarded as one important contributing factor in the complex etiology of sarcopenia. Associations between several nutritional factors and muscle mass, strength, function and physical performance have been reported in a growing number of studies in recent years. Accordingly, the avoidance of weight loss is crucial to prevent the concomitant loss of muscle mass. Adequate amounts of high-quality protein are important for optimal stimulation of muscle protein synthesis. Vitamin D, antioxidants and ω-3 polyunsaturated fatty acids may also contribute to the preservation of muscle function. In order to ensure adequate intake in all elderly people, nutritional problems like loss of appetite and weight loss should be recognized early through routine screening for malnutrition in the elderly. Underlying causes need to be identified and subsequently corrected. The importance of physical activity, specifically resistance training, is emphasized, not only to facilitate muscle protein anabolism but also to increase energy expenditure, appetite and food intake in elderly people at risk of malnutrition.

  15. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method according to optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected as the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the validity and accuracy of the present approach in comparison with the traditional thin plate spline (TPS) radial basis functions.

  16. Design and Performance of Lift-Offset Rotorcraft for Short-Haul Missions

    DTIC Science & Technology

    2012-01-01

    loading and blade loading were varied to optimize the designs, based on gross weight and fuel burn. The influence of technology is shown, in terms of rotor hub drag and...distributions were optimized for these conditions (Fig. 4), and the rotor and aircraft cruise performance was calculated (Fig. 5). Based on comprehensive

  17. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of fine-resolution grid cells, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters directly from terrain properties, so that no parameter calibration would be needed; unfortunately, the uncertainty associated with this parameter derivation is very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients.
    This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
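    The improved PSO described in this record can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the parameter ranges (inertia weight from 0.9 to 0.4, acceleration coefficients between 0.5 and 2.5) and the exact form of the arccosine schedule are common choices assumed here, not taken from the paper.

    ```python
    import math
    import random

    def improved_pso(objective, dim, bounds, n_particles=20, max_iter=30,
                     w_max=0.9, w_min=0.4, c_max=2.5, c_min=0.5):
        """Minimize `objective` with a PSO variant using a linearly
        decreasing inertia weight and arccosine-scheduled acceleration
        coefficients (parameter values here are illustrative)."""
        lo, hi = bounds
        pos = [[random.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]

        for t in range(max_iter):
            # Linearly decreasing inertia weight: explore early, exploit late.
            w = w_max - (w_max - w_min) * t / max_iter
            # Arccosine strategy: frac rises from 0 to 1 along an arccos curve,
            # so the cognitive coefficient c1 decays while the social c2 grows.
            frac = math.acos(1.0 - 2.0 * t / max_iter) / math.pi
            c1 = c_max - (c_max - c_min) * frac
            c2 = c_min + (c_max - c_min) * frac
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    # Keep particles inside the search bounds.
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val
    ```

    The defaults `n_particles=20` and `max_iter=30` mirror the values the record reports as appropriate for the Liuxihe model; in a real application the objective would wrap a full rainfall-runoff simulation rather than an analytic function.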

  18. Designing a multistage supply chain in cross-stage reverse logistics environments: application of particle swarm optimization algorithms.

    PubMed

    Chiang, Tzu-An; Che, Z H; Cui, Zhihua

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated at downstream partners can be returned directly to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), the V_max method (PSOA_VMM), and the constriction factor method (PSOA_CFM), which we employed to find solutions supporting this mathematical model. Finally, a real case and five simulated cases of different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
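    Of the three PSO variants compared here, the constriction factor method replaces both the inertia weight and the V_max velocity clamp with a single coefficient that guarantees bounded trajectories. A minimal sketch, using the common Clerc-Kennedy default coefficients (c1 = c2 = 2.05) rather than values from the paper:

    ```python
    import math
    import random

    def constriction_factor(c1=2.05, c2=2.05):
        """Clerc-Kennedy constriction coefficient chi; requires phi = c1 + c2 > 4."""
        phi = c1 + c2
        return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

    def cfm_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05):
        """One-dimensional PSOA_CFM velocity update: chi scales the whole
        classical update, so no separate inertia weight or V_max clamp
        is needed to keep the swarm stable."""
        chi = constriction_factor(c1, c2)
        return chi * (v
                      + c1 * random.random() * (pbest - x)
                      + c2 * random.random() * (gbest - x))
    ```

    With the defaults, chi is about 0.73, which damps each step just enough that velocities cannot diverge; PSOA_IWM instead tunes an explicit inertia weight, and PSOA_VMM caps the raw velocity at V_max.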

  19. Designing a Multistage Supply Chain in Cross-Stage Reverse Logistics Environments: Application of Particle Swarm Optimization Algorithms

    PubMed Central

    Chiang, Tzu-An; Che, Z. H.

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V_max method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulated cases of different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did. PMID:24772026

  20. Promoting Healthy Weight with "Stability Skills First": A Randomized Trial

    ERIC Educational Resources Information Center

    Kiernan, Michaela; Brown, Susan D.; Schoffman, Danielle E.; Lee, Katherine; King, Abby C.; Taylor, C. Barr; Schleicher, Nina C.; Perri, Michael G.

    2013-01-01

    Objective: Although behavioral weight-loss interventions produce short-term weight loss, long-term maintenance remains elusive. This randomized trial examined whether learning a novel set of "stability skills" before losing weight improved long-term weight management. Stability skills were designed to optimize individuals' current…
