Science.gov

Sample records for optimization approach applied

  1. Optimal dynamic control of invasions: applying a systematic conservation approach.

    PubMed

    Adams, Vanessa M; Setterfield, Samantha A

    2015-06-01

    The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when there is a constrained budget. Under a constrained budget, applying the most efficient optimization scenario (control, MinLoss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited it

  2. Homotopic approach and pseudospectral method applied jointly to low thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Guo, Tieding; Jiang, Fanghua; Li, Junfeng

    2012-02-01

    The homotopic approach and the pseudospectral method are two popular techniques for low-thrust trajectory optimization. A hybrid scheme is proposed in this paper by combining the two to cope with various difficulties encountered when they are applied separately. Explicitly, a smooth energy-optimal problem is first discretized by the pseudospectral method, leading to a nonlinear programming problem (NLP). Costates, especially their initial values, are then estimated from the Karush-Kuhn-Tucker (KKT) multipliers of this NLP. Based upon these estimated initial costates, homotopic procedures are initiated efficiently and the desirable non-smooth fuel-optimal results are finally obtained by continuing the smooth energy-optimal results through a homotopic algorithm. Two main difficulties, one due to the absence of reasonable initial costates when the homotopic procedures are initiated and the other due to discontinuous bang-bang controls when the pseudospectral method is applied to the fuel-optimal problem, are both resolved successfully. Numerical results of two scenarios are presented in the end, demonstrating the feasibility and good performance of this hybrid technique.
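
    As a sketch of the continuation idea (assuming the energy-to-fuel homotopy commonly used in the low-thrust literature, e.g. Bertrand and Epenoy; the paper's exact functional may differ), a parameter ε connects the smooth and non-smooth problems:

        J_ε = ∫ [ (1 − ε) u(t) + ε u(t)² ] dt,   0 ≤ u ≤ 1

    At ε = 1 this is the smooth energy-optimal problem whose discretized NLP yields costate estimates from its KKT multipliers; continuing the solution as ε → 0 recovers the fuel-optimal problem with its bang-bang control.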

  3. Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase

    NASA Technical Reports Server (NTRS)

    Born, G. J.; Kai, T.

    1975-01-01

    A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in the quadratic cost function. A control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on their determinants. The extremization is effectively a minimization of the effects of disturbances, and is interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system; the measure is determined by the system requirements.
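
    For reference, the quadratic cost whose weighting matrices are at issue has the standard LQR form (the determinant-based selection rule described above is this paper's addition on top of it):

        J = ∫₀^∞ ( xᵀ Q x + uᵀ R u ) dt

    Choosing Q and R fixes the balance between state accuracy and control effort; here they are selected by extremizing closed-loop damping subject to constraints on their determinants, with the determinant ratio as the single tuning parameter.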

  4. Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.

    2000-01-01

    Classical design methods for magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.

  5. Macronutrient modifications of optimal foraging theory: an approach using indifference curves applied to some modern foragers

    SciTech Connect

    Hill, K.

    1988-06-01

    The use of energy (calories) as the currency to be maximized per unit time in Optimal Foraging Models is considered in light of data on several foraging groups. Observations on the Ache, Cuiva, and Yora foragers suggest that men do not attempt to maximize energetic return rates, but instead often concentrate on acquiring meat resources that provide lower energetic returns. The possibility that this preference is due to the macronutrient composition of hunted and gathered foods is explored. Indifference curves are introduced as a means of modeling the tradeoff between two desirable commodities, meat (protein-lipid) and carbohydrate, and a specific indifference curve is derived using observed choices in five foraging situations. This curve is used to predict the amount of meat that Mbuti foragers will trade for carbohydrate, in an attempt to test the utility of the approach.

  6. Optimal control of open quantum systems: A combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces

    SciTech Connect

    Asplund, Erik; Kluener, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  7. Optimal control of open quantum systems: a combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces.

    PubMed

    Asplund, Erik; Klüner, Thorsten

    2012-03-28

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated. PMID:22462846

  8. Optimal control of open quantum systems: A combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces

    NASA Astrophysics Data System (ADS)

    Asplund, Erik; Klüner, Thorsten

    2012-03-01

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997), 10.1063/1.473950]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998), 10.1063/1.475576; Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004), 10.1063/1.1650297]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  9. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  10. Dynamic programming in applied optimization problems

    NASA Astrophysics Data System (ADS)

    Zavalishchin, Dmitry

    2015-11-01

    Features of the use of dynamic programming in applied problems are investigated. In practice, problems such as finding critical paths in network planning and control, finding an optimal supply plan in the transportation problem, and the territorial distribution of objects are traditionally solved by special methods of operations research. It should be noted that dynamic programming does not provide computational advantages, but it does facilitate changes and modifications of the tasks; this follows from Bellman's optimality principle. The features of constructing multistage decision processes in applied problems are presented.
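
    As a concrete illustration of the Bellman recursion behind such applications, the sketch below (Python, with a hypothetical task network) computes the critical-path length in network planning by dynamic programming:

        # Critical-path length via the Bellman recursion: the value of a node
        # is the maximum over its successors of (edge duration + successor value).
        from collections import defaultdict
        from functools import cache

        edges = [("start", "a", 3), ("start", "b", 2),   # hypothetical tasks
                 ("a", "c", 4), ("b", "c", 1), ("c", "end", 2)]

        succ = defaultdict(list)
        for u, v, w in edges:
            succ[u].append((v, w))

        @cache
        def longest(node):
            if node not in succ:          # sink node: no remaining work
                return 0
            return max(w + longest(v) for v, w in succ[node])

        print(longest("start"))           # -> 9, the path start-a-c-end

    The memoization is what makes this dynamic programming rather than brute-force enumeration: each node's value is computed once and reused, which is exactly the property credited above to Bellman's optimality principle.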

  11. In silico optimization of pharmacokinetic properties and receptor binding affinity simultaneously: a 'parallel progression approach to drug design' applied to β-blockers.

    PubMed

    Advani, Poonam; Joseph, Blessy; Ambre, Premlata; Pissurlenkar, Raghuvir; Khedkar, Vijay; Iyer, Krishna; Gabhe, Satish; Iyer, Radhakrishnan P; Coutinho, Evans

    2016-01-01

    The present work exploits the potential of in silico approaches for minimizing attrition of leads in the later stages of drug development. We propose a theoretical approach, wherein 'parallel' information is generated to simultaneously optimize the pharmacokinetics (PK) and pharmacodynamics (PD) of lead candidates. β-blockers, though in use for many years, have suboptimal PKs; hence they are an ideal test series for the 'parallel progression approach'. This approach utilizes molecular modeling tools, viz. hologram quantitative structure-activity relationships, homology modeling, docking, predictive metabolism, and toxicity models. Validated models have been developed for PK parameters such as volume of distribution (log Vd) and clearance (log Cl), which together influence the half-life (t1/2) of a drug. Simultaneously, models for PD in terms of the inhibition constant pKi have been developed. Thus, PK and PD properties of β-blockers were concurrently analyzed and, after iterative cycling, modifications were proposed that led to compounds with optimized PK and PD. We report some of the resultant re-engineered β-blockers with improved half-lives and pKi values comparable with marketed β-blockers. These were further analyzed by docking studies to evaluate their binding poses. Finally, metabolic and toxicological assessment of these molecules was done through in silico methods. The strategy proposed herein has potential universal applicability and can be used in any drug discovery scenario, provided that the data used are consistent in terms of experimental conditions, endpoints, and methods employed. Thus the 'parallel progression approach' helps to simultaneously fine-tune various properties of the drug and would be an invaluable tool during the drug development process.

  12. Multidisciplinary optimization applied to a transport aircraft

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Wrenn, G. A.

    1984-01-01

    Decomposition of a large optimization problem into several smaller subproblems has been proposed as an approach to making large-scale optimization problems tractable. To date, the characteristics of this approach have been tested on problems of limited complexity. The objective of this effort is to demonstrate the application of this multilevel optimization method in a large-scale design study using analytical models comparable to those currently used in the aircraft industry. The purpose of the design study underway to provide this demonstration is to generate a wing design for a transport aircraft that will perform a specified mission with minimum block fuel. Included are a definition of the problem, a discussion of the multilevel decomposition used for an aircraft wing, descriptions of the analysis and optimization procedures used at each level, and numerical results obtained to date. Computational times required to perform various steps in the process are also given. Finally, a summary of the current status and plans for continuing this development effort is given.

  13. Applying Squeaky-Wheel Optimization to Schedule Airborne Astronomy Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Kuerklue, Elif

    2004-01-01

    We apply the Squeaky Wheel Optimization (SWO) algorithm to the problem of scheduling astronomy observations for the Stratospheric Observatory for Infrared Astronomy, an airborne observatory. The problem contains complex constraints relating the feasibility of an astronomical observation to the position and time at which the observation begins, telescope elevation limits, special use airspace, and available fuel. Solving the problem requires making discrete choices (e.g. selection and sequencing of observations) and continuous ones (e.g. takeoff time and setting up observations by repositioning the aircraft). The problem also includes optimization criteria such as maximizing observing time while simultaneously minimizing total flight time. Previous approaches to the problem fail to scale when accounting for all constraints. We describe how to customize SWO to solve this problem, and show that it finds better flight plans, often with less computation time, than previous approaches.
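
    A minimal sketch of the construct/analyze/prioritize loop that characterizes SWO (generic placeholder functions in Python; the SOFIA-specific constraints above are not modeled):

        # Squeaky Wheel Optimization: build a solution greedily in priority
        # order, assign blame to poorly served tasks, and promote the
        # "squeakiest" tasks in the next ordering.
        def swo(tasks, construct, blame, rounds=100):
            order = list(tasks)                      # initial priority sequence
            best, best_cost = None, float("inf")
            for _ in range(rounds):
                solution, cost = construct(order)    # greedy build in priority order
                if cost < best_cost:
                    best, best_cost = solution, cost
                squeak = blame(solution)             # per-task trouble scores
                order.sort(key=lambda t: -squeak[t]) # squeakiest tasks go first
            return best, best_cost

    Here construct and blame are problem-specific: for observation scheduling, construct would place observations subject to the time, elevation, airspace, and fuel constraints, and blame would score observations that were dropped or poorly placed.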

  14. Applying Research Evidence to Optimize Telehomecare

    PubMed Central

    Bowles, Kathryn H.; Baugh, Amy C.

    2010-01-01

    Telemedicine is the use of technology to provide healthcare over a distance. Telehomecare, a form of telemedicine based in the patient's home, is a communication and clinical information system that enables the interaction of voice, video, and health-related data using ordinary telephone lines. Most home care agencies are adopting telehomecare to assist with the care of the growing population of chronically ill adults. This article presents a summary and critique of the published empirical evidence about the effects of telehomecare on older adult patients with chronic illness. The knowledge gained will be applied in a discussion regarding telehomecare optimization and areas for future research. The referenced literature in PubMed, MEDLINE, CDSR, ACP Journal Club, DARE, CCTR, and CINAHL databases was searched for the years 1995–2005 using the keywords “telehomecare” and “telemedicine,” and limited to primary research and studies in English. Approximately 40 articles were reviewed. Articles were selected if telehealth technology with peripheral medical devices was used to deliver home care for adult patients with chronic illness. Studies where the intervention consisted of only telephone calls or did not involve video or in-person nurse contact in the home were excluded. Nineteen studies described the effects of telehomecare on adult patients, chronic illness outcomes, providers, and costs of care. Patients and providers were accepting of the technology, and it appears to have positive effects on chronic illness outcomes such as self-management, rehospitalizations, and length of stay. Overall, due to savings from healthcare utilization and travel, telehomecare appears to reduce healthcare costs. Generally, studies have small sample sizes with diverse types and doses of telehomecare intervention for a select few chronic illnesses, most commonly heart failure. Very few published studies have explored the cost or quality implications since the change in home

  15. Applying optimization software libraries to engineering problems

    NASA Technical Reports Server (NTRS)

    Healy, M. J.

    1984-01-01

    Nonlinear programming, preliminary design problems, performance simulation problems, trajectory optimization, flight computer optimization, and linear least-squares problems are among the topics covered. The nonlinear programming applications encountered in a large aerospace company are a real challenge to those who provide mathematical software libraries and consultation services. Typical applications include preliminary design studies, data fitting and filtering, jet engine simulations, control system analysis, and trajectory optimization and optimal control. Problem sizes range from single-variable unconstrained minimization to constrained problems with highly nonlinear functions and hundreds of variables. Most of the applications can be posed as nonlinearly constrained minimization problems. Highly complex optimization problems with many variables were formulated in the early days of computing. At the time, many problems had to be reformulated or bypassed entirely, and solution methods often relied on problem-specific strategies. Problems with more than ten variables usually went unsolved.

  16. Applying Dynamic Evaluation Approach in Education

    ERIC Educational Resources Information Center

    Grammatikopoulos, V.; Koustelios, A.; Tsigilis, N.; Theodorakis, Y.

    2004-01-01

    The purpose of the current study was to implement a newly proposed evaluation method (Dimitropoulos, 1999), that is, the dynamic evaluation approach, in the field of education. The dynamic approach was applied in order to evaluate the Olympic Education Program in Greece. The results of the present field study were encouraging. Dynamic evaluation…

  17. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  18. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  19. A General Approach to Error Estimation and Optimized Experiment Design, Applied to Multislice Imaging of T1 in Human Brain at 4.1 T

    NASA Astrophysics Data System (ADS)

    Mason, Graeme F.; Chu, Wen-Jang; Hetherington, Hoby P.

    1997-05-01

    In this report, a procedure to optimize inversion-recovery times, in order to minimize the uncertainty in the measured T1 from 2-point multislice images of the human brain at 4.1 T, is discussed. The 2-point, 40-slice measurement employed inversion-recovery delays chosen based on the minimization of noise-based uncertainties. For comparison of the measured T1 values and uncertainties, 10-point, 3-slice measurements were also acquired. The measured T1 values using the 2-point method were 814, 1361, and 3386 ms for white matter, gray matter, and cerebral spinal fluid, respectively, in agreement with the respective T1 values of 817, 1329, and 3320 ms obtained using the 10-point measurement. The 2-point, 40-slice method was used to determine the T1 in the cortical gray matter, cerebellar gray matter, caudate nucleus, cerebral peduncle, globus pallidus, colliculus, lenticular nucleus, base of the pons, substantia nigra, thalamus, white matter, corpus callosum, and internal capsule.
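
    For context, two-point T1 estimation typically rests on the standard inversion-recovery signal model (written here assuming perfect inversion; the optimization above chooses the two inversion times to minimize the noise propagated into T1):

        S(TI) = S_0 ( 1 − 2 e^{−TI/T_1} )

    The ratio S(TI_1)/S(TI_2) of the two measurements depends only on T1 and is solved numerically; how strongly measurement noise perturbs that solution depends on where TI_1 and TI_2 fall relative to T1, which is what makes the choice of delays an optimization problem.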

  20. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
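
    For reference, a compact sketch of the core Differential Evolution step that the multiobjective extension builds on (classic DE/rand/1/bin in Python, on a hypothetical sphere objective; the Pareto-based variant replaces the scalar comparison in the selection step with a dominance check):

        import numpy as np

        def de(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = len(lo)
            pop = rng.uniform(lo, hi, (pop_size, dim))
            fit = np.array([f(x) for x in pop])
            for _ in range(iters):
                for i in range(pop_size):
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
                    cross = rng.random(dim) < CR                # binomial crossover
                    cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = f(trial)
                    if f_trial <= fit[i]:                       # greedy 1-to-1 selection
                        pop[i], fit[i] = trial, f_trial
            return pop[fit.argmin()], fit.min()

        best, val = de(lambda x: float(np.sum(x**2)), bounds=[(-5, 5)] * 3)
        print(best, val)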

  21. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    SciTech Connect

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but queue wait times and the limited flexibility to request compute resources on demand are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  22. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each level of return compared to the mean-variance approach.
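
    A minimal numerical sketch of the substitution involved (Python with synthetic heavy-tailed returns; the study's actual 30-stock Bursa Malaysia data are not reproduced here). The median-variance approach replaces the sample mean with the sample median as the expected-return estimate, which is more robust when returns are non-normal:

        import numpy as np

        rng = np.random.default_rng(1)
        # synthetic daily returns for 5 assets, heavy-tailed (Student-t, df=3)
        returns = 0.01 * rng.standard_t(df=3, size=(250, 5))

        mean_est = returns.mean(axis=0)          # mean-variance return estimate
        median_est = np.median(returns, axis=0)  # median-variance return estimate
        cov = np.cov(returns, rowvar=False)      # risk model shared by both

        w = np.full(5, 0.2)                      # an equal-weight candidate portfolio
        print(w @ mean_est, w @ median_est, w @ cov @ w)

    Either location estimate can be fed to the same Markowitz-style optimizer; with heavy tails the two estimates, and hence the resulting portfolios, can differ markedly.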

  23. Optimal quantisation applied to digital holographic data

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2005-06-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated, owing to the development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realised, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantisation. We adapt the optimal signal quantisation technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry.
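
    Optimal scalar quantisation in this minimum-distortion sense is commonly computed with Lloyd's algorithm (the Lloyd-Max conditions); a sketch in Python, applied to synthetic Gaussian samples standing in for the real or imaginary part of a hologram pixel:

        import numpy as np

        def lloyd_max(samples, levels=8, iters=50):
            # initialize representation levels from sample quantiles
            reps = np.quantile(samples, np.linspace(0.05, 0.95, levels))
            for _ in range(iters):
                bounds = (reps[:-1] + reps[1:]) / 2     # nearest-level thresholds
                idx = np.searchsorted(bounds, samples)  # cell index of each sample
                for k in range(levels):                 # centroid condition
                    cell = samples[idx == k]
                    if cell.size:
                        reps[k] = cell.mean()
            return reps

        data = np.random.default_rng(0).normal(size=10_000)
        print(lloyd_max(data))

    For complex-valued holographic data the same procedure would be applied to the real and imaginary components separately, matching the histogram-based description above.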

  24. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well-calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  25. Applying a managerial approach to day surgery.

    PubMed

    Onetti, Alberto

    2008-01-01

    The present article explores the day surgery topic from a managerial perspective. If we assume such a perspective, day surgery can be considered a business model decision and not just a surgical alternative to traditional procedures requiring patient hospitalization. In this article we highlight the main steps required to develop a strategic approach [Cotta Ramusino E, Onetti A. Strategia d'Impresa. Milano: Il Sole 24 Ore; Second Edition, 2007] at hospital level (Onetti A, Greulich A. Strategic management in hospitals: the balanced scorecard approach. Milano: Giuffé; 2003) and to make day surgery part of it. It means understanding how and when day surgery can improve the health care providers' overall performance, both in terms of clinical effectiveness and financial results, and how to organize and integrate it with the other hospital activities in order to make it work. Approaching day surgery as a business model decision requires addressing a list of potential issues in advance and necessitates continued audit to verify the results. When this happens, day surgery can be both safe and cost-effective and can have a positive impact on surgical patient satisfaction. We propose a sort of "check-up list" useful to hospital managers and doctors who are evaluating the option of introducing day surgery or are trying to optimize it. PMID:19131286

  26. Perturbation approach applied to modal diffraction methods.

    PubMed

    Bischoff, Joerg; Hehl, Karl

    2011-05-01

    Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry, and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on a perturbation method. PMID:21532698
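
    The underlying idea can be stated compactly: for a simple eigenpair A x = λ x with left eigenvector y, a small change δA in the matrix shifts the eigenvalue, to first order, by

        δλ ≈ (yᴴ δA x) / (yᴴ x)

    so eigensolutions from a nearby computation (an adjacent RCWA slice, the previous wavelength or angle in a scan) provide inexpensive starting estimates in place of a full eigendecomposition. (This is the standard first-order formula for simple eigenvalues; the paper's actual scheme may differ in detail.)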

  27. Particle swarm optimization applied to impulsive orbital transfers

    NASA Astrophysics Data System (ADS)

    Pontani, Mauro; Conway, Bruce A.

    2012-05-01

    The particle swarm optimization (PSO) technique is a population-based stochastic method developed in recent years and successfully applied in several fields of research. It mimics the unpredictable motion of bird flocks while searching for food, with the intent of determining the optimal values of the unknown parameters of the problem under consideration. At the end of the process, the best particle (i.e. the best solution with reference to the objective function) is expected to contain the globally optimal values of the unknown parameters. The central idea underlying the method is contained in the formula for velocity updating. This formula includes three terms with stochastic weights. This research applies the particle swarm optimization algorithm to the problem of optimizing impulsive orbital transfers. More specifically, the following problems are considered and solved with the PSO algorithm: (i) determination of the globally optimal two- and three-impulse transfer trajectories between two coplanar circular orbits; (ii) determination of the optimal transfer between two coplanar, elliptic orbits with arbitrary orientation; (iii) determination of the optimal two-impulse transfer between two circular, non-coplanar orbits; (iv) determination of the globally optimal two-impulse transfer between two non-coplanar elliptic orbits. Despite its intuitiveness and simplicity, the particle swarm optimization method proves to be capable of effectively solving the orbital transfer problems of interest with great numerical accuracy.
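
    The velocity-updating formula referred to above has, in its standard form, exactly the three stochastically weighted terms mentioned (w is the inertia weight, c_1 and c_2 the cognitive and social coefficients, r_1 and r_2 uniform random numbers, p_i the particle's own best position, and g the swarm's best):

        v_i ← w v_i + c_1 r_1 (p_i − x_i) + c_2 r_2 (g − x_i),   x_i ← x_i + v_i

    The inertia term preserves exploration, while the cognitive and social terms pull each particle toward the best solutions found so far.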

  28. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  29. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.
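
    A typical aggregating function of the kind optimized here is the weighted sum (one of several well-known choices; the specific functions used in the study are those it cites from the multiobjective literature):

        F(x) = Σ_{i=1}^{k} λ_i f_i(x),   λ_i ≥ 0,   Σ_i λ_i = 1

    which collapses the k objectives f_i (for thermal sterilization, e.g., minimizing process time while maximizing nutrient retention) into a single objective that the adaptive random search can optimize; varying the weights traces out different compromise solutions.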

  30. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  31. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  32. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model

    PubMed Central

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V.

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139

  33. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods. PMID:27387139

  34. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  35. Applying SF-Based Genre Approaches to English Writing Class

    ERIC Educational Resources Information Center

    Wu, Yan; Dong, Hailin

    2009-01-01

    By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.

  36. Approaches for Informing Optimal Dose of Behavioral Interventions

    PubMed Central

    King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.

    2015-01-01

    Background There is little guidance about how to select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964

  37. Conjunctive Multibasin Management: An Optimal Control Approach

    NASA Astrophysics Data System (ADS)

    Noel, Jay E.; Howitt, Richard E.

    1982-08-01

    The economic effects of conjunctive management of ground and surface water supplies for irrigation are formulated as an optimal control model. An empirical hydroeconomic model is estimated for the Yolo County district in California. Two alternative solution methodologies (analytic Riccati and mathematical programming) are applied and compared. Results show the economic potential for interbasin transfers and the impact of increased electricity prices on optimal groundwater management.

  38. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    Efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health (NIOSH), adapted to Italian legislation. The proposed model provides ergonomic optimization by analyzing the ergonomic characteristics of manual work performed under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
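
    For orientation, the NIOSH criteria mentioned above are commonly expressed through the revised lifting equation, which scales a load constant by task multipliers (stated here as background; the paper's model adapts this family of criteria to Italian legislation):

        RWL = LC × HM × VM × DM × AM × FM × CM

    where LC = 23 kg is the load constant and the multipliers account for horizontal distance (HM), vertical height (VM), vertical displacement (DM), asymmetry (AM), lifting frequency (FM), and coupling quality (CM); a lifting index LI = load weight / RWL greater than 1 flags ergonomic risk.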

  39. Multicriterial approach to beam dynamics optimization problem

    NASA Astrophysics Data System (ADS)

    Vladimirova, L. V.

    2016-09-01

    The problem of optimizing particle beam dynamics in an accelerating system is considered in the case when the quality of the control process is estimated by several functionals. A multicriterial approach is used. When there are two criteria, a compromise curve may be obtained. If the number of criteria is three or more, one can select some criteria as primary and impose constraints on the remaining ones. The optimization result is a set of efficient controls; the user can then select the most appropriate control among them. The paper presents the results of multicriteria optimization of beam dynamics in the linear accelerator LEA-15-M.

  40. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is that the computing time is independent of the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  41. Applied topology optimization of vibro-acoustic hearing instrument models

    NASA Astrophysics Data System (ADS)

    Søndergaard, Morten Birkmose; Pedersen, Claus B. W.

    2014-02-01

    Designing hearing instruments remains an acoustic challenge, as users request small designs for comfortable wear and cosmetic appeal while requiring sufficient amplification from the device. First, to ensure proper amplification, a critical design challenge in the hearing instrument is to minimize the feedback from the outputs (generated sound and vibrations) of the receiver looping back into the microphones. Second, the feedback signal is conventionally minimized using time-consuming trial-and-error design procedures on physical prototypes and on virtual models using finite element analysis. In the present work it is demonstrated that structural topology optimization of vibro-acoustic finite element models can be used both to sufficiently minimize the feedback signal and to reduce the time-consuming trial-and-error design approach. The structural topology optimization of a vibro-acoustic finite element model is shown for an industrial full-scale model hearing instrument.

  42. Mathematical Modelling: A New Approach to Teaching Applied Mathematics.

    ERIC Educational Resources Information Center

    Burghes, D. N.; Borrie, M. S.

    1979-01-01

    Describes the advantages of mathematical modeling approach in teaching applied mathematics and gives many suggestions for suitable material which illustrates the links between real problems and mathematics. (GA)

  43. A deterministic global approach for mixed-discrete structural optimization

    NASA Astrophysics Data System (ADS)

    Lin, Ming-Hua; Tsai, Jung-Fa

    2014-07-01

    This study proposes a novel approach for finding the exact global optimum of a mixed-discrete structural optimization problem. Although many approaches have been developed to solve the mixed-discrete structural optimization problem, they cannot guarantee finding a global solution or they adopt too many extra binary variables and constraints in reformulating the problem. The proposed deterministic method uses convexification strategies and linearization techniques to convert a structural optimization problem into a convex mixed-integer nonlinear programming problem solvable to obtain a global optimum. To enhance the computational efficiency in treating complicated problems, the range reduction technique is also applied to tighten variable bounds. Several numerical experiments drawn from practical structural design problems are presented to demonstrate the effectiveness of the proposed method.

  44. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequentially optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  45. Multiobjective genetic approach for optimal control of photoinduced processes

    SciTech Connect

    Bonacina, Luigi; Extermann, Jerome; Rondi, Ariana; Wolf, Jean-Pierre; Boutou, Veronique

    2007-08-15

    We have applied a multiobjective genetic algorithm to the optimization of multiphoton-excited fluorescence. Our study shows the advantages that this approach can offer to experiments based on adaptive shaping of femtosecond pulses. The algorithm outperforms single-objective optimizations, being totally independent of the bias of user-defined parameters and giving simultaneous access to a large set of feasible solutions. The global inspection of their ensemble represents a powerful support to unravel the connections between pulse spectral field features and excitation dynamics of the sample.

  46. Optimal cooperative control synthesis applied to a control-configured aircraft

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Innocenti, M.

    1984-01-01

    A multivariable control augmentation synthesis method is presented that is intended to enable the designer to directly optimize pilot opinion rating of the augmented system. The approach involves the simultaneous solution for the augmentation and predicted pilot's compensation via optimal control techniques. The methodology is applied to the control law synthesis for a vehicle similar to the AFTI F16 control-configured aircraft. The resulting dynamics, expressed in terms of eigenstructure and time/frequency responses, are presented with analytical predictions of closed loop tracking performance, pilot compensation, and other predictors of pilot acceptance.

  47. Applying Loop Optimizations to Object-oriented Abstractions Through General Classification of Array Semantics

    SciTech Connect

    Yi, Q; Quinlan, D

    2004-03-05

    Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows the explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface. Through this interface, we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that using our approach, applications using high-level abstractions can achieve comparable, and in cases superior, performance to that achieved by efficient low-level hand-written codes.

  48. A Bayesian approach to optimizing cryopreservation protocols.

    PubMed

    Sambu, Sammy

    2015-01-01

    Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. By taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary meta-data, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear direction on the decision process for the selection of methods was developed, with the key choices being the cooling rate and plunge temperature on the one hand, and biomaterial choice, use of composites (sugars and proteins as additional constituents), loading procedure, and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these meta-data were used to develop posterior probabilities for combinatorial approaches that were implicitly recorded in the meta-data. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally-optimized physical protocols. In conclusion, this article proposes the use of DTLA models, and subsequently NBC, for the improvement of modern cryopreservation techniques through an integrative approach.
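
    A minimal sketch of the NBC step (Python with scikit-learn, on synthetic stand-in data; the actual meta-data are published cryopreservation outcomes, and the feature names below are illustrative):

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        n = 200
        # illustrative protocol features: cooling rate (C/min),
        # plunge temperature (C), cryoprotectant molarity (M)
        X = np.column_stack([rng.uniform(0.1, 10.0, n),
                             rng.uniform(-80.0, -20.0, n),
                             rng.uniform(0.5, 2.0, n)])
        # toy label: "high survival" protocols (stand-in for reported outcomes)
        y = ((X[:, 0] < 2.0) & (X[:, 1] < -40.0)).astype(int)

        model = GaussianNB().fit(X, y)
        # posterior probability of high survival for a candidate protocol
        print(model.predict_proba([[1.0, -60.0, 1.5]]))

    The posterior probabilities play the role described above: combinations of protocol choices never tested jointly still receive a survival estimate, which is how the classifier can point to improved combinatorial protocols.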

  49. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
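
    A stripped-down sketch of the core PSOGO-Senti loop (Python; a particle swarm searching jointly over the number of top-ranked features and the SVM parameters, on toy data with mutual information standing in for IG; all ranges and constants are illustrative):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=40, random_state=0)
        rank = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

        def fitness(p):
            # particle encodes (k, log10 C, log10 gamma)
            k, C, gamma = int(round(p[0])), 10 ** p[1], 10 ** p[2]
            cols = rank[:max(1, k)]
            return cross_val_score(SVC(C=C, gamma=gamma), X[:, cols], y, cv=3).mean()

        rng = np.random.default_rng(0)
        lo = np.array([1.0, -2.0, -4.0])    # lower bounds for (k, log10 C, log10 gamma)
        hi = np.array([40.0, 2.0, 0.0])     # upper bounds
        pos = rng.uniform(lo, hi, (15, 3))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pfit = np.array([fitness(p) for p in pos])
        for _ in range(20):
            g = pbest[pfit.argmax()]        # global best particle
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            fit = np.array([fitness(p) for p in pos])
            better = fit > pfit
            pbest[better], pfit[better] = pos[better], fit[better]
        print(pbest[pfit.argmax()], pfit.max())

    The swarm converges toward the feature-subset size and SVM parameters with the best cross-validated accuracy, mirroring the joint feature/parameter search described above.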

  50. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring-based beam selection algorithm is part of the planning system and produces better plans than those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  11. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, whose feasible region is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits a crucial geometric property of the fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
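
    For the stable marriage side, the polytope in question can be written down directly; below is a sketch assuming SciPy's linprog and a tiny invented 3x3 instance. Because the extreme points of these constraints are exactly the stable matchings, a vertex optimum of the LP is integral (the standard assignment equalities plus one stability inequality per pair).

      import numpy as np
      from scipy.optimize import linprog

      # preference lists (index = rank, value = partner id); invented instance
      men_pref = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
      wom_pref = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
      n = 3
      rank_m = [{w: r for r, w in enumerate(p)} for p in men_pref]
      rank_w = [{m: r for r, m in enumerate(p)} for p in wom_pref]
      idx = lambda m, w: m * n + w

      A_eq, b_eq = [], []
      for m in range(n):                       # each man matched exactly once
          row = np.zeros(n * n); row[[idx(m, w) for w in range(n)]] = 1
          A_eq.append(row); b_eq.append(1)
      for w in range(n):                       # each woman matched exactly once
          row = np.zeros(n * n); row[[idx(m, w) for m in range(n)]] = 1
          A_eq.append(row); b_eq.append(1)

      A_ub, b_ub = [], []                      # stability: no blocking pair (m, w)
      for m in range(n):
          for w in range(n):
              row = np.zeros(n * n)
              row[idx(m, w)] = 1
              for w2 in range(n):              # partners m strictly prefers to w
                  if rank_m[m][w2] < rank_m[m][w]:
                      row[idx(m, w2)] = 1
              for m2 in range(n):              # partners w strictly prefers to m
                  if rank_w[w][m2] < rank_w[w][m]:
                      row[idx(m2, w)] = 1
              A_ub.append(-row); b_ub.append(-1)   # encode row @ x >= 1

      c = np.array([rank_m[m][w] + rank_w[w][m]    # egalitarian objective
                    for m in range(n) for w in range(n)])
      res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=(0, 1))
      print(res.x.round(2).reshape(n, n))      # 0/1 matrix = a stable matching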

  12. Optimization of a Solar Photovoltaic Applied to Greenhouses

    NASA Astrophysics Data System (ADS)

    Nakoul, Z.; Bibi-Triki, N.; Kherrous, A.; Bessenouci, M. Z.; Khelladi, S.

    Global energy consumption, in our country as elsewhere, is increasing. The bulk of world energy comes from fossil fuels, whose reserves are doomed to exhaustion and which are the leading cause of pollution and of global warming through the greenhouse effect. This is not the case for renewable energies, which are inexhaustible and derived from natural phenomena. For years, solar energy has unanimously ranked first among renewable energies. The study of the energetic aspects of a solar power plant is the best way to find the optimum of its performance. A full-scale field study requires a long time and is therefore very costly, and moreover its results are not always generalizable. To avoid these drawbacks we opted for a computer-based study using the software Matlab, modeling the different components for better sizing and simulating all energy flows to optimize profitability while taking cost into account. The results of our work, applied to the sites of Tlemcen and Bouzareah, led us to conclude that the energy required is a determining factor in the choice of components of a PV solar power plant.

  13. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to the consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate these are investigated. Finally we apply our framework to real world data.
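
    With a single linear return constraint, the minimum-variance core of this reformulation reduces to one KKT linear system; the sketch below uses an invented exponential auto-covariance kernel and a placeholder expected-return vector, so only the algebraic structure is meant to match.

      import numpy as np

      T, R = 50, 1.0
      t = np.arange(T)
      C = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)  # toy stationary auto-covariance
      a = np.ones(T)                                       # toy expected-return vector

      # KKT system for: minimize w @ C @ w  subject to  a @ w = R
      K = np.block([[2 * C, a[:, None]], [a[None, :], np.zeros((1, 1))]])
      sol = np.linalg.solve(K, np.concatenate([np.zeros(T), [R]]))
      w = sol[:T]                                          # optimal trading weights
      print("variance:", w @ C @ w, "return:", a @ w)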

  14. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
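
    The basic algorithm referred to, repeatedly solving an NLP over a horizon and applying only the first control move, fits in a few lines; the plant model, horizon and weights below are invented, and scipy.optimize.minimize stands in for a serious NLP solver.

      import numpy as np
      from scipy.optimize import minimize

      def plant(x, u):                 # toy nonlinear discrete-time model
          return x + 0.1 * (-x ** 3 + u)

      H, x = 10, 1.5                   # control horizon and initial state
      for step in range(20):
          def cost(u_seq):
              xi, J = x, 0.0
              for u in u_seq:          # simulate the model over the horizon
                  xi = plant(xi, u)
                  J += xi ** 2 + 0.1 * u ** 2
              return J
          res = minimize(cost, np.zeros(H),
                         bounds=[(-2.0, 2.0)] * H)   # process (input) constraints
          x = plant(x, res.x[0])       # apply only the first control move
      print("final state:", x)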

  15. Simulated annealing applied to IMRT beam angle optimization: A computational study.

    PubMed

    Dias, Joana; Rocha, Humberto; Ferreira, Brígida; Lopes, Maria do Carmo

    2015-11-01

    Selecting irradiation directions to use in IMRT treatments is one of the first decisions to make in treatment planning. Beam angle optimization (BAO) is a difficult problem to tackle from the mathematical optimization point of view: it is highly non-convex, and optimization approaches based on gradient descent will probably get trapped in one of the many local minima. Simulated Annealing (SA) is a probabilistic local search procedure that is known to be able to deal with multimodal problems. SA for BAO was retrospectively applied to ten clinical examples of treated cases of head-and-neck tumors, flagged as complex cases where proper target coverage and organ sparing had proved difficult to achieve. The number of directions to use was considered fixed and equal to 5 or 7. It is shown that SA can lead to solutions that significantly improve organ sparing, even with a reduced number of angles, without jeopardizing tumor coverage.
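
    A minimal sketch of the annealing loop over 5-angle configurations follows; the objective here is an invented geometric surrogate (preferring well-separated beams), not a dose engine, and the cooling schedule is illustrative.

      import math, random

      random.seed(1)
      candidates = list(range(0, 360, 5))           # candidate gantry angles

      def score(angles):                            # surrogate plan quality (invented)
          spread = sum(min(abs(a - b), 360 - abs(a - b))
                       for a in angles for b in angles if a != b)
          return -spread                            # lower is better

      current = random.sample(candidates, 5)
      best, T = list(current), 50.0
      for it in range(2000):
          trial = list(current)
          trial[random.randrange(5)] = random.choice(candidates)  # swap one angle
          d = score(trial) - score(current)
          if d < 0 or random.random() < math.exp(-d / T):         # Metropolis rule
              current = trial
          if score(current) < score(best):
              best = list(current)
          T *= 0.995                                # geometric cooling
      print("best angles:", sorted(best), "score:", score(best))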

  16. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    NASA Astrophysics Data System (ADS)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are interconnected methodologically within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungi-resistant genetically engineered peanuts may increase producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, consisting of a biophysical model and a stochastic dynamic recursive model, that is used to measure the potential economic and environmental welfare gains to cotton farmers from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  17. Mixed finite element formulation applied to shape optimization

    NASA Technical Reports Server (NTRS)

    Rodrigues, Helder; Taylor, John E.; Kikuchi, Noboru

    1988-01-01

    The development presented introduces a general form of mixed formulation for the optimal shape design problem. The associated optimality conditions are easily obtained without resorting to highly elaborate mathematical developments. Also, the physical significance of the adjoint problem is clearly defined with this formulation.

  18. Optimality of collective choices: a stochastic approach.

    PubMed

    Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L

    2003-09-01

    Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to the collective choices made by ants faced with different food sources is developed to account for the fluctuations inherent to the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is maximal, whatever the difference between the two sources might be. In terms of colony size, large colonies more easily focus their activity on one source. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a higher trail amount. These properties, due to the stochasticity of the recruitment process, can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251
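
    The choice dynamics described belong to a well-studied family of models; a minimal simulation, with invented constants, has each ant pick source i with probability proportional to (k + c_i)^n, where c_i is the pheromone on trail i and the amount deposited reflects food quality.

      import numpy as np

      rng = np.random.default_rng(0)
      k, n_exp = 6.0, 2.0           # choice-function constants (illustrative)
      q = np.array([1.0, 1.5])      # pheromone laid per ant (source 2 is richer)
      c = np.zeros(2)               # trail pheromone concentrations
      for ant in range(1000):
          w = (k + c) ** n_exp
          i = rng.choice(2, p=w / w.sum())   # stochastic trail choice
          c[i] += q[i]                       # quality-modulated deposition
          c *= 0.999                         # slow evaporation
      print("share of pheromone on rich source:", c[1] / c.sum())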

  19. An Optimal Guidance Law Applied to Quadrotor Using LQR Method

    NASA Astrophysics Data System (ADS)

    Jafari, Hamidreza; Zareh, Mehran; Roshanian, Jafar; Nikkhah, Amirali

    An optimal guidance law for an autonomous four-rotor helicopter (quadrotor), based on the linear quadratic regulator (LQR), is presented in this paper. The dynamic equations of the quadrotor are nonlinear, so to design an LQR controller they must be linearized about different operating points. Because energy consumption is critical for quadrotors, minimum energy is selected as the optimality criterion.
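
    The LQR step itself is standard; the sketch below uses SciPy's continuous-time Riccati solver on a placeholder (A, B) pair standing in for one linearized quadrotor axis, with invented weights Q and R, where R plays the energy-consumption role.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double-integrator axis model
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])                 # state-deviation weights
      R = np.array([[0.5]])                    # control-effort (energy) weight

      P = solve_continuous_are(A, B, Q, R)     # solve the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)          # state-feedback gain, u = -K x
      print("LQR gain K:", K)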

  20. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem.
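
    This kind of comparison can be reproduced in miniature with SciPy's built-in methods, here on the extended Rosenbrock function rather than an AMBER energy surface, and with TNC standing in for a preconditioned truncated Newton; it is an illustration of the methodology, not the paper's benchmark.

      import numpy as np
      from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

      x0 = np.full(100, 1.2)                    # stand-in for initial coordinates
      for method in ("L-BFGS-B", "Newton-CG", "TNC"):
          kw = {"jac": rosen_der}
          if method == "Newton-CG":
              kw["hessp"] = rosen_hess_prod     # Hessian-vector products only
          r = minimize(rosen, x0, method=method, **kw)
          print(f"{method:9s} f={r.fun:.2e} nfev={r.nfev}")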

  1. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows answers to be found for an order of magnitude more impulses than currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing performance with other methods based on two models, of varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  2. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
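
    A bare-bones version of the firefly step named above is sketched here, minimizing a placeholder objective; in the paper's pipeline the objective would be the data-parameterization error, and the constants (beta0, gamma, alpha) are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      f = lambda x: np.sum(x ** 2, axis=-1)        # placeholder objective
      n, dim, beta0, gamma, alpha = 20, 5, 1.0, 1.0, 0.1
      X = rng.uniform(-5, 5, (n, dim))             # initial firefly positions
      for it in range(100):
          I = f(X)                                 # "brightness" = negative cost
          for i in range(n):
              for j in range(n):
                  if I[j] < I[i]:                  # j is brighter (lower cost)
                      r2 = np.sum((X[i] - X[j]) ** 2)
                      beta = beta0 * np.exp(-gamma * r2)   # distance-damped attraction
                      X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
          alpha *= 0.97                            # damp the random walk over time
      best = X[f(X).argmin()]
      print("best point:", best, "cost:", f(best))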

  3. Applying a Constructivist and Collaborative Methodological Approach in Engineering Education

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Gonzalez, Carina; Castilla, Ivan; Gonzalez, Evelio; Sigut, Jose

    2007-01-01

    In this paper, a methodological educational proposal based on constructivism and collaborative learning theories is described. The suggested approach has been successfully applied to a subject entitled "Computer Architecture and Engineering" in a Computer Science degree in the University of La Laguna in Spain. This methodology is supported by two…

  4. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement, while the improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves greater improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method.

  5. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    PubMed

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether combinatorial regulation contributes more to a state transition than a single regulator does and, if so, what the optimal combination strategy is, are significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from known methods, the bifurcation-based approach depends only on stable state responses to stimuli, because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.

  6. Stochastic real-time optimal control: A pseudospectral approach for bearing-only trajectory optimization

    NASA Astrophysics Data System (ADS)

    Ross, Steven M.

    A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.

  7. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time-consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  8. Genetic algorithm parameter optimization: applied to sensor coverage

    NASA Astrophysics Data System (ADS)

    Sahin, Ferat; Abbate, Giuseppe

    2004-08-01

    Genetic algorithms (GAs) are powerful tools that, when set upon a solution space, will search for the optimal answer. These algorithms, though, have some inherent problems, such as premature convergence and lack of population diversity. These problems can be controlled with changes to certain parameters such as crossover, selection, and mutation. This paper attempts to tackle these problems in GAs by having another GA control these parameters. The values for the crossover parameter are: one point, two point, and uniform. The values for the selection parameter are: best, worst, roulette wheel, inside 50%, outside 50%. The values for the mutation parameter are: random and swap. The system includes a control GA whose population consists of different parameter settings. While this GA is attempting to find the best parameters, it advances into the search space of the problem and refines the population. As the population changes due to the search, so do the optimal parameters. For every control-GA generation, each individual in the population is tested for fitness by being run through the problem GA with the assigned parameters. During these runs the population used in the next control generation is compiled. Thus, both the issue of finding the best parameters and the solution to the problem are attacked at the same time. The goal is to optimize the sensor coverage in a square field. The test case used was a 30 by 30 unit field with 100 sensor nodes. Each sensor node had a coverage area of 3 by 3 units. The algorithm attempts to optimize the sensor coverage in the field by moving the nodes. The results show that the control GA provides better results when compared to a system with no parameter changes.
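
    A compressed sketch of the control-GA idea follows; for brevity the outer GA tunes only real-valued mutation and crossover rates on a one-max inner problem, rather than the paper's categorical operator choices and sensor-coverage objective, so every constant here is illustrative.

      import random

      random.seed(0)
      TARGET = [1] * 30                                   # inner problem: one-max

      def inner_ga(mut, cx, gens=30):
          pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
          for _ in range(gens):
              pop.sort(key=sum, reverse=True)
              pop = pop[:10]                              # truncation selection
              while len(pop) < 20:
                  a, b = random.sample(pop[:10], 2)
                  cut = random.randrange(len(TARGET)) if random.random() < cx else 0
                  child = a[:cut] + b[cut:]               # one-point crossover
                  pop.append([1 - g if random.random() < mut else g for g in child])
          return max(sum(ind) for ind in pop)             # best inner fitness

      # outer (control) GA over parameter settings (mutation rate, crossover rate)
      params = [(random.uniform(0, 0.2), random.uniform(0.2, 1)) for _ in range(8)]
      for gen in range(5):
          scored = sorted(params, key=lambda p: inner_ga(*p), reverse=True)
          elite = scored[:4]
          params = elite + [(max(0.0, m + random.gauss(0, 0.02)),
                             min(1.0, c + random.gauss(0, 0.1)))
                            for m, c in elite]             # perturb the elite
      print("best (mutation, crossover):", scored[0])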

  9. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  10. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios.

  11. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  12. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  13. Applying a weed risk assessment approach to GM crops.

    PubMed

    Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe

    2014-12-01

    Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants.

  14. Applying Genetic Algorithms To Query Optimization in Document Retrieval.

    ERIC Educational Resources Information Center

    Horng, Jorng-Tzong; Yeh, Ching-Chang

    2000-01-01

    Proposes a novel approach to automatically retrieve keywords and then uses genetic algorithms to adapt the keyword weights. Discusses Chinese text retrieval, term frequency rating formulas, vector space models, bigrams, the PAT-tree structure for information retrieval, query vectors, and relevance feedback. (Author/LRW)

  15. A multiple objective optimization approach to aircraft control systems design

    NASA Technical Reports Server (NTRS)

    Tabak, D.; Schy, A. A.; Johnson, K. G.; Giesy, D. P.

    1979-01-01

    The design of an aircraft lateral control system, subject to several performance criteria and constraints, is considered. While previous studies of the same model pursued single-criterion optimization, with the other performance requirements expressed as constraints, the current approach involves multiple-criteria optimization. In particular, a Pareto optimal solution is sought.

  16. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    NASA Astrophysics Data System (ADS)

    Nolle, L.; Bland, J. A.

    Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
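
    As a rough illustration of the idea, here is a generic self-adaptive step-size search; the grow-on-success, shrink-on-failure rule below is a common scheme and only approximates the SASS variant cited, with a quadratic stand-in for the frame-weight-plus-constraints objective.

      import random

      random.seed(2)
      f = lambda x: sum(v * v for v in x)                 # stand-in objective
      x = [random.uniform(-10, 10) for _ in range(25)]    # e.g. 25 member sizes
      s, fx = 1.0, f(x)
      for it in range(5000):
          cand = [v + random.gauss(0, s) for v in x]      # step scaled by s
          fc = f(cand)
          if fc < fx:
              x, fx, s = cand, fc, s * 1.1                # success: be bolder
          else:
              s *= 0.98                                   # failure: refine locally
          s = min(max(s, 1e-6), 10.0)                     # keep the step sane
      print("best objective:", fx)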

  17. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the optimal desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 but a high level of X3. PMID:25318551
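
    The desirability step can be sketched as follows, with invented linear response models whose signs merely follow the reported effects (Y1 rising with X3 and falling with X2 and X4, and so on); the Derringer-style transform and geometric mean are the standard construction, not the paper's fitted equations.

      import numpy as np
      from scipy.optimize import minimize

      def desirability(y, lo, hi):                 # larger-is-better transform
          return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

      def overall(x):                              # x = coded factors X1..X4 in [-1, 1]
          y1 = 2.0 - 0.5 * x[1] + 0.8 * x[2] - 0.3 * x[3]   # solubility model (toy)
          y2 = 1.0 - 0.4 * x[0] - 0.6 * x[1] + 0.7 * x[2]   # flux model (toy)
          y4 = 1.5 - 0.5 * x[0] - 0.4 * x[1]                # deposition model (toy)
          d = [desirability(y1, 0, 3), desirability(y2, 0, 2), desirability(y4, 0, 2)]
          return -np.prod(d) ** (1 / 3)            # maximize the geometric mean

      res = minimize(overall, np.zeros(4), bounds=[(-1, 1)] * 4)
      print("optimal coded factors:", res.x.round(2), "D =", -res.fun)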

  18. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  19. Robust Bayesian decision theory applied to optimal dosage.

    PubMed

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(theta, d) in a dose-prescription problem, where theta and d denote, respectively, the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables, described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment for lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than through the decisions themselves. PMID:15057878

  20. Applying riding-posture optimization on bicycle frame design.

    PubMed

    Hsiao, Shih-Wen; Chen, Rong-Qi; Leng, Wan-Lee

    2015-11-01

    Customized design has become a trend in bicycle development in recent years, so riding comfort is an important factor that deserves close attention when developing a bicycle. From the viewpoint of ergonomics, the concept of "fitting the object to the human body" is designed into the bicycle frame in this study. Firstly, the important feature points of the riding posture were automatically detected by an image processing method. In the measurement process, the best riding posture was identified experimentally; thus the positions of the feature points and the joint angles of the human body were obtained. Afterwards, according to the measurement data, three key points, the handlebar, the saddle and the crank center, were identified and applied to the frame design of various bicycle types. Lastly, this study further proposes a frame size table for common bicycle types, which is helpful for the designer to design a bicycle. PMID:26154206

  1. Applying riding-posture optimization on bicycle frame design.

    PubMed

    Hsiao, Shih-Wen; Chen, Rong-Qi; Leng, Wan-Lee

    2015-11-01

    Customized design has become a trend in bicycle development in recent years, so riding comfort is an important factor that deserves close attention when developing a bicycle. From the viewpoint of ergonomics, the concept of "fitting the object to the human body" is designed into the bicycle frame in this study. Firstly, the important feature points of the riding posture were automatically detected by an image processing method. In the measurement process, the best riding posture was identified experimentally; thus the positions of the feature points and the joint angles of the human body were obtained. Afterwards, according to the measurement data, three key points, the handlebar, the saddle and the crank center, were identified and applied to the frame design of various bicycle types. Lastly, this study further proposes a frame size table for common bicycle types, which is helpful for the designer to design a bicycle.

  2. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to gain electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operating point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is made to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
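
    As a reference point (not the paper's bypass model), the unbounded 0-D actuator-disk analysis gives C_p = 4a(1-a)^2 in the axial induction factor a, maximized at a = 1/3 (the Betz limit, about 0.593); the open-channel case generalizes this with blockage and head terms.

      import numpy as np

      a = np.linspace(0, 0.5, 501)          # axial induction factor
      cp = 4 * a * (1 - a) ** 2             # classic actuator-disk power coefficient
      i = cp.argmax()
      print(f"a* = {a[i]:.3f}, C_p,max = {cp[i]:.3f}")   # ~0.333, ~0.593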

  3. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of Eutherian mammals and for Hepatitis E virus (HEV) sequences from 48 countries by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.
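
    The machinery can be sketched with a simple O(n^2) LZ76 phrase count and one common normalized distance; the paper's exact distance measure may differ, and the sequences below are invented.

      def lz_complexity(s):
          # count LZ76 phrases: extend the current phrase while it has
          # already appeared in the preceding text
          i, c, n = 0, 0, len(s)
          while i < n:
              l = 1
              while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                  l += 1
              c += 1
              i += l
          return c

      def lz_distance(s, q):
          # one common normalization of concatenation-based LZ distance
          cs, cq, csq = lz_complexity(s), lz_complexity(q), lz_complexity(s + q)
          return (csq - min(cs, cq)) / max(cs, cq)

      seqs = {"A": "ATGCGTATGCGT", "B": "ATGCGTATGAGT", "C": "TTTTACGGACGG"}
      for a in seqs:
          for b in seqs:
              if a < b:
                  print(a, b, round(lz_distance(seqs[a], seqs[b]), 3))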

  4. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, which measures a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.

  5. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  6. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian within the framework of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes involved in Russian loanword adaptation in Persian. From a collection of about 50 current Russian loanwords, we selected some of them to analyze. We…

  7. Applying a Modified Triad Approach to Investigate Wastewater lines

    SciTech Connect

    Pawlowicz, R.; Urizar, L.; Blanchard, S.; Jacobsen, K.; Scholfield, J.

    2006-07-01

    Approximately 20 miles of wastewater lines are below grade at an active military Base. This piping network feeds or fed domestic or industrial wastewater treatment plants on the Base. Past wastewater line investigations indicated potential contaminant releases to soil and groundwater. Further environmental assessment was recommended to characterize the lines because of possible releases. A Remedial Investigation (RI) using random sampling or use of sampling points spaced at predetermined distances along the entire length of the wastewater lines, however, would be inefficient and cost prohibitive. To accomplish RI goals efficiently and within budget, a modified Triad approach was used to design a defensible sampling and analysis plan and perform the investigation. The RI task was successfully executed and resulted in a reduced fieldwork schedule, and sampling and analytical costs. Results indicated that no major releases occurred at the biased sampling points. It was reasonably extrapolated that since releases did not occur at the most likely locations, then the entire length of a particular wastewater line segment was unlikely to have contaminated soil or groundwater and was recommended for no further action. A determination of no further action was recommended for the majority of the waste lines after completing the investigation. The modified Triad approach was successful and a similar approach could be applied to investigate wastewater lines on other United States Department of Defense or Department of Energy facilities. (authors)

  8. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.

  9. Drug discovery: selecting the optimal approach.

    PubMed

    Sams-Dodd, Frank

    2006-05-01

    The target-based drug discovery approach has for the past 10-15 years been the dominating drug discovery paradigm. However, within the past few years, the commercial value of novel targets in licensing deals has fallen dramatically, reflecting that the probability of reaching a clinical drug candidate for a novel target is very low. This has naturally led to questions regarding the success of target-based drug discovery and, more importantly, a search for alternatives. This paper evaluates the strengths and limitations of the main drug discovery approaches, and proposes a novel approach that could offer advantages for the identification of disease-modifying treatments.

  10. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on a problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  11. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitudes, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Via its biologically most active metabolite, 1α,25-dihydroxyvitamin D, and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter provides background on the epigenome- and transcriptome-wide functions of vitamin D and outlines how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations.

  12. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitudes, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Via its biologically most active metabolite, 1α,25-dihydroxyvitamin D, and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter provides background on the epigenome- and transcriptome-wide functions of vitamin D and outlines how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations. PMID:26827955

  13. MATERIAL SHAPE OPTIMIZATION FOR FIBER REINFORCED COMPOSITES APPLYING A DAMAGE FORMULATION

    NASA Astrophysics Data System (ADS)

    Kato, Junji; Ramm, Ekkehard; Terada, Kenjiro; Kyoya, Takashi

    The present contribution deals with an optimization strategy for fiber reinforced composites. Although the methodical concept is very general, we concentrate on Fiber Reinforced Concrete, with its complex failure mechanism resulting from the material brittleness of both constituents, matrix and fibers. The purpose of the present paper is to improve the structural ductility of fiber reinforced composites by applying an optimization method to the geometrical layout of continuous long textile fibers. The proposed method builds on a so-called embedded reinforcement formulation, extended to a damage formulation in order to represent realistic structural behavior. For the optimization problem a gradient-based scheme is assumed; an optimality criteria method is applied because of its high numerical efficiency and robustness. The performance of the method is demonstrated by a series of numerical examples; it is verified that the ductility can be substantially improved.

  14. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but also for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is expanded into a multivariate framework, allowing optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes the trace of the covariance matrix. Relative entropy is also used for multivariate optimization as an information-theoretic approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.
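
    One concrete reading of a covariance-trace criterion is greedy sequential site selection; in the sketch below (a toy ensemble, invented noise level), a scalar observation at site j with error variance r reduces the covariance trace by sum_i P[i,j]^2 / (P[j,j] + r), and the covariance is updated after each pick. This is an illustration of the criterion, not the thesis's algorithm.

      import numpy as np

      rng = np.random.default_rng(0)
      m = 60                                       # candidate grid points
      ens = rng.standard_normal((30, m)) @ np.diag(np.linspace(0.5, 2.0, m))
      P = np.cov(ens, rowvar=False)                # ensemble covariance estimate
      r = 0.1                                      # observation-error variance

      picked = []
      for _ in range(5):
          gain = (P ** 2).sum(axis=0) / (np.diag(P) + r)   # trace reduction per site
          if picked:
              gain[picked] = -np.inf               # do not reselect a site
          j = int(gain.argmax())
          picked.append(j)
          P = P - np.outer(P[:, j], P[j, :]) / (P[j, j] + r)   # Kalman-style update
      print("selected sites:", picked, "remaining trace:", P.trace().round(2))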

  15. Risks, scientific uncertainty and the approach of applying precautionary principle.

    PubMed

    Lo, Chang-fa

    2009-03-01

    The paper clarifies the nature and aspects of risks and scientific uncertainty, and elaborates an approach for applying the precautionary principle to handle the risk arising from scientific uncertainty. It explains the relations between risks and the application of the precautionary principle at the international and domestic levels. The principle has a role to play both where an international treaty has adopted it and where no international treaty admits it or enumerates the conditions for taking measures. The paper proposes a decision-making tool, containing questions to be asked, to help policymakers apply the principle. It also proposes a "weighing and balancing" procedure to help them decide the contents of the measure to cope with the potential risk and to avoid excessive measures.

  16. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  17. Optimization approaches to volumetric modulated arc therapy planning

    SciTech Connect

    Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential of VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large-scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state of the art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories, are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  18. Applying response surface methodology to optimize nimesulide permeation from topical formulation.

    PubMed

    Shahzad, Yasser; Afreen, Urooj; Nisar Hussain Shah, Syed; Hussain, Talib

    2013-01-01

    Nimesulide is a non-steroidal anti-inflammatory drug that acts through selective inhibition of the COX-2 enzyme. Poor bioavailability of this drug may lead to local toxicity at the site of aggregation and hinder reaching the desired therapeutic effects. This study aimed at formulating and optimizing topically applied lotions of nimesulide using an experimental design approach, namely response surface methodology. The formulated lotions were evaluated for pH, viscosity, spreadability and homogeneity, and in vitro permeation was studied through rabbit skin using Franz diffusion cells. Data were fitted to linear, quadratic and cubic models, and the best-fit model was selected to investigate the influence of the permeation enhancers, namely propylene glycol and polyethylene glycol, on percutaneous absorption of nimesulide from the lotion formulations. The best-fit quadratic model showed that the enhancer combination at equal levels significantly increased the flux and permeability coefficient. The model was validated by comparing predicted and experimental responses of the optimized formulations' permeation profiles, endorsing the prognostic ability of response surface methodology.

  19. RF cavity design exploiting a new derivative-free trust region optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Abdel-Malek, Hany L; Mohamed, Ahmed S A; Abuelfadl, Tamer M; Elqenawy, Ahmed E

    2015-11-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using a few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least-squares fitting for updating the surrogate model, instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparison analysis against a recent optimization technique. PMID:26644929
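
    A minimal sketch of the surrogate machinery described above (not the authors' implementation): a quadratic model is fit by weighted least squares, with weights favoring points near the trust-region center, and the truncated-conjugate-gradient subproblem solver is replaced by a cheap candidate search; the Rosenbrock function stands in for an expensive cavity simulation.

      import numpy as np

      rng = np.random.default_rng(0)

      def quad_features(X):
          # Feature map [1, x_i, x_i*x_j] of a full quadratic model in n variables
          n = X.shape[1]
          cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
          cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
          return np.column_stack(cols)

      def fit_surrogate(X, f, center, radius):
          # Weighted least squares: points near the current center get larger weights
          w = np.exp(-np.linalg.norm(X - center, axis=1) / radius)
          coef, *_ = np.linalg.lstsq(quad_features(X) * w[:, None], f * w, rcond=None)
          return coef

      def expensive(x):  # stand-in for the cavity simulation
          return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

      center, radius = np.array([-1.0, 1.0]), 0.5
      X = center + radius * rng.uniform(-1, 1, size=(12, 2))
      f = np.apply_along_axis(expensive, 1, X)
      for _ in range(40):
          coef = fit_surrogate(X, f, center, radius)
          cand = center + radius * rng.uniform(-1, 1, size=(200, 2))
          x_new = cand[np.argmin(quad_features(cand) @ coef)]  # surrogate minimizer in the ball
          f_new = expensive(x_new)
          radius *= 1.3 if f_new < f.min() else 0.7            # expand/shrink the trust region
          if f_new < f.min():
              center = x_new
          X, f = np.vstack([X, x_new]), np.append(f, f_new)
      print(center, f.min())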

  20. Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.

    2016-03-01

    To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix, and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of Jeffrey's divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for AUC when these statistical conditions are not satisfied.
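
    The channelized-observer pipeline can be illustrated with a toy sketch (not the authors' J-CQO: the L x M channel matrix below is a random placeholder rather than J-optimized, and because the class means differ here, the linear channelized Hotelling observer suffices).

      import numpy as np

      rng = np.random.default_rng(1)
      M, L, N = 64, 3, 500                        # measurements, channels, samples per class

      T = rng.standard_normal((L, M))             # placeholder L x M channel matrix
      signal = np.zeros(M); signal[30:34] = 1.0   # toy "defect" profile
      g0 = rng.standard_normal((N, M))            # defect-absent data
      g1 = rng.standard_normal((N, M)) + signal   # defect-present data

      v0, v1 = g0 @ T.T, g1 @ T.T                 # channelized data, N x L each
      S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))     # pooled L x L channel covariance
      w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))   # Hotelling template in channel space

      t0, t1 = v0 @ w, v1 @ w                     # scalar test statistics per image
      auc = (t1[:, None] > t0[None, :]).mean()    # Mann-Whitney estimate of the AUC
      print(f"AUC = {auc:.3f}")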

  1. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The operation is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  2. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The operation is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  3. Optimality approaches to describe characteristic fluvial patterns on landscapes

    PubMed Central

    Paik, Kyungrock; Kumar, Praveen

    2010-01-01

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257

  4. A new optimization based approach to experimental combination chemotherapy.

    PubMed

    Pereira, F L; Pedreira, C E; de Sousa, J B

    1995-01-01

    A new approach to the design of optimal multiple-drug experimental cancer chemotherapy is presented. Once an adequate model is specified, an optimization procedure is used to achieve an optimal compromise between post-treatment tumor size and toxic effects on healthy tissues. Our model includes cancer cell population growth and pharmacokinetic dynamics. These elements of the model are essential to allow less empirical relationships between multiple-drug delivery policies and their effects on cancer and normal cells. The desired multiple-drug dosage schedule is computed by minimizing a customizable cost function subject to dynamic constraints expressed by the model. This additional dynamic richness, however, increases the complexity of the problem, which in general cannot be solved in closed form. Therefore, we propose an iterative optimization algorithm of the projected gradient type, where the Maximum Principle of Pontryagin is used to select the optimal control policy.
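
    The projected-gradient step can be sketched on a deliberately simplified model (a toy log-cell-kill tumor model with a quadratic toxicity penalty; the paper's pharmacokinetics and Pontryagin costate computation are replaced here by a direct analytic gradient).

      import numpy as np

      # Toy model: log tumor burden l' = r - k*u(t); J = final burden + toxicity penalty
      T, dt, r, k, lam, u_max = 40, 0.25, 0.30, 0.9, 0.6, 1.0
      u = np.zeros(T)                                  # dose per interval (the control)

      def cost(u):
          l_final = 10.0 + dt * np.sum(r - k * u)      # log tumor size after treatment
          return l_final + lam * dt * np.sum(u**2)     # plus quadratic toxicity term

      grad = lambda u: dt * (-k + 2 * lam * u)         # analytic gradient of the cost
      for _ in range(200):
          u = np.clip(u - 0.5 * grad(u), 0.0, u_max)   # gradient step, then project onto box
      print(cost(u), u[:5])                            # converges to u = k/(2*lam) = 0.75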

  5. An Efficient Approach to Obtain Optimal Load Factors for Structural Design

    PubMed Central

    Bojórquez, Juan

    2014-01-01

    An efficient optimization approach is described to calibrate load factors used in structural design. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232
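
    The calibration loop can be sketched as follows (a minimal sketch with hypothetical load statistics: Monte Carlo simulation estimates the failure probability for a trial load factor, and a scalar search drives the resulting reliability index toward the target).

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)
      N, beta_target = 200_000, 3.5
      Dn, Ln = 1.0, 0.5                                # nominal dead and live loads
      D = rng.normal(1.05 * Dn, 0.10 * Dn, N)          # assumed dead-load statistics
      L = rng.gumbel(0.90 * Ln, 0.15 * Ln, N)          # assumed live-load statistics

      def beta(gamma):
          R = gamma * (Dn + Ln)                        # member designed with one common factor
          pf = np.mean(D + L > R)                      # Monte Carlo failure probability
          return norm.isf(pf) if pf > 0 else np.inf    # reliability index beta = -Phi^{-1}(pf)

      res = minimize_scalar(lambda g: (beta(g) - beta_target)**2,
                            bounds=(1.0, 3.0), method="bounded")
      print(res.x, beta(res.x))

    Reusing one fixed sample for every trial factor (common random numbers) keeps the search objective deterministic.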

  6. Annular flow optimization: A new integrated approach

    SciTech Connect

    Maglione, R.; Robotti, G.; Romagnoli, R.

    1997-07-01

    During the drilling stage of an oil and gas well, the hydraulic circuit of the mud assumes great importance with respect to its numerous constituent parts (mostly in the annular sections). Each of them imposes conditions that must be satisfied in order to guarantee both the safety of the operations and the performance optimization of each single element of the circuit. The most important tasks for the annular part of the drilling hydraulic circuit are the following: (1) maximum available pressure at the last casing shoe; (2) avoidance of borehole wall erosion; and (3) guaranteed hole cleaning. A new integrated system considering all the elements of the annular part of the drilling hydraulic circuit, and the constraints imposed by each of them, has been realized. In this way the family of flow parameters (mud rheology and pump rate) satisfying all the variables of the annular section simultaneously has been found. Finally, two examples regarding a standard and a narrow annular section (slim hole) are reported, showing briefly all the calculation steps leading to the optimum flow-parameter family (for that operational drilling condition) that simultaneously satisfies all the limitations imposed by the elements of the annular section circuit.

  7. Optimization strategies in the modelling of SG-SMB applied to separation of phenylalanine and tryptophan

    NASA Astrophysics Data System (ADS)

    Diógenes Tavares Câmara, Leôncio

    2014-03-01

    The solvent-gradient simulated moving bed process (SG-SMB) is the new trend for performance improvement compared to traditional isocratic solvent conditions. In the SG-SMB process, modulation of the solvent strength leads to a significant increase in purities and productivity, followed by a reduction in solvent consumption. A stepwise modelling approach was utilized in the representation of the interconnected chromatographic columns of the system, combined with a lumped mass transfer model between the solid and liquid phases. The influence of the solvent modifier was considered by applying the Abel model, which takes into account the effect of the modifier volume fraction on the partition coefficient. Correlation models of the mass transfer parameters were obtained through the retention times of the solutes according to the volume fraction of modifier. The modelling and simulations were carried out and compared to the experimental SG-SMB separation unit for the amino acids phenylalanine and tryptophan. The simulation results showed the great potential of the proposed modelling approach in the representation of such complex systems. The simulations fit the experimental amino acid concentrations well in both the extract and the raffinate. A new optimization strategy was proposed for determining the best operating conditions, which uses the phi-plot concept.

  8. Comparative Properties of Collaborative Optimization and other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  9. Comparative Properties of Collaborative Optimization and Other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  10. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using information about the locally and globally best known solutions, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using many multimodal benchmark functions. PMID:24705545
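
    The collaboration scheme can be sketched as follows (a toy stand-in: plain projected gradient descent replaces each recurrent network's neurodynamics, on a one-dimensional multimodal function with bound constraints).

      import numpy as np

      rng = np.random.default_rng(3)
      f = lambda x: x**2 + 10 * np.sin(3 * x)          # multimodal benchmark
      df = lambda x: 2 * x + 30 * np.cos(3 * x)
      lo, hi, n = -5.0, 5.0, 8

      x, v = rng.uniform(lo, hi, n), np.zeros(n)
      pbest = x.copy()
      for _ in range(60):
          for _ in range(20):                          # "neurodynamic" local search per particle
              x = np.clip(x - 0.01 * df(x), lo, hi)    # projection enforces the bound constraints
          better = f(x) < f(pbest)
          pbest[better] = x[better]
          gbest = pbest[np.argmin(f(pbest))]
          r1, r2 = rng.random(n), rng.random(n)
          v = 0.6 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)  # PSO exchange step
          x = np.clip(x + v, lo, hi)
      print(gbest, f(gbest))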

  11. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    , and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the code of analysis models and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author addressed this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite

  12. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081

  13. Departures from optimality when pursuing multiple approach or avoidance goals.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon

    2016-07-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
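
    The dynamic-programming benchmark can be sketched for two approach goals (a hypothetical toy: on each trial the decision maker works on one goal, which then advances with a fixed probability, and the payoff is the number of goals attained by the deadline).

      from functools import lru_cache

      p, T, need = 0.6, 10, (4, 6)       # success probability, horizon, units each goal needs

      @lru_cache(maxsize=None)
      def V(t, a, b):
          # Expected number of goals attained when acting optimally from trial t
          if t == T:
              return (a >= need[0]) + (b >= need[1])
          work_a = p * V(t + 1, min(a + 1, need[0]), b) + (1 - p) * V(t + 1, a, b)
          work_b = p * V(t + 1, a, min(b + 1, need[1])) + (1 - p) * V(t + 1, a, b)
          return max(work_a, work_b)     # optimal prioritization at state (t, a, b)

      print(V(0, 0, 0))

    Risk-averse or risk-seeking behavior then appears as a systematic deviation from the argmax of this recursion.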

  14. A comparison between gradient descent and stochastic approaches for parameter optimization of a sea ice model

    NASA Astrophysics Data System (ADS)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2013-07-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model of the Arctic, and applicability and efficiency of the respective methods were examined. One optimization utilizes a finite difference (FD) method based on a traditional gradient descent approach, while the other adopts a micro-genetic algorithm (μGA) as an example of a stochastic approach. The optimizations were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. A series of optimizations were conducted that differ in the model formulation ("smoothed code" versus standard code) with respect to the FD method and in the population size and number of possibilities with respect to the μGA method. The FD method fails to estimate optimal parameters due to the ill-shaped nature of the cost function caused by the strong non-linearity of the system, whereas the genetic algorithms can effectively estimate near optimal parameters. The results of the study indicate that the sophisticated stochastic approach (μGA) is of practical use for parameter optimization of a coupled ocean-sea ice model with a medium-sized horizontal resolution of 50 km × 50 km as used in this study.
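
    A micro-genetic algorithm of the kind used above can be sketched in a few lines (the coupled model's data misfit is replaced by a cheap synthetic cost; a classic μGA uses a tiny population with uniform crossover, elitism, and restarts instead of mutation).

      import numpy as np

      rng = np.random.default_rng(4)

      def cost(theta):                   # stand-in for the model-data misfit
          return np.sum((theta - 0.3)**2) + 0.1 * np.sum(np.sin(8 * theta)**2)

      n_par, pop_size = 5, 6             # micro-GA: very small population
      pop = rng.random((pop_size, n_par))
      best = pop[0].copy()
      for _ in range(400):
          fit = np.apply_along_axis(cost, 1, pop)
          if fit.min() < cost(best):
              best = pop[np.argmin(fit)].copy()
          if pop.std(axis=0).mean() < 0.02:            # converged: restart around the elite
              pop = rng.random((pop_size, n_par)); pop[0] = best
              continue
          parents = pop[np.argsort(fit)[:2]]           # selection reduced to rank-best-two
          mask = rng.random((pop_size, n_par)) < 0.5   # uniform crossover, no mutation
          pop = np.where(mask, parents[0], parents[1])
          pop[0] = best                                # elitism
      print(best, cost(best))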

  15. A Riccati approach for constrained linear quadratic optimal control

    NASA Astrophysics Data System (ADS)

    Sideris, Athanasios; Rodriguez, Luis A.

    2011-02-01

    An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints including mixed state-control and state-only constraints. A Riccati-based approach is developed for efficiently solving the equality constrained optimal control subproblems generated during the procedure. The solution of each subproblem requires computations that scale linearly with the horizon length. The algorithm is illustrated with numerical examples.
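
    The Riccati backbone of such a method can be sketched for the unconstrained case (the paper's active-set treatment of the inequality constraints is omitted); the backward sweep costs O(N) for horizon N, matching the linear scaling noted above.

      import numpy as np

      # Double integrator: x = [position, velocity], u = acceleration
      dt = 0.1
      A = np.array([[1.0, dt], [0.0, 1.0]])
      B = np.array([[0.5 * dt**2], [dt]])
      Q, R, N = np.diag([1.0, 0.1]), np.array([[0.01]]), 50

      P = Q.copy()                       # terminal cost; backward Riccati recursion follows
      K = [None] * N
      for k in reversed(range(N)):
          K[k] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # gain for u_k = -K[k] x_k
          P = Q + A.T @ P @ (A - B @ K[k])

      x = np.array([1.0, 0.0])
      for k in range(N):                 # forward rollout of the closed loop
          x = A @ x + (B @ (-K[k] @ x)).ravel()
      print(x)                           # state regulated toward the origin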

  16. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

    An approach to the optimal purchasing of raw materials that will achieve a desired product quality at a minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations, and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed integer nonlinear programming (MINLP) optimization to select the raw materials, among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
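
    A minimal sketch of the pipeline with hypothetical data (sklearn's PLSRegression plays the latent-variable model and scipy's SLSQP plays the SQP step; blend properties are assumed to mix linearly in the ratios r).

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)

      # Hypothetical database: 40 past blends, 6 raw-material properties -> quality index
      X = rng.random((40, 6))
      y = X @ np.array([0.8, -0.3, 0.5, 0.1, -0.6, 0.2]) + 0.05 * rng.standard_normal(40)
      pls = PLSRegression(n_components=3).fit(X, y)

      props = rng.random((4, 6))                 # properties of 4 coals on the market
      price = np.array([3.0, 2.2, 4.1, 2.8])     # hypothetical unit prices
      quality = lambda r: float(pls.predict((r @ props).reshape(1, -1)).ravel()[0])

      res = minimize(lambda r: price @ r, x0=np.full(4, 0.25), method="SLSQP",
                     bounds=[(0.0, 1.0)] * 4,
                     constraints=[{"type": "eq", "fun": lambda r: r.sum() - 1.0},
                                  {"type": "ineq", "fun": lambda r: quality(r) - 0.35}])
      print(res.x, price @ res.x, quality(res.x))  # cheapest blend meeting the quality target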

  17. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
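
    The tuning loop can be sketched on a hypothetical single-input controller (the design vector parameterizes triangular membership functions and output singletons, and a derivative-free optimizer minimizes an integral-of-absolute-error objective on a toy first-order plant).

      import numpy as np
      from scipy.optimize import minimize

      def tri(x, c, w):
          # Triangular membership function centered at c with half-width w
          return np.maximum(0.0, 1.0 - abs(x - c) / w)

      def iae(params):
          w, uN, uP = params                  # design vector: set width + output singletons
          w = abs(w) + 1e-6                   # keep the width positive during the search
          x, J = 0.0, 0.0
          for _ in range(100):                # first-order plant tracking a unit setpoint
              e = 1.0 - x
              mu = np.array([tri(e, -1, w), tri(e, 0, w), tri(e, 1, w)])
              u = mu @ np.array([uN, 0.0, uP]) / max(mu.sum(), 1e-9)  # weighted-average defuzz
              x += 0.1 * (-x + u)             # plant step
              J += abs(e)                     # accumulate absolute tracking error
          return J

      res = minimize(iae, x0=[1.0, -1.0, 1.0], method="Nelder-Mead")
      print(res.x, res.fun)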

  18. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability are carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  19. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Bedford, J. L.; Webb, S.

    2007-01-01

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans.

  20. An optimal control approach to probabilistic Boolean networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs), as powerful tools for modeling gene regulatory systems, have attracted considerable attention in systems biology. In this paper, we deal with the problem of optimal intervention in a PBN with the help of the theory of discrete-time Markov decision processes. Specifically, we first formulate a control model for a PBN as a first-passage model for discrete-time Markov decision processes, and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first-passage time over the space of all possible treatments. To demonstrate the feasibility of our approach, an example is also displayed.
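
    The value-iteration step can be sketched on a toy two-gene network (the transition kernels below are random placeholders; a real model would derive them from the PBN's predictor functions and the intervention semantics).

      import numpy as np

      rng = np.random.default_rng(6)
      n_states, target = 4, {0}                # states 00,01,10,11; state 00 is desirable

      def random_kernel():                     # placeholder stochastic transition matrix
          P = rng.random((n_states, n_states))
          return P / P.sum(axis=1, keepdims=True)

      P = {0: random_kernel(), 1: random_kernel()}   # action 0: no control, 1: intervene

      V = np.zeros(n_states)                   # expected first-passage time to the target
      for _ in range(1000):
          V_new = np.zeros(n_states)
          for s in range(n_states):
              if s not in target:              # target states are absorbing with zero cost
                  V_new[s] = 1.0 + min(P[a][s] @ V for a in (0, 1))
          if np.max(np.abs(V_new - V)) < 1e-10:
              break
          V = V_new
      policy = [min((0, 1), key=lambda a: P[a][s] @ V) for s in range(n_states)]
      print(V, policy)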

  1. The optimality of potential rescaling approaches in land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...

  2. Successive linear optimization approach to the dynamic traffic assignment problem

    SciTech Connect

    Ho, J.K.

    1980-11-01

    A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented, which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.

  3. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  4. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (PARETO), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: PARETO is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: PARETO results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that PARETO shows
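
    The Pareto bookkeeping at the core of such a tool can be sketched generically (a nondominated filter over a cloud of objective pairs, both minimized; the genetic algorithm and the dose model themselves are omitted).

      import numpy as np

      rng = np.random.default_rng(9)
      pts = rng.random((200, 2))         # (PTV objective, OAR objective) pairs, minimized

      def nondominated(P):
          # Mask of Pareto-nondominated points; dominance is checked pairwise
          keep = np.ones(len(P), dtype=bool)
          for i, p in enumerate(P):
              if keep[i]:
                  dominated = np.all(P >= p, axis=1) & np.any(P > p, axis=1)
                  keep &= ~dominated
          return keep

      front = pts[nondominated(pts)]
      print(len(front))                  # size of the trade-off front kept in the database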

  5. Applying Digital Sensor Technology: A Problem-Solving Approach

    ERIC Educational Resources Information Center

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  6. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  7. Teaching Social Science Research: An Applied Approach Using Community Resources.

    ERIC Educational Resources Information Center

    Gilliland, M. Janice; And Others

    A four-week summer project for 100 rural tenth graders in the University of Alabama's Biomedical Sciences Preparation Program (BioPrep) enabled students to acquire and apply social sciences research skills. The students investigated drinking water quality in three rural Alabama counties by interviewing local officials, health workers, and…

  8. An approach to the perceptual optimization of complex visualizations.

    PubMed

    House, Donald H; Bair, Alethea S; Ware, Colin

    2006-01-01

    This paper proposes a new experimental framework within which evidence regarding the perceptual characteristics of a visualization method can be collected, and describes how this evidence can be explored to discover principles and insights to guide the design of perceptually near-optimal visualizations. We make the case that each of the current approaches for evaluating visualizations is limited in what it can tell us about optimal tuning and visual design. We go on to argue that our new approach is better suited to optimizing the kinds of complex visual displays that are commonly created in visualization. Our method uses human-in-the-loop experiments to selectively search through the parameter space of a visualization method, generating large databases of rated visualization solutions. Data mining is then used to extract results from the database, ranging from highly specific exemplar visualizations for a particular data set, to more broadly applicable guidelines for visualization design. We illustrate our approach using a recent study of optimal texturing for layered surfaces viewed in stereo and in motion. We show that a genetic algorithm is a valuable way of guiding the human-in-the-loop search through visualization parameter space. We also demonstrate several useful data mining methods including clustering, principal component analysis, neural networks, and statistical comparisons of functions of parameters.

  9. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  10. Pilot-testing an applied competency-based approach to health human resources planning.

    PubMed

    Tomblin Murphy, Gail; MacKenzie, Adrian; Alder, Rob; Langley, Joanne; Hickey, Marjorie; Cook, Amanda

    2013-10-01

    A competency-based approach to health human resources (HHR) planning is one that explicitly considers the spectrum of knowledge, skills and judgement (competencies) required for the health workforce based on the health needs of the relevant population in some specific circumstances. Such an approach is of particular benefit to planners challenged to make optimal use of limited HHR as it allows them to move beyond simply estimating numbers of certain professionals required and plan instead according to the unique mix of competencies available from the existing health workforce. This kind of flexibility is particularly valuable in contexts where healthcare providers are in short supply generally (e.g. in many developing countries) or temporarily due to a surge in need (e.g. a pandemic or other disease outbreak). A pilot application of this approach using the context of an influenza pandemic in one health district of Nova Scotia, Canada, is described, and key competency gaps identified. The approach is also being applied using other conditions in other Canadian jurisdictions and in Zambia.

  11. Shape Optimization and Supremal Minimization Approaches in Landslides Modeling

    SciTech Connect

    Hassani, Riad; Ionescu, Ioan R.; Lachand-Robert, Thomas

    2005-10-15

    The steady-state unidirectional (anti-plane) flow of a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor, which is connected to two optimization problems formulated in terms of velocities and stresses. Concerning the velocity analysis, the minimum problem in BV(Ω) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter becomes greater than the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream function formulation in order to deduce a minimum problem in W^{1,∞}(Ω), and we prove the existence of a minimizer. The L^p(Ω) approximation technique is used to get a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analyses presented above. First, we describe a numerical method to compute the safety factor through equivalence with the shape-optimization problem. Then a finite-element approach and a Newton method are used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones, but its convergence is very sensitive to the choice of the parameters. The stress-optimization method is more robust and gives precise safety factors, but its results cannot be easily compiled to obtain the sliding zone.

  12. Dialogical Approach Applied in Group Counselling: Case Study

    ERIC Educational Resources Information Center

    Koivuluhta, Merja; Puhakka, Helena

    2013-01-01

    This study utilizes structured group counselling and a dialogical approach to develop a group counselling intervention for students beginning a computer science education. The study assesses the outcomes of group counselling from the standpoint of the development of the students' self-observation. The research indicates that group counselling…

  13. Applying Socio-Semiotics to Organizational Communication: A New Approach.

    ERIC Educational Resources Information Center

    Cooren, Francois

    1999-01-01

    Argues that a socio-semiotic approach to organizational communication opens up a middle course leading to a reconciliation of the functionalist and interpretive movements. Outlines and illustrates three premises to show how they enable scholars to reconceptualize the opposition between functionalism and interpretivism. Concludes that organizations…

  14. Tennis: Applied Examples of a Game-Based Teaching Approach

    ERIC Educational Resources Information Center

    Crespo, Miguel; Reid, Machar M.; Miley, Dave

    2004-01-01

    In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…

  15. Applied Ethics and the Humanistic Tradition: A Comparative Curricula Approach.

    ERIC Educational Resources Information Center

    Deonanan, Carlton R.; Deonanan, Venus E.

    This research work investigates the problem of "Leadership, and the Ethical Dimension: A Comparative Curricula Approach." The research problem is investigated from the academic areas of (1) philosophy; (2) comparative curricula; (3) subject matter areas of English literature and intellectual history; (4) religion; and (5) psychology. Different…

  16. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  17. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  18. Optimal band selection in hyperspectral remote sensing of aquatic benthic features: a wavelet filter window approach

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    2006-09-01

    This paper describes a wavelet-based approach to derivative spectroscopy. The approach is utilized to select, through optimization, optimal channels or bands for use in derivative-based remote sensing algorithms. The approach is applied to airborne and modeled or synthetic reflectance signatures of environmental media and of features or objects within such media, such as benthic submerged vegetation canopies. The technique can also be applied to selected pixels identified within a hyperspectral image cube obtained on board an airborne, ground-based, or subsurface mobile imaging system. This wavelet-based image processing technique is an extremely fast numerical method for conducting higher-order derivative spectroscopy, including nonlinear filter windows. Essentially, the wavelet filter scans a measured or synthetic signature in an automated sequential manner in order to develop a library of filtered spectra. The library is utilized in real time to select the optimal channels for direct algorithm application. The unique wavelet-based derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SP®) approach based upon remote sensing science and radiative transfer processes, unlike other signal processing techniques applied to hyperspectral signatures.

  19. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches.
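
    The clustering step can be sketched as plain fuzzy C-means on node coordinates (a toy sketch; the paper's strategy additionally weights producers and consumers by their data rates and feeds the resulting positions into the swarm-based energy optimization).

      import numpy as np

      rng = np.random.default_rng(7)
      nodes = rng.random((60, 2))              # sensor-node coordinates in the unit square
      C, m = 4, 2.0                            # number of storage positions, fuzzifier

      U = rng.random((60, C)); U /= U.sum(axis=1, keepdims=True)
      for _ in range(100):                     # standard fuzzy C-means iteration
          W = U ** m
          centers = (W.T @ nodes) / W.sum(axis=0)[:, None]    # candidate storage positions
          d = np.linalg.norm(nodes[:, None] - centers[None], axis=2) + 1e-12
          U_new = d ** (-2 / (m - 1))
          U_new /= U_new.sum(axis=1, keepdims=True)           # membership update
          if np.abs(U_new - U).max() < 1e-6:
              U = U_new; break
          U = U_new
      print(centers)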

  20. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182

  1. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters, using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using GA in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and hill and valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The result obtained from the case study showed that the cluster centers and hill and valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.

  2. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters, using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a GA in each sub-search space to optimize the process parameters. Two different methods, i.e., the cluster-centers method and the hill-and-valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results obtained from the case study showed that the cluster-centers and hill-and-valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces. PMID:27625978
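
    The baseline scheme the abstract compares against (equal sub-search spaces) is easy to sketch. The snippet below, a hypothetical illustration rather than the authors' code, runs a tiny real-coded GA inside each equal sub-space of a multimodal toy objective, so each sub-space contributes its own near-optimal parameter value; the cluster-center and hill-and-valley variants would replace only the splitting step.

        import numpy as np

        rng = np.random.default_rng(1)

        def f(x):
            # Multimodal toy objective: several parameter values give
            # near-equal performance indicators.
            return np.sin(3 * x) + 0.1 * x**2

        def ga_minimize(lo, hi, pop=40, gens=60):
            """Tiny real-coded GA inside one sub-search space [lo, hi]."""
            x = rng.uniform(lo, hi, pop)
            for _ in range(gens):
                fit = f(x)
                # Binary tournament selection
                i, j = rng.integers(0, pop, (2, pop))
                parents = np.where(fit[i] < fit[j], x[i], x[j])
                # Arithmetic crossover plus Gaussian mutation
                mates = rng.permutation(parents)
                a = rng.random(pop)
                child = a * parents + (1 - a) * mates
                child += rng.normal(0, 0.05 * (hi - lo), pop)
                x = np.clip(child, lo, hi)
            best = x[f(x).argmin()]
            return best, f(best)

        # Equal splitting of the original space [-4, 4] into 4 sub-spaces.
        edges = np.linspace(-4, 4, 5)
        for lo, hi in zip(edges[:-1], edges[1:]):
            xb, fb = ga_minimize(lo, hi)
            print(f"sub-space [{lo:+.1f}, {hi:+.1f}]: x = {xb:+.3f}, f(x) = {fb:.3f}")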

  3. A comparison between gradient descent and stochastic approaches for parameter optimization of a coupled ocean-sea ice model

    NASA Astrophysics Data System (ADS)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2012-11-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model, and the applicability and efficiency of the respective methods were examined. One is a finite difference method based on a traditional gradient descent approach, while the other adopts genetic algorithms as an example of stochastic approaches. Several series of parameter optimization experiments were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. The finite difference method fails to estimate optimal parameters due to the ill-shaped nature of the cost function, whereas the genetic algorithms can effectively estimate near optimal parameters with a practical number of iterations. The results of the study indicate that a sophisticated stochastic approach is of practical use for parameter optimization of a coupled ocean-sea ice model.
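
    The reported failure mode of the finite difference method can be reproduced on a toy problem: a cost surface with a broad basin plus high-frequency ripple (a stand-in for an ill-shaped model-data misfit) can trap finite-difference gradient descent in local minima, while a simple stochastic, evolutionary-style search is far less affected. Everything below is an assumed illustration, not the study's model.

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(p):
            # Ill-shaped toy cost: broad basin plus high-frequency ripple.
            return np.sum((p - 1.5) ** 2) + 0.3 * np.sum(np.sin(25 * p))

        def fd_gradient(p, h=1e-3):
            """Central finite-difference gradient estimate."""
            g = np.zeros_like(p)
            for k in range(p.size):
                e = np.zeros_like(p)
                e[k] = h
                g[k] = (cost(p + e) - cost(p - e)) / (2 * h)
            return g

        # Finite-difference gradient descent: can get trapped by the ripple.
        p = np.array([3.0, -2.0])
        for _ in range(200):
            p -= 0.01 * fd_gradient(p)

        # Simple stochastic (evolutionary) search with a comparable budget.
        pop = rng.uniform(-4, 4, (20, 2))
        for _ in range(50):
            fit = np.array([cost(q) for q in pop])
            elite = pop[fit.argsort()[:5]]
            pop = np.repeat(elite, 4, axis=0) + rng.normal(0, 0.3, (20, 2))

        best = pop[np.argmin([cost(q) for q in pop])]
        print("gradient descent:  ", p.round(3), round(cost(p), 3))
        print("stochastic search: ", best.round(3), round(cost(best), 3))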

  4. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.

    PubMed

    Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A

    2013-02-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  5. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
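
    In spirit, ADO scores each candidate stimulus by how diagnostic it is between the competing models, given the current model posterior, and presents the most diagnostic one. The sketch below is a simplified stand-in: real ADO maximizes a mutual-information utility over full parameter distributions, whereas here a posterior-weighted disagreement between two fixed-parameter choice models is used, and the models, their parameters, and the stimulus grid are all assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        # Candidate stimuli: gamble (win `a` with probability `p`) vs. a sure 50.
        stimuli = [(p, a) for p in (0.2, 0.4, 0.6, 0.8) for a in (60, 100, 150, 250)]

        def p_gamble_eu(p, a):
            # Expected-utility chooser, u(x) = x**0.8, softmax choice rule.
            u_g, u_s = p * a**0.8, 50**0.8
            return 1 / (1 + np.exp(-(u_g - u_s) / 5))

        def p_gamble_pw(p, a):
            # Prospect-theory-like chooser with probability weighting w(p) = p**0.6.
            u_g, u_s = p**0.6 * a**0.8, 50**0.8
            return 1 / (1 + np.exp(-(u_g - u_s) / 5))

        models = [p_gamble_eu, p_gamble_pw]
        posterior = np.array([0.5, 0.5])
        true_model = p_gamble_pw          # simulated participant

        for trial in range(10):
            # Stimulus utility: posterior-weighted disagreement between models
            # (a simple stand-in for ADO's mutual-information criterion).
            preds = np.array([[m(*s) for s in stimuli] for m in models])
            util = posterior @ np.abs(preds - preds.mean(axis=0))
            s = stimuli[int(util.argmax())]
            choice = rng.random() < true_model(*s)      # simulate a decision
            like = np.array([m(*s) if choice else 1 - m(*s) for m in models])
            posterior = posterior * like / (posterior @ like)

        print("P(EU), P(PW) after 10 adaptive trials:", posterior.round(3))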

  6. An evolutionary based Bayesian design optimization approach under incomplete information

    NASA Astrophysics Data System (ADS)

    Srivastava, Rupesh; Deb, Kalyanmoy

    2013-02-01

    Design optimization in the absence of complete information about uncertain quantities has recently been gaining consideration, as expensive repetitive computation tasks are becoming tractable thanks to faster, parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows a computational speed-up of close to 100 times in a simulated scenario wherein the constraint qualification checks may be time consuming and could render a sequential implementation impractical for large sample sets. These results show promise for the use of a parallel implementation of EAs in handling design optimization problems under uncertainties.
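
    The link between sample size and a reliability confidence measure can be illustrated with a standard conjugate Bayesian update: if the design constraint is satisfied in k of n Monte Carlo samples, a uniform prior gives a Beta posterior over the true reliability. The numbers below are assumed; the paper's own confidence measure may be defined differently.

        from scipy.stats import beta

        # Suppose n = 200 samples of the uncertain quantities were drawn and the
        # design constraint was satisfied in k = 192 of them.
        n, k = 200, 192
        target = 0.90            # required reliability level

        # With a uniform prior, the posterior over the true reliability R is
        # Beta(k + 1, n - k + 1); the confidence that the design meets the
        # target is the posterior mass above it.
        confidence = 1.0 - beta.cdf(target, k + 1, n - k + 1)
        print(f"posterior mean reliability: {(k + 1) / (n + 2):.3f}")
        print(f"P(R >= {target}) = {confidence:.4f}")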

  7. The discrete adjoint approach to aerodynamic shape optimization

    NASA Astrophysics Data System (ADS)

    Nadarajah, Siva Kumaran

    A viscous discrete adjoint approach to automatic aerodynamic shape optimization is developed, and the merits of the viscous discrete and continuous adjoint approaches are discussed. The viscous discrete and continuous adjoint gradients for inverse design and drag minimization cost functions are compared with finite-difference and complex-step gradients. The optimization of airfoils in two-dimensional flow for inverse design and drag minimization is illustrated. Both the discrete and continuous adjoint methods are used to formulate two new design problems. First, the time-dependent optimal design problem is established, and both the time accurate discrete and continuous adjoint equations are derived. An application to the reduction of the time-averaged drag coefficient while maintaining time-averaged lift and thickness distribution of a pitching airfoil in transonic flow is demonstrated. Second, the remote inverse design problem is formulated. The optimization of a three-dimensional biconvex wing in supersonic flow verifies the feasibility of reducing the near-field pressure peak. Coupled drag minimization and remote inverse design cases produce wings with a lower drag and a reduced near-field peak pressure signature.

  8. A mathematical programming approach to stochastic and dynamic optimization problems

    SciTech Connect

    Bertsimas, D.

    1994-12-31

    We propose three ideas for constructing optimal or near-optimal policies: (1) for systems for which we have an exact characterization of the performance space we outline an adaptive greedy algorithm that gives rise to indexing policies (we illustrate this technique in the context of indexable systems); (2) we use integer programming to construct policies from the underlying descriptions of the performance space (we illustrate this technique in the context of polling systems); (3) we use linear control over polyhedral regions to solve deterministic versions for this class of problems. This approach gives interesting insights for the structure of the optimal policy (we illustrate this idea in the context of multiclass queueing networks). The unifying theme in the paper is the thesis that better formulations lead to deeper understanding and better solution methods. Overall the proposed approach for stochastic and dynamic optimization parallels efforts of the mathematical programming community in the last fifteen years to develop sharper formulations (polyhedral combinatorics and more recently nonlinear relaxations) and leads to new insights ranging from a complete characterization and new algorithms for indexable systems to tight lower bounds and new algorithms with provable a posteriori guarantees for their suboptimality for polling systems, multiclass queueing and loss networks.

  9. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
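
    A stripped-down version of the particle-filter prognostics loop (a tracking phase that assimilates measurements, followed by propagation to a failure threshold) is sketched below. The wear model, noise levels, and threshold are invented stand-ins for the paper's physics-based valve model.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy degradation model: hidden wear state x grows at uncertain rate r;
        # failure is declared when x crosses `threshold`.
        n_part, dt, threshold = 500, 1.0, 10.0
        x = np.zeros(n_part)                        # wear particles
        r = rng.normal(0.10, 0.03, n_part)          # per-particle wear-rate hypotheses

        def step(x, r):
            return x + r * dt + rng.normal(0, 0.02, x.size)   # process noise

        # Tracking phase: assimilate noisy wear measurements.
        true_x, true_r = 0.0, 0.12
        for t in range(30):
            true_x += true_r * dt
            z = true_x + rng.normal(0, 0.1)         # noisy measurement
            x = step(x, r)
            w = np.exp(-0.5 * ((z - x) / 0.1) ** 2) # Gaussian likelihood
            w /= w.sum()
            idx = rng.choice(n_part, n_part, p=w)   # multinomial resampling
            x, r = x[idx], r[idx]

        # Prediction phase: propagate each particle until it fails.
        eol = np.full(n_part, np.inf)
        xf = x.copy()
        for t in range(30, 300):
            xf = step(xf, r)
            newly = (xf >= threshold) & np.isinf(eol)
            eol[newly] = t * dt
        print(f"median end-of-life ~ {np.median(eol):.0f}, 90% interval "
              f"[{np.percentile(eol, 5):.0f}, {np.percentile(eol, 95):.0f}]")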

  10. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm needed only 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.

  11. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning, and quantum state separation as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results, in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  12. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  13. Particle Swarm and Ant Colony Approaches in Multiobjective Optimization

    NASA Astrophysics Data System (ADS)

    Rao, S. S.

    2010-10-01

    The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.
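
    The Pareto-optimality concept invoked above reduces to a simple dominance test over candidate designs; a minimal sketch (assuming all objectives are minimized, with hypothetical candidate points):

        import numpy as np

        def pareto_front(points):
            """Return the non-dominated subset of `points` (all objectives minimized).

            A point is Pareto-optimal if no other point is at least as good in
            every objective and strictly better in at least one.
            """
            pts = np.asarray(points)
            keep = np.ones(len(pts), dtype=bool)
            for i, p in enumerate(pts):
                if not keep[i]:
                    continue
                dominated = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
                if dominated.any():
                    keep[i] = False
            return pts[keep]

        # Example: trade-off between structural weight and deflection.
        candidates = np.array([[2.0, 9.0], [3.0, 6.0], [4.0, 6.5],
                               [5.0, 3.0], [6.0, 3.5]])
        print(pareto_front(candidates))   # -> [2, 9], [3, 6], [5, 3]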

  14. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  15. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  16. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
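
    A minimal sketch of the maximization condition and the hill-climb rewiring idea follows; the graph size and acceptance rule are illustrative, and a faithful implementation would also preserve network connectivity while rewiring.

        import numpy as np

        rng = np.random.default_rng(5)

        n, m = 20, 40                       # nodes, edges
        omega = rng.normal(0, 1, n)         # natural frequencies

        def laplacian(edges):
            L = np.zeros((n, n))
            for i, j in edges:
                L[i, i] += 1; L[j, j] += 1
                L[i, j] -= 1; L[j, i] -= 1
            return L

        def score(edges):
            # Synchrony figure of merit from the abstract: omega^T L omega.
            return omega @ laplacian(edges) @ omega

        # Random initial graph with m distinct edges.
        all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        edges = [all_pairs[k] for k in rng.choice(len(all_pairs), m, replace=False)]

        # Hill climb: accept an edge swap only if it raises omega^T L omega.
        best = score(edges)
        for _ in range(2000):
            out = rng.integers(m)
            new = all_pairs[rng.integers(len(all_pairs))]
            if new in edges:
                continue
            trial = edges[:out] + edges[out + 1:] + [new]
            s = score(trial)
            if s > best:
                edges, best = trial, s
        print("optimized omega^T L omega:", round(best, 3))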

  17. A Model Driven Engineering Approach Applied to Master Data Management

    NASA Astrophysics Data System (ADS)

    Menet, Ludovic; Lamolle, Myriam

    The federation of data sources and the definition of pivot models are strongly interrelated topics. This paper explores a mediation solution based on XML architecture and the concept of Master Data Management. In this solution, pivot models use the XML Schema standard, allowing the definition of complex data structures. The introduction of an MDE approach is a means to make modeling easier. We use UML as an abstract modeling layer. UML is an object modeling language that is increasingly used and recognized as a standard in the software engineering field, which makes it an ideal candidate for the modeling of XML Schema models. For this purpose, we introduce features of the UML formalism, through profiles, to facilitate the definition and the exchange of models.

  18. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to the standard MART, with the benefit of reduced computational time.

  19. Applying artificial intelligence to clinical guidelines: the GLARE approach.

    PubMed

    Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Molino, Gianpaolo; Torchio, Mauro

    2008-01-01

    We present GLARE, a domain-independent system for acquiring, representing and executing clinical guidelines (GL). GLARE is characterized by the adoption of Artificial Intelligence (AI) techniques in the definition and implementation of the system. First of all, a high-level and user-friendly knowledge representation language has been designed. Second, a user-friendly acquisition tool, which provides expert physicians with various forms of help, has been implemented. Third, a tool for executing GL on a specific patient has been made available. At all the levels above, advanced AI techniques have been exploited, in order to enhance flexibility and user-friendliness and to provide decision support. Specifically, this chapter focuses on the methods we have developed in order to cope with (i) automatic resource-based adaptation of GL, (ii) representation and reasoning about temporal constraints in GL, (iii) decision making support, and (iv) model-based verification. We stress that, although we have devised such techniques within the GLARE project, they are mostly system-independent, so that they might be applied to other guideline management systems.

  20. Optimization of nucleophilic ¹⁸F radiofluorinations using a microfluidic reaction approach.

    PubMed

    Pascali, Giancarlo; Matesic, Lidia; Collier, Thomas L; Wyatt, Naomi; Fraser, Benjamin H; Pham, Tien Q; Salvadori, Piero A; Greguric, Ivan

    2014-09-01

    Microfluidic techniques are increasingly being used to synthesize positron-emitting radiopharmaceuticals. Several reports demonstrate higher incorporation yields, with shorter reaction times and reduced amounts of reagents compared with traditional vessel-based techniques. Microfluidic techniques, therefore, have tremendous potential for allowing rapid and cost-effective optimization of new radiotracers. This protocol describes the implementation of a suitable microfluidic process to optimize classical (18)F radiofluorination reactions by rationalizing the time and reagents used. Reaction optimization varies depending on the systems used, and it typically involves 5-10 experimental days of up to 4 h of sample collection and analysis. In particular, the protocol allows optimization of the key fluidic parameters in the first tier of experiments: reaction temperature, residence time and reagent ratio. Other parameters, such as solvent, activating agent and precursor concentration need to be stated before the experimental runs. Once the optimal set of parameters is found, repeatability and scalability are also tested in the second tier of experiments. This protocol allows the standardization of a microfluidic methodology that could be applied in any radiochemistry laboratory, in order to enable rapid and efficient radiosynthesis of new and existing [(18)F]-radiotracers. Here we show how this method can be applied to the radiofluorination optimization of [(18)F]-MEL050, a melanoma tumor imaging agent. This approach, if integrated into a good manufacturing practice (GMP) framework, could result in the reduction of materials and the time required to bring new radiotracers toward preclinical and clinical applications. PMID:25079426

  1. Innovization procedure applied to a multi-objective optimization of a biped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2013-10-01

    This paper proposes an Innovization procedure approach for a bio-inspired biped gait locomotion controller. We combine a multi-objective evolutionary algorithm and a bio-inspired Central Pattern Generator (CPG) locomotion controller to generate the necessary limb movements to perform the walking gait of a biped robot. The search for the best set of CPG parameters is optimized by considering multiple objectives along a staged evolution. An innovization analysis is performed to identify relationships between the parameters and the objectives, and between the objectives themselves, in order to find relevant motor behavior characteristics. The simulation results show the effectiveness of the proposed approach.

  2. Applying electrical utility least-cost approach to transportation planning

    SciTech Connect

    McCoy, G.A.; Growdon, K.; Lagerberg, B.

    1994-09-01

    Members of the energy and environmental communities believe that parallels exist between electrical utility least-cost planning and transportation planning. In particular, the Washington State Energy Strategy Committee believes that an integrated and comprehensive transportation planning process should be developed to fairly evaluate the costs of both demand-side and supply-side transportation options, establish competition between different travel modes, and select the mix of options designed to meet system goals at the lowest cost to society. Comparisons between travel modes are also required under the Intermodal Surface Transportation Efficiency Act (ISTEA). ISTEA calls for the development of procedures to compare demand management against infrastructure investment solutions and requires the consideration of efficiency, socioeconomic and environmental factors in the evaluation process. Several of the techniques and approaches used in energy least-cost planning and utility peak demand management can be incorporated into a least-cost transportation planning methodology. The concepts of avoided plants, expressing avoidable costs in levelized nominal dollars to compare projects with different on-line dates and service lives, the supply curve, and the resource stack can be directly adapted from the energy sector.
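
    The levelization idea borrowed from utility planning can be made concrete with the standard capital recovery factor, which spreads a project's capital cost into equal annual payments so that projects with different service lives and on-line dates can be compared on a common per-year basis. The example projects below are hypothetical.

        def levelized_annual_cost(capital, annual_om, rate, life_years):
            """Levelize a project's costs into equal annual payments.

            capital     up-front cost (e.g., an infrastructure investment)
            annual_om   recurring operations and maintenance cost per year
            rate        discount rate (e.g., 0.05)
            life_years  service life in years
            """
            crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
            return capital * crf + annual_om

        # Hypothetical comparison: a capital-heavy supply-side project vs. a
        # cheaper, shorter-lived demand-management program.
        print(levelized_annual_cost(capital=50e6, annual_om=1e6, rate=0.05, life_years=40))
        print(levelized_annual_cost(capital=5e6,  annual_om=3e6, rate=0.05, life_years=10))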

  3. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  4. New Approach to Ultrasonic Spectroscopy Applied to Flywheel Rotors

    NASA Technical Reports Server (NTRS)

    Harmon, Laura M.; Baaklini, George Y.

    2002-01-01

    Flywheel energy storage devices comprising multilayered composite rotor systems are being studied extensively for use in the International Space Station. A flywheel system includes the components necessary to store and discharge energy in a rotating mass. The rotor is the complete rotating assembly portion of the flywheel, which is composed primarily of a metallic hub and a composite rim. The rim may contain several concentric composite rings. This article summarizes current ultrasonic spectroscopy research of such composite rings and rims and a flat coupon, which was manufactured to mimic the manufacturing of the rings. Ultrasonic spectroscopy is a nondestructive evaluation (NDE) method for material characterization and defect detection. In the past, a wide bandwidth frequency spectrum created from a narrow ultrasonic signal was analyzed for amplitude and frequency changes. Tucker developed and patented a new approach to ultrasonic spectroscopy. The ultrasonic system employs a continuous swept-sine waveform and performs a fast Fourier transform on the frequency spectrum to create the spectrum resonance spacing domain, or fundamental resonant frequency. Ultrasonic responses from composite flywheel components were analyzed at Glenn to assess this NDE technique for the quality assurance of flywheel applications.
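
    The "spectrum resonance spacing" domain mentioned above can be illustrated numerically: for a plate of thickness d and sound speed c, thickness resonances are spaced by df = c/(2d), so a Fourier transform taken over the frequency axis of the swept-sine response peaks at 1/df. The synthetic response below is an assumed stand-in for measured data, not the patented system.

        import numpy as np

        # Synthetic swept-sine response with resonances spaced df_true apart.
        freqs = np.linspace(1e6, 5e6, 4000)          # Hz sweep
        df_true = 50e3                               # resonance spacing (Hz)
        spectrum = 1 + 0.5 * np.cos(2 * np.pi * freqs / df_true)

        # FFT over the *frequency* axis gives the resonance-spacing domain.
        ssr = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
        lags = np.fft.rfftfreq(freqs.size, d=freqs[1] - freqs[0])  # units: 1/Hz
        peak = lags[ssr.argmax()]
        print(f"estimated resonance spacing: {1 / peak / 1e3:.1f} kHz")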

  5. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust under a constraint on aerodynamic work during the flapping flight, by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, in a manner compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.

  6. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems involving the two different goals in enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, by achieving a higher mean return and lower risk without purchasing all the stocks in the market index.
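
    A weighted goal programming formulation of the two goals can be sketched as minimizing the underachievement of a return goal plus the overachievement of a risk goal. The return data, goal levels, and solver choice below are illustrative assumptions, not the paper's data or exact formulation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)

        # Hypothetical weekly returns for 5 stocks (rows = weeks).
        R = rng.normal(0.002, 0.02, size=(104, 5))
        mu, cov = R.mean(axis=0), np.cov(R.T)

        goal_return, goal_risk = 0.003, 0.01   # target mean return and std-dev

        def gp_objective(w):
            # Goal programming: penalize underachieving the return goal and
            # overachieving the risk goal (equal weights assumed).
            ret, risk = w @ mu, np.sqrt(w @ cov @ w)
            d_under = max(0.0, goal_return - ret)
            d_over = max(0.0, risk - goal_risk)
            return d_under + d_over

        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
        bounds = [(0, 1)] * 5                  # fully invested, no short sales
        res = minimize(gp_objective, np.full(5, 0.2), bounds=bounds,
                       constraints=cons)
        w = res.x
        print("weights:", w.round(3))
        print("return:", round(w @ mu, 4), "risk:", round(np.sqrt(w @ cov @ w), 4))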

  7. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Ng, K.T.; Nadeem, A.

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.

  8. Applying ILT mask synthesis for co-optimizing design rules and DSA process characteristics

    NASA Astrophysics Data System (ADS)

    Dam, Thuc; Stanton, William

    2014-03-01

    During early stage development of a DSA process, there are many unknown interactions between design, DSA process, RET, and mask synthesis. The computational resolution of these unknowns can guide development towards a common process space whereby manufacturing success can be evaluated. This paper will demonstrate the use of existing Inverse Lithography Technology (ILT) to co-optimize the multitude of parameters. ILT mask synthesis will be applied to a varied hole design space in combination with a range of DSA model parameters under different illumination and RET conditions. The design will range from 40 nm pitch doublet to random DSA designs with larger pitches, while various effective DSA characteristics of shrink bias and corner smoothing will be assumed for the DSA model during optimization. The co-optimization of these design parameters and process characteristics under different SMO solutions and RET conditions (dark/bright field tones and binary/PSM mask types) will also help to provide a complete process mapping of possible manufacturing options. The lithographic performances for masks within the optimized parameter space will be generated to show a common process space with the highest possibility for success.

  9. Robust optimization approach to regional wastewater system planning.

    PubMed

    Zeferino, João A; Cunha, Maria C; Antunes, António P

    2012-10-30

    Wastewater systems are subject to several sources of uncertainty. Different scenarios can occur in the future, depending on the behavior of a variety of demographic, economic, environmental, and technological variables. Robust optimization approaches are aimed at finding solutions that will perform well under any likely scenario. The planning decisions to be made about wastewater system planning involve two main issues: the setup and operation costs of sewer networks, treatment plants, and possible pump stations; and the water quality parameters to be met in the water body where the (treated) wastewater is discharged. The source of uncertainty considered in this article is the flow of the river that receives the wastewater generated in a given region. Three robust optimization models for regional wastewater system planning are proposed. The models are solved using a simulated annealing algorithm enhanced with a local improvement procedure. Their application is illustrated through a case study representing a real-world situation, with the results being compared and commented upon.

  10. A coupled simulation-optimization approach for groundwater remediation design under uncertainty: an application to a petroleum-contaminated site.

    PubMed

    He, L; Huang, G H; Lu, H W

    2009-01-01

    This study provides a coupled simulation-optimization approach for optimal design of petroleum-contaminated groundwater remediation under uncertainty. Compared to the previous approaches, it has the advantages of: (1) addressing the stochasticity of the modeling parameters in simulating the flow and transport of NAPLs in groundwater, (2) providing a direct and response-rapid bridge between remediation strategies (pumping rates) and remediation performance (contaminant concentrations) through the created proxy models, (3) alleviating the computational cost in searching for optimal solutions, and (4) giving confidence levels for the obtained optimal remediation strategies. The approach is applied to a practical site in Canada for demonstrating its performance. The results show that mitigating the effects of uncertainty on optimal remediation strategies (through enhancing the confidence level) would lead to the rise of remediation cost due to the increase in the total pumping rate.

  11. A Bayesian optimization approach for wind farm power maximization

    NASA Astrophysics Data System (ADS)

    Park, Jinkyoo; Law, Kincho H.

    2015-03-01

    The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by the upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, would affect and lower the power productions of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate the wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian Process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the use of a trust region constraint on the sampling procedure, BA tends to increase the target value and converge to near the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in a target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
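
    A stripped-down sketch of the Bayesian Ascent idea: fit a Gaussian Process to the samples collected so far, restrict candidate actions to a trust region around the incumbent best, and move to the candidate with the highest posterior mean. The farm-power surrogate, kernel, and constants are assumed for illustration; the actual BA method also adapts the trust region and exploits GP uncertainty.

        import numpy as np

        rng = np.random.default_rng(7)

        def farm_power(yaw):
            """Hypothetical stand-in for total farm power vs. control offsets."""
            return 2.0 - (yaw[0] - 0.4) ** 2 - 0.5 * (yaw[1] - 0.2) ** 2

        def gp_posterior_mean(X, y, Xs, ls=0.5, sn=1e-3):
            """GP regression with an RBF kernel; posterior mean at Xs."""
            def k(A, B):
                d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-0.5 * d / ls**2)
            K = k(X, X) + sn * np.eye(len(X))
            return k(Xs, X) @ np.linalg.solve(K, y)

        X = [np.zeros(2)]                  # start from greedy control (no offsets)
        y = [farm_power(X[0])]
        radius = 0.3                       # trust-region radius around best point

        for it in range(15):
            Xa, ya = np.array(X), np.array(y)
            best = Xa[ya.argmax()]
            # Sample candidates only inside the trust region, then move to the
            # candidate with the highest GP posterior mean.
            cand = best + rng.uniform(-radius, radius, size=(200, 2))
            mean = gp_posterior_mean(Xa, ya, cand)
            nxt = cand[mean.argmax()]
            X.append(nxt); y.append(farm_power(nxt))

        print("best offsets:", np.array(X)[np.argmax(y)].round(3),
              "power:", round(max(y), 4))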

  12. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical component seismograms from radial component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions has been formulated to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithms (GA) optimization technique. The second technique developed combines the direct Generalized Pattern Search (GPS) and the evolutionary Fitness Proportionate Niching (FPN) techniques by employing their strengths. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that previously introduced variable-weights approach, the GA and GPS-FPN techniques developed here save substantial time and are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near optimal weights necessary in stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem

  13. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.

  14. Codon Optimizing for Increased Membrane Protein Production: A Minimalist Approach.

    PubMed

    Mirzadeh, Kiavash; Toddo, Stephen; Nørholm, Morten H H; Daley, Daniel O

    2016-01-01

    Reengineering a gene with synonymous codons is a popular approach for increasing production levels of recombinant proteins. Here we present a minimalist alternative to this method, which samples synonymous codons only at the second and third positions rather than the entire coding sequence. As demonstrated with two membrane-embedded transporters in Escherichia coli, the method was more effective than optimizing the entire coding sequence. The method we present is PCR based and requires three simple steps: (1) the design of two PCR primers, one of which is degenerate; (2) the amplification of a mini-library by PCR; and (3) screening for high-expressing clones. PMID:27485329
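
    The mini-library sampled by the degenerate primer can be enumerated directly: only codons 2 and 3 of the coding sequence vary over their synonymous alternatives, while the rest of the gene is untouched. The gene fragment and the codon-table subset below are illustrative assumptions.

        from itertools import product

        # Synonymous codon choices (a subset of the standard genetic code).
        SYNONYMS = {
            'ATG': ['ATG'],                                       # Met (start), fixed
            'AAA': ['AAA', 'AAG'],                                # Lys
            'CTG': ['CTG', 'CTC', 'CTT', 'CTA', 'TTA', 'TTG'],    # Leu
            'GGC': ['GGC', 'GGT', 'GGA', 'GGG'],                  # Gly
        }

        def second_third_codon_library(cds):
            """Enumerate the mini-library sampled by a degenerate primer:
            synonymous codons at codon positions 2 and 3 only."""
            codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
            variants = []
            for c2, c3 in product(SYNONYMS[codons[1]], SYNONYMS[codons[2]]):
                variants.append(''.join([codons[0], c2, c3] + codons[3:]))
            return variants

        lib = second_third_codon_library('ATGAAACTGGGC')   # hypothetical gene start
        print(len(lib), 'variants, e.g.:', lib[:3])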

  15. An integrated source/mask/DSA optimization approach

    NASA Astrophysics Data System (ADS)

    Fühner, Tim; Michalak, Przemysław; Welling, Ulrich; Orozco-Rey, Juan Carlos; Müller, Marcus; Erdmann, Andreas

    2016-03-01

    The introduction of DSA for lithography is still obstructed by a number of technical issues including the lack of a comprehensive computational platform. This work presents a direct source/mask/DSA optimization (SMDSAO) method, which incorporates standard lithographic metrics and figures of merit such as the maximization of process windows. The procedure is demonstrated for a contact doubling example, assuming grapho-epitaxy-DSA. To retain a feasible runtime, a geometry-based Interface Hamiltonian DSA model is employed. The feasibility of this approach is demonstrated through several results and their comparison with more rigorous DSA models.

  16. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect

    Haber, Eldad

    2014-03-17

    The focus of research was: Developing adaptive mesh for the solution of Maxwell's equations; Developing a parallel framework for time dependent inverse Maxwell's equations; Developing multilevel methods for optimization problems with inequality constraints; A new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); A new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were applied to the problem of image registration.

  17. Optimized Chemical Separation and Measurement by TE TIMS Using Carburized Filaments for Uranium Isotope Ratio Measurements Applied to Plutonium Chronometry.

    PubMed

    Sturm, Monika; Richter, Stephan; Aregbe, Yetunde; Wellum, Roger; Prohaska, Thomas

    2016-06-21

    An optimized method is described for U/Pu separation and subsequent measurement of the amount contents of uranium isotopes by total evaporation (TE) TIMS with a double filament setup combined with filament carburization for age determination of plutonium samples. The use of carburized filaments improved the signal behavior for total evaporation TIMS measurements of uranium. Elevated uranium ion formation by passive heating during rhenium signal optimization at the start of the total evaporation measurement procedure was found to be a result from byproducts of the separation procedure deposited on the filament. This was avoided using carburized filaments. Hence, loss of sample before the actual TE data acquisition was prevented, and automated measurement sequences could be accomplished. Furthermore, separation of residual plutonium in the separated uranium fraction was achieved directly on the filament by use of the carburized filaments. Although the analytical approach was originally tailored to achieve reliable results only for the (238)Pu/(234)U, (239)Pu/(235)U, and (240)Pu/(236)U chronometers, the optimization of the procedure additionally allowed the use of the (242)Pu/(238)U isotope amount ratio as a highly sensitive indicator for residual uranium present in the sample, which is not of radiogenic origin. The sample preparation method described in this article has been successfully applied for the age determination of CRM NBS 947 and other sulfate and oxide plutonium samples. PMID:27240571

  18. Optimized Chemical Separation and Measurement by TE TIMS Using Carburized Filaments for Uranium Isotope Ratio Measurements Applied to Plutonium Chronometry.

    PubMed

    Sturm, Monika; Richter, Stephan; Aregbe, Yetunde; Wellum, Roger; Prohaska, Thomas

    2016-06-21

    An optimized method is described for U/Pu separation and subsequent measurement of the amount contents of uranium isotopes by total evaporation (TE) TIMS with a double filament setup combined with filament carburization for age determination of plutonium samples. The use of carburized filaments improved the signal behavior for total evaporation TIMS measurements of uranium. Elevated uranium ion formation by passive heating during rhenium signal optimization at the start of the total evaporation measurement procedure was found to be a result from byproducts of the separation procedure deposited on the filament. This was avoided using carburized filaments. Hence, loss of sample before the actual TE data acquisition was prevented, and automated measurement sequences could be accomplished. Furthermore, separation of residual plutonium in the separated uranium fraction was achieved directly on the filament by use of the carburized filaments. Although the analytical approach was originally tailored to achieve reliable results only for the (238)Pu/(234)U, (239)Pu/(235)U, and (240)Pu/(236)U chronometers, the optimization of the procedure additionally allowed the use of the (242)Pu/(238)U isotope amount ratio as a highly sensitive indicator for residual uranium present in the sample, which is not of radiogenic origin. The sample preparation method described in this article has been successfully applied for the age determination of CRM NBS 947 and other sulfate and oxide plutonium samples.

  19. Adaptive sequentially space-filling metamodeling applied in optimal water quantity allocation at basin scale

    NASA Astrophysics Data System (ADS)

    Mousavi, S. Jamshid; Shourian, M.

    2010-03-01

    Global optimization models in many problems suffer from high computational costs due to the need to run high-fidelity simulation models for objective function evaluations. Metamodeling is a useful approach to dealing with this problem, in which a fast surrogate model replaces the detailed simulation model. However, training the surrogate model requires enough input-output data; in the absence of observed data, each training sample must be obtained by running the simulation model, which may still cause computational difficulties. In this paper a new metamodeling approach called adaptive sequentially space filling (ASSF) is presented, by which the regions in the search space that need more training data are sequentially identified and the process of design of experiments is performed adaptively. Performance of the ASSF approach is tested against a benchmark function optimization problem and optimum basin-scale water allocation problems, in which the MODSIM river basin decision support system is approximated. Results show the ASSF model, with fewer actual function evaluations, is able to find solutions comparable to other metamodeling techniques using random sampling and evolution control strategies.

  20. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  1. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
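
    The gradient computation at the heart of the proposed method has a closed form via tensor unfoldings and Khatri-Rao products. The sketch below (plain gradient descent on a small synthetic rank-3 tensor, with a hand-picked step size) illustrates that form; it is not the authors' optimized solver.

        import numpy as np

        rng = np.random.default_rng(8)

        def khatri_rao(U, V):
            """Column-wise Kronecker product of U (m, R) and V (n, R) -> (m*n, R)."""
            return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

        def cp_gradients(X, A, B, C):
            """Gradients of f = 0.5 * ||X - [[A, B, C]]||^2 w.r.t. each factor."""
            I, J, K = X.shape
            X1 = X.reshape(I, J * K)
            X2 = X.transpose(1, 0, 2).reshape(J, I * K)
            X3 = X.transpose(2, 0, 1).reshape(K, I * J)
            gA = A @ ((B.T @ B) * (C.T @ C)) - X1 @ khatri_rao(B, C)
            gB = B @ ((A.T @ A) * (C.T @ C)) - X2 @ khatri_rao(A, C)
            gC = C @ ((A.T @ A) * (B.T @ B)) - X3 @ khatri_rao(A, B)
            return gA, gB, gC

        # Rank-3 synthetic tensor plus a little noise.
        I, J, K, R = 8, 7, 6, 3
        At, Bt, Ct = (rng.normal(size=(d, R)) for d in (I, J, K))
        X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct) + 0.01 * rng.normal(size=(I, J, K))

        # Plain gradient descent on all three factors; step size picked by hand.
        A, B, C = (0.3 * rng.normal(size=(d, R)) for d in (I, J, K))
        for it in range(4000):
            gA, gB, gC = cp_gradients(X, A, B, C)
            A -= 5e-3 * gA; B -= 5e-3 * gB; C -= 5e-3 * gC

        err = (np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C))
               / np.linalg.norm(X))
        print("relative error:", round(err, 4))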

  2. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  3. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
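
    The modal-analysis step in the abstract above can be illustrated on the linearized SMIB swing equation: the eigenvalues of the linearized system give the frequency of the fastest local mode, and the subinterval is chosen small enough to resolve it. The machine constants and the 20-steps-per-cycle rule below are assumed for illustration.

        import numpy as np

        # SMIB swing equation, M*delta'' = Pm - Pmax*sin(delta) - D*delta',
        # linearized at the operating point delta0 (assumed constants).
        M, D, Pmax, Pm = 0.05, 0.02, 2.0, 1.0
        delta0 = np.arcsin(Pm / Pmax)
        A = np.array([[0.0, 1.0],
                      [-Pmax * np.cos(delta0) / M, -D / M]])
        lam = np.linalg.eigvals(A)

        f_fast = np.abs(lam.imag).max() / (2 * np.pi)   # fastest local mode (Hz)
        dt = 1 / (f_fast * 20)                          # ~20 steps per cycle
        print(f"fastest mode {f_fast:.2f} Hz -> subinterval {dt * 1000:.1f} ms")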

  4. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
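
    The regression step can be reproduced schematically. The factor levels below are those quoted in the abstract, but the response values are synthetic placeholders (the paper's measured strengths are not reproduced here), so the sketch only illustrates fitting a second-order polynomial to the 3³ grid and reading off a grid optimum.

    ```python
    import numpy as np

    # Full 3^3 factorial grid built from the study's factor levels.
    levels_w = [0.38, 0.43, 0.48]        # water/cementitious materials ratio
    levels_c = [350.0, 375.0, 400.0]     # cementitious materials content, kg/m^3
    levels_a = [0.35, 0.40, 0.45]        # fine/total aggregate ratio
    X = np.array([[w, c, a] for w in levels_w for c in levels_c for a in levels_a])

    def model_terms(X):
        """Second-order model: intercept, linear, two-factor interaction, and square terms."""
        w, c, a = X.T
        return np.column_stack([np.ones(len(X)), w, c, a,
                                w * c, w * a, c * a, w**2, c**2, a**2])

    # y would hold the 27 mean compressive strengths (each averaged over 3 replicates);
    # synthetic placeholder values are generated here only so the script runs.
    rng = np.random.default_rng(0)
    y = 60.0 - 40.0 * X[:, 0] + 0.02 * X[:, 1] - 5.0 * X[:, 2] + rng.normal(0, 0.5, len(X))

    beta, *_ = np.linalg.lstsq(model_terms(X), y, rcond=None)   # least-squares fit
    best_mix = X[np.argmax(model_terms(X) @ beta)]              # grid point with highest prediction
    ```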

  5. Applying Dynamical Systems Theory to Optimize Libration Point Orbit Stationkeeping Maneuvers for WIND

    NASA Technical Reports Server (NTRS)

    Brown, Jonathan M.; Petersen, Jeremy D.

    2014-01-01

    NASA's WIND mission has been operating in a large amplitude Lissajous orbit in the vicinity of the interior libration point of the Sun-Earth/Moon system since 2004. Regular stationkeeping maneuvers are required to maintain the orbit due to the instability around the collinear libration points. Historically these stationkeeping maneuvers have been performed by applying an incremental change in velocity, or Δv, along the spacecraft-Sun vector as projected into the ecliptic plane. Previous studies have shown that the magnitude of libration point stationkeeping maneuvers can be minimized by applying the Δv in the direction of the local stable manifold found using dynamical systems theory. This paper presents the analysis of this new maneuver strategy which shows that the magnitude of stationkeeping maneuvers can be decreased by 5 to 25 percent, depending on the location in the orbit where the maneuver is performed. The implementation of the optimized maneuver method into operations is discussed and results are presented for the first two optimized stationkeeping maneuvers executed by WIND.
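
    The stable-manifold direction referred to above is conventionally obtained from the eigenstructure of the orbit's state transition (monodromy) matrix. The sketch below assumes a 6x6 monodromy matrix is available from the orbit propagation; it shows the generic dynamical-systems recipe, not WIND's operational implementation.

    ```python
    import numpy as np

    def stable_manifold_direction(monodromy):
        """Unit 6-vector along the local stable manifold: the eigenvector of the
        monodromy matrix whose eigenvalue has the smallest magnitude (|lambda| < 1)."""
        vals, vecs = np.linalg.eig(monodromy)
        v = np.real(vecs[:, np.argmin(np.abs(vals))])
        return v / np.linalg.norm(v)

    def stationkeeping_dv(monodromy, dv_mag):
        """Delta-v of fixed magnitude aligned with the velocity components
        (last three entries of the 6-state) of the stable direction."""
        v_dir = stable_manifold_direction(monodromy)[3:]
        return dv_mag * v_dir / np.linalg.norm(v_dir)
    ```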

  6. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its advantages of vertically non-scanning operation, large field of view, high sensitivity and high precision. The beam shaper is the key component for linear-array detection. However, traditional beam shaping approaches can hardly satisfy the requirement of obtaining unbiased and complete backscattered intensity data: the required beam distribution should be roughly oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian beam is shaped into a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is mathematically explained, and an experimental verification of the module performance is also presented. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of linear-array lidar.

  7. A systems biology approach to radiation therapy optimization.

    PubMed

    Brahme, Anders; Lind, Bengt K

    2010-05-01

    During the last 20 years, the field of cellular and, not least, molecular radiation biology has developed substantially and can today describe the response of heterogeneous tumors and organized normal tissues to radiation therapy quite well. An increased understanding of the sub-cellular and molecular response is leading to a more general systems biological approach to radiation therapy and treatment optimization. Interestingly, most characteristics of the tissue infrastructure, such as the vascular system and the degree of hypoxia, have to be considered to obtain an accurate description of tumor and normal tissue responses to ionizing radiation. In the limited space available, only a brief description of some of the most important concepts and processes is possible, starting from the key functional genomics pathways of the cell that are responsible not only for tumor development but also for the response of cells to radiation therapy. The key mechanisms for cellular damage and damage repair are described. It is furthermore discussed how these processes can be used to inactivate the tumor without severely damaging surrounding normal tissues, using suitable radiation modalities such as intensity-modulated radiation therapy (IMRT) or light ions. The use of such methods may lead to a truly scientific approach to radiation therapy optimization, particularly when in vivo predictive assays of radiation responsiveness become clinically available on a larger scale. Brief examples of the efficiency of IMRT are also given, showing how sensitive normal tissues can be spared at the same time as highly curative doses are delivered to a tumor that is often radiation resistant and located near organs at risk. This new approach maximizes the probability of eradicating the tumor while adverse reactions in sensitive normal tissues are minimized as far as possible using IMRT with photons and light ions. PMID:20191284

  8. A simple approach to metal hydride alloy optimization

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large-scale storage of hydrogen. Metals and alloys that show high dissociation pressures at low temperatures and low endothermic heats of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter between 100 and 118 hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion, and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
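
    A sketch of the screening criterion, assuming the usual linear volume-fraction form of the mixing rule for the solubility parameter; the 100-118 hildebrand window is the one quoted above.

    ```python
    def alloy_solubility_parameter(deltas, volume_fractions):
        """Linear mixing rule: delta_mix = sum_i phi_i * delta_i, where the
        volume fractions phi_i of the constituent metals sum to one."""
        return sum(d * phi for d, phi in zip(deltas, volume_fractions))

    def is_storage_candidate(delta_mix, lo=100.0, hi=118.0):
        """Keep compositions whose mixed parameter falls near that of
        dissociated hydrogen (roughly 100-118 hildebrands)."""
        return lo <= delta_mix <= hi

    # Example: a 60/40 (by volume) combination of two hypothetical constituents.
    print(is_storage_candidate(alloy_solubility_parameter([95.0, 130.0], [0.6, 0.4])))
    ```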

  9. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate the identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces and the types of evolutionary algorithms employed to identify or optimize materials; explains how materials can be represented mathematically as genomes; describes fitness landscapes and mutation operators commonly employed in materials evolution; and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries.
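
    As an illustration of the machinery this review surveys, a minimal genetic algorithm over composition-vector genomes might look as follows; the fitness callback is assumed to be supplied (for example, a machine-learning surrogate trained on measured properties), and the operator choices are generic rather than drawn from any particular study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def evolve(fitness, genome_len=8, pop_size=40, generations=50, mut_scale=0.1):
        """Minimal GA: a genome is a composition vector (component fractions
        summing to one); higher fitness is better."""
        pop = rng.random((pop_size, genome_len))
        pop /= pop.sum(axis=1, keepdims=True)                 # normalize compositions
        for _ in range(generations):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]  # truncation selection
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(genome_len) < 0.5, a, b)  # uniform crossover
                child += mut_scale * rng.normal(size=genome_len)      # Gaussian mutation
                child = np.clip(child, 1e-9, None)
                children.append(child / child.sum())
            pop = np.vstack([parents, children])
        return pop[np.argmax([fitness(g) for g in pop])]
    ```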

  10. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.

    1996-01-01

    The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims not only to develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at describing the HSCT MDO problem, next at formulating it, and finally at solving a significant portion of it.

  11. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential over the last decade in urban water resource planning. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructures. Urban water resource planning therefore requires not only integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainty for better reliability. This paper develops an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounts not only for water quantity but also for water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, a city facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change.

  12. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Guan, Xiangmin; Zhang, Xuejun; Zhu, Yanbo; Sun, Dengfeng; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach improves solution quality effectively, showing superiority over existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies. PMID:26180840
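
    The dominance filtering at the heart of the nondominated sorting step reduces to a simple Pareto mask; a minimal sketch for the two objectives here (congestion and delay, both minimized):

    ```python
    import numpy as np

    def pareto_mask(F):
        """Boolean mask of nondominated rows of F, one row per solution and one
        column per minimized objective (e.g. congestion, delay)."""
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            if not keep[i]:
                continue
            dominated = np.all(F[i] <= F, axis=1) & np.any(F[i] < F, axis=1)
            keep &= ~dominated               # drop everything solution i dominates
        return keep
    ```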

  13. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  14. [Optimal approach to combined treatment of patients with urogenital papillomatosis].

    PubMed

    Breusov, A A; Kulchavenya, E V; Brizhatyukl, E V; Filimonov, P N

    2015-01-01

    This review analyzes 59 domestic and foreign literature sources on the use of the immunomodulator isoprinosine (izoprinozin) in treating patients infected with human papillomavirus (HPV), together with the authors' own experience. The high prevalence of HPV and its role in the development of cervical cancer are shown, and the mechanisms of HPV infection and of host protection against it are described. The authors present approaches to the treatment of HPV-infected patients, with particular attention to isoprinosine. Isoprinosine belongs to the immunomodulators with antiviral activity. It inhibits the replication of viral DNA and RNA by binding to cell ribosomes and changing their stereochemical structure. HPV infection, especially in its early stages, may be successfully treated, up to complete elimination of the virus. Inosine pranobex (izoprinozin), having a dual action and the most abundant evidence base, may be recognized as the optimal treatment option. PMID:26859953

  15. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will keep increasing for the coming decades, along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and a transition to an innovation-driven development path do not preclude further development of the oil industry. Moreover, society expects this sector to bring the Russian economy onto the road of innovative development through neo-industrialization. Achieving this requires both government support and effective capital management within the companies. To form an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. An analysis of the capital structures of Russian and foreign oil companies shows different approaches, motivations and conditions, and consequently different equity-to-debt relationships and costs of capital, which demands an effective capital management strategy.

  16. Selective optimization of side activities: the SOSA approach.

    PubMed

    Wermuth, Camille G

    2006-02-01

    Selective optimization of side activities of drug molecules (the SOSA approach) is an intelligent and potentially more efficient strategy than HTS for the generation of new biological activities. Only a limited number of highly diverse drug molecules are screened, for which bioavailability and toxicity studies have already been performed and efficacy in humans has been confirmed. Once the screening has generated a hit, it is used as the starting point for a drug discovery program. Using traditional medicinal chemistry as well as parallel synthesis, the initial 'side activity' is transformed into the 'main activity' and, conversely, the initial 'main activity' is significantly reduced or abolished. This strategy has a high probability of yielding safe, bioavailable, original and patentable analogues. PMID:16533714

  17. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low-cost, and operable space systems has become the key to future space operations. Designing high-quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a method of design optimization for performance, quality, and cost called Robust Design. Robust Design consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. It also uses a statistical measure of performance called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application with an example, and suggest its use as an integral part of the space system design process.
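
    The orthogonal-array bookkeeping and signal-to-noise calculation can be illustrated compactly. The L9(3⁴) array below is the standard one for four three-level factors in nine runs, and the 'larger is better' S/N definition is the usual Taguchi form; the replicate data are assumed to be supplied.

    ```python
    import numpy as np

    # Standard L9 orthogonal array: 4 three-level factors in 9 runs (levels 0-2).
    L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
                   [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
                   [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

    def sn_larger_is_better(y):
        """Taguchi S/N ratio (dB) for a 'larger is better' response, from the
        replicate measurements y of a single run."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    def best_levels(sn_per_run):
        """For each factor, the level whose runs have the highest mean S/N,
        i.e. the most robust setting. sn_per_run: array of 9 S/N values."""
        sn_per_run = np.asarray(sn_per_run)
        return [int(np.argmax([sn_per_run[L9[:, f] == lvl].mean() for lvl in range(3)]))
                for f in range(L9.shape[1])]
    ```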

  1. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  2. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density below 5% with respect to the exact solution, while the computation time is about 10 orders of magnitude lower. This method is followed by a spectral method based on rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error below 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  3. An Improved Ant Colony Optimization Approach for Optimization of Process Planning

    PubMed Central

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described using a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible paths among operations, respectively. The ant colony traverses the necessary nodes of the graph to find the optimal solution, with the objective of minimizing total production costs (TPCs). A pheromone updating strategy comprising a global update rule and a local update rule is incorporated into the standard ACO. A simple method that controls the number of repetitions of the same process plan is designed to avoid local convergence. A case study has been carried out to examine the influence of various ACO parameters on system performance. Extensive comparative experiments validate the feasibility and efficiency of the proposed approach. PMID:25097874
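
    A skeleton of the ACO mechanics described above, with a local update applied as each ant moves and a global update that reinforces the best plan found so far; precedence-constraint handling is omitted for brevity, and the parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def aco_plan(cost, n_ants=20, n_iter=100, alpha=1.0, beta=2.0,
                 rho_local=0.1, rho_global=0.1):
        """Skeleton ACO over a weighted operation graph; cost[i][j] is the
        production cost of performing operation j directly after i."""
        cost = np.asarray(cost, dtype=float)
        n = len(cost)
        tau = np.ones((n, n))                          # pheromone matrix
        eta = 1.0 / (cost + 1e-9)                      # heuristic desirability
        best_tour, best_cost = None, np.inf
        for _ in range(n_iter):
            for _ in range(n_ants):
                tour = [0]
                while len(tour) < n:
                    i = tour[-1]
                    cand = [j for j in range(n) if j not in tour]
                    w = tau[i, cand]**alpha * eta[i, cand]**beta
                    j = cand[rng.choice(len(cand), p=w / w.sum())]
                    tau[i, j] = (1 - rho_local) * tau[i, j] + rho_local  # local update
                    tour.append(j)
                c = sum(cost[a, b] for a, b in zip(tour, tour[1:]))
                if c < best_cost:
                    best_tour, best_cost = tour, c
            tau *= (1 - rho_global)                    # global update: evaporate, then
            for a, b in zip(best_tour, best_tour[1:]): # reinforce the best-so-far plan
                tau[a, b] += rho_global / best_cost
        return best_tour, best_cost
    ```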

  4. Genetic algorithm applied to the optimization of quantum cascade lasers with second harmonic generation

    SciTech Connect

    Gajić, A.; Radovanović, J. Milanović, V.; Indjin, D.; Ikonić, Z.

    2014-02-07

    A computational model for the optimization of the second order optical nonlinearities in GaInAs/AlInAs quantum cascade laser structures is presented. The set of structure parameters that lead to improved device performance was obtained through the implementation of the Genetic Algorithm. In the following step, the linear and second harmonic generation power were calculated by self-consistently solving the system of rate equations for carriers and photons. This rate equation system included both stimulated and simultaneous double photon absorption processes that occur between the levels relevant for second harmonic generation, and material-dependent effective mass, as well as band nonparabolicity, were taken into account. The developed method is general, in the sense that it can be applied to any higher order effect, which requires the photon density equation to be included. Specifically, we have addressed the optimization of the active region of a double quantum well In₀.₅₃Ga₀.₄₇As/Al₀.₄₈In₀.₅₂As structure and presented its output characteristics.

  5. A boundary element approach to optimization of active noise control sources on three-dimensional structures

    NASA Technical Reports Server (NTRS)

    Cunefare, K. A.; Koopmann, G. H.

    1991-01-01

    This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total power radiated from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation; thus the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
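
    In this formulation the total radiated power is a Hermitian quadratic form in the complex strengths of the control sources, so the optimum has a closed form. A minimal sketch, assuming the matrix A, vector b, and scalar c have already been assembled from the boundary element model of the combined source:

    ```python
    import numpy as np

    def optimal_source_strengths(A, b):
        """Minimize W(q) = q^H A q + 2 Re(q^H b) + c over complex control-source
        strengths q (A Hermitian positive definite): the minimizer solves A q = -b."""
        return np.linalg.solve(A, -b)

    def minimum_power(A, b, c):
        """Radiated power at the optimum: W_min = c - b^H A^{-1} b."""
        return c - np.real(np.conj(b) @ np.linalg.solve(A, b))
    ```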

  6. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration and validation are insufficient or unavailable. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a big issue, the purpose being to provide as much information and as little redundancy as possible. In order to update the layout, the initial set of six sensors was extended to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors were then selected. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor is compared by evaluating its marginal entropy. Sensors with a low marginal entropy are excluded from the process because of their low capability to provide information. The number of sensors is then optimized by considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and minimization of the redundancy among the selected sensors.
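
    The marginal-entropy screening step can be sketched directly; the histogram quantization is an illustrative choice, and the joint-entropy/redundancy trade-off of the full MOOP is omitted.

    ```python
    import numpy as np

    def marginal_entropy(series, n_bins=10):
        """Shannon entropy (bits) of a quantized water-stage time series; sensors
        whose series carry little information score low and are dropped first."""
        counts, _ = np.histogram(series, bins=n_bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    def rank_sensors(stages):
        """stages: (time steps x sensors) array of synthetic water stages, e.g.
        generated by LISFLOOD-FP. Returns sensor indices, most informative first."""
        h = [marginal_entropy(stages[:, s]) for s in range(stages.shape[1])]
        return np.argsort(h)[::-1]
    ```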

  7. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    SciTech Connect

    Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop and at large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  8. Approach to optimal care at end of life.

    PubMed

    Nichols, K J

    2001-10-01

    At no other time in any patient's life is the team approach to care more important than at the end of life. The demands and challenges of end-of-life care (ELC) tax all physicians at some point. There is no other profession that is charged with this ultimate responsibility. No discipline in medicine is immune to the issues of end-of-life care except perhaps, ironically, pathology. This presentation addresses the issues, options, and challenges of providing optimal care at the end of life. It looks at the principles of ELC, barriers to good ELC, and what patients and families expect from ELC. Barriers to ELC include financial restrictions, inadequate caregiver and community support, legal issues, legislative issues, training needs, coordination of care, hospice care, and transitions for patients and families. The legal aspects of physician-assisted suicide are presented, as well as the approach of the American Osteopathic Association to ensuring better education for physicians in the principles of ELC. PMID:11681166

  9. Applying the Cultural Formulation Approach to Career Counseling with Latinas/os

    ERIC Educational Resources Information Center

    Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen

    2010-01-01

    In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…

  10. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support, both in processing voluminous sensor data and in controlling sensor assets, can relieve the burden on human operators and support the operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple-platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker, applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multi-sensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects, where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented.
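
    The "closing the loop" structure reduces to a receding-horizon pattern; a generic sketch is given below, in which the plan, execute, and observe callbacks stand in for the paper's controller, tasking interface, and multiple hypothesis tracker.

    ```python
    def receding_horizon_loop(plan, execute, observe, horizon, n_steps):
        """Generic receding-horizon control loop: replan over a finite horizon at
        every step, commit only the first action, then re-observe."""
        state = observe()                        # e.g. current track estimates
        for _ in range(n_steps):
            actions = plan(state, horizon)       # e.g. platform routes + sensor pointing
            execute(actions[0])                  # commit only the first planned action
            state = observe()                    # updated tracks after new collections
        return state
    ```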

  11. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  12. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
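
    The HS improvisation loop itself is compact; in the sketch below the objective callback is assumed to wrap an EPANET run and return leakage plus penalties for violated hydraulic constraints, as the abstract describes, and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def harmony_search(objective, lo, hi, hms=20, hmcr=0.9, par=0.3,
                       bandwidth=0.05, n_iter=2000):
        """Minimal harmony search over continuous valve settings (minimization)."""
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        dim = len(lo)
        memory = lo + rng.random((hms, dim)) * (hi - lo)    # harmony memory
        scores = np.array([objective(x) for x in memory])
        for _ in range(n_iter):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                     # memory consideration
                    new[d] = memory[rng.integers(hms), d]
                    if rng.random() < par:                  # pitch adjustment
                        new[d] += bandwidth * (hi[d] - lo[d]) * rng.uniform(-1, 1)
                else:                                       # random improvisation
                    new[d] = lo[d] + rng.random() * (hi[d] - lo[d])
            new = np.clip(new, lo, hi)
            s = objective(new)
            worst = int(np.argmax(scores))
            if s < scores[worst]:                           # replace the worst harmony
                memory[worst], scores[worst] = new, s
        return memory[np.argmin(scores)]
    ```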

  13. Convergence behavior of multireference perturbation theory: Forced degeneracy and optimization partitioning applied to the beryllium atom

    NASA Astrophysics Data System (ADS)

    Finley, James P.; Chaudhuri, Rajat K.; Freed, Karl F.

    1996-07-01

    High-order multireference perturbation theory is applied to the ¹S states of the beryllium atom using a reference (model) space composed of the |1s²2s²⟩ and the |1s²2p²⟩ configuration-state functions (CSF's), a system that is known to yield divergent expansions using Møller-Plesset and Epstein-Nesbet partitioning methods. Computations of the eigenvalues are made through 40th order using forced degeneracy (FD) partitioning and the recently introduced optimization (OPT) partitioning. The former forces the 2s and 2p orbitals to be degenerate in zeroth order, while the latter chooses optimal zeroth-order energies of the (few) most important states. Our methodology employs simple models for understanding and suggesting remedies for unsuitable choices of reference spaces and partitioning methods. By examining a two-state model composed of only the |1s²2p²⟩ and |1s²2s3s⟩ states of the beryllium atom, it is demonstrated that the full computation with 1323 CSF's can converge only if the zeroth-order energy of the |1s²2s3s⟩ Rydberg state from the orthogonal space lies below the zeroth-order energy of the |1s²2p²⟩ CSF from the reference space. Thus convergence in this case requires a zeroth-order spectral overlap between the orthogonal and reference spaces. The FD partitioning is not capable of generating this type of spectral overlap and thus yields a divergent expansion. However, the expansion is actually asymptotically convergent, with divergent behavior not displayed until the 11th order because the |1s²2s3s⟩ Rydberg state is only weakly coupled with the |1s²2p²⟩ CSF and because these states are energetically well separated in zeroth order. The OPT partitioning chooses the correct zeroth-order energy ordering and thus yields a convergent expansion that is also very accurate in low orders compared to the exact solution within the basis.

  14. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered the most efficient one due to its speed. Its drawback, the possibility of becoming stuck in a poor local optimum, can be overcome by applying a multi-start approach.

  15. Assay optimization: a statistical design of experiments approach.

    PubMed

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.

  16. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system comprising phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a two-dimensional subdomain, the identification of reactive azeotropes. A strategy for the calculation of multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
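
    Of the two metaheuristics, the Luus-Jaakola adaptive random search is compact enough to sketch. The residual callback is assumed to stack the phase-equilibrium, chemical-equilibrium, and azeotropy equations into one vector; multiple azeotropes would then be located by restarting from different subdomains, in the spirit of the multiple-roots strategy mentioned above.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def luus_jaakola(residual, x0, radius, n_outer=50, n_inner=100, shrink=0.95):
        """Adaptive random search minimizing ||residual(x)||: sample uniformly in a
        box around the incumbent, keep improvements, and contract the box each
        outer iteration (the 'adaptive' part)."""
        x_best = np.asarray(x0, dtype=float)
        f_best = np.linalg.norm(residual(x_best))
        r = np.asarray(radius, dtype=float)
        for _ in range(n_outer):
            for _ in range(n_inner):
                x = x_best + r * rng.uniform(-1.0, 1.0, size=x_best.shape)
                f = np.linalg.norm(residual(x))
                if f < f_best:
                    x_best, f_best = x, f
            r *= shrink                          # contract the search region
        return x_best, f_best
    ```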

  17. A Pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives

    NASA Astrophysics Data System (ADS)

    Rousis, Damon A.

    The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Given the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls, since a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near the Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, selection decisions are most sensitive to the preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts. A set of algebraic examples along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives to traditional configurations. The method is shown to improve

  18. Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.

    PubMed

    Verdaguer, Marta; Molinos-Senante, María; Poch, Manel

    2016-04-01

    Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management.

  19. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  1. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration across three Georgia Tech laboratories, ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was proceeding in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  2. A systematic approach: optimization of healthcare operations with knowledge management.

    PubMed

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

    Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is undoubtedly faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence the value of harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management is essential to ensuring efficiency and efficacy in the decision-making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving decision-making processes in healthcare. A combination of Boyd's OODA Loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements, helping to ensure that healthcare decision-making outcomes are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  3. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is capable of efficiently solving optimization problems and can outperform other well-known algorithms. Further, the proposed algorithm is very simple and has only one parameter to tune.

  4. Optimization of preparation of chitosan-coated iron oxide nanoparticles for biomedical applications by chemometrics approaches

    NASA Astrophysics Data System (ADS)

    Honary, Soheila; Ebrahimi, Pouneh; Rad, Hossein Asgari; Asgari, Mahsa

    2013-08-01

    Functionalized magnetic nanoparticles are used in several biomedical applications, such as drug delivery, magnetic cell separation, and magnetic resonance imaging. Size and surface properties of iron oxide nanoparticles are the two important factors which could dramatically affect the nanoparticle efficiency as well as their stability. In this study, the chemometrics approach was applied to optimize the coating process of iron oxide nanoparticles. To optimize the size of nanoparticles, the effect of two experimental parameters on size was investigated by means of multivariate analysis. The factors considered were chitosan molecular weight and chitosan-to-tripolyphosphate concentration ratio. The experiments were performed according to a face-centered cube central composite response surface design. A second-order regression model was obtained which was characterized by both descriptive and predictive abilities. The method was optimized with respect to the percentage increase in Z-average diameter after coating as the response. It can be concluded that experimental design provides a suitable means of optimizing and testing the robustness of the iron oxide nanoparticle coating method.
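
    As a minimal illustration of the experimental-design step, the sketch below builds a two-factor face-centered central composite design in coded units and fits the second-order regression model the abstract mentions; the response values are made-up placeholders, not the study's measurements.

```python
# Two-factor face-centered CCD and a quadratic response-surface fit.
import numpy as np

# Coded levels: factorial corners, face-centered axial points (alpha = 1), center.
X = np.array([[-1,-1],[1,-1],[-1,1],[1,1],      # 2^2 factorial points
              [-1,0],[1,0],[0,-1],[0,1],        # face-centered axial points
              [0,0]])                           # center point
y = np.array([42., 55., 48., 70., 44., 60., 46., 58., 52.])  # placeholder responses

x1, x2 = X[:, 0], X[:, 1]
# Design matrix for y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("second-order model coefficients:", np.round(coef, 3))
```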

  5. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
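
    The fragment-based assembly of the energy used by MTA-type schemes can be written schematically as an inclusion-exclusion sum over fragments and their overlaps; gradients and Hessians follow by differentiating the fragment contributions with respect to nuclear coordinates. The expression below is a generic sketch of this idea, not CG-MTA's precise cardinality-guided formula.

```latex
% Schematic inclusion-exclusion energy estimate over fragments F_i:
E \;\approx\; \sum_{i} E(F_i)
  \;-\; \sum_{i<j} E(F_i \cap F_j)
  \;+\; \sum_{i<j<k} E(F_i \cap F_j \cap F_k) \;-\; \cdots
```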

  6. On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts

    NASA Astrophysics Data System (ADS)

    Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal

    2011-05-01

    In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to an inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst an inadequate one may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of being applied in the determination of adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general-purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the forming of defect-free components in industry [1].

  7. Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka

    2014-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.

  8. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
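
    A minimal sketch of the parametric simulation idea: a one-parameter family of execution schedules is evaluated by Monte Carlo, and the parameter is chosen by static optimization of an expected-cost-plus-CVaR objective. The impact model, noise model, single-parameter schedule family, and all numbers are illustrative assumptions, not the authors' formulation.

```python
# Parametric execution policy evaluated by Monte Carlo; static optimization
# over the single schedule parameter 'a' (exponential decay rate).
import numpy as np

rng = np.random.default_rng(0)
X, T, eta, sigma, lam, alpha = 1e5, 10, 1e-6, 0.5, 1.0, 0.95

def schedule(a):
    w = np.exp(-a * np.arange(T))
    return X * w / w.sum()                       # shares sold per period

def simulate_costs(a, n_paths=5000):
    n = schedule(a)
    noise = rng.normal(0.0, sigma, size=(n_paths, T))
    remaining = X - np.cumsum(n)                 # exposure to price noise
    # cost = quadratic temporary impact + noise on the remaining position
    return (eta * n**2).sum() + (noise * remaining).sum(axis=1)

best = None
for a in np.linspace(0.0, 1.0, 21):              # static search over the parameter
    c = simulate_costs(a)
    cvar = c[c >= np.quantile(c, alpha)].mean()  # Conditional Value-at-Risk
    obj = c.mean() + lam * cvar
    if best is None or obj < best[1]:
        best = (a, obj)
print("best decay rate a = %.2f, objective = %.1f" % best)
```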

  9. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. However, to fully leverage in silico models in future research, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing one to assess the often non
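
    A minimal sketch of the hybrid idea described above: each iteration performs a global PSO step and then refines the swarm's best particle with a gradient-based trust-region-reflective least-squares solver. The fitted model here is a toy two-parameter exponential, not the Courtemanche ion current formulations.

```python
# Hybrid PSO + trust-region-reflective refinement, coupled in each iteration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 50)
true = np.array([2.0, 0.7])
data = true[0] * np.exp(-true[1] * t) + rng.normal(0, 0.02, t.size)

def resid(p):                      # residuals of the toy "current" model
    return p[0] * np.exp(-p[1] * t) - data

lb, ub = np.array([0., 0.]), np.array([5., 5.])
pos = rng.uniform(lb, ub, (15, 2)); vel = np.zeros_like(pos)
pbest = pos.copy(); pcost = np.array([np.sum(resid(p)**2) for p in pos])
gbest = pbest[pcost.argmin()]

for it in range(30):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)   # PSO update
    pos = np.clip(pos + vel, lb, ub)
    cost = np.array([np.sum(resid(p)**2) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    i = pcost.argmin()
    # local trust-region-reflective refinement of the swarm's best point
    res = least_squares(resid, pbest[i], bounds=(lb, ub), method='trf')
    if np.sum(res.fun**2) < pcost[i]:
        pbest[i], pcost[i] = res.x, np.sum(res.fun**2)
    gbest = pbest[pcost.argmin()]

print("estimated parameters:", np.round(gbest, 3))
```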

  10. A Technical and Economic Optimization Approach to Exploring Offshore Renewable Energy Development in Hawaii

    SciTech Connect

    Larson, Kyle B.; Tagestad, Jerry D.; Perkins, Casey J.; Oster, Matthew R.; Warwick, M.; Geerlofs, Simon H.

    2015-09-01

    This study was conducted with the support of the U.S. Department of Energy’s (DOE’s) Wind and Water Power Technologies Office (WWPTO) as part of ongoing efforts to minimize key risks and reduce the cost and time associated with permitting and deploying ocean renewable energy. The focus of the study was to discuss a possible approach to exploring scenarios for ocean renewable energy development in Hawaii that attempts to optimize future development based on technical, economic, and policy criteria. The goal of the study was not to identify potentially suitable or feasible locations for development, but to discuss how such an approach may be developed for a given offshore area. Hawaii was selected for this case study due to the complex nature of the energy climate there and DOE’s ongoing involvement in supporting marine spatial planning for the West Coast. Primary objectives of the study included 1) discussing the political and economic context for ocean renewable energy development in Hawaii, especially with respect to how inter-island transmission may affect the future of renewable energy development in Hawaii; 2) applying a Geographic Information System (GIS) approach, which has been used to assess the technical suitability of offshore renewable energy technologies in Washington, Oregon, and California, to Hawaii’s offshore environment; and 3) formulating a mathematical model for exploring scenarios for ocean renewable energy development in Hawaii that seeks to optimize technical and economic suitability within the context of Hawaii’s existing energy policy and planning.

  11. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-02-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both geometric design parameters and maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied in the optimal design of shell and tube heat exchangers. In particular, in the examined cases a reduction of total costs of up to 30, 29, and 56.15% compared with the original design, and up to 18, 5.5 and 7.4% compared with other approaches, is observed for case studies 1, 2 and 3, respectively. The economic optimization resulting from the proposed design procedure is especially relevant when a compact, high-performance unit of moderate volume and cost is needed.

  12. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve desired quality indicators (related to bead geometry) of submerged arc welds of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept was adopted to evaluate the optimal process condition achieving multiple objective requirements of the desired weld quality.
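
    As a small sketch of the utility-concept step: responses measured over a Taguchi orthogonal array are mapped onto a 0-9 preference scale (direction depending on whether larger or smaller is better), combined into one weighted utility, and the factor levels with the highest mean utility are selected. The array, weld responses, and weights below are illustrative, not the study's data.

```python
# Utility aggregation of multiple Taguchi responses into one optimization target.
import numpy as np

L4 = np.array([[1,1],[1,2],[2,1],[2,2]])           # two factors, two levels
# columns: penetration depth (larger-better), bead width (smaller-better)
resp = np.array([[4.2, 9.1],[5.0, 8.2],[4.6, 9.8],[5.8, 8.9]])
weights = np.array([0.6, 0.4])

def utility(col, larger_better):
    lo, hi = col.min(), col.max()
    u = (col - lo) / (hi - lo) * 9.0               # linear 0-9 preference scale
    return u if larger_better else 9.0 - u

U = (utility(resp[:, 0], True) * weights[0] +
     utility(resp[:, 1], False) * weights[1])      # overall utility per run
for f in range(2):                                 # main effect of each factor
    for lv in (1, 2):
        print(f"factor {f+1} level {lv}: mean utility {U[L4[:, f] == lv].mean():.2f}")
```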

  13. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    PubMed

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-01

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on the fly via the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation. PMID:26583559

  14. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    PubMed Central

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, the use of a single management method is not feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtains an overall better solution, yielding better convergence results and inventory decisions. PMID:25197713

  15. The Contribution of Applied Social Sciences to Obesity Stigma-Related Public Health Approaches

    PubMed Central

    Bombak, Andrea E.

    2014-01-01

    Obesity is viewed as a major public health concern, and obesity stigma is pervasive. Such marginalization renders obese persons a “special population.” Weight bias arises in part due to popular sources' attribution of obesity causation to individual lifestyle factors. This may not accurately reflect the experiences of obese individuals or their perspectives on health and quality of life. A powerful role may exist for applied social scientists, such as anthropologists or sociologists, in exploring the lived and embodied experiences of this largely discredited population. This novel research may aid in public health intervention planning. Through these studies, applied social scientists could help develop a nonstigmatizing, salutogenic approach to public health that accurately reflects the health priorities of all individuals. Such an approach would call upon applied social science's strengths in investigating the mundane, problematizing the “taken for granted” and developing emic (insiders') understandings of marginalized populations. PMID:24782921

  16. Simultaneous optimization by neuro-genetic approach for analysis of plant materials by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Nunes, Lidiane Cristina; da Silva, Gilmare Antônia; Trevizan, Lilian Cristina; Santos Júnior, Dario; Poppi, Ronei Jesus; Krug, Francisco José

    2009-06-01

    A simultaneous optimization strategy based on a neuro-genetic approach is proposed for selection of laser induced breakdown spectroscopy operational conditions for the simultaneous determination of macro-nutrients (Ca, Mg and P), micro-nutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser induced breakdown spectroscopy operational conditions with compromise conditions giving high peak areas for all elements simultaneously, a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser induced breakdown spectroscopy working condition that maximizes peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 µs integration time gate, 1.1 µs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem.
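
    A minimal sketch of the neuro-genetic step: a genetic algorithm searches the four operating parameters (gate, delay, gain, pulses) for the maximum of a trained surrogate. Here the surrogate is a stand-in analytic function whose optimum is placed at the reported settings; in the study it is a Bayesian Regularized ANN fitted to measured peak areas, and the bounds and GA settings are assumptions.

```python
# GA searching a surrogate model of summed peak area over LIBS parameters.
import numpy as np

rng = np.random.default_rng(2)
# bounds: integration gate (us), delay (us), gain (a.u.), accumulated pulses
lb = np.array([1.0, 0.5, 50.0, 5.0]); ub = np.array([10.0, 5.0, 300.0, 50.0])

def surrogate(x):  # placeholder for the trained ANN's predicted summed peak area
    opt = np.array([9.0, 1.1, 225.0, 30.0])
    return -np.sum(((x - opt) / (ub - lb))**2, axis=-1)

pop = rng.uniform(lb, ub, (40, 4))
for gen in range(100):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[-20:]]                  # truncation selection
    kids = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        w = rng.random(4)
        child = w*a + (1 - w)*b                           # blend crossover
        child += rng.normal(0, 0.05, 4) * (ub - lb)       # Gaussian mutation
        kids.append(np.clip(child, lb, ub))
    pop = np.array(kids)
print("GA optimum:", np.round(pop[surrogate(pop).argmax()], 2))
```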

  17. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we aim to investigate the performance of three classes of robust optimization approaches regarding plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on a minmax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by an L-BFGS algorithm; the second was solved by a gradient projection algorithm; and the third was solved by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range over and under 3.5%, and setup error of 5 mm in the x, y, z directions) were considered in optimization. Dosimetric measurements of optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage of the clinical target volume (CTV), and the L2 worst-case approach demonstrated better rectum and bladder sparing than the others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from L2 worst-case were less robust than the others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice that from the L1 approach. This indicates that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best conserved the target coverage in the face of uncertainty, but its resulting OAR sparing was slightly inferior to the other two approaches.
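
    One plausible schematic reading of the three models, in assumed notation (D_s the dose-influence matrix under scenario s, x >= 0 the spot weights, d the prescribed dose, l and u dose bounds); the paper's exact objective terms may differ:

```latex
\begin{aligned}
&\text{(1) L2, worst-case dose:} && \min_{x \ge 0}\; \big\| \max_s (D_s x) - d \big\|_2^2 \\
&\text{(2) L2, minmax:}          && \min_{x \ge 0}\; \max_s\, \| D_s x - d \|_2^2 \\
&\text{(3) L1, constrained:}     && \min_{x \ge 0}\; \| D_0 x - d \|_1 \quad
   \text{s.t.}\quad \ell \le D_s x \le u \;\; \forall s
\end{aligned}
```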

  18. Antigen identification starting from the genome: a "Reverse Vaccinology" approach applied to MenB.

    PubMed

    Palumbo, Emmanuelle; Fiaschi, Luigi; Brunelli, Brunella; Marchi, Sara; Savino, Silvana; Pizza, Mariagrazia

    2012-01-01

    Most of the vaccines available today, albeit very effective, have been developed using traditional "old-style" methodologies. Technologies developed in recent years have opened up new perspectives in the field of vaccinology and novel strategies are now being used to design improved or new vaccines against infections for which preventive measures do not exist. The Reverse Vaccinology (RV) approach is one of the most powerful examples of biotechnology applied to the field of vaccinology for identifying new protein-based vaccines. RV combines the availability of genomic data, the analyzing capabilities of new bioinformatic tools, and the application of high throughput expression and purification systems combined with serological screening assays for a coordinated screening process of the entire genomic repertoire of bacterial, viral, or parasitic pathogens. The application of RV to Neisseria meningitidis serogroup B represents the first success of this novel approach. In this chapter, we describe how this revolutionary approach can be easily applied to any pathogen.

  19. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance of further parametric studies and theoretical analysis is highlighted and discussed.

  20. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, where costate gradient formulae are employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems from industrial dynamic processes are demonstrated as illustration. The research results show that the proposed fast approach achieves fine performance, saving at least 90% of the computation time in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
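
    A minimal CVP sketch under toy assumptions: the control is parameterized as piecewise-constant segments, each objective evaluation integrates the state and running cost, and a generic NLP solver optimizes the segment values. The plant (a scalar integrator with quadratic cost) is illustrative; fast-CVP's costate gradients and fast approximate integration scheme are not reproduced.

```python
# Control vector parameterization: piecewise-constant u(t), ODE-in-the-loop NLP.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, N = 1.0, 10                     # horizon and number of control segments
edges = np.linspace(0.0, T, N + 1)

def objective(u):
    def rhs(t, y):
        k = min(np.searchsorted(edges, t, side='right') - 1, N - 1)
        return [u[k], y[0]**2 + 0.1 * u[k]**2]  # state x and running cost L
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-6)
    x_T, cost = sol.y[0, -1], sol.y[1, -1]
    return cost + 10.0 * x_T**2                 # terminal penalty drives x(T) -> 0

res = minimize(objective, np.zeros(N), method='SLSQP',
               bounds=[(-5.0, 5.0)] * N)
print("optimal piecewise-constant control:", np.round(res.x, 2))
```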

  1. A Simultaneous Approach to Optimizing Treatment Assignments with Mastery Scores. Research Report 89-5.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…

  2. Optimization of the ASPN Process to Bright Nitriding of Woodworking Tools Using the Taguchi Approach

    NASA Astrophysics Data System (ADS)

    Walkowicz, J.; Staśkiewicz, J.; Szafirowicz, K.; Jakrzewski, D.; Grzesiak, G.; Stępniak, M.

    2013-02-01

    The subject of the research is optimization of the parameters of the Active Screen Plasma Nitriding (ASPN) process for high speed steel planing knives used in woodworking. The Taguchi approach was applied to develop the plan of experiments and to process the experimental results obtained. The optimized ASPN parameters were: process duration, composition and pressure of the gaseous atmosphere, the substrate BIAS voltage and the substrate temperature. The results of the optimization procedure were verified by the tools' behavior in the sharpening operation performed under normal industrial conditions. The ASPN technology proved to be extremely suitable for nitriding woodworking planing tools, which, because of their specific geometry, in particular extremely sharp wedge angles, could not be successfully nitrided using the conventional direct-current plasma nitriding method. The research carried out showed that the values of the fracture toughness coefficient K_Ic correlate with the maximum spalling depths of the cutting edge measured after sharpening, and may therefore be used as a measure of nitrided planing knife quality. Based on this criterion, the optimum parameters of the ASPN process for nitriding high speed planing knives were determined.

  3. Clinical Evaluation of Direct Aperture Optimization When Applied to Head-And-Neck IMRT

    SciTech Connect

    Jones, Stephen; Williams, Matthew

    2008-04-01

    Direct Machine Parameter Optimization (DMPO) is a leaf segmentation program released as an optional item of the Pinnacle planning system (Philips Radiation Oncology Systems, Milpitas, CA); it is based on the principles of direct aperture optimization, whereby the size, shape, and weight of individual segments are optimized to produce an intensity-modulated radiation therapy (IMRT) plan. In this study, we compare DMPO to the traditional method of IMRT planning, in which intensity maps are optimized prior to conversion into deliverable multileaf collimator (MLC) apertures, and we determine whether there is any dosimetric improvement, treatment efficiency gain, or planning advantage provided by the use of DMPO. Eleven head-and-neck patients treated with IMRT had treatment plans generated using each optimization method. For each patient, the same planning parameters were used for each optimization method. All calculations were performed using Pinnacle version 7.6c software and treatments were delivered using a step-and-shoot IMRT method on a Varian 2100EX linear accelerator equipped with a 120-leaf Millennium MLC (Varian Medical Systems, Palo Alto, CA). Each plan was assessed based on the calculation time, a conformity index, the composite objective value used in the optimization, the number of segments, monitor units (MUs), and treatment time. The results showed DMPO to be superior to the traditional optimization method in all areas. Considerable advantages were observed in the dosimetric quality of DMPO plans, which also required 32% less time to calculate, 42% fewer MUs, and 35% fewer segments than the conventional optimization method. These reductions translated directly into a 29% decrease in treatment times. While considerable gains were observed in planning and treatment efficiency, they were specific to our institution, and the impact of direct aperture optimization on plan quality and workflow will be dependent on the planning parameters, planning system, and

  4. A methodological integrated approach to optimize a hydrogeological engineering work

    NASA Astrophysics Data System (ADS)

    Loperte, A.; Satriani, A.; Bavusi, M.; Cerverizzo, G.

    2012-04-01

    The geoelectrical survey applied to hydraulic engineering is well known in the literature. However, despite the large number of successful cases of application, the use of geophysics is still often not considered, for different reasons: poor knowledge of its potential performance, difficulties in practical implementation, and cost limitations. In this work, an integrated study of non-invasive (geoelectrical) and direct surveys is described, aimed at identifying a subsoil foundation where it is possible to set up a watertight concrete structure able to protect the purifier of Senise, a small town in the Basilicata Region (Southern Italy). The purifier, used by several villages, is located in a particularly dangerous hydrogeological position, as it is very close to the Sinni river, which has been obstructed for many years by the Monte Cotugno dam. During the rainiest periods, the river could flood the purifier, causing the drainage of waste waters into the Monte Cotugno artificial lake. The purifier is located in Pliocene-Calabrian clay and clay-marly formations covered by a layer, about 10 m thick, of alluvial gravelly-sandy materials carried by the Sinni river. The electrical resistivity tomography acquired with the Wenner-Schlumberger array proved meaningful for identifying the depth of the impermeable clays with high accuracy. In particular, the geoelectrical acquisition, oriented along the long side of the purifier, was carried out using a multielectrode system with 48 electrodes spaced 2 m apart, leading to an achievable investigation depth of about 15 m. The subsequent direct surveys confirmed this depth, so that it was possible to set up the foundation concrete structure with precision to protect the purifier. It is worth noting that the use of this methodological approach has allowed a remarkable economic saving, as it made it possible to correct the wrong information, regarding the depth of the impermeable clays, previously

  5. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    PubMed

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-01

    Analyses of complex samples of cosmetics, such as creams or lotions, are generally achieved by HPLC. These analyses are often multistep gradients, due to the presence of compounds with a large range of polarity. For instance, the bioactive compounds may be polar, while the matrix contains lipid components that are rather non-polar, as cosmetic formulations are usually oil-water emulsions. Supercritical fluid chromatography (SFC) uses mobile phases composed of carbon dioxide and organic co-solvents, allowing for good solubility of both the active compounds and the matrix excipients. Moreover, the classical and well-known properties of these mobile phases yield fast analyses and ensure rapid method development. However, due to the large number of stationary phases available for SFC and to the varied additional parameters acting both on retention and separation factors (co-solvent nature and percentage, temperature, backpressure, flow rate, column dimensions and particle size), a simplified approach can be followed to ensure fast method development. First, suitable stationary phases should be carefully selected for an initial screening, and then the other operating parameters can be limited to the co-solvent nature and percentage, maintaining the oven temperature and back-pressure constant. To describe simple method development guidelines in SFC, three sample applications are discussed in this paper: UV-filters (sunscreens) in sunscreen cream, glyceryl caprylate in eye liner and caffeine in eye serum. Firstly, five stationary phases (ACQUITY UPC(2)) are screened with isocratic elution conditions (10% methanol in carbon dioxide). Complementarity of the stationary phases is assessed based on our spider diagram classification, which compares a large number of stationary phases based on five molecular interactions. Secondly, the one or two best stationary phases are retained for further optimization of mobile phase composition, with isocratic elution conditions or, when

  7. A functional approach to geometry optimization of complex systems

    NASA Astrophysics Data System (ADS)

    Maslen, P. E.

    A quadratically convergent procedure is presented for the geometry optimization of complex systems, such as biomolecules and molecular complexes. The costly evaluation of the exact Hessian is avoided by expanding the density functional to second order in both nuclear and electronic variables, and then searching for the minimum of the quadratic functional. The dependence of the functional on the choice of nuclear coordinate system is described, and illustrative geometry optimizations using Cartesian and internal coordinates are presented for Taxol™.

  8. An inverse dynamics approach to trajectory optimization for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.

  9. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  10. A two-stage sequential linear programming approach to IMRT dose optimization

    PubMed Central

    Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D

    2010-01-01

    The conventional IMRT planning process involves two stages in which the first stage consists of fast but approximate idealized pencil beam dose calculations and dose optimization, and the second stage consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic (e.g., simulated annealing and genetic algorithms) approaches that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage via accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical-programming-based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764
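
    A generic sequential linear programming sketch of the kind of second-stage step described: the (here toy, two-variable) nonlinear dose model is linearized around the current point and the L1 fitting problem is solved as an LP inside a shrinking trust region. The dose model, prescription, and trust-region schedule are illustrative assumptions, not the authors' clinical formulation.

```python
# Sequential linear programming (SLP) on a toy nonlinear "dose" model.
import numpy as np
from scipy.optimize import linprog

d_target = np.array([1.0, 2.0, 1.5])

def dose(w):            # toy nonlinear weight -> voxel-dose map
    return np.array([w[0]**2 + w[1], w[0] + 2*w[1], w[0]*w[1] + w[1]])

def jac(w):             # Jacobian of the dose map
    return np.array([[2*w[0], 1.0], [1.0, 2.0], [w[1], w[0] + 1.0]])

w, radius = np.array([0.5, 0.5]), 0.5
for it in range(20):
    r, J = dose(w) - d_target, jac(w)
    m = len(r)
    # variables: [step s (2), slacks t (m)]; minimize sum(t), with |r + J s| <= t
    c = np.concatenate([np.zeros(2), np.ones(m)])
    A_ub = np.block([[J, -np.eye(m)], [-J, -np.eye(m)]])
    b_ub = np.concatenate([-r, r])
    bounds = [(-radius, radius)] * 2 + [(0, None)] * m   # box trust region on s
    sol = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    w = w + sol.x[:2]
    radius *= 0.9                                        # shrink trust region
print("weights:", np.round(w, 3), "dose:", np.round(dose(w), 3))
```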

  11. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  12. Simulation-Based Approach for Site-Specific Optimization of Hydrokinetic Turbine Arrays

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Chawdhary, S.; Yang, X.; Khosronejad, A.; Angelidis, D.

    2014-12-01

    A simulation-based approach has been developed to enable site-specific optimization of tidal and current turbine arrays in real-life waterways. The computational code is based on the St. Anthony Falls Laboratory Virtual StreamLab (VSL3D), which is able to carry out high-fidelity simulations of turbulent flow and sediment transport processes in rivers and streams taking into account the arbitrary geometrical complexity characterizing natural waterways. The computational framework can be used either in turbine-resolving mode, to take into account all geometrical details of the turbine, or with the turbines parameterized as actuator disks or actuator lines. Locally refined grids are employed to dramatically increase the resolution of the simulation and enable efficient simulations of multi-turbine arrays. Turbine/sediment interactions are simulated using the coupled hydro-morphodynamic module of VSL3D. The predictive capabilities of the resulting computational framework will be demonstrated by applying it to simulate turbulent flow past a tri-frame configuration of hydrokinetic turbines in a rigid-bed turbulent open channel flow as well as turbines mounted on mobile bed open channels to investigate turbine/sediment interactions. The utility of the simulation-based approach for guiding the optimal development of turbine arrays in real-life waterways will also be discussed and demonstrated. This work was supported by NSF grant IIP-1318201. Simulations were carried out at the Minnesota Supercomputing Institute.

  13. New approach for automatic recognition of melanoma in profilometry: optimized feature selection using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.

    1998-06-01

    A new approach to computer supported recognition of melanoma and naevocytic naevi based on high resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 × 4 mm² at a resolution of 125 sample points per mm with a laser profilometer at a vertical resolution of 0.1 µm. With image analysis algorithms, Haralick's texture parameters, Fourier features and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are optimized for the feature selection process, and results of different approaches are compared. As the quality measure for feature subsets, the error rate of the nearest neighbor classifier estimated with the leave-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best result overall, with an error rate of 2.3%, was obtained with the nearest neighbor classifier.
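
    A small sketch of GA feature selection scored by the quality measure named in the abstract, the leave-one-out error of a 1-nearest-neighbour classifier; random data stands in for the texture, Fourier, and fractal profilometry features, and the GA settings are assumptions.

```python
# GA feature selection with 1-NN leave-one-out error as the fitness measure.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 12)); y = rng.integers(0, 2, 60)   # placeholder data

def loo_error(mask):
    if not mask.any():
        return 1.0
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                             X[:, mask], y, cv=LeaveOneOut())
    return 1.0 - scores.mean()

pop = rng.random((20, 12)) < 0.5                  # binary chromosomes
for gen in range(15):
    err = np.array([loo_error(m) for m in pop])
    parents = pop[np.argsort(err)[:10]]           # keep the 10 best subsets
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 12)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(12) < 0.05                  # bit-flip mutation
        kids.append(child ^ flip)
    pop = np.array(kids)
best = pop[np.argmin([loo_error(m) for m in pop])]
print("selected features:", np.where(best)[0])
```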

  14. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.

  17. A Hybrid Approach of Neural Network with Particle Swarm Optimization for Tobacco Pests Prediction

    NASA Astrophysics Data System (ADS)

    Lv, Jiake; Wang, Xuan; Xie, Deti; Wei, Chaofu

    Forecasting pest emergence levels plays a significant role in regional crop planting and management. The accuracy of the forecast, which derives from the forecasting approach used, determines the economics of operating the pest prediction. Conventional methods, including time series, regression analysis or ARMA models, entail exogenous input together with a number of assumptions. The use of neural networks has been shown to be a cost-effective technique. However, their training, usually with the back-propagation algorithm or other gradient algorithms, suffers from drawbacks such as very slow convergence and easy entrapment in a local minimum. This paper presents a hybrid approach of a neural network with particle swarm optimization for improving the accuracy of predictions. The approach is applied to forecast the Alternaria alternata Keissl. emergence level in WuLong County, one of the most important tobacco planting areas in Chongqing. A traditional ARMA model and a BP neural network are investigated as the comparison basis. The experimental results show that the proposed approach can achieve better prediction performance.

  18. Efficient global optimization applied to wind tunnel evaluation-based optimization for improvement of flow control by plasma actuators

    NASA Astrophysics Data System (ADS)

    Kanazaki, Masahiro; Matsuno, Takashi; Maeda, Kengo; Kawazoe, Hiromitsu

    2015-09-01

    A kriging-based genetic algorithm called efficient global optimization (EGO) was employed to optimize the parameters for the operating conditions of plasma actuators. The aerodynamic performance was evaluated by wind tunnel testing to overcome the disadvantages of time-consuming numerical simulations. The proposed system was used on two design problems to design the power supply for a plasma actuator. The first case was the drag minimization problem around a semicircular cylinder. In this case, the inhibitory effect of flow separation was also observed. The second case was the lift maximization problem around a circular cylinder. This case was similar to the aerofoil design, because the circular cylinder has potential to work as an aerofoil owing to the control of the flow circulation by the plasma actuators with four design parameters. In this case, applicability to the multi-variant design problem was also investigated. Based on these results, optimum designs and global design information were obtained while drastically reducing the number of experiments required compared to a full factorial experiment.
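
    A sketch of one EGO iteration under standard assumptions: a kriging (Gaussian process) surrogate is fitted to the experiments run so far, and the next wind-tunnel test is placed at the candidate maximizing expected improvement. The 1-D toy objective stands in for the measured aerodynamic performance; the paper's parameterization and kernel choices may differ.

```python
# One EGO iteration: kriging surrogate + expected improvement acquisition.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def measure(x):                      # placeholder for a wind-tunnel evaluation
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.1], [0.5], [0.9], [1.6]])       # experiments run so far
y = measure(X.ravel())

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
cand = np.linspace(0, 2, 400).reshape(-1, 1)
mu, sd = gp.predict(cand, return_std=True)

best = y.min()                                   # minimization convention
z = (best - mu) / np.maximum(sd, 1e-12)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
x_next = cand[ei.argmax(), 0]
print(f"next experiment at x = {x_next:.3f}")
```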

  19. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project for the firm's activities. The solution of a particular problem of this type is presented.

  20. Efficient design of a truss beam by applying first order optimization method

    NASA Astrophysics Data System (ADS)

    Fedorik, Filip

    2013-10-01

    Applications of optimization procedures in structural design are widely discussed problems, driven by the still-increasing demands placed on structures. The use of optimization methods in efficient design is undergoing great development, especially in series production, where even small savings can lead to a considerable reduction in total costs. The presented paper deals with the application and analysis of the first-order optimization technique implemented in the Design Optimization module, which uses the main features of the multiphysics FEM program ANSYS, in steel truss-beam design. Constraints of the design are stated by EN 1993 Eurocode 3 for uniform compression forces in compression members and tensile resistance moments in tension members. Furthermore, a minimum frequency of the first natural mode shape of the structure is prescribed. The aim of the solution is to minimize the weight of the structure by changing the members' cross-section properties.

  1. A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH

    PubMed Central

    CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG

    2013-01-01

    We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536

  2. A General Multidisciplinary Turbomachinery Design Optimization System Applied to a Transonic Fan

    NASA Astrophysics Data System (ADS)

    Nemnem, Ahmed Mohamed Farid

    The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators and aeroengines. A new airfoil section design capability has been added to an open-source parametric 3D blade design tool. The curvature of the meanline is controlled using B-splines to create the airfoils: the curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to guarantee a smooth airfoil shape while maintaining a prescribed thickness distribution. A leading-edge B-spline definition has also been implemented to achieve customized airfoil leading edges, guaranteeing smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case; this design is more general than a conventional one, providing access to other design methodologies. The whole mechanical and aerodynamic design loop is automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axisymmetric design and analysis code T-AXI. The main and splitter blades are parametrically designed with the geometry builder 3DBGB using the newly added curvature technique. The solid model of the rotor sector with periodic boundaries, combining the main blade and splitter and including the hub, fillets, and tip clearance, is created by a MATLAB code directly connected to SolidWorks. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping the maximum stress, with a safety factor, as a constraint. A genetic algorithm followed by a numerical-gradient optimization strategy is used in the mechanical optimization. The splittered transonic fan blade mass is reduced by 2.6% while constraining the maximum
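
    The curvature-to-meanline construction mentioned above can be sketched numerically: integrate a prescribed curvature distribution once to obtain the blade angle, then integrate the angle to trace the meanline. The linear curvature law and inlet angle below are hypothetical placeholders for the tool's B-spline curvature control.

    import numpy as np

    s = np.linspace(0.0, 1.0, 201)            # normalized chordwise coordinate
    curvature = 2.0 - 4.0 * s                 # placeholder for a B-spline curvature law
    theta0 = np.deg2rad(40.0)                 # assumed inlet blade angle

    def cumtrap(f, x):
        # cumulative trapezoidal integral starting at zero
        return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))

    theta = theta0 + cumtrap(curvature, s)    # angle = integral of curvature
    x = cumtrap(np.cos(theta), s)             # meanline = integral of the angle
    y = cumtrap(np.sin(theta), s)
    print("outlet blade angle [deg]:", round(float(np.rad2deg(theta[-1])), 2))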

  3. Percutaneous approach to the upper thoracic spine: optimal patient positioning.

    PubMed

    Bayley, Edward; Clamp, Jonathan; Boszczyk, Bronek M

    2009-12-01

    Percutaneous access to the upper thoracic vertebrae under fluoroscopic guidance is challenging. We describe our positioning technique facilitating optimal visualisation of the high thoracic vertebrae in the prone position. This allows safe practice of kyphoplasty, vertebroplasty and biopsy throughout the upper thoracic spine.

  4. A Simulation of Optimal Foraging: The Nuts and Bolts Approach.

    ERIC Educational Resources Information Center

    Thomson, James D.

    1980-01-01

    Presents a mechanical model for an ecology laboratory that introduces the concept of optimal foraging theory. Describes the physical model which includes a board studded with protruding machine bolts that simulate prey, and blindfolded students who simulate either generalist or specialist predator types. Discusses the theoretical model and data…

  5. D-optimal design applied to binding saturation curves of an enkephalin analog in rat brain

    SciTech Connect

    Verotta, D.; Petrillo, P.; La Regina, A.; Rocchetti, M.; Tavani, A.

    1988-01-01

    The D-optimal design, a minimal-sample design that minimizes the volume of the joint confidence region for the parameters, was used to evaluate binding parameters in a saturation curve with a view to reducing the number of experimental points without losing accuracy in the binding parameter estimates. Binding saturation experiments were performed in rat brain crude membrane preparations with the opioid μ-selective ligand [³H]-(D-Ala², MePhe⁴, Gly-ol⁵)enkephalin (DAGO), using a sequential procedure. The first experiment consisted of a wide-range saturation curve, which confirmed that [³H]-DAGO binds only one class of specific sites plus non-specific sites, and gave information on the experimental range and first estimates of binding affinity (K_a), capacity (B_max) and the non-specific constant (k). On this basis the D-optimal design was computed, and sequential experiments were performed, each covering a wide-range traditional saturation curve, the D-optimal design, and a splitting of the D-optimal design with the addition of 2 points (±15% of the central point). No appreciable differences were obtained with these designs in parameter estimates and their accuracy. Thus, sequential experiments based on the D-optimal design seem a valid method for accurate determination of binding parameters, using far fewer points with no loss in parameter estimation accuracy. 25 references, 2 figures, 3 tables.
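
    The D-optimal idea can be sketched as follows: given first-stage parameter estimates, choose the concentrations that maximize the determinant of the (unit-variance) Fisher information, i.e. minimize the joint confidence volume. The one-site model with a linear non-specific term and all parameter values below are illustrative assumptions, not the DAGO estimates.

    import numpy as np
    from itertools import combinations

    Ka, Bmax, k = 2.0, 1.0, 0.05              # assumed first-stage estimates

    def jacobian(x):
        # sensitivities of y = Bmax*Ka*x/(1 + Ka*x) + k*x w.r.t. (Bmax, Ka, k)
        x = np.asarray(x, float)
        dBmax = Ka * x / (1 + Ka * x)
        dKa = Bmax * x / (1 + Ka * x) ** 2
        dk = x
        return np.stack([dBmax, dKa, dk], axis=1)

    grid = np.geomspace(0.01, 10.0, 25)       # candidate ligand concentrations
    best = max(combinations(grid, 3),         # 3 points suffice for 3 parameters
               key=lambda d: np.linalg.det(jacobian(d).T @ jacobian(d)))
    print("D-optimal 3-point design:", np.round(best, 3))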

  6. Optimal filters - A unified approach for SNR and PCE. [Peak-To-Correlation-Energy

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1993-01-01

    A unified approach for a general metric that encompasses both the signal-to-noise ratio (SNR) and the peak-to-correlation energy (PCE) ratio in optical correlators is described. In this approach, the connection between optimizing SNR and optimizing PCE is achieved by considering a metric in which the central correlation irradiance is divided by the total energy of the correlation plane. The peak-to-total energy (PTE) is shown to be optimized similarly to SNR and PCE. Since PTE is a function of the search values G and beta, the optimal filter is determined with only a two-dimensional search.

  7. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  8. A majorize-minimize strategy for subspace optimization applied to image restoration.

    PubMed

    Chouzenoux, Emilie; Idier, Jérôme; Moussaoui, Saïd

    2011-06-01

    This paper proposes accelerated subspace optimization methods in the context of image restoration. Subspace optimization methods belong to the class of iterative descent algorithms for unconstrained optimization. At each iteration of such methods, a stepsize vector allowing the best combination of several search directions is computed through a multidimensional search. It is usually obtained by an inner iterative second-order method ruled by a stopping criterion that guarantees the convergence of the outer algorithm. As an alternative, we propose an original multidimensional search strategy based on the majorize-minimize principle. It leads to a closed-form stepsize formula that ensures the convergence of the subspace algorithm whatever the number of inner iterations. The practical efficiency of the proposed scheme is illustrated in the context of edge-preserving image restoration.

  9. Economic optimization software applied to JFK airport heating and cooling plant

    SciTech Connect

    Gay, R.R.; McCoy, L.

    1995-09-01

    This paper describes the on-line economic optimization routine developed by Enter Software, Inc. for application at the heating and cooling plant for JFK International Airport near New York City. The objective of the economic optimization is to find the optimum plant configuration (which gas turbines to run, the power level of each gas turbine, duct firing levels, which auxiliary water heaters to run, which electric chillers to run, and which absorption chillers to run) that produces maximum net income at the plant as plant loads and prices vary. The routines also include a planner which runs a series of optimizations over multiple plant configurations to simulate varying plant operating conditions, for the purpose of predicting overall plant results over a period of time.

  10. Fast-convergent double-sigmoid Hopfield neural network as applied to optimization problems.

    PubMed

    Uykan, Zekeriya

    2013-06-01

    The Hopfield neural network (HNN) has been widely used in numerous different optimization problems since the early 1980s. The convergence speed of the HNN (already in high gain) eventually plays a critical role in various real-time applications. In this brief, we propose and analyze a generalized HNN which drastically improves the convergence speed of the network, and thus allows benefiting from the HNN capabilities in solving the optimization problems in real time. By examining the channel allocation optimization problem in cellular radio systems, which is NP-complete and in which fast solution is necessary due to time-varying link gains, as well as the associative memory problem, computer simulations confirm the dramatic improvement in convergence speed at the expense of using a second nonlinear function in the proposed network.
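
    For orientation, a standard single-sigmoid continuous HNN iteration looks like the sketch below; the brief's double-sigmoid activation replaces the logistic function in exactly this loop to accelerate convergence. The weights and biases here are toy values, not a channel-allocation instance.

    import numpy as np

    def hnn(W, b, gain=10.0, dt=0.01, steps=2000):
        # gradient-flow iteration on the HNN energy with sigmoid outputs
        u = np.zeros(len(b))                  # internal neuron states
        for _ in range(steps):
            v = 1.0 / (1.0 + np.exp(-gain * u))   # single-sigmoid activation
            u += dt * (-u + W @ v + b)
        return v

    W = np.array([[0.0, -2.0], [-2.0, 0.0]])  # mutual inhibition (toy constraint)
    b = np.array([1.0, 0.9])                  # slight preference for neuron 0
    print("converged outputs:", np.round(hnn(W, b), 3))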

  11. A genetic optimization approach for isolating translational efficiency bias.

    PubMed

    Raiford, Douglas W; Krane, Dan E; Doom, Travis E W; Raymer, Michael L

    2011-01-01

    The study of codon usage bias is an important research area that contributes to our understanding of molecular evolution, phylogenetic relationships, respiratory lifestyle, and other characteristics. Translational efficiency bias is perhaps the most well-studied codon usage bias, as it is frequently utilized to predict relative protein expression levels. We present a novel approach to isolating translational efficiency bias in microbial genomes. Several methods exist for isolating translational efficiency bias, but previous approaches are susceptible to the confounding influences of other, potentially dominant, biases. Additionally, existing approaches generally require both genomic sequence information and prior knowledge of a set of highly expressed genes. The novel approach provides more accurate results from sequence information alone by resisting the confounding effects of other biases. We validate this increase in accuracy on 10 microbial genomes, five of which have proven particularly difficult for existing approaches due to the presence of strong confounding biases.

  12. Analysis of modern optimal control theory applied to plasma position and current control in TFTR

    SciTech Connect

    Firestone, M.A.

    1981-09-01

    The strong compression TFTR discharge has been segmented into regions where linear dynamics can approximate the plasma's interaction with the OH and EF power supply systems. The dynamic equations for these regions are utilized within the linear optimal control theory framework to provide active feedback gains to control the plasma position and current. Methods are developed to analyze and quantitatively evaluate the quality of control in a nonlinear, more realistic simulation. Tests are made of optimal control theory's assumptions and requirements, and the feasibility of this method for TFTR is assessed.

  13. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  14. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are studied: Si[001] symmetric tilted grain boundaries and the Ag/Au-induced Si(111) surface. The genetic algorithm is found to be very efficient at finding the lowest-energy structures in both cases. Not only can structures observed in experiments be reproduced, but many new structures can also be predicted, showing that the genetic algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seem astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.
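
    A minimal genetic-algorithm loop of the kind used for such structure searches is sketched below; a one-dimensional toy "energy" stands in for the DFT or empirical-potential evaluations of real grain-boundary and surface models.

    import numpy as np

    rng = np.random.default_rng(1)
    energy = lambda x: (x - 0.7) ** 2 + 0.1 * np.sin(25 * x)   # toy energy landscape

    pop = rng.random(20)                          # initial random "structures"
    for gen in range(60):
        order = np.argsort(energy(pop))           # rank by energy (lower = fitter)
        parents = pop[order[:10]]                 # selection
        children = 0.5 * (parents + rng.permutation(parents))   # crossover
        children += 0.02 * rng.standard_normal(children.shape)  # mutation
        pop = np.concatenate([parents, children])
    best = pop[np.argmin(energy(pop))]
    print("lowest-energy structure found at x =", round(float(best), 4))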

  15. A thermodynamic approach to the affinity optimization of drug candidates.

    PubMed

    Freire, Ernesto

    2009-11-01

    High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
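
    The decomposition that drives this optimization is the standard thermodynamic relation (a textbook reminder, not a result of the paper), with R the gas constant and T the absolute temperature:

        \Delta G = \Delta H - T\,\Delta S = -RT \ln K_a

    A 10-fold improvement in affinity at T = 298 K therefore requires making ΔG more negative by RT ln 10 ≈ 1.4 kcal/mol, a gain that can be purchased through a more favorable enthalpy, a more favorable entropy, or both; ITC measures ΔH directly and yields ΔS from these two relations.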

  16. A free boundary approach to shape optimization problems.

    PubMed

    Bucur, D; Velichkov, B

    2015-09-13

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has known in recent years a series of interesting developments essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  17. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  18. Optimal perfusion during cardiopulmonary bypass: an evidence-based approach.

    PubMed

    Murphy, Glenn S; Hessel, Eugene A; Groom, Robert C

    2009-05-01

    In this review, we summarize the best available evidence to guide the conduct of adult cardiopulmonary bypass (CPB) to achieve "optimal" perfusion. At the present time, there is considerable controversy relating to appropriate management of physiologic variables during CPB. Low-risk patients tolerate mean arterial blood pressures of 50-60 mm Hg without apparent complications, although limited data suggest that higher-risk patients may benefit from mean arterial blood pressures >70 mm Hg. The optimal hematocrit on CPB has not been defined, with large data-based investigations demonstrating that both severe hemodilution and transfusion of packed red blood cells increase the risk of adverse postoperative outcomes. Oxygen delivery is determined by the pump flow rate and the arterial oxygen content and organ injury may be prevented during more severe hemodilutional anemia by increasing pump flow rates. Furthermore, the optimal temperature during CPB likely varies with physiologic goals, and recent data suggest that aggressive rewarming practices may contribute to neurologic injury. The design of components of the CPB circuit may also influence tissue perfusion and outcomes. Although there are theoretical advantages to centrifugal blood pumps over roller pumps, it has been difficult to demonstrate that the use of centrifugal pumps improves clinical outcomes. Heparin coating of the CPB circuit may attenuate inflammatory and coagulation pathways, but has not been clearly demonstrated to reduce major morbidity and mortality. Similarly, no distinct clinical benefits have been observed when open venous reservoirs have been compared to closed systems. In conclusion, there are currently limited data upon which to confidently make strong recommendations regarding how to conduct optimal CPB. There is a critical need for randomized trials assessing clinically significant outcomes, particularly in high-risk patients. PMID:19372313

  19. Principles for optimization of air cooling system applied to the two-stroke engine

    SciTech Connect

    Franco, A.; Martorano, L.

    1995-12-31

    The heat transfer process has always played an important role in internal combustion engine design. An area of particular importance is the thermal loading of engine structural components and the optimization of the engine cooling system. The engine cooling system of a vehicle makes up a significant portion of the total component cost. It also places demands on other vehicle systems, and the quality of its design is evident to the customer in terms of the power it consumes, which for a two-stroke engine with forced-convection air cooling can amount to 10% of the total brake power. Optimization of engine cooling requires the solution of the coupled problem of heat transfer from the gases to the walls and of heat convection from the structure (generally a finned surface) to the external environment. The aim of this work is to furnish some reference data about the heat transfer process and to establish criteria for optimizing the air cooling system, paying particular attention to small-displacement (50-250 cm³) two-stroke engines.

  1. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, accuracy, efficiency, and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative was incorporated for one of the more tedious phases of developing such a methodology, namely, the automatic differentiation of the flow-analysis computer code to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for shape optimization of a practical configuration with higher-fidelity simulations (TLNS and dense-grid based simulations) demanded substantial computational resources. The final improvement reported herein responds to this point by including an alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate gradient (PbCG) and other direct solvers.

  2. Optimal preview game theory approach to vehicle stability controller design

    NASA Astrophysics Data System (ADS)

    Tamaddoni, Seyed Hossein; Taheri, Saied; Ahmadian, Mehdi

    2011-12-01

    Dynamic game theory brings together different features that are key to many situations in control design: optimisation behaviour, the presence of multiple agents/players, enduring consequences of decisions, and robustness with respect to variability in the environment. In the presented methodology, vehicle stability is represented by a cooperative dynamic/difference game such that its two agents (players), namely the driver and the direct yaw controller (DYC), work together to provide more stability to the vehicle system. While the driver provides the steering wheel control, the DYC control algorithm is obtained by Nash game theory to ensure optimal performance as well as robustness to disturbances. The common two-degrees-of-freedom vehicle-handling performance model is put into discrete form to develop the game equations of motion. To evaluate the developed control algorithm, CarSim with its built-in nonlinear vehicle model along with the Pacejka tire model is used. The control algorithm is evaluated for a lane change manoeuvre, and the optimal set of steering angle and corrective yaw moment is calculated and fed to the test vehicle. Simulation results show that the optimal preview control algorithm can significantly reduce lateral velocity, yaw rate, and roll angle, all of which contribute to enhancing vehicle stability.

  3. Applying proprioceptive neuromuscular facilitation stretching: optimal contraction intensity to attain the maximum increase in range of motion in young males

    PubMed Central

    Kwak, Dong Ho; Ryu, Young Uk

    2015-01-01

    [Purpose] Proprioceptive neuromuscular facilitation (PNF) stretching is known to be effective in increasing joint ROM. The PNF stretching technique first induces an isometric contraction in the muscles to be stretched, but no agreement concerning the optimal contraction intensity has yet been reached. The purpose of the present study was to examine the effect of contraction intensity on ROM while applying PNF stretching. [Subjects and Methods] Sixty male subjects were randomly assigned to one of four groups (three experimental groups and one control group). Each experimental group applied one of three contraction intensities (100%, 60%, and 20%) defined by the MVIC ratio, and the control group did not receive any intervention during the experiment. PNF stretching was applied to left knee extensors to compare changes in the knee joint flexion angle. [Results] The results showed that the changes in ROM were larger for the 60% and 100% groups compared with the 20% group. The changes in ROM were lowest in the control group. [Conclusion] The present results indicate that while applying the PNF stretching, it is not necessary to apply the maximum intensity of muscle contraction. Moderate isometric contraction intensities may be optimal for healthy young males, while a sufficient effect can be obtained even with a low contraction intensity. PMID:26310658

  4. Optimal robust control of drug delivery in cancer chemotherapy: a comparison between three control approaches.

    PubMed

    Moradi, Hamed; Vossoughi, Gholamreza; Salarieh, Hassan

    2013-10-01

    During the drug delivery process in chemotherapy, both cancer cells and normal healthy cells may be killed. In this paper, three mathematical cell-kill models are considered: the log-kill hypothesis, the Norton-Simon hypothesis, and the Emax hypothesis. Three control approaches are developed: optimal linear regulation, nonlinear optimal control based on the variation of extremals, and H∞-robust control based on μ-synthesis. An appropriate cost function is defined such that the amount of required drug is minimized while the tumor volume is reduced. For the first time, the performance of the system is investigated and compared for the three control strategies, applied to the three nonlinear models of the process. In addition, their efficiency is compared in the presence of model parametric uncertainties. It is observed that in the presence of model uncertainties, the controller designed based on the variation of extremals is more efficient than the linear regulation controller. However, H∞-robust control is more efficient in improving the robust performance of the uncertain models, with faster tumor reduction and minimum drug usage. PMID:23891423

  5. A new approach for structural health monitoring by applying anomaly detection on strain sensor data

    NASA Astrophysics Data System (ADS)

    Trichias, Konstantinos; Pijpers, Richard; Meeuwissen, Erik

    2014-03-01

    Structural Health Monitoring (SHM) systems help to monitor critical infrastructures (bridges, tunnels, etc.) remotely and provide up-to-date information about their physical condition. In addition, they help to predict the structure's life and required maintenance in a cost-efficient way. Typically, inspection data give insight into structural health. The global structural behavior, and predominantly the structural loading, is generally measured with vibration and strain sensors. Acoustic emission sensors are increasingly used for measuring global crack activity near critical locations. In this paper, we present a procedure for local structural health monitoring by applying Anomaly Detection (AD) to data from strain sensors applied in the expected crack path. Sensor data are analyzed by automatic anomaly detection in order to find crack activity at an early stage. This approach targets the monitoring of critical structural locations, such as welds, near which strain sensors can be applied during construction, and/or locations with limited inspection possibilities during structural operation. We investigate several anomaly detection techniques to detect changes in statistical properties that indicate structural degradation. The most effective one is a novel polynomial fitting technique, which tracks slow changes in sensor data. Our approach has been tested on a representative test structure (bridge deck) in a lab environment, under constant and variable amplitude fatigue loading. In both cases, the evolving cracks at the monitored locations were successfully and autonomously detected by our AD monitoring tool.
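
    A hedged sketch of the polynomial-fitting idea: fit a low-order polynomial to successive windows of strain data and flag windows whose fitted slope departs from the baseline, indicating accelerating local deformation. The synthetic signal, window size, and threshold below are illustrative choices, not the paper's tuned detector.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(2000)
    strain = 0.001 * t + 0.2 * rng.standard_normal(2000)   # slow trend + sensor noise
    strain[1500:] += 0.02 * (t[1500:] - 1500)              # simulated crack growth

    win = 200
    slopes = [np.polyfit(np.arange(win), strain[s:s + win], 1)[0]
              for s in range(0, len(strain) - win + 1, win)]
    baseline = np.median(slopes[:5])                       # healthy-state reference
    flagged = [i * win for i, s in enumerate(slopes) if s > 5 * abs(baseline)]
    print("anomalous windows start at t =", flagged)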

  6. Niching Methods: Speciation Theory Applied for Multi-modal Function Optimization

    NASA Astrophysics Data System (ADS)

    Shir, Ofer M.; Bäck, Thomas

    While contemporary Evolutionary Algorithms (EAs) excel in various types of optimizations, their generalization to speciational subpopulations is much needed upon their deployment to multi-modal landscapes, mainly due to the typical loss of population diversity. The resulting techniques, known as niching methods, are the main focus of this chapter, which will provide the motivation, pose the problem both from the biological as well as computational perspectives, and describe algorithmic solutions. Biologically inspired by organic speciation processes, and armed with real-world incentive to obtain multiple solutions for better decision making, we shall present here the application of certain bioprocesses to multi-modal function optimization, by means of a broad overview of the existing work in the field, as well as a detailed description of specific test cases.

  7. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  8. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    The search for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands fine control from sampling to the measurement itself. In this paper we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters, and definition of energy windows by maximizing a Figure of Merit. Detection limits are then calculated for these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters; nevertheless, except for those relating to parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
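
    The window optimization can be sketched compactly: for each candidate energy window, compute the figure of merit FOM = E^2/B, with E the counting efficiency in the window and B the background in the window, and keep the maximizer. The synthetic spectra below stand in for measured Tri-Carb channel data.

    import numpy as np

    channels = np.arange(200)
    source = np.exp(-0.5 * ((channels - 40) / 15.0) ** 2)  # toy beta spectrum shape
    source /= source.sum()                                 # detection probability per channel
    background = 0.02 + 0.0005 * channels                  # toy background rate per channel

    def fom(lo, hi):
        eff = 100.0 * source[lo:hi].sum()                  # efficiency E in the window [%]
        return eff ** 2 / background[lo:hi].sum()          # E^2 / B

    best = max(((lo, hi) for lo in range(0, 150, 5) for hi in range(lo + 10, 201, 5)),
               key=lambda w: fom(*w))
    print("optimal window (channels):", best, " FOM:", round(fom(*best), 1))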

  9. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.

  10. Optimization of weld bead geometry in laser welding with filler wire process using Taguchi’s approach

    NASA Astrophysics Data System (ADS)

    Yang, Dongxia; Li, Xiaoyan; He, Dingyong; Nie, Zuoren; Huang, Hui

    2012-10-01

    In the present work, laser welding with filler wire was successfully applied to joining a new-type Al-Mg alloy. The welding parameters of laser power, welding speed, and wire feed rate were carefully selected with the objective of producing a weld joint with the minimum weld bead width and fusion zone area. The Taguchi approach was used as a statistical design-of-experiments technique for optimizing the selected welding parameters. From the experimental results, it is found that the effect of the welding parameters on weld quality decreases in the order of welding speed, wire feed rate, and laser power. The optimal combination of welding parameters is a laser power of 2.4 kW, a welding speed of 3 m/min, and a wire feed rate of 2 m/min. Verification experiments were also conducted to validate the optimized parameters.
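
    A minimal sketch of the Taguchi analysis behind such rankings: compute the smaller-is-better signal-to-noise ratio S/N = -10*log10(mean(y^2)) for each run of an L9-style orthogonal array, then rank the factors by the range of their level-mean S/N. The bead-width numbers below are invented for illustration, not the study's measurements.

    import numpy as np

    # columns: laser power, welding speed, wire feed rate (levels 1..3)
    runs = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3], [2, 1, 2], [2, 2, 3],
                     [2, 3, 1], [3, 1, 3], [3, 2, 1], [3, 3, 2]])
    bead_width = np.array([2.1, 1.8, 1.9, 1.7, 1.5, 2.0, 1.9, 1.6, 1.8])  # mm (made up)

    sn = -10 * np.log10(bead_width ** 2)      # smaller-is-better S/N per run
    for j, name in enumerate(["laser power", "welding speed", "wire feed rate"]):
        means = np.array([sn[runs[:, j] == lvl].mean() for lvl in (1, 2, 3)])
        print(f"{name:15s} level-mean S/N: {np.round(means, 2)}  range: {means.max() - means.min():.2f}")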

  11. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.

  12. High direct drive illumination uniformity achieved by multi-parameter optimization approach: a case study of Shenguang III laser facility.

    PubMed

    Tian, Chao; Chen, Jia; Zhang, Bo; Shan, Lianqiang; Zhou, Weimin; Liu, Dongxiao; Bi, Bi; Zhang, Feng; Wang, Weiwu; Zhang, Baohan; Gu, Yuqiu

    2015-05-01

    The uniformity of the compression driver is of fundamental importance for inertial confinement fusion (ICF). In this paper, the illumination uniformity on a spherical capsule during the initial imprinting phase, directly driven by laser beams, is considered. We aim to explore methods to achieve high direct-drive illumination uniformity on laser facilities designed for indirect-drive ICF. Many parameters affect the irradiation uniformity, such as the Polar Direct Drive displacement quantity, capsule radius, laser spot size, and the intensity distribution within a laser beam. A novel approach to reducing the root mean square illumination non-uniformity based on a multi-parameter optimization approach (particle swarm optimization) is proposed, which enables us to obtain a set of optimal parameters over a large parameter space. Finally, this method is applied to improve the direct-drive illumination uniformity provided by the Shenguang III laser facility, and the illumination non-uniformity is reduced from 5.62% to 0.23% for perfectly balanced beams. Moreover, beam errors (power imbalance and pointing error) are taken into account to provide a more practical solution, and results show that this multi-parameter optimization approach is effective.

  13. Operator splitting approach applied to oscillatory flow and heat transfer in a tube

    NASA Astrophysics Data System (ADS)

    Widura, R.; Lehn, M.; Muralidhar, K.; Scherer, R.

    2008-02-01

    The method of operator splitting is applied to an advection-diffusion model as it occurs in a pulse tube. First, the governing equations of the simplified model are studied and the mathematical description is derived. Then the splitting approach is used to separate the advection and diffusion parts. It turns out that the advective part can be solved analytically; the computational cost is therefore reduced and the accuracy increased. It is shown that the method can model an effect called Taylor dispersion. By applying a domain decomposition strategy, the solution process can be decoupled, reducing the numerical cost even further. This procedure allows the relevant parameters within the model to be studied, with the goal of maximizing the amount of energy stored within the tube wall. As a measure of efficiency, the amount of energy transferred between the fluid phase and the wall is chosen.
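
    A hedged sketch of the splitting idea on a 1-D periodic advection-diffusion model: each time step advances the advective part exactly (a phase shift of the profile) and then applies the diffusion part separately. The wall coupling and tube geometry of the paper are omitted.

    import numpy as np

    n, L, u, D, dt = 256, 1.0, 1.0, 1e-3, 1e-3    # grid size, domain, velocity, diffusivity, step
    x = np.linspace(0.0, L, n, endpoint=False)
    T = np.exp(-100.0 * (x - 0.3) ** 2)           # initial temperature pulse
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # Fourier wavenumbers

    for _ in range(200):
        That = np.fft.fft(T)
        That *= np.exp(-1j * k * u * dt)          # advection sub-step: exact phase shift
        That *= np.exp(-D * k ** 2 * dt)          # diffusion sub-step: exact modal decay
        T = np.fft.ifft(That).real

    print("pulse peak now at x =", round(float(x[np.argmax(T)]), 3))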

  14. Learning About Dying and Living: An Applied Approach to End-of-Life Communication.

    PubMed

    Pagano, Michael P

    2016-08-01

    The purpose of this article is to expand on prior research in end-of-life communication and death and dying communication apprehension by describing a unique course that utilizes a hospice setting and an applied, service-learning approach. This essay describes and discusses both the students' and my experiences over a 7-year period from 2008 through 2014. The courses taught during this time frame provided an opportunity to analyze students' responses, experiences, and discoveries across semesters/years and cocultures. This unique, 3-credit, 14-week, service-learning, end-of-life communication course was developed to provide an opportunity for students to learn the theories related to this field of study and to apply that knowledge through volunteer experiences and interactions with dying patients and their families. Seven years of the author's notes, plus three electronically submitted reflection essays from each of the 91 students (273 documents in total) across four courses/years, served as the data for this study. According to the students, verbally in class discussions and in numerous writing assignments, this course helped lower their death and dying communication apprehension and increased their willingness to interact with hospice patients and their families. Furthermore, the students' final research papers clearly demonstrated how the service-learning approach allowed them to apply classroom learnings, and their interactions with dying patients and their families at the hospice, to their analyses of end-of-life communication theories and behaviors. The results of these classes suggest that other difficult-topic courses (e.g., domestic violence, addiction, etc.) might benefit from a similar pedagogical approach.

  15. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    PubMed

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery. PMID:26932466

  16. Random matrix theory for portfolio optimization: a stability approach

    NASA Astrophysics Data System (ADS)

    Sharifi, S.; Crane, M.; Shamaie, A.; Ruskin, H.

    2004-04-01

    We apply random matrix theory (RMT) to an empirically measured financial correlation matrix, C, and show that this matrix contains a large amount of noise. In order to determine the sensitivity of the spectral properties of a random matrix to noise, we simulate a set of data and add different volumes of random noise. Having ascertained that the eigenspectrum is independent of the standard deviation of the added noise, we use RMT to determine the noise percentage in a correlation matrix based on real data from the S&P500. Eigenvalue and eigenvector analyses are applied, and the experimental results for each are presented to identify, qualitatively and quantitatively, the spectral properties of the empirical correlation matrix that differ from those of a random counterpart. Finally, we attempt to separate the noisy part of C from the non-noisy part. We apply an existing technique for cleaning C and discuss its associated problems, then propose a filtering technique that has many advantages, from the stability point of view, over the existing method of cleaning.
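
    The cleaning step can be illustrated with the Marchenko-Pastur bound: eigenvalues of the empirical correlation matrix below the noise edge lambda_+ = (1 + sqrt(N/T))^2 are treated as noise and flattened to their average, preserving the trace. Synthetic returns replace the S&P500 data, and the paper's stability-oriented filter differs in detail.

    import numpy as np

    rng = np.random.default_rng(3)
    N, T = 100, 500                                    # assets, observations
    returns = rng.standard_normal((T, N)) + 0.3 * rng.standard_normal((T, 1))  # noise + common mode
    C = np.corrcoef(returns, rowvar=False)

    lam, V = np.linalg.eigh(C)
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2             # Marchenko-Pastur upper edge
    noise = lam < lam_plus
    lam_clean = lam.copy()
    lam_clean[noise] = lam[noise].mean()               # flatten the noise band, keep the trace
    C_clean = (V * lam_clean) @ V.T                    # reassembled, "cleaned" correlation matrix
    print(f"{noise.sum()} of {N} eigenvalues fall in the noise band")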

  17. A multi-label approach using binary relevance and decision trees applied to functional genomics.

    PubMed

    Tanaka, Erica Akemi; Nozawa, Sérgio Ricardo; Macedo, Alessandra Alaniz; Baranauskas, José Augusto

    2015-04-01

    Many classification problems, especially in the field of bioinformatics, are associated with more than one class, known as multi-label classification problems. In this study, we propose a new adaptation for the Binary Relevance algorithm taking into account possible relations among labels, focusing on the interpretability of the model, not only on its performance. Experiments were conducted to compare the performance of our approach against others commonly found in the literature and applied to functional genomic datasets. The experimental results show that our proposal has a performance comparable to that of other methods and that, at the same time, it provides an interpretable model from the multi-label problem.
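
    A minimal binary-relevance sketch: one decision tree is trained per label, so each tree stays individually interpretable. Random toy data replaces the functional-genomics sets, and the paper's adaptation additionally lets label information flow between the binary problems.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    X = rng.standard_normal((300, 10))                              # gene features (toy)
    Y = (X[:, :3] + 0.5 * rng.standard_normal((300, 3)) > 0).astype(int)  # three binary labels

    # binary relevance: an independent, shallow (hence readable) tree per label
    trees = [DecisionTreeClassifier(max_depth=3).fit(X, Y[:, j]) for j in range(Y.shape[1])]
    pred = np.column_stack([t.predict(X) for t in trees])
    print("per-label training accuracy:", (pred == Y).mean(axis=0).round(2))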

  18. Applying human rights to maternal health: UN Technical Guidance on rights-based approaches.

    PubMed

    Yamin, Alicia Ely

    2013-05-01

    In the last few years there have been several critical milestones in acknowledging the centrality of human rights to sustainably addressing the scourge of maternal death and morbidity around the world, including from the United Nations Human Rights Council. In 2012, the Council adopted a resolution welcoming a Technical Guidance on rights-based approaches to maternal mortality and morbidity, and calling for a report on its implementation in 2 years. The present paper provides an overview of the contents and significance of the Guidance. It reviews how the Guidance can assist policymakers in improving women's health and their enjoyment of rights by setting out the implications of adopting a human rights-based approach at each step of the policy cycle, from planning and budgeting, to ensuring implementation, to monitoring and evaluation, to fostering accountability mechanisms. The Guidance should also prove useful to clinicians in understanding rights frameworks as applied to maternal health.

  19. An optimal dynamic inversion-based neuro-adaptive approach for treatment of chronic myelogenous leukemia.

    PubMed

    Padhi, Radhakant; Kothari, Mangal

    2007-09-01

    Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat the nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters for a realistic patient can differ from those of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e. the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates the new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics and boundedness of the network weights. Simulation studies show this adaptive control design approach to be very effective in treating CML for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.

  20. Particle Swarm Optimization Approach in a Consignment Inventory System

    NASA Astrophysics Data System (ADS)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer but is still owned by the supplier. This creates a condition of shared risk, whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. A Particle Swarm Optimization (PSO) algorithm is developed to calculate the appropriate values. Finally, a sensitivity analysis is performed to examine the effects of each parameter on the decision variables, and PSO performance is compared with that of a genetic algorithm.
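
    A minimal particle swarm update loop of the kind used here is sketched below; a toy two-dimensional cost surface stands in for the integrated vendor/retailer cost model.

    import numpy as np

    rng = np.random.default_rng(5)
    cost = lambda p: (p[:, 0] - 3) ** 2 + (p[:, 1] - 1) ** 2 + 2 * np.abs(p[:, 0] * p[:, 1])

    pos = rng.uniform(-5.0, 5.0, (30, 2))          # particle positions (decision variables)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), cost(pos)       # personal bests
    for _ in range(200):
        gbest = pbest[np.argmin(pbest_val)]        # global best so far
        r1, r2 = rng.random((2, 30, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        val = cost(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
    print("best decision found:", np.round(pbest[np.argmin(pbest_val)], 3))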

  1. Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations

    SciTech Connect

    McKinney, W.R.

    1989-01-01

    Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and the Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization. Furthermore, these approximations may be of arbitrarily high order. In particular, it is shown that the well-known order reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow motion manifold for the Cahn-Hilliard equation are also provided.

  2. Uncertainty optimization applied to the Monte Carlo analysis of planetary entry trajectories

    NASA Astrophysics Data System (ADS)

    Way, David Wesley

    2001-10-01

    Future robotic missions to Mars, as well as any human missions, will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Planning for these missions will depend heavily on Monte Carlo analyses to evaluate active guidance algorithms, assess the impact of off-nominal conditions, and account for uncertainty. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast output statistics. An improvement to Monte Carlo analysis is needed that allows the problem to be worked in reverse, so that the largest allowable dispersions that still achieve the required mission objectives can be determined quantitatively. This thesis proposes a methodology to optimize the uncertainties in the Monte Carlo analysis of spacecraft landing footprints. A metamodel is used to first write polynomial expressions for the size of the landing footprint as functions of the independent uncertainty extrema. The coefficients of the metamodel are determined by performing experiments. The metamodel is then used in a constrained optimization procedure to minimize a cost-tolerance function. First, a two-dimensional proof-of-concept problem was used to evaluate the feasibility of this optimization method. Next, the optimization method was demonstrated on the Mars Surveyor Program 2001 Lander, showing that the methodology developed during the proof-of-concept could be scaled to solve larger, more complicated, "real world" problems. This research has shown that it is possible to control the size of the landing footprint and establish tolerances for mission uncertainties. A simplified metamodel was developed, which is enabling for realistic problems with more than just a few uncertainties. A confidence interval on

  3. The Goodwyn Field - an integrated approach to optimal field development

    SciTech Connect

    Newman, S.H.; Taylor, N.C.

    1996-12-31

    The Goodwyn gas field is located some 130 km offshore of Western Australia in a water depth of 130 m and is currently under development; first production commenced in February 1995. The rich gas (CGR ~90 bbl/MMscf) is trapped within fluvio-deltaic reservoirs of the Triassic Mungaroo Formation in a large rotated fault block on the northwestern edge of the Dampier Sub-Basin. The reservoir units, ranging in thickness between 30 and 80 meters, dip gently below the overlying Cretaceous shales, which provide the updip seal. The target production levels and ultimate recovery are based on the optimization of gas recycling along strike in the individual reservoir units, and the success of the development plan depends on an accurate model of the reservoir architecture. Prior to development drilling, only four wells had penetrated the primary reservoir units, so successful development planning required the recognition and management of key subsurface uncertainties. Integration between seismic interpretation, stochastic reservoir modelling, and reservoir engineering proved essential to achieving the development objectives. A detailed evaluation of the reservoir stratigraphy, sedimentology, high-resolution seismic, and high-resolution palynology provided the framework for the 3D stochastic reservoir modelling, which converted the information into a number of geological realizations; these were then used to generate a family of dynamic reservoir models, and the locations of the various development wells were optimized on a risked basis. Seven development wells have now been drilled, and although these wells have shown more variability than originally envisaged, the broad framework of the reservoir model remains robust.

  4. A suggested approach to applying IAEA safeguards to plutonium in weapons components

    SciTech Connect

    Lu, M.S.; Allentuck, J.

    1998-08-01

    It is the announced policy of the United States to make fissile material removed from its nuclear weapons stockpile subject to the US-IAEA voluntary safeguards agreement. Much of this material is plutonium in the form of pits. The application of traditional IAEA safeguards would reveal Restricted Data to unauthorized persons, which is prohibited by US law and international treaties. Until a facility is available for converting the plutonium in the pits to a non-sensitive form, this obvious long-term solution to the problem is foreclosed, and an alternative near-term approach to applying IAEA safeguards while preserving the necessary degree of confidentiality is required. This paper identifies such an approach. It presents in detail the form of the US declaration, the safeguards objectives which are met, the inspection techniques which are utilized, and the conclusions which the IAEA could reach concerning the contents of each item and the aggregate of all items. The approach would reveal the number of containers and the aggregate mass of plutonium in a set of n containers presented to the IAEA for verification, while protecting data on the isotopic composition and plutonium mass of individual components. The suggested approach provides for traceability from the time the containers are sealed until the conversion of the plutonium to a non-sensitive form.

  6. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    NASA Astrophysics Data System (ADS)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm and, in particular, the critical role range reduction techniques can play in RLT-based branch-and-bound methods. Results also indicate that using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.
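
    The lower-bounding machinery described above can be illustrated with the textbook RLT/McCormick linearization of a single bilinear term. The sketch below is a minimal illustration, assuming a toy problem (minimize x*y over a box); the variable bounds, objective, and use of scipy's linprog are my own choices, not the model from the paper.

```python
# Minimal sketch: RLT/McCormick linear relaxation of min x*y over a box,
# the kind of lower-bounding problem solved inside branch-and-reduce.
# All numbers are illustrative assumptions, not the paper's data.
from scipy.optimize import linprog

xL, xU = -1.0, 2.0   # bounds on x (tightened by range reduction in practice)
yL, yU = -1.0, 2.0   # bounds on y

# RLT constraints for w ~ x*y, written as A @ [x, y, w] <= b:
#   w >= xL*y + yL*x - xL*yL      w >= xU*y + yU*x - xU*yU
#   w <= xU*y + yL*x - xU*yL      w <= xL*y + yU*x - xL*yU
A = [[ yL,  xL, -1.0],
     [ yU,  xU, -1.0],
     [-yL, -xU,  1.0],
     [-yU, -xL,  1.0]]
b = [xL * yL, xU * yU, -xU * yL, -xL * yU]

# Minimize w, the linear stand-in for the bilinear objective x*y.
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A, b_ub=b,
              bounds=[(xL, xU), (yL, yU), (None, None)])
print("lower bound on x*y over the box:", res.fun)  # -2.0 here (tight)
# Branch-and-reduce shrinks [xL, xU] x [yL, yU] and re-solves, so the
# relaxation gap closes as the boxes get smaller.
```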

  7. Optimal Diagnostic Approaches for Patients with Suspected Small Bowel Disease

    PubMed Central

    Kim, Jae Hyun; Moon, Won

    2016-01-01

    While the domain of gastrointestinal endoscopy has made great strides over the last several decades, endoscopic assessment of the small bowel continues to be challenging. Recently, with the development of new technology including video capsule endoscopy, device-assisted enteroscopy, and computed tomography/magnetic resonance enterography, a more thorough investigation of the small bowel is possible. In this article, we review the systematic approach for patients with suspected small bowel disease based on these advanced endoscopic and imaging systems. PMID:27334413

  8. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, band detection accuracy is limited by a detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (Android) and the Apple App Store (iOS).
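
    As a rough illustration of the search strategy named above, the bandit-style UCB rule at the core of MCTS-UCB can be sketched as follows; the candidate pipelines, scoring noise, and constants are hypothetical placeholders, not GelApp's actual search space.

```python
# Toy sketch of the UCB selection rule used by MCTS-UCB: treat each candidate
# image-processing pipeline as a bandit arm and evaluate the one with the
# best upper confidence bound. Pipelines and scores are hypothetical.
import math, random

random.seed(0)
pipelines = ["median+otsu", "gauss+otsu", "gauss+watershed", "clahe+canny"]
true_score = {"median+otsu": 0.55, "gauss+otsu": 0.62,
              "gauss+watershed": 0.71, "clahe+canny": 0.48}

counts = {p: 0 for p in pipelines}    # n_i: times pipeline i was evaluated
totals = {p: 0.0 for p in pipelines}  # cumulative segmentation score of i

def evaluate(p):
    # Stand-in for running pipeline p on a gel image and scoring the bands.
    return true_score[p] + random.gauss(0.0, 0.05)

for t in range(1, 201):
    # UCB1: mean reward plus an exploration bonus that shrinks with visits.
    def ucb(p):
        if counts[p] == 0:
            return float("inf")       # force one evaluation of each arm
        mean = totals[p] / counts[p]
        return mean + math.sqrt(2.0 * math.log(t) / counts[p])
    best = max(pipelines, key=ucb)
    totals[best] += evaluate(best)
    counts[best] += 1

print(max(pipelines, key=lambda p: totals[p] / max(counts[p], 1)))
```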

  9. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, band detection accuracy is limited by a detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (Android) and the Apple App Store (iOS). PMID:27251892

  10. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

    A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) for the treatment of dermatologic diseases such as actinic keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into the textile in order to emit the injected light laterally. Previous studies confirmed that the light power can be locally controlled by modifying the radius of POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm²) and to guarantee the best possible light delivery homogeneity over the LEF, two objectives which are often contradictory. Fifteen experiments were carried out with a Doehlert experimental design involving Response Surface Methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.

  11. Multiple response optimization applied to the development of a capillary electrophoretic method for pharmaceutical analysis.

    PubMed

    Candioti, Luciana Vera; Robles, Juan C; Mantovani, Víctor E; Goicoechea, Héctor C

    2006-03-15

    Multiple response simultaneous optimization by using the desirability function was used for the development of a capillary electrophoresis method for the simultaneous determination of four active ingredients in pharmaceutical preparations: vitamins B6 and B12, dexamethasone and lidocaine hydrochloride. Five responses were simultaneously optimized: the three resolutions, the analysis time and the capillary current. This latter response was taken into account in order to improve the quality of the separations. The separation was carried out by using capillary zone electrophoresis (CZE) with a silica capillary and UV detection (240 nm). The optimum conditions were: 57.0 mmol l⁻¹ sodium phosphate buffer solution, pH 7.0 and voltage = 17.2 kV. Good results concerning precision (CV lower than 2%), accuracy (recoveries ranged between 98.5 and 102.6%) and selectivity were obtained in the concentration range studied for the four compounds. These results are comparable to those provided by the reference high performance liquid chromatography (HPLC) technique.
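
    The desirability machinery referred to above can be sketched with Derringer-type transforms and a geometric-mean overall score; the response surfaces, targets, and search grid below are invented for illustration and do not reproduce the published models.

```python
# Sketch of multiple-response optimization with a Derringer-type desirability
# function: each response is mapped to [0, 1] and the geometric mean is
# maximized over a grid. Response models and targets are hypothetical.
import itertools, math

def d_larger_is_better(y, lo, hi):
    return 0.0 if y <= lo else 1.0 if y >= hi else (y - lo) / (hi - lo)

def d_smaller_is_better(y, lo, hi):
    return 1.0 if y <= lo else 0.0 if y >= hi else (hi - y) / (hi - lo)

def responses(buffer_mM, kV):
    # Hypothetical response surfaces (resolution, minutes, microamps).
    resolution = 1.2 + 0.02 * buffer_mM - 0.00015 * buffer_mM**2 + 0.03 * kV
    time_min   = 25.0 - 0.6 * kV + 0.05 * buffer_mM
    current_uA = 0.8 * buffer_mM * kV / 17.0
    return resolution, time_min, current_uA

best = None
for buffer_mM, kV in itertools.product(range(20, 101), range(10, 26)):
    r, t, i = responses(buffer_mM, kV)
    ds = [d_larger_is_better(r, 1.5, 3.0),     # want high resolution
          d_smaller_is_better(t, 5.0, 20.0),   # want short run time
          d_smaller_is_better(i, 40.0, 90.0)]  # keep capillary current low
    D = math.prod(ds) ** (1.0 / len(ds))       # overall desirability
    if best is None or D > best[0]:
        best = (D, buffer_mM, kV)
print(best)  # (overall desirability, buffer concentration, voltage)
```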

  12. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  13. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  14. Optimized molecular dynamics force fields applied to the helix-coil transition of polypeptides.

    PubMed

    Best, Robert B; Hummer, Gerhard

    2009-07-01

    Obtaining the correct balance of secondary structure propensities is a central priority in protein force-field development. Given that current force fields differ significantly in their alpha-helical propensities, a correction to match experimental results would be highly desirable. We have determined simple backbone energy corrections for two force fields to reproduce the fraction of helix measured in short peptides at 300 K. As validation, we show that the optimized force fields produce results in excellent agreement with nuclear magnetic resonance experiments for folded proteins and short peptides not used in the optimization. However, despite the agreement at ambient conditions, the dependence of the helix content on temperature is too weak, a problem shared with other force fields. A fit of the Lifson-Roig helix-coil theory shows that both the enthalpy and entropy of helix formation are too small: the helix extension parameter w agrees well with experiment, but its entropic and enthalpic components are both only about half the respective experimental estimates. Our structural and thermodynamic analyses point toward the physical origins of these shortcomings in current force fields, and suggest ways to address them in future force-field development.
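
    For readers unfamiliar with Lifson-Roig statistics, the helix fraction discussed above can be computed by brute force for a short chain; the sketch below assumes illustrative w and v values and coil-like chain ends, not the fitted force-field parameters.

```python
# Brute-force sketch of Lifson-Roig helix-coil statistics for a short peptide:
# each residue is helical (h) or coil (c); an h flanked by h on both sides
# gets weight w, any other h gets weight v, and c gets 1. Values of w and v
# below are illustrative only.
from itertools import product

def helix_fraction(N, w, v):
    Z = 0.0            # partition function
    n_h = 0.0          # Boltzmann-weighted count of helical residues
    for conf in product("hc", repeat=N):
        weight = 1.0
        for i, s in enumerate(conf):
            if s == "h":
                left  = conf[i - 1] if i > 0 else "c"      # ends are coil-like
                right = conf[i + 1] if i < N - 1 else "c"
                weight *= w if (left == "h" and right == "h") else v
        Z += weight
        n_h += weight * conf.count("h")
    return n_h / (Z * N)

# Helix content rises steeply with w, the helix-extension parameter the
# authors compare against experiment (v plays the nucleation role).
for w in (1.2, 1.4, 1.6):
    print(w, round(helix_fraction(N=12, w=w, v=0.05), 3))
```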

  15. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is to improve the matching score in the recognition of parts through matching algorithms, and hence the reliability of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.
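
    A minimal genetic-algorithm loop of the kind described might look like the sketch below, with the camera-based image-quality score replaced by a synthetic placeholder; the population size, mutation rate, and pretend optimum pattern are all assumptions.

```python
# Minimal genetic-algorithm sketch for spatial illumination optimization:
# a chromosome is a coarse grid of projector intensities and the fitness is
# an image-quality score. The quality function here is a synthetic stand-in
# for the real feature-extraction score from the vision system.
import random

random.seed(1)
GRID = 16                       # 4x4 projector intensity patches, in [0, 1]
TARGET = [0.5 + 0.4 * ((i % 4) / 3.0) for i in range(GRID)]  # pretend optimum

def image_quality(pattern):
    # Placeholder: real code would project `pattern`, grab a camera frame
    # and score feature contrast in the region of interest.
    return -sum((p - t) ** 2 for p, t in zip(pattern, TARGET))

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if image_quality(a) > image_quality(b) else b

pop = [[random.random() for _ in range(GRID)] for _ in range(30)]
for gen in range(60):
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, GRID)             # one-point crossover
        child = p1[:cut] + p2[cut:]
        for i in range(GRID):                       # small Gaussian mutation
            if random.random() < 0.1:
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
        nxt.append(child)
    pop = nxt

best = max(pop, key=image_quality)
print("best quality:", round(image_quality(best), 4))
```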

  16. Combinatorial optimization of long-term maneuver sequences applied to geostationary orbit control

    NASA Astrophysics Data System (ADS)

    Haerting, A.; Meixner, H.

    Eccentricity control schemes for geostationary satellites are discussed considering realistic mission profiles. The advantages of long-term maneuver planning are outlined in terms of fuel savings, number of maneuvers, orbit control accuracy and safety. The planning ahead of a dozen or more maneuvers involves, in particular, the selection of discrete alternatives such as thruster branches and time intervals. The proposed planning scheme is developed by heuristic augmentation taking the TVSat-2 spacecraft as an example. Then it is formalized in the framework of combinatorial optimization by adapting the method of simulated annealing to maneuver sequences. Results for TVSat-2 are shown and compared to actual mission experience. As a conclusion, optimal control of the eccentricity vector need not be along a sun-pointing perigee circle, but along a more sophisticated path depending on spacecraft characteristics. The number of double east-west maneuvers is reduced to two per year and these are scheduled when the eccentricity is smallest. The long-term planning scheme is also demonstrated for contingency analyses.
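
    The adaptation of simulated annealing to discrete maneuver sequences can be sketched as below; the branch costs and switching penalty are invented stand-ins for the real fuel and eccentricity-control objectives.

```python
# Sketch of simulated annealing over a discrete maneuver sequence, in the
# spirit of the combinatorial formulation described above. Each slot picks
# a thruster branch (0/1); the cost function is a made-up placeholder.
import math, random

random.seed(2)
N = 14                                   # planned maneuvers in the horizon

def cost(seq):
    fuel = sum(0.8 if b == 0 else 1.0 for b in seq)           # branch cost
    switches = sum(seq[i] != seq[i + 1] for i in range(N - 1))
    return fuel + 0.3 * switches         # penalize frequent branch changes

seq = [random.randint(0, 1) for _ in range(N)]
T = 2.0
while T > 1e-3:
    trial = seq[:]
    trial[random.randrange(N)] ^= 1      # flip one discrete choice
    dE = cost(trial) - cost(seq)
    if dE < 0 or random.random() < math.exp(-dE / T):
        seq = trial                      # accept improvements and, early on,
                                         # some uphill moves to escape minima
    T *= 0.995                           # geometric cooling schedule
print(seq, round(cost(seq), 2))
```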

  17. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
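
    A bare-bones firefly-algorithm loop illustrating the attraction mechanics is sketched below; the quadratic cost is a placeholder for the FEM-based remediation objective, and all constants are assumptions.

```python
# Compact firefly-algorithm sketch: brighter (lower-cost) fireflies attract
# dimmer ones with an attractiveness that decays with distance. The cost
# function is a generic placeholder, not the FEM-based remediation model.
import math, random

random.seed(3)
DIM, N_FLIES, ITERS = 2, 15, 100
beta0, gamma, alpha = 1.0, 1.0, 0.05     # attraction, decay, random step

def cost(x):          # stand-in for pumping cost from the FEM simulation
    return sum(xi ** 2 for xi in x)

flies = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_FLIES)]
for _ in range(ITERS):
    for i in range(N_FLIES):
        for j in range(N_FLIES):
            if cost(flies[j]) < cost(flies[i]):      # j is brighter than i
                r2 = sum((a - b) ** 2 for a, b in zip(flies[i], flies[j]))
                beta = beta0 * math.exp(-gamma * r2) # attractiveness
                flies[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(flies[i], flies[j])]
print(min(round(cost(f), 6) for f in flies))
```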

  18. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems

    NASA Astrophysics Data System (ADS)

    Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.

    2016-09-01

    In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in the fractional integral form and whose dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality in the form of a Pontryagin maximum principle for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
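
    For orientation, the general structure of such fractional necessary conditions, in the commonly used Agrawal-type form, is sketched below; this indicates the shape of the conditions only and is not a restatement of the authors' theorem.

```latex
% Structure of fractional necessary conditions (Agrawal-type formulation).
% For Caputo dynamics  {}^{C}_{0}D_t^{\alpha} x = f(x,u,t)  and cost
% J = \int_0^T L(x,u,t)\,dt, define the Hamiltonian H = L + \lambda^{\top} f.
\begin{align*}
  {}^{C}_{0}D_t^{\alpha}\, x &= \frac{\partial H}{\partial \lambda}, &
  {}_{t}D_T^{\alpha}\, \lambda &= \frac{\partial H}{\partial x}, &
  \frac{\partial H}{\partial u} &= 0,
\end{align*}
% where {}_{t}D_T^{\alpha} is the right Riemann-Liouville derivative and
% \lambda(T) = 0 when the terminal state is free.
```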

  19. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359

  20. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  1. New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach.

    PubMed

    Pallagi, Edina; Karimi, Keyhaneh; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó

    2016-09-10

    The current work outlines the application of an up-to-date and regulatory-based pharmaceutical quality management method, applied as a new development concept in the process of formulating dry powder inhalation systems (DPIs). According to the Quality by Design (QbD) methodology and Risk Assessment (RA) thinking, a mannitol based co-spray dried formula was produced as a model dosage form with meloxicam as the model active agent. The concept and the elements of the QbD approach (regarding its systemic, scientific, risk-based, holistic, and proactive nature with defined steps for pharmaceutical development), as well as the experimental drug formulation (including the technological parameters assessed and the methods and processes applied) are described in the current paper. Findings of the QbD based theoretical prediction and the results of the experimental development are compared and presented. Characteristics of the developed end-product were in correlation with the predictions, and all data were confirmed by the relevant results of the in vitro investigations. These results support the importance of using the QbD approach in new drug formulation, and prove its good usability in the early development process of DPIs. This innovative formulation technology and product appear to have a great potential in pulmonary drug delivery.

  2. New aspects of developing a dry powder inhalation formulation applying the quality-by-design approach.

    PubMed

    Pallagi, Edina; Karimi, Keyhaneh; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó

    2016-09-10

    The current work outlines the application of an up-to-date and regulatory-based pharmaceutical quality management method, applied as a new development concept in the process of formulating dry powder inhalation systems (DPIs). According to the Quality by Design (QbD) methodology and Risk Assessment (RA) thinking, a mannitol based co-spray dried formula was produced as a model dosage form with meloxicam as the model active agent. The concept and the elements of the QbD approach (regarding its systemic, scientific, risk-based, holistic, and proactive nature with defined steps for pharmaceutical development), as well as the experimental drug formulation (including the technological parameters assessed and the methods and processes applied) are described in the current paper. Findings of the QbD based theoretical prediction and the results of the experimental development are compared and presented. Characteristics of the developed end-product were in correlation with the predictions, and all data were confirmed by the relevant results of the in vitro investigations. These results support the importance of using the QbD approach in new drug formulation, and prove its good usability in the early development process of DPIs. This innovative formulation technology and product appear to have a great potential in pulmonary drug delivery. PMID:27386791

  3. Approaching the optimal transurethral resection of a bladder tumor.

    PubMed

    Jurewicz, Michael; Soloway, Mark S

    2014-06-01

    A complete transurethral resection of a bladder tumor (TURBT) is essential for adequately diagnosing, staging, and treating bladder cancer. A TURBT is deceptively difficult and is a highly underappreciated procedure. An incomplete resection is the major reason for the high incidence of recurrence following initial transurethral resection, and thus for the suboptimal care of our patients. Our objective was to review the preoperative, intraoperative, and postoperative considerations for performing an optimal TURBT. The European Association of Urology, Society of International Urology, and American Urological Association guidelines emphasize a complete resection of all visible tumor during a TURBT. This review will emphasize the various techniques and treatments, including photodynamic cystoscopy, intravesical chemotherapy, and a perioperative checklist, that can be used to help enable a complete resection and reduce the recurrence rate. A Medline/PubMed search was completed for original and review articles related to transurethral resection and the treatment of non-muscle-invasive bladder cancer. The major findings were analyzed and are presented from large prospective, retrospective, and review studies.

  4. Approaching the optimal transurethral resection of a bladder tumor

    PubMed Central

    Jurewicz, Michael; Soloway, Mark S.

    2014-01-01

    A complete transurethral resection of a bladder tumor (TURBT) is essential for adequately diagnosing, staging, and treating bladder cancer. A TURBT is deceptively difficult and is a highly underappreciated procedure. An incomplete resection is the major reason for the high incidence of recurrence following initial transurethral resection, and thus for the suboptimal care of our patients. Our objective was to review the preoperative, intraoperative, and postoperative considerations for performing an optimal TURBT. The European Association of Urology, Society of International Urology, and American Urological Association guidelines emphasize a complete resection of all visible tumor during a TURBT. This review will emphasize the various techniques and treatments, including photodynamic cystoscopy, intravesical chemotherapy, and a perioperative checklist, that can be used to help enable a complete resection and reduce the recurrence rate. A Medline/PubMed search was completed for original and review articles related to transurethral resection and the treatment of non-muscle-invasive bladder cancer. The major findings were analyzed and are presented from large prospective, retrospective, and review studies. PMID:26328154

  5. Dynamic Range Size Analysis of Territorial Animals: An Optimality Approach.

    PubMed

    Tao, Yun; Börger, Luca; Hastings, Alan

    2016-10-01

    Home range sizes of territorial animals are often observed to vary periodically in response to seasonal changes in foraging opportunities. Here we develop the first mechanistic model focused on the temporal dynamics of home range expansion and contraction in territorial animals. We demonstrate how simple movement principles can lead to a rich suite of range size dynamics, by balancing foraging activity with defensive requirements and incorporating optimal behavioral rules into mechanistic home range analysis. Our heuristic model predicts three general temporal patterns that have been observed in empirical studies across multiple taxa. First, a positive correlation between age and territory quality promotes shrinking home ranges over an individual's lifetime, with maximal range size variability shortly before the adult stage. Second, poor sensory information, low population density, and large resource heterogeneity may all independently facilitate range size instability. Finally, aggregation behavior toward forage-rich areas helps produce divergent home range responses between individuals from different age classes. This model has broad applications for addressing important unknowns in animal space use, with potential applications also in conservation and health management strategies. PMID:27622879

  6. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.

  7. The 15-meter antenna performance optimization using an interdisciplinary approach

    NASA Astrophysics Data System (ADS)

    Grantham, William L.; Schroeder, Lyle C.; Bailey, Marion C.; Campbell, Thomas G.

    1988-05-01

    A 15-meter diameter deployable antenna has been built and is being used as an experimental test system with which to develop interdisciplinary controls, structures, and electromagnetics technology for large space antennas. The program objective is to study interdisciplinary issues important in optimizing large space antenna performance for a variety of potential users. The 15-meter antenna utilizes a hoop column structural concept with a gold-plated molybdenum mesh reflector. One feature of the design is the use of adjustable control cables to improve the paraboloid reflector shape. Manual adjustment of the cords after initial deployment improved surface smoothness relative to the build accuracy from 0.140 in. RMS to 0.070 in. Preliminary structural dynamics tests and near-field electromagnetic tests were made. The antenna is now being modified for further testing. Modifications include addition of a precise motorized control cord adjustment system to make the reflector surface smoother and an adaptive feed for electronic compensation of reflector surface distortions. Although the previous test results show good agreement between calculated and measured values, additional work is needed to study modelling limits for each discipline, evaluate the potential of adaptive feed compensation, and study closed-loop control performance in a dynamic environment.

  8. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    NASA Astrophysics Data System (ADS)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
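
    The threshold rebalancing rule can be sketched in a small Monte Carlo simulation; the drift, volatility, cost rate, and band width below are invented parameters, not calibrated values from the paper.

```python
# Monte Carlo sketch of a two-asset threshold rebalanced portfolio with
# proportional transaction costs: trade only when the fraction of wealth in
# asset 1 drifts outside [b - eps, b + eps]. Market parameters are invented.
import random

random.seed(4)
b, eps, cost_rate = 0.5, 0.08, 0.002    # target fraction, band, prop. cost

def simulate(days=2500):
    w1, w2 = b, 1.0 - b                 # wealth held in each asset
    for _ in range(days):
        # i.i.d. log-normal relative prices, as in the Black-Scholes setting.
        w1 *= random.lognormvariate(0.0004, 0.01)
        w2 *= random.lognormvariate(0.0002, 0.005)
        frac = w1 / (w1 + w2)
        if not (b - eps <= frac <= b + eps):
            total = w1 + w2
            traded = abs(w1 - b * total)          # amount moved to hit b
            total -= cost_rate * traded           # pay proportional cost
            w1, w2 = b * total, (1.0 - b) * total
    return w1 + w2

print("mean final wealth:", sum(simulate() for _ in range(20)) / 20)
```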

  9. Applying Business Process Re-engineering Patterns to optimize WS-BPEL Workflows

    NASA Astrophysics Data System (ADS)

    Buys, Jonas; de Florio, Vincenzo; Blondia, Chris

    With the advent of XML-based SOA, WS-BPEL quickly became a widely accepted standard for modeling business processes. Though SOA is said to embrace the principle of business agility, BPEL process definitions are still manually crafted into their final executable version. While SOA has proven to be a giant leap forward in building flexible IT systems, this static BPEL workflow model is somewhat paradoxical to the need for real business agility and should be enhanced to better sustain continual process evolution. In this paper, we point out the potential of adding business intelligence, in the form of business process re-engineering patterns, to the system to allow for automatic business process optimization. Furthermore, we point out that BPR macro-rules could be implemented by leveraging micro-techniques from computer science. We present some practical examples that illustrate the benefit of such adaptive process models and our preliminary findings.

  10. Multi-stage optimal design for groundwater remediation: a hybrid bi-level programming approach.

    PubMed

    Zou, Yun; Huang, Guo H; He, Li; Li, Hengliang

    2009-08-11

    This paper presents the development of a hybrid bi-level programming approach for supporting multi-stage groundwater remediation design. To investigate remediation performances, a subsurface model was employed to simulate contaminant transport. A mixed-integer nonlinear optimization model was formulated in order to evaluate different remediation strategies. Multivariate relationships based on a filtered stepwise clustering analysis were developed to facilitate the incorporation of a simulation model within a nonlinear optimization framework. By using the developed statistical relationships, predictions needed for calculating the objective function value can be quickly obtained during the search process. The main advantage of the developed approach is that the remediation strategy can be adjusted from stage to stage, which makes the optimization more realistic. The proposed approach was examined through its application to a real-world aquifer remediation case in western Canada. The optimization results based on this application can help the decision makers to comprehensively evaluate remediation performance.

  11. A simulation-optimization approach to retrieve reservoir releasing strategies under the trade-off objectives considering flooding, sedimentation, turbidity and water supply during typhoons

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; You, G. J. Y.

    2014-12-01

    This study develops a simulation-optimization approach for retrieving optimal multi-layer reservoir conjunctive release strategies that accounts for the natural hazards of sedimentation, turbidity and flooding during typhoon events. The purposes of the developed approach are: (1) to apply a WASP-based fluid-dynamic sediment concentration simulation model, together with a method for extracting ideal release practices, to find a good initial solution for the optimization; and (2) to construct a surrogate sediment concentration simulation model that is embedded in the optimization model. In this study, the optimization model is solved by tabu search, and the optimized release hydrograph is then used to construct the decision model. The study applies the Adaptive Network-based Fuzzy Inference System (ANFIS) and the Real-time Recurrent Learning Neural Network (RTRLNN) as construction tools for the simulation model of total suspended solids concentration. The developed approach is applied to the Shihmen Reservoir basin, Taiwan. The assessment indices of the operational outcome of multi-purpose multi-layer conjunctive releasing are the maximum sediment concentration at Yuan-Shan weir, the sediment removal ratio, the highest water level at Shan-Yin Bridge, and the final water level in Shihmen Reservoir. The optimization results show the following: (1) multi-layer releases during the stages before flood arrival and before peak flow possess high potential for flood detention and sedimentation control, and during the stages after peak flow, for turbidity control and storage; (2) the error tolerance and adaptability of ANFIS are superior, so the ANFIS-based sediment concentration simulation model surpasses the RTRLNN-based model in simulating the mechanism and characteristics of sediment transport; and (3) the developed approach can effectively and automatically retrieve optimal multi-layer release strategies under the trade-off between flooding, sedimentation, turbidity and water supply.

  12. A novel optical calorimetry dosimetry approach applied to an HDR Brachytherapy source

    NASA Astrophysics Data System (ADS)

    Cavan, A.; Meyer, J.

    2013-06-01

    The technique of Digital Holographic Interferometry (DHI) is applied to the measurement of radiation absorbed dose distribution in water. An optical interferometer has been developed that captures the small variations in the refractive index of water due to the radiation-induced temperature increase ΔT. The absorbed dose D is then determined with high temporal and spatial resolution using the calorimetric relation D=cΔT (where c is the specific heat capacity of water). The method is capable of time-resolved 3D spatial calorimetry. As a proof-of-principle of the approach, a prototype DHI dosimeter was applied to the measurement of absorbed dose from a High Dose Rate (HDR) Brachytherapy source. Initial results are in agreement with modelled doses from the Brachyvision treatment planning system, demonstrating the viability of the system for high dose rate applications. Future work will focus on applying corrections for heat diffusion and geometric effects. The method has potential to contribute to the dosimetry of diverse high dose rate applications which require high spatial resolution, such as microbeam radiotherapy (MRT) or small field proton beam dosimetry, but may potentially also be useful for interface dosimetry.
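
    The calorimetric chain behind the method (phase shift to index change to temperature rise to dose via D = cΔT) can be sketched numerically; the wavelength, path length, and water constants below are textbook values chosen for illustration, not the authors' instrument parameters.

```python
# Back-of-envelope sketch of the DHI dosimetry chain: a measured optical
# phase shift -> refractive-index change -> temperature rise -> absorbed
# dose via D = c * dT. Constants are textbook values for water; the phase
# shift and path length are illustrative numbers only.
import math

LAMBDA = 633e-9        # laser wavelength (m), e.g. HeNe (assumed)
PATH_L = 0.05          # optical path length through the water phantom (m)
DN_DT  = -1.0e-4       # dn/dT of water near room temperature (1/K)
C_P    = 4186.0        # specific heat capacity of water (J kg^-1 K^-1)

def dose_from_phase(delta_phi):
    """Absorbed dose (Gy) inferred from an interferometric phase shift (rad)."""
    delta_n = delta_phi * LAMBDA / (2.0 * math.pi * PATH_L)  # index change
    delta_T = delta_n / DN_DT                                # temperature rise
    return C_P * delta_T                                     # D = c * dT

# A -0.1 rad phase shift over 5 cm corresponds to ~8.4 Gy in this geometry.
print(round(dose_from_phase(-0.1), 3), "Gy")
```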

  13. Precision and the approach to optimality in quantum annealing processors

    NASA Astrophysics Data System (ADS)

    Johnson, Mark W.

    The last few years have seen both a significant technological advance towards the practical application of, and a growing scientific interest in the underlying behaviour of, quantum annealing (QA) algorithms. A series of commercially available QA processors, most recently the D-Wave 2X™ 1000-qubit processor, have provided a valuable platform for empirical study of QA at a non-trivial scale. From this it has become clear that misspecification of Hamiltonian parameters is an important performance consideration, both for the goal of studying the underlying physics of QA and for that of building a practical and useful QA processor. The empirical study of the physics of QA requires a way to look beyond Hamiltonian misspecification. Recently, a solver metric called 'time-to-target' was proposed as a way to compare quantum annealing processors to classical heuristic algorithms. This approach puts emphasis on analyzing a solver's short-time approach to the ground state. In this presentation I will review the processor technology, based on superconducting flux qubits, and some of the known sources of error in Hamiltonian specification. I will then discuss recent advances in reducing Hamiltonian specification error, as well as review the time-to-target metric and empirical results analyzed in this way.

  14. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    NASA Astrophysics Data System (ADS)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged in recent years as a promising alternative for applications in low-power design and quantum computation, due to its ability to reduce power dissipation, an important research area in low power VLSI and ULSI designs. Many important contributions have been made in the literature towards reversible implementations of arithmetic and logical structures; however, there have not been many efforts directed towards efficient approaches for designing a reversible Arithmetic Logic Unit (ALU). In this study, three efficient approaches are presented and their implementations in the design of reversible ALUs are demonstrated. Three new designs of reversible one-digit arithmetic logic units for quantum arithmetic are presented in this article. This paper provides explicit constructions of reversible ALUs effecting basic arithmetic operations with respect to the minimization of cost metrics. The architectures of the designs have been proposed in which each block is realized using elementary quantum logic gates. Then, reversible implementations of the proposed designs are analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with the existing counterparts. All the scales are in the nanometric area.

  15. Optimal indolence: a normative microscopic approach to work and leisure

    PubMed Central

    Niyogi, Ritwik K.; Breton, Yannick-Andre; Solomon, Rebecca B.; Conover, Kent; Shizgal, Peter; Dayan, Peter

    2014-01-01

    Dividing limited time between work and leisure when both have their attractions is a common everyday decision. We provide a normative control-theoretic treatment of this decision that bridges economic and psychological accounts. We show how our framework applies to free-operant behavioural experiments in which subjects are required to work (depressing a lever) for sufficient total time (called the price) to receive a reward. When the microscopic benefit-of-leisure increases nonlinearly with duration, the model generates behaviour that qualitatively matches various microfeatures of subjects’ choices, including the distribution of leisure bout durations as a function of the pay-off. We relate our model to traditional accounts by deriving macroscopic, molar, quantities from microscopic choices. PMID:24284898

  16. Academic Departmental Management: An Application of an Interactive Multicriterion Optimization Approach.

    ERIC Educational Resources Information Center

    Geoffrion, A. M.; And Others

    This paper presents the conceptual development and application of a new interactive approach for multicriterion optimization to the aggregate operating problem of an academic department. This approach provides a mechanism for assisting an administrator in determining resource allocation decisions and only requires local trade-off and preference…

  17. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. Therefore, it is necessary to take reliability into account for topology optimization. By integrating reliability analysis into topology optimization problems, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The two-layer nesting involved in RBTO, which is time consuming, is decoupled by the use of a particular optimization procedure. A topology description function approach (TOTDF) and a first order reliability method are employed for topology optimization and reliability calculation, respectively. The problem of the non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed improved method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
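
    Two common smoothed Heaviside choices of the kind compared in the article can be sketched as follows; the exact functional forms used by the authors may differ, so treat these as representative examples only.

```python
# Sketch of two smoothed Heaviside regularizations used in topology
# description function (TDF) schemes: a tanh form and a C1 polynomial ramp,
# both converging to a step function as eps -> 0.
import math

def heaviside_tanh(phi, eps):
    # Smooth everywhere; steepness controlled by eps.
    return 0.5 * (1.0 + math.tanh(phi / eps))

def heaviside_poly(phi, eps):
    # Piecewise cubic ramp: exactly 0/1 outside the band [-eps, eps].
    if phi <= -eps:
        return 0.0
    if phi >= eps:
        return 1.0
    t = phi / eps
    return 0.5 + 0.75 * t - 0.25 * t ** 3   # C1 across the band edges

# Material "density" assigned to a point is H(phi); phi > 0 means solid.
for phi in (-0.2, -0.05, 0.0, 0.05, 0.2):
    print(phi, round(heaviside_tanh(phi, 0.1), 3),
          round(heaviside_poly(phi, 0.1), 3))
```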

  18. A hybrid simulation-optimization approach for solving the areal groundwater pollution source identification problems

    NASA Astrophysics Data System (ADS)

    Ayvaz, M. Tamer

    2016-07-01

    In this study, a new simulation-optimization approach is proposed for solving the areal groundwater pollution source identification problem, which is an ill-posed inverse problem. In the simulation part of the proposed approach, groundwater flow and pollution transport processes are simulated by modeling the given aquifer system with the MODFLOW and MT3DMS models. The developed simulation model is then integrated into a newly proposed hybrid optimization model in which a binary genetic algorithm and a generalized reduced gradient method are mutually used. This is a novel approach and is employed for the first time in areal pollution source identification problems. The objective of the proposed hybrid optimization approach is to simultaneously identify the spatial distributions and input concentrations of the unknown areal groundwater pollution sources by using a limited number of pollution concentration time series at the monitoring well locations. The applicability of the proposed simulation-optimization approach is evaluated on a hypothetical aquifer model for different pollution source distributions. Furthermore, model performance is evaluated for measurement error conditions, different genetic algorithm parameter combinations, different numbers and locations of the monitoring wells, and different heterogeneous hydraulic conductivity fields. The identified results indicate that the proposed simulation-optimization approach may be an effective way to solve areal groundwater pollution source identification problems.

  19. [Optimization of organizational approaches to management of patients with atherosclerosis].

    PubMed

    Barbarash, L S; Barbarash, O L; Artamonova, G V; Sumin, A N

    2014-01-01

    Despite undoubted achievements of modern cardiology in prevention and treatment of atherosclerosis, cardiologists, neurologists, and vascular surgeons are still facing severe stenotic atherosclerotic lesions in different vascular regions, both symptomatic and asymptomatic. As a rule hemodynamically significant stenoses of different locations are found after development of acute vascular events. In this regard, active detection of arterial stenoses localized in different areas just at primary contact of patients presenting with symptoms of ischemia of various locations with care providers appears to be crucial. Further monitoring of these stenoses is also important. The article is dedicated to innovative organizational approaches to provision of healthcare to patients suffering from circulatory system diseases that have contributed to improvement of demographic situation in Kuzbass.

  20. Applying an Integrated Route Optimization Method as a Solution to the Problem of Waste Collection

    NASA Astrophysics Data System (ADS)

    Salleh, A. H.; Ahamad, M. S. S.; Yusoff, M. S.

    2016-09-01

    Solid waste management (SWM) is highly subject to budget constraints, and the largest expenses are devoted to the waste collection travel route. The common understanding of the travel route in SWM is that a shorter route is cheaper. In reality, however, this is not necessarily true, since the SWM compactor truck is affected by various factors that lead to higher fuel consumption. Thus, this ongoing research introduces a solution to the problem using a multiple-criteria route optimization process with AHP/GIS as its main analysis tools, with criteria drawn from the road factors, road network factors and human factors that drive fuel consumption. The criteria weights are obtained by combining AHP with the distances of multiple shortest routes computed in GIS. A set of most optimal routes is thus achievable, and a comparative analysis against the route currently used by the SWM compactor truck can be carried out. It is expected that the decision model will be able to solve the global and local travel route problem in MSW.
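
    The AHP weighting step can be sketched with a principal-eigenvector computation and Saaty's consistency check; the 3×3 pairwise judgments below are hypothetical (road condition vs. road network vs. human factors), not values from this research.

```python
# Sketch of the AHP step: derive criterion weights from a pairwise
# comparison matrix via the principal eigenvector (power iteration) and
# check the consistency ratio. The judgments below are hypothetical.
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   2.0],
     [1/5.0, 1/2.0, 1.0]]
n = len(A)

w = [1.0 / n] * n
for _ in range(100):                       # power iteration on A
    v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(v)
    w = [x / s for x in v]

# Approximate the principal eigenvalue and Saaty's consistency ratio.
lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
CI = (lam - n) / (n - 1)
RI = 0.58                                  # Saaty's random index for n = 3
print("weights:", [round(x, 3) for x in w], "CR:", round(CI / RI, 3))
# CR < 0.1 is the usual threshold for acceptably consistent judgments.
```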

  1. Factorial design applied to the optimization of lipid composition of topical antiherpetic nanoemulsions containing isoflavone genistein

    PubMed Central

    Argenta, Débora Fretes; de Mattos, Cristiane Bastos; Misturini, Fabíola Dallarosa; Koester, Leticia Scherer; Bassani, Valquiria Linck; Simões, Cláudia Maria Oliveira; Teixeira, Helder Ferreira

    2014-01-01

    The aim of this study was to optimize topical nanoemulsions containing genistein, by means of a 2³ full factorial design based on physicochemical properties and skin retention. The experimental arrangement was constructed using oil type (isopropyl myristate or castor oil), phospholipid type (distearoylphosphatidylcholine [DSPC] or dioleylphosphatidylcholine [DOPC]), and ionic cosurfactant type (oleic acid or oleylamine) as independent variables. The analysis of variance showed a third-order effect for particle size, polydispersity index, and skin retention of genistein. Nanoemulsions composed of isopropyl myristate/DOPC/oleylamine showed the smallest diameter and highest genistein amount in porcine ear skin, whereas the formulation composed of isopropyl myristate/DSPC/oleylamine exhibited the lowest polydispersity index. Thus, these two formulations were selected for further studies. The formulations presented positive ζ potential values (>25 mV) and genistein content close to 100% (at 1 mg/mL). The incorporation of genistein in nanoemulsions significantly increased the retention of this isoflavone in the epidermis and dermis, especially when the formulation composed of isopropyl myristate/DOPC/oleylamine was used. These results were supported by confocal images. Such formulations exhibited antiherpetic activity in vitro against herpes simplex virus 1 (strain KOS) and herpes simplex virus 2 (strain 333). Taken together, the results show that the genistein-loaded nanoemulsions developed in this study are promising options in herpes treatment. PMID:25336951
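
    The effect-estimation arithmetic of a 2³ full factorial design can be sketched as below; the coded design matrix is standard, while the response values are fabricated stand-ins for the measured skin retention.

```python
# Sketch of a 2^3 full factorial analysis: code each factor (oil,
# phospholipid, cosurfactant) as -1/+1 and estimate main effects as the
# difference between the mean response at the high and low levels.
from itertools import product

factors = ["oil", "phospholipid", "cosurfactant"]
design = list(product((-1, +1), repeat=3))      # the 8 runs of a 2^3 design
y = [1.2, 1.9, 1.1, 2.4, 1.4, 2.1, 1.3, 2.8]    # hypothetical responses

for k, name in enumerate(factors):
    hi = [yi for run, yi in zip(design, y) if run[k] == +1]
    lo = [yi for run, yi in zip(design, y) if run[k] == -1]
    effect = sum(hi) / len(hi) - sum(lo) / len(lo)
    print(f"main effect of {name}: {effect:+.3f}")
# Interaction effects are estimated the same way using products of columns,
# e.g. the oil x phospholipid contrast run[0] * run[1].
```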

  2. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    2016-01-01

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations for each site, and (b) a mixed integer program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained more than 50% cost savings in 90% of the sites.

  3. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    2016-01-01

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations for each site, and (b) a mixed integer program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained more than 50% cost savings in 90% of the sites. PMID:27438233

  4. An Informatics Approach to Demand Response Optimization in Smart Grids

    SciTech Connect

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low latency response, Cloud platforms for scalable operations and privacy policies to mitigate information leakage in an information rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  5. Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.

    PubMed

    Ghosh, Subhojit; Samanta, Susovon

    2014-07-01

    This paper presents an efficient technique for designing a fixed order compensator for compensating the current mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem, which seeks to attain a set of frequency domain specifications. The highly nonlinear nature of the optimization problem demands the use of a global search technique that is independent of the initial parameterization. In this regard, the optimization problem is solved using a hybrid evolutionary optimization approach, because of its simple structure, faster execution time and greater probability of achieving the global solution. The proposed algorithm combines a population search based optimization approach, i.e. Particle Swarm Optimization (PSO), with a local search based method. The op-amp dynamics have been incorporated during the design process. Considering the limitations of a fixed structure compensator in achieving loop bandwidth higher than a certain threshold, the proposed approach also determines the op-amp bandwidth which would be able to achieve the same. The effectiveness of the proposed approach in meeting the desired frequency domain specifications is experimentally tested on a peak current mode controlled dc-dc buck converter.
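
    The population-search stage can be illustrated with a bare-bones PSO loop; the objective below is a generic placeholder for the frequency-domain specification error, and the swarm constants are assumptions (the paper's hybrid adds a local search to polish the result).

```python
# Bare-bones particle swarm optimization sketch: particles track personal
# and global bests; a local search step (omitted here) would refine the
# final answer. The objective is a stand-in for compensator spec mismatch.
import random

random.seed(5)
DIM, N, ITERS = 3, 20, 150                 # e.g. 3 compensator parameters
w_in, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration weights

def error(x):                              # hypothetical spec-error surface
    return sum((xi - 0.3 * i) ** 2 for i, xi in enumerate(x))

X = [[random.uniform(-2, 2) for _ in range(DIM)] for _ in range(N)]
V = [[0.0] * DIM for _ in range(N)]
P = [x[:] for x in X]                      # personal bests
g = min(P, key=error)                      # global best

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            V[i][d] = (w_in * V[i][d]
                       + c1 * random.random() * (P[i][d] - X[i][d])
                       + c2 * random.random() * (g[d] - X[i][d]))
            X[i][d] += V[i][d]
        if error(X[i]) < error(P[i]):
            P[i] = X[i][:]
    g = min(P, key=error)
print("best error:", round(error(g), 6))
```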

  6. Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.

    PubMed

    Ghosh, Subhojit; Samanta, Susovon

    2014-07-01

    This paper presents an efficient technique for designing a fixed order compensator for compensating the current mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem, which seeks to attain a set of frequency domain specifications. The highly nonlinear nature of the optimization problem demands the use of a global search technique that is independent of the initial parameterization. In this regard, the optimization problem is solved using a hybrid evolutionary optimization approach, because of its simple structure, faster execution time and greater probability of achieving the global solution. The proposed algorithm combines a population search based optimization approach, i.e. Particle Swarm Optimization (PSO), with a local search based method. The op-amp dynamics have been incorporated during the design process. Considering the limitations of a fixed structure compensator in achieving loop bandwidth higher than a certain threshold, the proposed approach also determines the op-amp bandwidth which would be able to achieve the same. The effectiveness of the proposed approach in meeting the desired frequency domain specifications is experimentally tested on a peak current mode controlled dc-dc buck converter. PMID:24768082

  7. Applying a radiomics approach to predict prognosis of lung cancer patients

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Yan, Shiju; Wang, Yunzhi; Qian, Wei; Guan, Yubao; Zheng, Bin

    2016-03-01

    Radiomics is an emerging technology to decode tumor phenotype based on quantitative analysis of image features computed from radiographic images. In this study, we applied the radiomics concept to investigate the association among the CT image features of lung tumors, which were either quantitatively computed or subjectively rated by radiologists, and two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 (ERCC1) genes and a regulatory subunit of ribonucleotide reductase (RRM1), in predicting disease-free survival (DFS) of lung cancer patients after surgery. An image dataset involving 94 patients was used. Among them, 20 had cancer recurrence within 3 years, while 74 patients remained disease free. After tumor segmentation, 35 image features were computed from CT images. Using the Weka data mining software package, we selected 10 non-redundant image features. Applying a SMOTE algorithm to generate synthetic data to balance case numbers in the two DFS ("yes" and "no") groups and a leave-one-case-out training/testing method, we optimized and compared a number of machine learning classifiers using (1) quantitative image (QI) features, (2) subjectively rated (SR) features, and (3) genomic biomarkers (GB). Data analyses showed relatively low correlation among the QI, SR and GB prediction results (with Pearson correlation coefficients < 0.5, including between the ERCC1 and RRM1 biomarkers). Using the area under the ROC curve as an assessment index, the QI, SR and GB based classifiers yielded AUC = 0.89+/-0.04, 0.73+/-0.06 and 0.76+/-0.07, respectively, which showed that all three types of features had predictive power (AUC > 0.5). Among them, the QI features yielded the highest performance.
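
    The evaluation protocol (class balancing with SMOTE inside a leave-one-case-out loop, scored by AUC) can be sketched as follows; the feature matrix and the logistic-regression classifier are stand-ins for the paper's selected features and optimized classifiers:

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from imblearn.over_sampling import SMOTE

        rng = np.random.default_rng(0)
        # Stand-in for the 94-case dataset: 10 selected features, 20 recurrences.
        X = rng.normal(size=(94, 10))
        y = np.array([1] * 20 + [0] * 74)
        X[y == 1] += 0.8          # inject signal so the example is non-trivial

        scores = np.zeros(len(y))
        for train, test in LeaveOneOut().split(X):
            # SMOTE is fit on the training fold only, as in the paper's protocol.
            X_bal, y_bal = SMOTE(random_state=0).fit_resample(X[train], y[train])
            clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
            scores[test] = clf.predict_proba(X[test])[:, 1]

        print("AUC:", round(roc_auc_score(y, scores), 3))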

  8. A multiobjective ant colony optimization approach for scheduling environmental flow management alternatives with application to the River Murray, Australia

    NASA Astrophysics Data System (ADS)

    Szemis, J. M.; Dandy, G. C.; Maier, H. R.

    2013-10-01

    In regulated river systems, such as the River Murray in Australia, the efficient use of water to preserve and restore biota in the river, wetlands, and floodplains is of concern for water managers. Available management options include the timing of river flow releases and operation of wetland flow control structures. However, the optimal scheduling of these environmental flow management alternatives is a difficult task, since there are generally multiple wetlands and floodplains with a range of species, as well as a large number of management options that need to be considered. Consequently, this problem is a multiobjective optimization problem aimed at maximizing ecological benefit while minimizing water allocations within the infrastructure constraints of the system under consideration. This paper presents a multiobjective optimization framework, which is based on a multiobjective ant colony optimization approach, for developing optimal trade-offs between water allocation and ecological benefit. The framework is applied to a reach of the River Murray in South Australia. Two studies are formulated to assess the impact of (i) upstream system flow constraints and (ii) additional regulators on this trade-off. The results indicate that unless the system flow constraints are relaxed, there is limited additional ecological benefit as allocation increases. Furthermore, the use of regulators can increase ecological benefits while using less water. The results illustrate the utility of the framework, since the impact of flow control infrastructure on the trade-offs between water allocation and ecological benefit can be investigated, thereby providing valuable insight to managers.

  9. Old concepts, new molecules and current approaches applied to the bacterial nucleotide signalling field.

    PubMed

    Gründling, Angelika; Lee, Vincent T

    2016-11-01

    Signalling nucleotides are key molecules that help bacteria to rapidly coordinate cellular pathways and adapt to changes in their environment. During the past 10 years, the nucleotide signalling field has seen much excitement, as several new signalling nucleotides have been discovered in both eukaryotic and bacterial cells. The fields have since advanced quickly, aided by the development of important tools such as the synthesis of modified nucleotides, which, combined with sensitive mass spectrometry methods, allowed for the rapid identification of specific receptor proteins along with other novel genome-wide screening methods. In this review, we describe the principle concepts of nucleotide signalling networks and summarize the recent work that led to the discovery of the novel signalling nucleotides. We also highlight current approaches applied to the research in the field as well as resources and methodological advances aiding in a rapid identification of nucleotide-specific receptor proteins.This article is part of the themed issue 'The new bacteriology'. PMID:27672152

  10. A sensory- and consumer-based approach to optimize cheese enrichment with grape skin powders.

    PubMed

    Torri, L; Piochi, M; Marchiani, R; Zeppa, G; Dinnella, C; Monteleone, E

    2016-01-01

    The present study aimed to present a sensory- and consumer-based approach to optimize cheese enrichment with grape skin powders (GSP). The combined sensory evaluation approach, involving a descriptive and an affective test, respectively, was applied to evaluate the effect of the addition of grape skin powders from 2 grape varieties (Barbera and Chardonnay) at different levels [0.8, 1.6, and 2.4%; weight (wt) powder/wt curd] on the sensory properties and consumer acceptability of innovative soft cow milk cheeses. The experimental plan comprised 7 products: 6 fortified prototypes (Barbera and Chardonnay at 0.8, 1.6, and 2.4%) and a control sample, each with 1 wk of ripening. By means of a free choice profile, 21 cheese experts described the sensory properties of the prototypes. A central location test with 90 consumers was subsequently conducted to assess the acceptability of the samples. The GSP enrichment strongly affected the sensory properties of the innovative products, mainly in terms of appearance and texture. Fortified samples were typically described as having a marbled aspect (violet or brown as a function of the grape variety) and increased granularity, sourness, saltiness, and astringency. The fortification also contributed certain vegetable sensations perceived at low intensity (grassy, cereal, nuts) and some potentially negative sensations (earthy, animal, winy, varnish). The white color, homogeneous dough, compact and elastic texture, and presence of lactic flavors emerged as the positive drivers of preference. On the contrary, the marbled aspect, granularity, sandiness, sourness, saltiness, and astringency negatively affected cheese acceptability for amounts of powder exceeding 0.8 and 1.6% for the Barbera and Chardonnay prototypes, respectively. Therefore, the amount of powder proved to be a critical parameter for liking of fortified cheeses and a discriminant between the 2 varieties. Reducing the GSP particle size and improving the GSP

  11. Wind Tunnel Management and Resource Optimization: A Systems Modeling Approach

    NASA Technical Reports Server (NTRS)

    Jacobs, Derya, A.; Aasen, Curtis A.

    2000-01-01

    Time, money, and personnel are becoming increasingly scarce resources within government agencies due to a reduction in funding and the desire to demonstrate responsible economic efficiency. The ability of an organization to plan and schedule resources effectively can provide the necessary leverage to improve productivity, provide continuous support to all projects, and ensure flexibility in a rapidly changing environment. Without adequate internal controls the organization is forced to rely on external support, waste precious resources, and risk an inefficient response to change. Management systems must be developed and applied that strive to maximize the utility of existing resources in order to achieve the goal of "faster, cheaper, better". An area of concern within NASA Langley Research Center was the scheduling, planning, and resource management of the Wind Tunnel Enterprise operations. Nine wind tunnels make up the Enterprise. Prior to this research, these wind tunnel groups did not employ a rigorous or standardized management planning system. In addition, each wind tunnel unit operated from a position of autonomy, with little coordination of clients, resources, or project control. For operating and planning purposes, each wind tunnel operating unit must balance inputs from a variety of sources. Although each unit is managed by individual Facility Operations groups, other stakeholders influence wind tunnel operations. These groups include, for example, the various researchers and clients who use the facility; the Facility System Engineering Division (FSED), tasked with wind tunnel repair and upgrade; the Langley Research Center (LaRC) Fabrication (FAB) group, which fabricates repair parts and provides test model upkeep; the NASA and LaRC Strategic Plans; and unscheduled use of the facilities by important clients. Expanding these influences horizontally through nine wind tunnel operations and vertically along the NASA management structure greatly increases the

  12. Geometry Control System for Exploratory Shape Optimization Applied to High-Fidelity Aerodynamic Design of Unconventional Aircraft

    NASA Astrophysics Data System (ADS)

    Gagnon, Hugo

    This thesis represents a step forward to bring geometry parameterization and control on par with the disciplinary analyses involved in shape optimization, particularly high-fidelity aerodynamic shape optimization. Central to the proposed methodology is the non-uniform rational B-spline, used here to develop a new geometry generator and geometry control system applicable to the aerodynamic design of both conventional and unconventional aircraft. The geometry generator adopts a component-based approach, where any number of predefined but modifiable (parametric) wing, fuselage, junction, etc., components can be arbitrarily assembled to generate the outer mold line of aircraft geometry. A unique Python-based user interface incorporating an interactive OpenGL windowing system is proposed. Together, these tools allow for the generation of high-quality, C2 continuous (or higher), and customized aircraft geometry with fast turnaround. The geometry control system tightly integrates shape parameterization with volume mesh movement using a two-level free-form deformation approach. The framework is augmented with axial curves, which are shown to be flexible and efficient at parameterizing wing systems of arbitrary topology. A key aspect of this methodology is that very large shape deformations can be achieved with only a few, intuitive control parameters. Shape deformation consumes a few tenths of a second on a single processor and surface sensitivities are machine accurate. The geometry control system is implemented within an existing aerodynamic optimizer comprising a flow solver for the Euler equations and a sequential quadratic programming optimizer. Gradients are evaluated exactly with discrete-adjoint variables. The algorithm is first validated by recovering an elliptical lift distribution on a rectangular wing, and then demonstrated through the exploratory shape optimization of a three-pronged feathered winglet leading to a span efficiency of 1.22 under a height

  13. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and physical layers employing automatic repeat request and a rate-compatible punctured convolutional code over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method, namely the primal-dual decomposition method. We compare the performance of a video communication system that uses the optimal number of slices with one that codes a picture as a single slice. Numerical examples show that end-to-end distortion, used as the utility function, can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.
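
    The paper solves the joint problem via primal-dual decomposition; its convex core, allocating a shared rate budget to minimize total distortion, can be sketched in cvxpy with a hypothetical distortion model D_i(r_i) = c_i / r_i (all numbers are illustrative assumptions):

        import numpy as np
        import cvxpy as cp

        # Three video users share a TDMA channel; r_i is user i's effective rate
        # and c_i / r_i is a stand-in rate-distortion model for that user.
        c = np.array([1.0, 2.0, 1.5])
        r = cp.Variable(3, pos=True)
        total_rate = 6.0                      # assumed channel budget

        objective = cp.Minimize(cp.sum(cp.multiply(c, cp.inv_pos(r))))
        problem = cp.Problem(objective, [cp.sum(r) <= total_rate])
        problem.solve()

        print("rates:", np.round(r.value, 3), "total distortion:", round(problem.value, 3))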

  14. An approach to the multi-axis problem in manual control. [optimal pilot model

    NASA Technical Reports Server (NTRS)

    Harrington, W. W.

    1977-01-01

    The multiaxis control problem is addressed within the context of the optimal pilot model. The problem is developed to provide efficient adaptation of the optimal pilot model to complex aircraft systems and real world, multiaxis tasks. This is accomplished by establishing separability of the longitudinal and lateral control problems subject to the constraints of multiaxis attention and control allocation. Control solution adaptation to the constrained single axis attention allocations is provided by an optimal control frequency response algorithm. An algorithm is developed to solve the multiaxis control problem. The algorithm is then applied to an attitude hold task for a bare airframe fighter aircraft case with interesting multiaxis properties.

  15. Applying the Principles of Systems Engineering and Project Management to Optimize Scientific Research

    NASA Astrophysics Data System (ADS)

    Peterkin, Adria J.

    2016-01-01

    Systems Engineering is an interdisciplinary practice that analyzes different facets of a proposed area in order to develop and design an efficient system guided by the principles and constraints of the science community. When working in an institution grounded in quantitative and analytical scientific theory, it is important to make sure that all parts of a system correlate in a structured and systematic manner so that areas of intricacy can be prevented or quickly resolved. My research focused on interpreting and implementing Systems Engineering techniques in the construction, integration and operation of a NASA Radio Jove Kit to observe Jupiter radio emissions. Jupiter emits at very low frequencies, so the telescope had to be able to receive below 39.5 MHz. The projected outcome was to receive long L-burst and short S-burst signals; however, during the time of observation Jupiter was in conjunction with the Sun. We therefore connected the receiver built from the NASA Radio Jove Kit to the Karl Jansky telescope in an effort to listen for solar flares as well; nonetheless, we were unable to identify these signals and ultimately determined they were noise. The overall project was a success in that we were able to apply and comprehend the principles of Systems Engineering to facilitate the build.

  16. The systems approach for applying artificial intelligence to space station automation (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Grose, Vernon L.

    1985-12-01

    The progress of technology is marked by fragmentation -- dividing research and development into ever narrower fields of specialization. Ultimately, specialists know everything about nothing. And hope for integrating those slender slivers of specialty into a whole fades. Without an integrated, all-encompassing perspective, technology becomes applied in a lopsided and often inefficient manner. A decisionary model, developed and applied for NASA's Chief Engineer toward establishment of commercial space operations, can be adapted to the identification, evaluation, and selection of optimum application of artificial intelligence for space station automation -- restoring wholeness to a situation that is otherwise chaotic due to increasing subdivision of effort. Issues such as functional assignments for space station task, domain, and symptom modules can be resolved in a manner understood by all parties rather than just the person with assigned responsibility -- and ranked by overall significance to mission accomplishment. Ranking is based on the three basic parameters of cost, performance, and schedule. This approach has successfully integrated many diverse specialties in situations like worldwide terrorism control, coal mining safety, medical malpractice risk, grain elevator explosion prevention, offshore drilling hazards, and criminal justice resource allocation -- all of which would have otherwise been subject to "squeaky wheel" emphasis and support of decision-makers.

  17. Mathematic simulation of soil-vegetation condition and land use structure applying basin approach

    NASA Astrophysics Data System (ADS)

    Mishchenko, Natalia; Shirkin, Leonid; Krasnoshchekov, Alexey

    2016-04-01

    Anthropogenic transformation of ecosystems is basically connected to changes in land use structure and human impact on soil fertility. The research objective is to simulate the stationary state of river basin ecosystems. Materials and Methods. A basin approach has been applied in the research. The basins of small rivers within the Klyazma river basin, situated in the central part of the Russian Plain, were chosen as the research objects. The analysis is carried out using integrated characteristics of ecosystem functioning and mathematical simulation methods. To design the simulator, functional simulation methods and principles based on regression, correlation and factor analysis were applied. Results. The simulation defined the possible stationary states of the "phytocenosis-soil" system in coordinates of phytomass, phytoproductivity, and humus percentage in soil. Ecosystem productivity is determined not only by vegetation photosynthesis activity but also by the area ratio of forest and meadow phytocenosis. Local maxima associated with certain values of phytomass and soil humus content were identified on the basin phytoproductivity distribution diagram. We explain these local maxima by a synergetic effect that appears at a definite ratio of forest and meadow phytocenosis: the maximum phytomass values for the whole area are higher than the sum of the maximum phytomass values of the forest and meadow phytocenosis taken separately. An efficient ratio of natural forest and meadow phytocenosis has been defined for the Klyazma river basin. Conclusion. Mathematical simulation methods assist in forecasting ecosystem conditions under various changes of land use structure. The overgrowing of abandoned agricultural lands is currently a pressing issue in the Russian Federation. Simulation results demonstrate that the natural ratio of forest and meadow phytocenosis will be restored as abandoned agricultural land reverts.

  18. A Graph-Based Ant Colony Optimization Approach for Process Planning

    PubMed Central

    Wang, JinFeng; Fan, XiaoLiang; Wan, Shuting

    2014-01-01

    The complex process planning problem is modeled as a combinatorial optimization problem with constraints in this paper. An ant colony optimization (ACO) approach has been developed to deal with the process planning problem by simultaneously considering activities such as sequencing operations, selecting manufacturing resources, and determining setup plans to achieve the optimal process plan. A weighted directed graph is constructed to describe the operations, the precedence constraints between operations, and the possible paths between operation nodes. A representation of the process plan is described based on the weighted directed graph. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPC). Two cases have been carried out to study the influence of various ACO parameters on system performance. Extensive comparative experiments have been conducted to demonstrate the feasibility and efficiency of the proposed approach. PMID:24995355
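
    A toy version of the graph-based ACO: ants walk a small weighted DAG whose edge weights stand in for production costs, with pheromone evaporation and reinforcement of the incumbent best path (graph and parameters are illustrative):

        import numpy as np

        INF = np.inf
        # Edge weights between five operation nodes; node 0 starts, node 4 ends.
        W = np.array([
            [INF, 2.0, 5.0, INF, INF],
            [INF, INF, 2.0, 4.0, INF],
            [INF, INF, INF, 1.0, 6.0],
            [INF, INF, INF, INF, 3.0],
            [INF, INF, INF, INF, INF],
        ])

        def aco_path(W, start=0, goal=4, n_ants=20, iters=50, rho=0.1, seed=1):
            rng = np.random.default_rng(seed)
            n = len(W)
            tau = np.ones((n, n))                                  # pheromone
            eta = np.where(np.isfinite(W), 1.0 / np.maximum(W, 1e-9), 0.0)
            best_path, best_cost = None, INF
            for _ in range(iters):
                for _ in range(n_ants):
                    path, node, cost = [start], start, 0.0
                    while node != goal:
                        p = tau[node] * eta[node]**2.0             # edge appeal
                        if p.sum() == 0:
                            cost = INF
                            break
                        nxt = rng.choice(n, p=p / p.sum())
                        cost += W[node, nxt]
                        path.append(nxt)
                        node = nxt
                    if cost < best_cost:
                        best_path, best_cost = path, cost
                tau *= (1 - rho)                                   # evaporation
                if best_path:
                    for a, b in zip(best_path, best_path[1:]):
                        tau[a, b] += 1.0 / best_cost               # reinforce
            return best_path, best_cost

        print(aco_path(W))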

  19. Next-Generation Mitogenomics: A Comparison of Approaches Applied to Caecilian Amphibian Phylogeny.

    PubMed

    Maddock, Simon T; Briscoe, Andrew G; Wilkinson, Mark; Waeschenbach, Andrea; San Mauro, Diego; Day, Julia J; Littlewood, D Tim J; Foster, Peter G; Nussbaum, Ronald A; Gower, David J

    2016-01-01

    Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to compellingly resolve all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case. PMID:27280454

  2. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    PubMed Central

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of GAs is well known to the scientific community, and thanks to their easy application and good performance, GAs are the focus of many research works annually. Although there have been many studies analyzing various concepts of GAs, few studies in the literature objectively analyze the influence of using blind crossover operators for combinatorial optimization problems. For this reason, this paper conducts an in-depth study of their influence. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test. PMID:25165731
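
    The contrast can be reproduced in miniature on a toy combinatorial problem (one-max), pitting a GA with one-point crossover against a mutation-only EA; this is an illustration under assumed parameters, not the paper's nine-technique benchmark:

        import numpy as np

        rng = np.random.default_rng(42)
        N, L, GENS = 60, 40, 200     # population size, bitstring length, generations

        def evolve(use_crossover, pm=0.02):
            pop = rng.integers(0, 2, (N, L))
            for _ in range(GENS):
                fit = pop.sum(axis=1)                       # one-max fitness
                # Binary tournament selection.
                i, j = rng.integers(0, N, (2, N))
                parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
                if use_crossover:
                    cut = rng.integers(1, L, N // 2)
                    for k, c in enumerate(cut):             # one-point crossover
                        a, b = parents[2 * k].copy(), parents[2 * k + 1].copy()
                        parents[2 * k, c:], parents[2 * k + 1, c:] = b[c:], a[c:]
                flip = rng.random((N, L)) < pm              # bit-flip mutation
                pop = np.where(flip, 1 - parents, parents)
            return pop.sum(axis=1).max()

        print("GA with crossover:", evolve(True))
        print("mutation-only EA: ", evolve(False))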

  3. A multi-label, semi-supervised classification approach applied to personality prediction in social media.

    PubMed

    Lima, Ana Carolina E S; de Castro, Leandro Nunes

    2014-10-01

    Social media allow web users to create and share content pertaining to different subjects, exposing their activities, opinions, feelings and thoughts. In this context, online social media has attracted the interest of data scientists seeking to understand behaviours and trends, whilst collecting statistics for social sites. One potential application for these data is personality prediction, which aims to understand a user's behaviour within social media. Traditional personality prediction relies on users' profiles, their status updates, the messages they post, etc. Here, a personality prediction system for social media data is introduced that differs from most approaches in the literature, in that it works with groups of texts, instead of single texts, and does not take users' profiles into account. Also, the proposed approach extracts meta-attributes from texts and does not work directly with the content of the messages. The set of possible personality traits is taken from the Big Five model and allows the problem to be characterised as a multi-label classification task. The problem is then transformed into a set of five binary classification problems and solved by means of a semi-supervised learning approach, due to the difficulty in annotating the massive amounts of data generated in social media. In our implementation, the proposed system was trained with three well-known machine-learning algorithms, namely a Naïve Bayes classifier, a Support Vector Machine, and a Multilayer Perceptron neural network. The system was applied to predict personality from tweets taken from three datasets available in the literature, resulting in approximately 83% prediction accuracy, with some of the personality traits presenting better individual classification rates than others. PMID:24969690
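
    The binary-relevance transformation plus semi-supervised training can be sketched with scikit-learn's self-training wrapper (one classifier per trait, Naive Bayes as the base learner, as one of the paper's three algorithms); the meta-attribute matrix and hidden-label fraction below are synthetic stand-ins:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.semi_supervised import SelfTrainingClassifier

        rng = np.random.default_rng(0)
        # Stand-in meta-attribute matrix: 500 users x 15 text meta-features,
        # with the Big Five traits as 5 binary labels; most labels unknown (-1).
        X = rng.normal(size=(500, 15))
        Y = (X[:, :5] + 0.3 * rng.normal(size=(500, 5)) > 0).astype(int)
        mask = rng.random((500, 5)) < 0.8        # hide 80% of the labels
        Y_semi = np.where(mask, -1, Y)

        # Binary relevance: one semi-supervised classifier per personality trait.
        models = []
        for t in range(5):
            clf = SelfTrainingClassifier(GaussianNB(), threshold=0.8)
            clf.fit(X, Y_semi[:, t])
            models.append(clf)

        pred = np.column_stack([m.predict(X) for m in models])
        print("per-trait accuracy:", (pred == Y).mean(axis=0).round(3))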

  4. Distribution function approach to redshift space distortions. Part IV: perturbation theory applied to dark matter

    SciTech Connect

    Vlah, Zvonimir; Seljak, Uroš; Baldauf, Tobias; McDonald, Patrick; Okumura, Teppei E-mail: seljak@physik.uzh.ch E-mail: teppei@ewha.ac.kr

    2012-11-01

    We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moments correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.

  6. Applying the spatial mapping approach to 231Pa/230Th as an overturning proxy

    NASA Astrophysics Data System (ADS)

    Bradtmiller, L. I.; McManus, J. F.; Robinson, L. F.

    2008-12-01

    The 231Pa/230Th ratio in deep-sea sediments has been developed over the last decade as a proxy for the rate of Atlantic meridional overturning circulation (AMOC). The proxy is based on the known ratio of 231Pa and 230Th production by uranium decay in the ocean, and on the different rates of removal to the sediment of the two isotopes. North Atlantic climate and AMOC are believed to be closely related, and so the 231Pa/230Th proxy has most often been applied to North Atlantic sediments over the past glacial cycle, particularly during periods of abrupt climate change such as the Heinrich 1 (H1) iceberg discharge event. Recent studies have used high-resolution downcore records to interpret AMOC circulation at a single location. Although powerful, this approach cannot always rule out local changes in sediment composition, particle rain rate or other factors influencing the 231Pa/230Th ratio, and therefore may not necessarily reflect the mean behavior of AMOC. Here we combine new and existing 231Pa/230Th data from the Atlantic basin to apply the spatial mapping approach to the 231Pa/230Th proxy. Instead of attempting to reconstruct AMOC at a single site, we use weighted averages of spatially distributed data from the last glacial maximum, H1 and the Holocene in an attempt to examine these three key time periods with respect to the average behavior of the AMOC. This approach greatly decreases the likelihood that the results are biased by variations in factors other than the AMOC, allowing us to examine 231Pa/230Th through time as well as in three-dimensional space. Compilation of existing data highlights key gaps in the spatial coverage and is complicated by the challenge of identifying H1 in all cores. Nevertheless, we are able to determine broad spatial patterns and calculate 231Pa budgets where suitable data exist. We show that the minimum net export of 231Pa from the North Atlantic by the AMOC occurred during relatively brief intervals such as H1

  7. Biologically optimized helium ion plans: calculation approach and its in vitro validation

    NASA Astrophysics Data System (ADS)

    Mairani, A.; Dokic, I.; Magro, G.; Tessonnier, T.; Kamp, F.; Carlson, D. J.; Ciocca, M.; Cerutti, F.; Sala, P. R.; Ferrari, A.; Böhlen, T. T.; Jäkel, O.; Parodi, K.; Debus, J.; Abdollahi, A.; Haberer, T.

    2016-06-01

    Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μΔS) between model predictions and experimental data was 5.3% ± 0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μΔS values for the LEM and the RMF model were, respectively, 4.5% ± 0.8% and 5.8% ± 1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.
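
    Underlying such survival predictions is the linear-quadratic (LQ) model, S(D) = exp(-αD - βD²). A sketch using the quoted (α/β)ph = 5.4 Gy; the absolute α values and the rbe() helper are assumptions for illustration, not the paper's model chain:

        import numpy as np

        # Photon LQ parameters: (alpha/beta)_ph = 5.4 Gy as quoted for the cell
        # line; the absolute alpha_ph value itself is an assumption here.
        alpha_ph = 0.10                      # Gy^-1 (assumed)
        beta_ph = alpha_ph / 5.4             # Gy^-2

        def survival(dose, alpha, beta):
            # Linear-quadratic cell survival S(D) = exp(-aD - bD^2).
            return np.exp(-alpha * dose - beta * dose**2)

        def rbe(ion_dose, alpha_ion, beta_ion):
            # RBE = D_photon / D_ion at equal survival (iso-effect dose ratio).
            s = survival(ion_dose, alpha_ion, beta_ion)
            d_ph = (-alpha_ph + np.sqrt(alpha_ph**2 - 4 * beta_ph * np.log(s))) \
                   / (2 * beta_ph)
            return d_ph / ion_dose

        # Assumed helium-like radiosensitivity values, for illustration only.
        print("RBE at 2 Gy:", round(rbe(2.0, alpha_ion=0.18, beta_ion=0.02), 2))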

  8. Optimizing water supply and hydropower reservoir operation rule curves: An imperialist competitive algorithm approach

    NASA Astrophysics Data System (ADS)

    Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz

    2015-09-01

    Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing a single objective or multiple objectives. Reservoir operating rules stipulate the actions that should be taken under the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. Its performance is tested by deriving an operating rule for the Dez reservoir in Iran. The proposed modelling scheme converged efficiently to near-optimal solutions in the case examples. It was shown that the proposed optimum piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.
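
    A generic sketch of the ICA search loop (assimilation toward imperialists plus occasional revolution), run here on a simple test function instead of the authors' reservoir simulation; all parameter values are illustrative:

        import numpy as np

        def ica_minimize(f, dim, n_countries=50, n_imp=5, iters=200, seed=0):
            """Minimal imperialist competitive algorithm sketch; f is the
            simulation-based cost to minimize."""
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5, 5, (n_countries, dim))
            for _ in range(iters):
                cost = np.apply_along_axis(f, 1, X)
                order = np.argsort(cost)
                X = X[order]                           # best countries first
                imps, cols = X[:n_imp], X[n_imp:]
                # Assimilation: each colony steps toward its imperialist.
                owner = rng.integers(0, n_imp, len(cols))
                beta = 2.0 * rng.random((len(cols), 1))
                cols = cols + beta * (imps[owner] - cols)
                # Revolution: random restart for a small fraction of colonies.
                rev = rng.random(len(cols)) < 0.1
                cols[rev] = rng.uniform(-5, 5, (rev.sum(), dim))
                X = np.vstack([imps, cols])
            cost = np.apply_along_axis(f, 1, X)
            return X[np.argmin(cost)], cost.min()

        sphere = lambda x: float(np.sum(x**2))
        print(ica_minimize(sphere, dim=4))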

  9. Completeness - II. A signal-to-noise ratio approach for completeness estimators applied to galaxy magnitude-redshift surveys

    NASA Astrophysics Data System (ADS)

    Teodoro, Luís; Johnston, Russell; Hendry, Martin

    2010-06-01

    This is the second paper in our completeness series, which addresses some of the issues raised in the previous article by Johnston, Teodoro & Hendry, in which we developed statistical tests for assessing the completeness in apparent magnitude of magnitude-redshift surveys defined by two flux limits. The statistics, Tc and Tv, associated with these tests are non-parametric and defined in terms of the observed cumulative distribution function of sources; they represent powerful tools for identifying the true flux limit and/or characterizing systematic errors in magnitude-redshift data. In this paper, we present a new approach to constructing these estimators that resembles an `adaptive smoothing' procedure - i.e. by seeking to allocate the same amount of information, as measured by the signal-to-noise ratio (S/N), to each galaxy. For consistency with our previous work, we apply our improved estimators to the Millennium Galaxy Catalogue and the 2dF Galaxy Redshift Survey data, and demonstrate that one needs to use an S/N appropriately tailored for each individual catalogue to optimize the performance of the completeness estimators. Furthermore, unless such an adaptive procedure is employed, the assessment of completeness may result in a spurious outcome if one uses other estimators present in the literature which have not been designed taking into account `shot-noise' due to sampling.

  10. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules, derived from operator and planner experience, with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.

  11. An optimization approach for design of RC beams subjected to flexural and shear effects

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2013-10-01

    A random search technique (RST) is proposed for the optimum design of reinforced concrete (RC) beams with minimum material cost. Cross-sectional dimensions and reinforcement bars are optimized for different flexural moments and shear forces. The optimization of reinforcement bars covers the number and diameter of longitudinal bars for flexural moments; stirrup reinforcement is also designed for shear forces. The optimization is performed according to the design procedure given in ACI 318 (Building Code Requirements for Structural Concrete). The approach is effective for the detailed design of RC beams, ensuring that safety and application conditions are met.
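
    The RST idea can be sketched as plain random sampling over section dimensions and bar layouts, keeping the cheapest design that passes a capacity check; the cost model and the single flexural check below are simplified stand-ins for the full ACI 318 procedure, with all coefficients assumed for illustration:

        import numpy as np

        rng = np.random.default_rng(7)

        def beam_cost(b, h, n_bars, bar_d):
            # Stand-in cost per metre: concrete volume cost + steel weight cost.
            conc = 100.0 * (b * h)                                  # b, h in m
            steel = 7850 * n_bars * np.pi * (bar_d / 2)**2 * 1.2    # ~$1.2/kg
            return conc + steel

        def capacity_ok(b, h, n_bars, bar_d, Mu=150e3):
            # Hypothetical flexural check in place of the ACI 318 equations.
            As = n_bars * np.pi * (bar_d / 2)**2
            d = h - 0.05                                 # effective depth, m
            Mn = 0.9 * As * 420e6 * 0.9 * d              # simplified phi*As*fy*jd
            return Mn >= Mu

        best = (np.inf, None)
        for _ in range(20000):                           # plain random search
            b = rng.uniform(0.25, 0.40)
            h = rng.uniform(0.35, 0.80)
            n_bars = rng.integers(2, 7)
            bar_d = rng.choice([0.012, 0.016, 0.020, 0.025])
            if capacity_ok(b, h, n_bars, bar_d):
                c = beam_cost(b, h, n_bars, bar_d)
                if c < best[0]:
                    best = (c, (round(b, 3), round(h, 3), int(n_bars), bar_d))
        print(best)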

  12. A Novel Hybridization of Applied Mathematical, Operations Research and Risk-based Methods to Achieve an Optimal Solution to a Challenging Subsurface Contamination Problem

    NASA Astrophysics Data System (ADS)

    Johnson, K. D.; Pinder, G. F.

    2013-12-01

    The objective of the project is the creation of a new, computationally based approach to the collection, evaluation and use of data for the purpose of determining optimal strategies for investment in the remediation of contaminant source areas and similar environmental problems. The research focuses on the use of existing mathematical tools assembled in a unique fashion. The area of application of this new capability is optimal (least-cost) groundwater contamination source identification; we wish to identify the physical environments wherein it may be cost-prohibitive to identify a contaminant source, determine the optimal strategy to protect the environment from additional insult, and formulate strategies for cost-effective environmental restoration. The computational underpinnings of the proposed approach encompass the integration of several known applied-mathematical tools into a unique whole. The resulting tool integration achieves the following: 1) simulate groundwater flow and contaminant transport under uncertainty, that is, when physical parameters such as hydraulic conductivity are known to be described by a random field; 2) define such a random field from available field data, or provide insight into the sampling strategy needed to create such a field; 3) incorporate subjective information, such as the opinions of experts on the importance of factors such as locations of waste landfills; 4) optimize a search strategy for finding a potential source location and optimally combine field information with model results to provide the best possible representation of the mean contaminant field and its geostatistics. Our approach combines in a symbiotic manner methodologies found in numerical simulation, random field analysis, Kalman filtering, fuzzy set theory and search theory. Testing the algorithm for this stage of the work, we will focus on fabricated field situations wherein we can a priori specify the degree of uncertainty associated with the

  13. A Kriging surrogate model coupled in simulation-optimization approach for identifying release history of groundwater sources.

    PubMed

    Zhao, Ying; Lu, Wenxi; Xiao, Chuanning

    2016-01-01

    As the incidence frequency of groundwater pollution increases, many methods that identify the source characteristics of pollutants are being developed. In this study, a simulation-optimization approach was applied to determine the duration and magnitude of pollutant sources. Such problems are time consuming because thousands of simulation model runs are required by the optimization model. To address this challenge, the Kriging surrogate model was proposed to increase computational efficiency. The accuracy, time consumption, and robustness of the Kriging model were tested on both homogeneous and non-uniform media, as well as steady-state and transient flow and transport conditions. The results of three hypothetical cases demonstrate that the Kriging model is able to solve groundwater contaminant source identification problems, of the kind encountered at field sites, with a high degree of accuracy and short computation times, and is thus very robust.
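
    A sketch of the surrogate idea, assuming a toy source model in place of the groundwater flow-and-transport simulator: fit a Gaussian-process (Kriging) model to the simulation misfit on a modest design, then search the cheap surrogate instead of the simulator:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def simulator(src):
            """Stand-in for the transport model: maps a (timing, magnitude)
            source description to an observed concentration signature."""
            t = np.linspace(0, 1, 8)
            return src[1] * np.exp(-((t - src[0])**2) / 0.02)

        rng = np.random.default_rng(3)
        true_src = np.array([0.6, 4.0])
        observed = simulator(true_src)

        # Train the Kriging (GP) surrogate of the misfit on a modest design.
        X = rng.uniform([0, 1], [1, 6], size=(60, 2))
        y = [np.sum((simulator(x) - observed)**2) for x in X]
        gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.1, 1.0]),
                                      normalize_y=True).fit(X, y)

        # Optimize over the cheap surrogate instead of the simulator.
        cand = rng.uniform([0, 1], [1, 6], size=(5000, 2))
        best = cand[np.argmin(gp.predict(cand))]
        print("true:", true_src, "estimated:", best.round(2))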

  14. Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Olds, John; Way, David

    2001-01-01

    ...a trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.

  15. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study and assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study, based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma, was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results for all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space, rather than a single given condition, for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.

  16. A scalar optimization approach for averaged Hausdorff approximations of the Pareto front

    NASA Astrophysics Data System (ADS)

    Schütze, Oliver; Domínguez-Medina, Christian; Cruz-Cortés, Nareli; Gerardo de la Fraga, Luis; Sun, Jian-Qiao; Toscano, Gregorio; Landa, Ricardo

    2016-09-01

    This article presents a novel method to compute averaged Hausdorff (Δp) approximations of the Pareto fronts of multi-objective optimization problems. The underlying idea is to utilize directly the scalar optimization problem that is induced by the Δp performance indicator. This method can be viewed as a certain set based scalarization approach and can be addressed both by mathematical programming techniques and evolutionary algorithms (EAs). In this work, the focus is on the latter, where a first single objective EA for such Δp approximations is proposed. Finally, the strength of the novel approach is demonstrated on some bi-objective benchmark problems with different shapes of the Pareto front.
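
    The indicator combines power means of point-to-set distances (the GD_p and IGD_p terms); a direct implementation on a toy bi-objective front, assuming the usual Δp definition as the maximum of the two terms:

        import numpy as np

        def delta_p(A, B, p=2):
            """Averaged Hausdorff distance Delta_p between finite sets A
            (approximation archive) and B (Pareto front reference)."""
            D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            gd = (np.mean(D.min(axis=1)**p))**(1.0 / p)    # GD_p term
            igd = (np.mean(D.min(axis=0)**p))**(1.0 / p)   # IGD_p term
            return max(gd, igd)

        # Toy bi-objective front f2 = 1 - sqrt(f1) and a 5-point archive.
        f1 = np.linspace(0, 1, 200)
        front = np.column_stack([f1, 1 - np.sqrt(f1)])
        a1 = np.linspace(0, 1, 5)
        archive = np.column_stack([a1, 1 - np.sqrt(a1)])
        print("Delta_p:", round(delta_p(archive, front), 4))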

  17. A generalised chemical precipitation modelling approach in wastewater treatment applied to calcite.

    PubMed

    Mbamba, Christian Kazadi; Batstone, Damien J; Flores-Alsina, Xavier; Tait, Stephan

    2015-01-01

    Process simulation models used across the wastewater industry have inherent limitations due to over-simplistic descriptions of important physico-chemical reactions, especially for mineral solids precipitation. As part of the efforts towards a larger Generalized Physicochemical Modelling Framework, the present study aims to identify a broadly applicable precipitation modelling approach. The study uses two experimental platforms applied to calcite precipitating from synthetic aqueous solutions to identify and validate the model approach. Firstly, dynamic pH titration tests are performed to define the baseline model approach. Constant Composition Method (CCM) experiments are then used to examine influence of environmental factors on the baseline approach. Results show that the baseline model should include precipitation kinetics (not be quasi-equilibrium), should include a 1st order effect of the mineral particulate state (Xcryst) and, for calcite, have a 2nd order dependency (exponent n = 2.05 ± 0.29) on thermodynamic supersaturation (σ). Parameter analysis indicated that the model was more tolerant to a fast kinetic coefficient (kcryst) and so, in general, it is recommended that a large kcryst value be nominally selected where insufficient process data is available. Zero seed (self nucleating) conditions were effectively represented by including arbitrarily small amounts of mineral phase in the initial conditions. Both of these aspects are important for wastewater modelling, where knowledge of kinetic coefficients is usually not available, and it is typically uncertain which precipitates are actually present. The CCM experiments confirmed the baseline model, particularly the dependency on supersaturation. Temperature was also identified as an influential factor that should be corrected for via an Arrhenius-style correction of kcryst. The influence of magnesium (a common and representative added impurity) on kcryst was found to be significant but was considered
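
    The baseline model amounts to a rate law r = kcryst · Xcryst · σ^n with an Arrhenius-style temperature correction of kcryst; a sketch integrating it for a closed batch, where the activation energy and the simplified supersaturation expression are assumptions for illustration:

        import numpy as np
        from scipy.integrate import solve_ivp

        k25, n, Ea = 5.0, 2.05, 40e3     # rate coeff at 25 C, order, Ea (assumed, J/mol)
        R = 8.314

        def k_cryst(T_kelvin):
            # Arrhenius-style temperature correction of the kinetic coefficient.
            return k25 * np.exp(-Ea / R * (1.0 / T_kelvin - 1.0 / 298.15))

        def rhs(t, y, T=298.15):
            S, X = y                         # S: dissolved pool, X: calcite mass
            sigma = max(S - 1.0, 0.0)        # stand-in supersaturation measure
            rate = k_cryst(T) * X * sigma**n # 1st order in Xcryst, nth in sigma
            return [-rate, rate]

        # "Zero seed" runs start from an arbitrarily small mineral amount.
        sol = solve_ivp(rhs, (0, 10), [2.0, 1e-6])
        print("final dissolved pool and mineral mass:", sol.y[:, -1])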

  19. Optimal design of experiments applied to headspace solid phase microextraction for the quantification of vicinal diketones in beer through gas chromatography-mass spectrometric detection.

    PubMed

    Leça, João M; Pereira, Ana C; Vieira, Ana C; Reis, Marco S; Marques, José C

    2015-08-01

    Vicinal diketones, namely diacetyl (DC) and pentanedione (PN), are compounds naturally found in beer that play a key role in the definition of its aroma. In lager beer, they are responsible for off-flavors (buttery flavor), and therefore their presence and quantification are of paramount importance to beer producers. Aiming at developing an accurate quantitative monitoring scheme to follow these off-flavor compounds during beer production and in the final product, the headspace solid-phase microextraction (HS-SPME) analytical procedure was tuned through experiments planned in an optimal way, and the final settings were fully validated. Optimal design of experiments (O-DOE) is a computational, statistically oriented approach for designing experiments that are most informative according to a well-defined criterion. This methodology was applied to HS-SPME optimization, leading to the following optimal extraction conditions for the quantification of VDK: use of a CAR/PDMS fiber, 5 mL of sample in a 20 mL vial, and 5 min of pre-incubation time followed by 25 min of extraction at 30 °C, with agitation. The validation of the final analytical methodology was performed using a matrix-matched calibration, in order to minimize matrix effects. The following key features were obtained: linearity (R(2) > 0.999, both for diacetyl and 2,3-pentanedione), high sensitivity (LOD of 0.92 μg L(-1) and 2.80 μg L(-1), and LOQ of 3.30 μg L(-1) and 10.01 μg L(-1), for diacetyl and 2,3-pentanedione, respectively), recoveries of approximately 100%, and suitable precision (repeatability and reproducibility lower than 3% and 7.5%, respectively). The applicability of the methodology was fully confirmed through an independent analysis of several beer samples, with analyte concentrations ranging from 4 to 200 μg L(-1). PMID:26320791
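
    The sensitivity figures quoted follow the standard calibration-statistics formulas LOD = 3.3·s_y/slope and LOQ = 10·s_y/slope; a sketch on hypothetical matrix-matched calibration data (values invented for illustration):

        import numpy as np

        # Hypothetical matrix-matched calibration for diacetyl (conc in ug/L).
        conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)
        area = np.array([0.052, 0.101, 0.254, 0.507, 1.010, 2.022])  # peak-area ratios

        slope, intercept = np.polyfit(conc, area, 1)
        resid = area - (slope * conc + intercept)
        s_y = np.sqrt(np.sum(resid**2) / (len(conc) - 2))   # residual std deviation

        lod = 3.3 * s_y / slope     # ICH-style LOD from calibration statistics
        loq = 10.0 * s_y / slope
        print(f"slope={slope:.5f}, LOD={lod:.2f} ug/L, LOQ={loq:.2f} ug/L")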

  1. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
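
    The max-min conversion of a fuzzy multi-objective LP into a crisp LP can be sketched compactly. A minimal example with invented objectives, targets and constraints, and scipy's linprog standing in for the authors' SLP/IPM solver: maximize the overall satisfaction level λ subject to each linear membership function staying above λ and to the hard constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Two toy linear objectives f_i(x) = A_obj[i] @ x (to be minimized), each
# fuzzified with a target T_i (membership 1) and a worst acceptable value
# U_i (membership 0). All numbers are assumptions for illustration.
A_obj = np.array([[2.0, 1.0],     # f1, e.g. a loss-like objective
                  [1.0, 3.0]])    # f2, e.g. a reserve-shortfall objective
T = np.array([1.0, 1.0])
U = np.array([3.0, 4.0])

# Crisp equivalent: maximize lambda subject to
#   mu_i(x) = (U_i - f_i(x)) / (U_i - T_i) >= lambda
#   <=>  f_i(x) + lambda * (U_i - T_i) <= U_i,
# plus the hard constraint x1 + x2 >= 1. Decision vector z = [x1, x2, lam].
c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so -lambda
A_ub = np.vstack([np.column_stack([A_obj, U - T]),
                  [-1.0, -1.0, 0.0]])
b_ub = np.concatenate([U, [-1.0]])
bounds = [(0, None), (0, None), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1, x2, lam = res.x
print(f"x = ({x1:.3f}, {x2:.3f}), overall satisfaction lambda = {lam:.3f}")
```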

  2. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  4. A Monte Carlo approach applied to ultrasonic non-destructive testing

    NASA Astrophysics Data System (ADS)

    Mosca, I.; Bilgili, F.; Meier, T. M.; Sigloch, K.

    2011-12-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and engineering structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to analyze ultrasonic waveforms measured at the surface of Plexiglas and rock samples, and to define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former helped to benchmark the propagation of ultrasonic surface waves in typical materials tested with a non-destructive technique (e.g., marble, unweathered and weathered concrete and natural stone).
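
    A minimal sketch of the Monte Carlo ingredient, under heavy simplification: the forward model below is an invented smooth dispersion curve (the study solves the real elastic forward problem and samples with the Neighbourhood Algorithm rather than plain random search), but it shows how an ensemble of sampled models yields both a best estimate and a crude uncertainty analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_velocity(freqs, vs1, vs2):
    """Stand-in forward model: smooth transition of surface-wave phase
    velocity from the half-space value vs2 (low f) to the top-layer
    value vs1 (high f). A real study solves the dispersion equation."""
    w = 1.0 / (1.0 + (freqs / 50e3) ** 2)      # weight: deep vs. shallow
    return 0.92 * (w * vs2 + (1 - w) * vs1)

freqs = np.linspace(20e3, 200e3, 30)           # Hz, ultrasonic band
v_obs = phase_velocity(freqs, 1350.0, 1750.0)  # "observed" data
v_obs = v_obs + rng.normal(0.0, 10.0, v_obs.size)  # measurement noise

# Plain Monte Carlo search of the (vs1, vs2) parameter space; keeping
# the whole ensemble gives a crude picture of uncertainty/trade-offs.
samples = rng.uniform([1000.0, 1400.0], [1600.0, 2100.0], size=(20000, 2))
misfit = np.array([np.sqrt(np.mean((phase_velocity(freqs, a, b) - v_obs) ** 2))
                   for a, b in samples])

best = samples[misfit.argmin()]
good = samples[misfit < misfit.min() * 1.2]    # acceptable ensemble
print(f"best (vs1, vs2) = {best.round(1)} m/s")
print(f"spread: vs1 ±{good[:, 0].std():.0f}, vs2 ±{good[:, 1].std():.0f} m/s")
```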

  5. A Monte Carlo approach applied to ultrasonic non-destructive testing

    NASA Astrophysics Data System (ADS)

    Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.

    2012-04-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies of archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface

  6. A complex-valued neural dynamical optimization approach and its stability analysis.

    PubMed

    Zhang, Songchuan; Xia, Youshen; Zheng, Weixing

    2015-01-01

    In this paper, we propose a complex-valued neural dynamical method for solving a complex-valued nonlinear convex programming problem. Theoretically, we prove that the proposed complex-valued neural dynamical approach is globally stable and convergent to the optimal solution. The proposed neural dynamical approach significantly generalizes the real-valued nonlinear Lagrange network completely in the complex domain. Compared with existing real-valued neural networks and numerical optimization methods for solving complex-valued quadratic convex programming problems, the proposed complex-valued neural dynamical approach can avoid redundant computation in a double real-valued space and thus has a low model complexity and storage capacity. Numerical simulations are presented to show the effectiveness of the proposed complex-valued neural dynamical approach.

  7. An analytical approach to the problem of inverse optimization with additive objective functions: an application to human prehension

    PubMed Central

    Pesin, Yakov B.; Niu, Xun; Latash, Mark L.

    2010-01-01

    We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science. The problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms. PMID:19902213
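
    The tractability of additive objectives under a linear constraint can be made explicit with generic symbols (our illustration, not the authors' estimated coefficients): a quadratic objective with non-zero linear terms, minimized over forces that must sum to a prescribed total, gives each force in closed form through a single Lagrange multiplier.

```latex
% Additive quadratic objective with a linear (total force) constraint
\min_{f}\; J(f)=\sum_{i=1}^{n}\bigl(a_i f_i^{2}+b_i f_i\bigr)
\qquad \text{s.t.}\qquad \sum_{i=1}^{n} f_i = F,\quad a_i>0 .
% Stationarity of the Lagrangian L = J + \lambda\,(F-\sum_i f_i):
2a_i f_i + b_i - \lambda = 0
\;\Longrightarrow\;
f_i^{*}=\frac{\lambda-b_i}{2a_i},
\qquad
\lambda=\frac{F+\sum_{j} b_j/(2a_j)}{\sum_{j} 1/(2a_j)} .
```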

  8. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without making any distributional assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from the uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable characteristic in its methodology, i.e. applying clustering to data of predictors upon which catchment behaviour information is encapsulated, contributes to increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
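
    Two of the cited validation measures are simple enough to state exactly; a minimal sketch with hypothetical observations and 90% interval bounds (PICP should be close to the nominal level, and a smaller MPI is sharper at equal coverage):

```python
import numpy as np

def picp(obs, lower, upper):
    """Prediction Interval Coverage Probability: share of observations
    falling inside their prediction interval (ideally close to the
    nominal confidence level, e.g. 0.90 for 90% intervals)."""
    return ((obs >= lower) & (obs <= upper)).mean()

def mpi(lower, upper):
    """Mean Prediction Interval width: narrower is sharper, provided
    coverage stays adequate."""
    return (upper - lower).mean()

# Hypothetical 90% intervals for a short discharge series (m^3/s)
obs   = np.array([12.1, 14.8, 38.0, 55.2, 41.0, 18.3])
lower = np.array([10.0, 12.5, 24.0, 43.0, 33.5, 15.0])
upper = np.array([14.5, 17.5, 36.0, 66.0, 50.5, 21.5])

print(f"PICP = {picp(obs, lower, upper):.2f}, MPI = {mpi(lower, upper):.2f}")
```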

  9. Applying patient centered approach in management of pulmonary tuberculosis: A case report from Malaysia.

    PubMed

    Atif, M; Sulaiman, Sas; Shafi, Aa; Muttalif, Ar; Ali, I; Saleem, F

    2011-06-01

    A 24-year-old university student with a history of productive cough was registered as a sputum smear-confirmed case of pulmonary tuberculosis. During treatment, the patient suffered from itchiness associated with anti-tuberculosis drugs and was treated with chlorpheniramine (4 mg) tablets. The patient missed twenty eight doses of anti-tuberculosis drugs in the continuation phase, claiming that he was very busy with his studies and assignments. Upon questioning he further explained that he was quite healthy after five months and unable to concentrate on his studies after taking the prescribed medicines. His treatment was stopped based on clinical improvement, although he did not complete six months of therapy. Two major reasons, a false perception of being completely cured and side effects associated with anti-TB drugs, might be responsible for non-adherence. Non-sedating antihistamines like fexofenadine, cetirizine or loratadine should be preferred over first-generation antihistamines (chlorpheniramine) in patients with such a lifestyle. The patient had not completed the full course of chemotherapy, which is a preliminary requirement for a case to be classified as "cure" or "treatment completed". Moreover, the patient had not defaulted for two consecutive months. Therefore, according to WHO treatment outcome categories, this patient can neither be classified as "cure" or "treatment completed" nor as "defaulter". Further elaboration of WHO treatment outcome categories is required for adequate classification of patients with similar characteristics. The likelihood of non-adherence can be significantly reduced by applying the WHO-recommended "Patient Centered Approach" strategy. A close friend, classmate or family member can be selected as a treatment supporter to ensure adherence to treatment. PMID:24826012

  10. Old concepts, new molecules and current approaches applied to the bacterial nucleotide signalling field

    PubMed Central

    2016-01-01

    Signalling nucleotides are key molecules that help bacteria to rapidly coordinate cellular pathways and adapt to changes in their environment. During the past 10 years, the nucleotide signalling field has seen much excitement, as several new signalling nucleotides have been discovered in both eukaryotic and bacterial cells. The field has since advanced quickly, aided by the development of important tools such as the synthesis of modified nucleotides, which, combined with sensitive mass spectrometry methods, allowed for the rapid identification of specific receptor proteins, along with other novel genome-wide screening methods. In this review, we describe the principle concepts of nucleotide signalling networks and summarize the recent work that led to the discovery of the novel signalling nucleotides. We also highlight current approaches applied to the research in the field as well as resources and methodological advances aiding in a rapid identification of nucleotide-specific receptor proteins. This article is part of the themed issue ‘The new bacteriology’. PMID:27672152

  11. Applied tagmemics: A heuristic approach to the use of graphic aids in technical writing

    NASA Technical Reports Server (NTRS)

    Brownlee, P. P.; Kirtz, M. K.

    1981-01-01

    In technical report writing, two needs which must be met if reports are to be usable by an audience are the language needs and the technical needs of that particular audience. A heuristic analysis helps to decide the most suitable format for information; that is, whether the information should be presented verbally or visually. The report writing process should be seen as an organic whole which can be divided and subdivided according to the writer's purpose, but which always functions as a totality. The tagmemic heuristic, because it itself follows a process of deconstructing and reconstructing information, lends itself to being a useful approach to the teaching of technical writing. By applying the abstract questions this heuristic asks to specific parts of the report, the language and technical needs of the audience are analyzed: the viability of the solution is examined within the givens of the corporate structure, and the graphic or verbal format that will best suit the writer's purpose is chosen. By following such a method, answers which are both specific and thorough in their range of application are found.

  12. Distribution function approach to redshift space distortions. Part V: perturbation theory applied to dark matter halos

    SciTech Connect

    Vlah, Zvonimir; Seljak, Uroš; Okumura, Teppei; Desjacques, Vincent

    2013-10-01

    Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) together with an Eulerian halo biasing model and apply them to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need to have free FoG parameters in the model.

  14. Extraction of thermal Green's function using diffuse fields: a passive approach applied to thermography

    NASA Astrophysics Data System (ADS)

    Capriotti, Margherita; Sternini, Simone; Lanza di Scalea, Francesco; Mariani, Stefano

    2016-04-01

    In the field of non-destructive evaluation, defect detection and visualization can be performed exploiting different techniques relying either on an active or a passive approach. In this paper the passive technique is investigated due to its numerous advantages, and its application to thermography is explored. In previous works, it has been shown that it is possible to reconstruct the Green's function between any pair of points of a sensing grid by using noise originating from diffuse fields in acoustic environments. The extraction of the Green's function can be achieved by cross-correlating these recorded random waves. Averaging, filtering and the length of the measured signals play an important role in this process. This concept is here applied in an NDE perspective utilizing thermal fluctuations present on structural materials. Temperature variations interacting with the thermal properties of the specimen allow for the characterization of the material and its health condition. The exploitation of the thermographic image resolution as a dense grid of sensors constitutes the basic idea underlying passive thermography. Particular attention will be placed on the creation of a proper diffuse thermal field, studying the number, placement and excitation signal of the heat sources. Results from numerical simulations will be presented to assess the capabilities and performances of the passive thermal technique devoted to defect detection and imaging of structural components.
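
    The cross-correlation idea transfers directly from acoustics; a minimal numerical sketch with an assumed impulse response (not the paper's thermal model): stacking cross-correlations of diffuse random noise recorded at two points recovers the arrival structure of the Green's function between them.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1024, 100

# Toy medium: the (unknown) impulse response between points A and B is a
# delayed, decaying arrival; the diffuse field is mimicked by many random
# sources seen directly at A and through this response at B.
delay, n_ir = 60, 200
ir = np.zeros(n_ir)
ir[delay:] = np.exp(-0.03 * np.arange(n_ir - delay))   # assumed G_AB

acc = np.zeros(2 * n - 1)
for _ in range(trials):
    src = rng.normal(size=n)                  # random noise source
    rec_a = src                               # A records the source field
    rec_b = np.convolve(src, ir)[:n]          # B records the propagated field
    acc += np.correlate(rec_b, rec_a, mode="full")  # cross-correlate, stack

xcorr = acc / trials
lags = np.arange(-(n - 1), n)
causal = xcorr[lags >= 0][:n_ir]              # causal part ~ G_AB
print("true arrival lag:", delay,
      "| recovered peak lag:", int(np.argmax(causal)))
```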

  15. Optimal Surface Segmentation in Volumetric Images—A Graph-Theoretic Approach

    PubMed Central

    Li, Kang; Wu, Xiaodong; Chen, Danny Z.; Sonka, Milan

    2008-01-01

    Efficient segmentation of globally optimal surfaces representing object boundaries in volumetric data sets is important and challenging in many medical image analysis applications. We have developed an optimal surface detection method capable of simultaneously detecting multiple interacting surfaces, in which the optimality is controlled by the cost functions designed for individual surfaces and by several geometric constraints defining the surface smoothness and interrelations. The method solves the surface segmentation problem by transforming it into computing a minimum s-t cut in a derived arc-weighted directed graph. The proposed algorithm has a low-order polynomial time complexity and is computationally efficient. It has been extensively validated on more than 300 computer-synthetic volumetric images, 72 CT-scanned data sets of different-sized plexiglas tubes, and tens of medical images spanning various imaging modalities. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional image segmentation. PMID:16402624
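
    The authors' method computes a minimum s-t cut in a derived arc-weighted directed graph. As a simplified stand-in (not their graph construction), the single-surface 2D case with a hard smoothness constraint can be solved by dynamic programming, which conveys the same "minimum-cost surface, one voxel per column" idea:

```python
import numpy as np

def detect_surface(cost, smooth=1):
    """Minimum-cost one-voxel-per-column surface through a 2D cost image,
    with |height change| <= smooth between neighbouring columns. A DP
    stand-in for the (higher-dimensional) graph-cut formulation."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for x in range(1, cols):
        for y in range(rows):
            lo, hi = max(0, y - smooth), min(rows, y + smooth + 1)
            prev = acc[lo:hi, x - 1]
            k = int(np.argmin(prev))
            acc[y, x] += prev[k]
            back[y, x] = lo + k
    # Trace the optimal surface back from the cheapest final node
    surf = np.empty(cols, dtype=int)
    surf[-1] = int(np.argmin(acc[:, -1]))
    for x in range(cols - 1, 0, -1):
        surf[x - 1] = back[surf[x], x]
    return surf

# Synthetic edge-cost image: low cost along a gently varying boundary
rows, cols = 20, 12
truth = (8 + np.round(2 * np.sin(np.arange(cols) / 3))).astype(int)
cost = np.ones((rows, cols))
cost[truth, np.arange(cols)] = 0.1
print("recovered:", detect_surface(cost, smooth=1))
print("truth:    ", truth)
```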

  16. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.

  17. Optimal design of sewer networks using cellular automata-based hybrid methods: Discrete and continuous approaches

    NASA Astrophysics Data System (ADS)

    Afshar, M. H.; Rohani, M.

    2012-01-01

    In this article, cellular automata-based hybrid methods are proposed for the optimal design of sewer networks and their performance is compared with some of the common heuristic search methods. The problem of optimal design of sewer networks is first decomposed into two sub-optimization problems which are solved iteratively in a two-stage manner. In the first stage, the pipe diameters of the network are assumed fixed and the nodal cover depths of the network are determined by solving a nonlinear sub-optimization problem. A cellular automata (CA) method is used for the solution of the optimization problem with the network nodes considered as the cells and their cover depths as the cell states. In the second stage, the nodal cover depths calculated from the first stage are fixed and the pipe diameters are calculated by solving a second nonlinear sub-optimization problem. Once again a CA method is used to solve the optimization problem of the second stage with the pipes considered as the CA cells and their corresponding diameters as the cell states. Two different updating rules are derived and used for the CA of the second stage depending on the treatment of the pipe diameters. In the continuous approach, the pipe diameters are considered as continuous variables and the corresponding updating rule is derived mathematically from the original objective function of the problem. In the discrete approach, however, an ad hoc updating rule is derived and used, taking into account the discrete nature of the pipe diameters. The proposed methods are used to optimally solve two sewer network problems and the results are presented and compared with those obtained by other methods. The results show that the proposed CA-based hybrid methods are more efficient and effective than the most powerful search methods considered in this work.

  18. Comparison of optimization-based approaches to imaging spectroscopic inversion in coastal waters

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony M.; Mishonov, Andrey

    2005-06-01

    The United States Navy has recently shifted focus from open-ocean warfare to joint operations in optically complex nearshore regions. Accurately estimating bathymetry and water column inherent optical properties (IOPs) from passive remotely sensed imagery can be an important facilitator of naval operations. Lee et al. developed a semianalytical model that describes the relationship between shallow-water bottom depth, IOPs and subsurface and above-surface reflectance. They also developed a nonlinear optimization-based technique that estimates bottom depth and IOPs, using only measured spectral remote sensing reflectance as input. While quite effective, its accuracy can be limited when inverting noisy field data. In this research, the nonlinear optimization-based Lee et al. inversion algorithm was used as a baseline method, and it provided the framework for a proposed hybrid evolutionary/classical optimization approach to hyperspectral data processing. All aspects of the proposed implementation were held constant with that of Lee et al., except that a hybrid evolutionary/classical optimizer (HECO) was substituted for the nonlinear method. HECO required more computer-processing time. In addition, HECO is nondeterministic, and the termination strategy is heuristic. However, the HECO method makes no assumptions regarding the mathematical form of the problem functions. Also, whereas smooth nonlinear optimization is only guaranteed to find a locally optimal solution, HECO has a higher probability of finding a more globally optimal result. While the HECO-acquired results are not provably optimal, we have empirically found that for certain variables, HECO does provide estimates comparable to nonlinear optimization (e.g., bottom albedo at 550 nm).
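
    A minimal sketch of the hybrid evolutionary/classical pattern under toy assumptions: scipy's differential_evolution supplies the global stochastic stage and a Nelder-Mead polish the classical stage. The two-parameter "reflectance" model below is invented for illustration and is not the semianalytical model of Lee et al.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

wl = np.linspace(400, 700, 31)   # wavelengths, nm

def rrs_model(depth, albedo):
    """Toy stand-in for a shallow-water reflectance model: the bottom
    contribution decays with depth and scales with bottom albedo."""
    atten = np.exp(-0.1 * (1 + (wl - 550) ** 2 / 4e4) * depth)
    return 0.02 + albedo * 0.05 * atten

truth = (4.0, 0.3)               # depth (m), bottom albedo (assumed)
rrs_obs = rrs_model(*truth)
rrs_obs = rrs_obs + np.random.default_rng(2).normal(0, 2e-4, wl.size)

def misfit(p):
    return float(np.sum((rrs_model(*p) - rrs_obs) ** 2))

bounds = [(0.5, 20.0), (0.0, 1.0)]
# Global evolutionary stage ...
evo = differential_evolution(misfit, bounds, seed=2, tol=1e-8)
# ... followed by a classical local polish from the evolutionary optimum
fin = minimize(misfit, evo.x, method="Nelder-Mead")
print("estimated (depth, albedo):", np.round(fin.x, 3))
```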

  19. A Bayesian approach to optimal sensor placement for structural health monitoring with application to active sensing

    NASA Astrophysics Data System (ADS)

    Flynn, Eric B.; Todd, Michael D.

    2010-05-01

    This paper introduces a novel approach for optimal sensor and/or actuator placement for structural health monitoring (SHM) applications. Starting from a general formulation of Bayes risk, we derive a global optimality criterion within a detection theory framework. The optimal configuration is then established as the one that minimizes the expected total occurrence of type I and type II errors during the damage detection process. While the approach is suitable for many sensing/actuation SHM processes, we focus on the example of active sensing using guided ultrasonic waves by implementing an appropriate statistical model of the wave propagation and feature extraction process. This example implements both pulse-echo and pitch-catch actuation schemes and takes into account line-of-sight visibility and non-uniform damage probabilities over the monitored structure. The optimization space is searched using a genetic algorithm with a time-varying mutation rate. We provide three actuator/sensor placement test problems and discuss the optimal solutions generated by the algorithm.
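
    A toy rendition of the placement search, with an assumed range-decaying detection model standing in for the paper's detection-theoretic statistics: a small genetic algorithm with a time-decaying mutation rate selects sensor sites that minimize a prior-weighted miss probability, a crude stand-in for the Bayes-risk criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.array([(x, y) for x in range(10) for y in range(10)], float)
prior = rng.uniform(0.2, 1.0, len(grid))      # non-uniform damage prior
prior /= prior.sum()
K = 4                                         # sensors to place

def risk(sites):
    """Proxy Bayes risk: prior-weighted probability that damage at each
    grid cell is missed by every sensor (detection decays with range)."""
    d2 = ((grid[:, None, :] - grid[sites][None, :, :]) ** 2).sum(-1)
    p_det = np.exp(-d2 / 8.0)                 # assumed per-sensor model
    return float((prior * np.prod(1 - p_det, axis=1)).sum())

pop = [rng.choice(len(grid), K, replace=False) for _ in range(40)]
for gen in range(150):
    m = 0.4 * (1 - gen / 150) + 0.02          # time-varying mutation rate
    pop.sort(key=risk)
    elite = pop[:10]
    children = []
    while len(children) < 30:
        a, b = rng.choice(10, 2, replace=False)   # pick two elite parents
        cut = rng.integers(1, K)
        child = np.concatenate([elite[a][:cut], elite[b][cut:]])
        mask = rng.random(K) < m              # mutate some sites
        child[mask] = rng.choice(len(grid), mask.sum())
        if len(set(child)) == K:              # keep placements distinct
            children.append(child)
    pop = elite + children

best = min(pop, key=risk)
print("best sites:", sorted(best.tolist()), "risk:", round(risk(best), 4))
```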

  20. A new systems approach to optimizing investments in gas production and distribution

    SciTech Connect

    Dougherty, E.L.

    1983-03-01

    This paper presents a new analytical approach for determining the optimal sequence of investments to make in each year of an extended planning horizon in each of a group of reservoirs producing gas and gas liquids through an interconnected trunkline network and a gas processing plant. The optimality criterion is to maximize net present value while satisfying fixed offtake requirements for dry gas, but with no limits on gas liquids production. The planning problem is broken into n + 2 separate but interrelated subproblems: gas reservoir development and production, gas flow in a trunkline gathering system, and plant separation activities to remove undesirable gas (CO₂) or to recover valuable liquid components. The optimal solution for each subproblem depends upon the optimal solutions for all of the other subproblems, so that the overall optimal solution is obtained iteratively. The iteration technique used is based upon a combination of heuristics and the decomposition algorithm of mathematical programming. Each subproblem is solved once during each overall iteration. In addition to presenting some mathematical details of the solution approach, this paper describes a computer system which has been developed to obtain solutions.

  1. Reshaping polygonal meshes with smoothed normals extracted from ultrasound volume data: an optimization approach

    NASA Astrophysics Data System (ADS)

    San Jose, Raul; Alberola-Lopez, Carlos; Ruiz-Alzola, Juan

    2001-05-01

    Several methods exploit the relative motion between the probe and the object being scanned to obtain an estimate of the normals of the existing structures in a volume. These methods are revealed as a good estimator for normals, at least better than simple gradient schemes. On the other hand, polygonal meshes can be obtained directly from raw data by means of tiling algorithms. Although these meshes are good representations of isosurfaces in CT or MRI data, as far as ultrasound is concerned, results are quite noisy, so more effort is needed in developing algorithms that will be able to enhance the structures in the images. In this paper we propose a method that reshapes the geometry of meshes using the information given by normals. Rendering the meshes with the estimated normals is meaningful: smoothness is observed. Therefore it is reasonable to obtain a new geometry for the meshes by imposing the normals as an external condition. In order to achieve coherence between the two entities (polygonal meshes and normals), a local optimization approach is proposed. For each vertex, the position that minimizes the norm of the error between the geometric normal and the external normal is worked out. A second term in the objective function favors solutions that are closer to the current state of the mesh. This minimization process is applied to all vertices that constitute the mesh and it is iterated so as to find a global minimum in the objective function. Our results show a better match of external normals and meshes, which yields more natural surface-rendered images.

  2. An efficient approach to optimize the vibration mode of bar-type ultrasonic motors.

    PubMed

    Zhu, Hua; Li, Zhirong; Zhao, Chunsheng

    2010-04-01

    The electromechanical coupled dynamic model of the stator of the bar-type ultrasonic motor is derived based on the finite element method. The dynamical behavior of the stator is analyzed via this model and the theoretical result agrees with the experimental result of the stator of the prototype motor very well. Both the structural design principles and the approaches to meet the requirements for the mode of the stator are discussed. Based on the pattern search algorithm, an optimal model to meet the design requirements is established. The numerical simulation results show that this optimal model is effective for the structural design of the stator.
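
    Since the abstract names the pattern search algorithm, a generic compass-style variant is easy to sketch; the two-variable frequency-mismatch objective below is an invented surrogate, not the finite-element stator model.

```python
import numpy as np

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=500):
    """Basic compass/pattern search: poll +/- steps along each coordinate,
    move to any improving point, otherwise shrink the step size."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Toy objective: squared mismatch between a modelled resonance frequency
# (hypothetical smooth surrogate of two design variables) and 40 kHz.
def freq_mismatch(p):
    length, radius = p
    f_model = 55.0 / length + 3.0 * radius**2   # assumed surrogate, kHz
    return (f_model - 40.0) ** 2

x_opt, f_opt = pattern_search(freq_mismatch, [2.0, 1.0])
print("design:", np.round(x_opt, 4), "mismatch:", f"{f_opt:.2e}")
```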

  3. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  4. Comparison of Ensemble and Adjoint Approaches to Variational Optimization of Observational Arrays

    NASA Astrophysics Data System (ADS)

    Nechaev, D.; Panteleev, G.; Yaremchuk, M.

    2015-12-01

    Comprehensive monitoring of the circulation in the Chukchi Sea and Bering Strait is one of the key prerequisites of the successful long-term forecast of the Arctic Ocean state. Since the number of continuously maintained observational platforms is restricted by logistical and political constraints, the configuration of such an observing system should be guided by an objective strategy that optimizes the observing system coverage, design, and the expenses of monitoring. The presented study addresses the optimization of a system consisting of a limited number of observational platforms with respect to reducing the uncertainties in monitoring the volume/freshwater/heat transports through a set of key sections in the Chukchi Sea and Bering Strait. Variational algorithms for optimization of observational arrays are verified in the test bed of the set of 4Dvar optimized summer-fall circulations in the Pacific sector of the Arctic Ocean. The results of an optimization approach based on a low-dimensional ensemble of model solutions are compared against a more conventional algorithm involving application of the tangent linear and adjoint models. Special attention is paid to the computational efficiency and portability of the optimization procedure.

  5. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
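
    A toy version of the two-phase scheme, with assumed process transformations and cost terms: phase 1 tabulates, backwards through the chain, the control that minimizes the cost-to-go for every discretized incoming state (Dynamic Programming); phase 2 reduces runtime control to a table lookup.

```python
import numpy as np

states = np.linspace(0.0, 1.0, 21)      # discretized product property
controls = np.linspace(-0.3, 0.3, 13)   # admissible control actions

def step(x, u, k):
    """Assumed transformation of process k: drift toward a process-
    specific level, shifted by the control; clipped to the state range."""
    return float(np.clip(0.8 * x + 0.2 * (0.3 + 0.2 * k) + u, 0.0, 1.0))

n_proc, target = 3, 0.7
V = (states - target) ** 2              # terminal cost on final property
policy = np.zeros((n_proc, states.size), dtype=int)

for k in range(n_proc - 1, -1, -1):     # phase 1: backward DP over chain
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        costs = [0.1 * u**2                          # control effort
                 + np.interp(step(x, u, k), states, V)
                 for u in controls]
        policy[k, i] = int(np.argmin(costs))
        V_new[i] = min(costs)
    V = V_new

# Phase 2: at runtime, simply look up the optimal control for the
# measured incoming state of each process.
x = 0.15
for k in range(n_proc):
    u = controls[policy[k, np.abs(states - x).argmin()]]
    x = step(x, u, k)
    print(f"process {k}: u = {u:+.2f} -> state {x:.3f}")
```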

  6. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-01

    To solve computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed the state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.

  7. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.

    PubMed

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-21

    To solve computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed the state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.

  8. Applying a multi-criteria genetic algorithm framework for brownfield reuse optimization: improving redevelopment options based on stakeholder preferences.

    PubMed

    Morio, Maximilian; Schädler, Sebastian; Finkel, Michael

    2013-11-30

    The reuse of underused or abandoned contaminated land, so-called brownfields, is increasingly seen as an important means for reducing the consumption of land and natural resources. Many existing decision support systems are not appropriate because they focus mainly on economic aspects, while neglecting sustainability issues. To fill this gap, we present a framework for spatially explicit, integrated planning and assessment of brownfield redevelopment options. A multi-criteria genetic algorithm allows us to determine optimal land use configurations with respect to assessment criteria and given constraints on the composition of land use classes, according to, e.g., stakeholder preferences. Assessment criteria include sustainability indicators as well as economic aspects, including remediation costs and land value. The framework is applied to a case study of a former military site near Potsdam, Germany. Emphasis is placed on the trade-off between possibly conflicting objectives (e.g., economic goals versus the need for sustainable development in the regional context of the brownfield site), which may represent different perspectives of involved stakeholders. The economic analysis reveals the trade-off between the increase in land value due to reuse and the costs for remediation required to make reuse possible. We identify various reuse options, which perform similarly well although they exhibit different land use patterns. High-cost high-value options dominated by residential land use and low-cost low-value options with less sensitive land use types may perform equally well economically. The results of the integrated analysis show that the quantitative integration of sustainability may change optimal land use patterns considerably.

  9. MODIS 250M burnt area detection algorithm: A case study applied, optimized and evaluated over continental Portugal.

    NASA Astrophysics Data System (ADS)

    Mota, Bernardo; Benali, Akli; Pereira, Jose Miguel

    2014-05-01

    The dependence on satellites to derive burnt area (BA) maps is unquestionable. High resolution inventories normally result from change detection algorithms applied to pre- and post-fire-season high resolution imagery, but these have no temporal discrimination within the fire season. Although limited to the larger fire scars, coarser resolution imagery based on reflectance or thermal information can help to map individual fire progression. The Moderate Resolution Imaging Spectroradiometer (MODIS) 250m imagery bands, freely available, can be used to provide quick areal estimates and the needed temporal discrimination at four times the spatial resolution of the standard BA products. The scope of this study is to assess the spatial and temporal accuracy of burnt area maps derived by the MODIS 250m resolution Burnt Area algorithm (M250BA) presented by Mota et al. (2013) on a Mediterranean landscape. The algorithm is an improved adaptation of one of the burnt area algorithms developed within the scope of the Fire_CCI project and was applied to an area covering continental Portugal for the period 2001-2013. The algorithm comprises a temporal analysis based on change-point detection and a spatial analysis based on Markov random fields. We explored the benefits of applying standard optimization techniques to the algorithm and achieved significant performance improvements. Temporal and spatial accuracy assessments were performed by comparing the results with the spatial and temporal distribution of active fire maps and with high resolution burnt area maps, derived from the MCD14ML thermal anomalies dataset and from Landsat BA classifications, respectively. Accuracy results highlight the potential applications for this BA algorithm and the advantages of using 250m spatial resolution images for BA detection. The study also extends the current national burnt area atlas since 2010. Due to the open-access data policy, the algorithm can be easily parameterised and applied to any

  10. A multidating approach applied to historical slackwater flood deposits of the Gardon River, SE France

    NASA Astrophysics Data System (ADS)

    Dezileau, L.; Terrier, B.; Berger, J. F.; Blanchemanche, P.; Latapie, A.; Freydier, R.; Bremond, L.; Paquier, A.; Lang, M.; Delgado, J. L.

    2014-06-01

    A multidating approach was carried out on slackwater flood deposits, preserved in a valley-side rock cave and terrace of the Gardon River in Languedoc, southeast France. Lead-210, caesium-137, and geochemical analysis of mining-contaminated slackwater flood sediments have been used to reconstruct the history of these flood deposits. These age controls were combined with the continuous record of Gardon flow since 1890, and the combined records were then used to assign ages to slackwater deposits. The stratigraphic records of terrace GE and cave GG were excellent examples to illustrate the effects of erosion/preservation in a context of a progressively self-censoring, vertically accreting sequence. The sedimentary flood record of the terrace GE located at 10 m above the channel bed is complete for years post-1958 but incomplete before. During the 78-year period 1880-1958, 25 floods of a sufficient magnitude (> 1450 m³/s) have covered the terrace. Since 1958, however, the frequency of inundation of the deposits has been lower: only 5 or 6 floods in 52 years have been large enough to exceed the necessary threshold discharge (> 1700 m³/s). The progressive increase of the threshold discharge and the reduced frequency of inundation at the terrace could allow stabilization of the vegetation cover and improve protection against erosion from subsequent large magnitude flood events. The sedimentary flood record seems complete for cave GG located at 15 m above the channel bed. Here, the low frequency of events would have enabled a high degree of stabilization of the sedimentary flood record, rendering the deposits less susceptible to erosion. Radiocarbon dating is used in this study and compared to the other dating techniques. Eighty percent of radiocarbon dates on charcoals were considerably older than those obtained by the other techniques in the terrace. On the other hand, radiocarbon dating on seeds provided better results. This discrepancy between radiocarbon dates on

  11. A Computer-Assisted Approach for Conducting Information Technology Applied Instructions

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Hwang, Gwo-Jen; Tsai, Pei Jin; Yang, Tzu-Chi

    2009-01-01

    The growing popularity of computer and network technologies has attracted researchers to investigate the strategies and the effects of information technology applied instructions. Previous research has not only demonstrated the benefits of applying information technologies to the learning process, but has also revealed the difficulty of applying…

  12. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    PubMed

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors like the cell density can be kept constant by turbidostatic control, light intensity and iron content, both with a strong impact on productivity, can be optimized. Both are key parameters due to their strong influence on photosynthetic activity. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and the continuous measurement of O₂ evolution, stress conditions and their effect on the photosynthetic productivity can be directly observed. PMID:22789478

  13. A holistic approach towards optimal planning of hybrid renewable energy systems: Combining hydroelectric and wind energy

    NASA Astrophysics Data System (ADS)

    Dimas, Panagiotis; Bouziotas, Dimitris; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2014-05-01

    Hydropower with pumped storage is a proven technology with very high efficiency that offers a unique large-scale energy buffer. Energy storage is employed by pumping water upstream to take advantage of excess produced energy (e.g. during the night) and later releasing this water to generate hydropower during demand peaks. Excess energy occurs due to other renewables (wind, solar), whose power output fluctuates in an uncontrollable manner. By integrating these with hydroelectric plants that have pumped storage facilities, we can form autonomous hybrid renewable energy systems. The optimal planning and management thereof requires a holistic approach, in which uncertainty is properly represented. In this context, a novel framework is proposed, based on stochastic simulation and optimization. This is tested on an existing hydrosystem in Greece, considering its combined operation with a hypothetical wind power system, for which we seek the optimal design to ensure the most beneficial performance of the overall scheme.

  14. Enhanced index tracking modeling in portfolio optimization with a mixed-integer programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. Enhanced index tracking aims to construct an optimal portfolio to generate excess return over the return achieved by the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model which adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. In this study, the data consist of 24 component stocks of the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results of this study show that the optimal portfolio of the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index return while selecting only 30% of the total stock market index components.
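
    A compact sketch of a cardinality-constrained formulation in the same spirit (invented returns, not the Bursa Malaysia data, and scipy.optimize.milp as the solver): maximize the mean portfolio return subject to full investment, at most K selected stocks, and a per-period floor relative to the index.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(4)
N, T, K = 6, 24, 3                      # stocks, months, cardinality cap
index_r = rng.normal(0.006, 0.03, T)    # hypothetical index returns
stock_r = index_r[:, None] + rng.normal(0.002, 0.01, (T, N))

# Decision vector z = [w_1..w_N, y_1..y_N]: weights and 0/1 selection flags.
c = np.concatenate([-stock_r.mean(axis=0), np.zeros(N)])   # max mean return

A_budget = np.concatenate([np.ones(N), np.zeros(N)])       # sum w = 1
A_link = np.hstack([np.eye(N), -np.eye(N)])                # w_i <= y_i
A_card = np.concatenate([np.zeros(N), np.ones(N)])         # sum y <= K
A_track = np.hstack([stock_r, np.zeros((T, N))])           # per-period floor
eps = 0.02                                                 # shortfall bound

cons = [LinearConstraint(A_budget, 1, 1),
        LinearConstraint(A_link, -np.inf, 0),
        LinearConstraint(A_card, 0, K),
        LinearConstraint(A_track, index_r - eps, np.inf)]
integrality = np.concatenate([np.zeros(N), np.ones(N)])    # y_i integer
bounds = Bounds(0, np.concatenate([np.full(N, 0.4), np.ones(N)]))

res = milp(c, constraints=cons, integrality=integrality, bounds=bounds)
w = res.x[:N].round(3)                  # assumes a feasible solution found
print("weights:", w, "| mean excess:",
      round(float(stock_r.mean(0) @ w - index_r.mean()), 4))
```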

  15. Towards an optimal multidisciplinary approach to breast cancer treatment for older women.

    PubMed

    Thavarajah, Nemica; Menjak, Ines; Trudeau, Maureen; Mehta, Rajin; Wright, Frances; Leahey, Angela; Ellis, Janet; Gallagher, Damian; Moore, Jennifer; Bristow, Bonnie; Kay, Noreen; Szumacher, Ewa

    2015-01-01

    The treatment of breast cancer presents specific concerns that are unique to the needs of older female patients. While treatment of early breast cancer does not vary greatly with age, the optimal management of older women with breast cancer often requires complex interdisciplinary supportive care due to multiple comorbidities. This article reviews optimal approaches to breast cancer in women 65 years and older from an interdisciplinary perspective. A literature review was conducted using MEDLINE and EMBASE, choosing articles concentrated on the management of older breast cancer patients from the point of view of several disciplines, including geriatrics, radiation oncology, medical oncology, surgical oncology, psychooncology, palliative care, nursing, and social work. This patient population requires interprofessional collaboration from the time of diagnosis, throughout treatment and into the recovery period. Thus, we recommend an interdisciplinary program dedicated to the treatment of older women with breast cancer to optimize their cancer care. PMID:26897863

  16. Systematic analysis of protein-detergent complexes applying dynamic light scattering to optimize solutions for crystallization trials.

    PubMed

    Meyer, Arne; Dierks, Karsten; Hussein, Rana; Brillet, Karl; Brognaro, Hevila; Betzel, Christian

    2015-01-01

    Detergents are widely used for the isolation and solubilization of membrane proteins to support crystallization and structure determination. Detergents are amphiphilic molecules that form micelles once the characteristic critical micelle concentration (CMC) is achieved and can solubilize membrane proteins by the formation of micelles around them. The results are presented of a study of micelle formation observed by in situ dynamic light-scattering (DLS) analyses performed on selected detergent solutions using a newly designed advanced hardware device. DLS was initially applied in situ to detergent samples with a total volume of approximately 2 µl. When measured with DLS, pure detergents show a monodisperse radial distribution in water at concentrations exceeding the CMC. A series of all-trans n-alkyl-β-D-maltopyranosides, from n-hexyl to n-tetradecyl, were used in the investigations. The results obtained verify that the application of DLS in situ is capable of distinguishing differences in the hydrodynamic radii of micelles formed by detergents differing in length by only a single CH2 group in their aliphatic tails. Subsequently, DLS was applied to investigate the distribution of hydrodynamic radii of membrane proteins and selected water-insoluble proteins in the presence of detergent micelles. The results confirm that stable protein-detergent complexes were prepared for (i) bacteriorhodopsin and (ii) FetA in complex with a ligand as examples of transmembrane proteins. A fusion of maltose-binding protein and the Duck hepatitis B virus X protein was added to this investigation as an example of a non-membrane-associated protein with low water solubility. The increased solubility of this protein in the presence of detergent could be monitored, as well as the progress of proteolytic cleavage to separate the fusion partners. This study demonstrates the potential of in situ DLS to optimize solutions of protein-detergent complexes for crystallization applications.
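
    For reference, DLS measures a translational diffusion coefficient D, and the hydrodynamic radius follows from the Stokes-Einstein relation R_h = k_B T / (6 π η D). A one-function sketch, assuming the viscosity of water at 25 °C:

```python
import math

def hydrodynamic_radius(d_m2_per_s, temp_k=298.15, viscosity_pa_s=8.9e-4):
    """Stokes-Einstein: R_h = k_B * T / (6 * pi * eta * D).
    The default viscosity is approximately water at 25 C (an assumption)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6 * math.pi * viscosity_pa_s * d_m2_per_s)

# A micelle-sized scatterer: D ~ 6e-11 m^2/s gives R_h of a few nm
print(f"R_h = {hydrodynamic_radius(6e-11) * 1e9:.2f} nm")
```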

  17. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam with the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result within 3% of the ground truth in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation
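
    A minimal sketch of this iterative reallocation loop is given below. The MC dose engine is mocked by adding Gaussian noise (variance shrinking as 1/n) to a synthetic dose matrix, and the plan optimization is a projected-gradient nonnegative least squares; all sizes and names are illustrative assumptions, not the authors' GPU code.

        import numpy as np

        rng = np.random.default_rng(0)
        n_beamlets, n_voxels = 20, 50
        D_true = rng.random((n_voxels, n_beamlets))    # "exact" dose matrix
        d_target = D_true @ rng.random(n_beamlets)     # prescription dose

        def mc_beamlet_dose(j, n_particles):
            # mock MC estimate of beamlet j's dose column; noise ~ 1/sqrt(n)
            noise = rng.normal(0.0, 1.0 / np.sqrt(n_particles), n_voxels)
            return np.clip(D_true[:, j] + noise, 0.0, None)

        n = np.full(n_beamlets, 1000)     # uniform start (conventional scheme)
        for iteration in range(5):
            D = np.column_stack([mc_beamlet_dose(j, n[j]) for j in range(n_beamlets)])
            x = np.zeros(n_beamlets)      # plan optimization: projected-gradient NNLS
            for _ in range(500):
                x = np.clip(x - 1e-3 * D.T @ (D @ x - d_target), 0.0, None)
            # reallocate particles in proportion to beamlet intensity; a floor
            # keeps low-weight beamlets minimally sampled
            n = np.maximum(200, (1e5 * x / x.sum()).astype(int))
        print("final beamlet intensities:", np.round(x, 2))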

  18. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam with the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result within 3% of the ground truth in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time

  19. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators is solved by the sequential gradient restoration algorithm. Numerical examples for a two-degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and the obstacle avoidance scheme. The obstacle is deliberately placed on, or even inward of, the previously computed obstacle-free optimal trajectory. For the minimum-time objective, the trajectory grazes the obstacle and the minimum-time motion successfully avoids it; the minimum time is longer for the obstacle avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields, constructed from distance functions, as penalty functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance index, as well as to mobile robots. Since the method generates optimum solutions based on the Pontryagin extremum principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
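
    The sketch below illustrates only the obstacle-penalty ingredient under invented geometry: an ellipsoidal obstacle contributes an artificial-potential penalty through its distance function, so a path that cuts through the obstacle is penalized while a detour is not.

        import numpy as np

        center = np.array([1.0, 0.5])
        axes = np.array([0.4, 0.2])        # ellipse semi-axes (assumed)

        def ellipsoid_distance(p):
            # < 1 inside the obstacle, 1 on its boundary, > 1 outside
            return np.sum(((p - center) / axes) ** 2)

        def obstacle_penalty(path, weight=50.0):
            # penalize only points whose distance function dips below 1
            d = np.array([ellipsoid_distance(p) for p in path])
            return weight * np.sum(np.maximum(0.0, 1.0 - d) ** 2)

        straight = np.linspace([0, 0], [2, 1], 20)   # straight path hits obstacle
        detour = straight + np.array([0, 0.4]) * np.sin(np.linspace(0, np.pi, 20))[:, None]
        print("penalty straight:", round(obstacle_penalty(straight), 2),
              "detour:", round(obstacle_penalty(detour), 2))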

  20. Modeling, simulation and optimization approaches for design of lightweight car body structures

    NASA Astrophysics Data System (ADS)

    Kiani, Morteza

    Simulation-based design optimization and finite element method are used in this research to investigate weight reduction of car body structures made of metallic and composite materials under different design criteria. Besides crashworthiness in full frontal, offset frontal, and side impact scenarios, vibration frequencies, static stiffness, and joint rigidity are also considered. Energy absorption at the component level is used to study the effectiveness of carbon fiber reinforced polymer (CFRP) composite material with consideration of different failure criteria. A global-local design strategy is introduced and applied to multi-objective optimization of car body structures with CFRP components. Multiple example problems involving the analysis of full-vehicle crash and body-in-white models are used to examine the effect of material substitution and the choice of design criteria on weight reduction. The results of this study show that car body structures that are optimized for crashworthiness alone may not meet the vibration criterion. Moreover, optimized car body structures with CFRP components can be lighter with superior crashworthiness than the baseline and optimized metallic structures.

  1. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have both sufficient stiffness and large porosity, two properties that interact, since larger porosity lowers mechanical performance. This paper seeks the maximum-stiffness architecture under a specific volume-fraction constraint using a topology optimization approach; that is, maximum porosity can be achieved for predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and evaluated in compression tests. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were approximately 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process is a viable route for developing titanium implant materials that balance porosity and mechanical stiffness.
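
    As a rough back-of-the-envelope companion to these numbers (a Gibson-Ashby-style scaling assumption for cellular solids, not the paper's FEA), relative modulus falls roughly with the square of relative density, which already shows how quickly stiffness drops across the porosities tested:

        # Gibson-Ashby open-cell scaling: E/E_solid ~ (rho/rho_solid)^2
        for porosity in (0.411, 0.503, 0.602, 0.707):
            rel_density = 1.0 - porosity
            rel_modulus = rel_density ** 2
            print(f"porosity {porosity:.1%}: E/E_solid ≈ {rel_modulus:.2f}")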

  2. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of a statistical design of experiments (DOE) approach as a general guideline for probe optimization, focusing more specifically on the design optimization of label-free hydrolysis probes designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: the distance between the primer and the mediator probe cleavage site; the dimer stability of the MP and the target sequence (influenza B virus); and the dimer stability of the mediator and the universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% through changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in the optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times.
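
    A minimal two-level full-factorial sketch of this kind of DOE screening is shown below; the factor names mirror the study, but the mock assay response (with the third factor dominating) and all numbers are invented for illustration.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        factors = ["primer-cleavage distance", "MP-target stability",
                   "mediator-UR stability"]
        design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 2^3 runs

        def run_assay(x):
            # mock PCR efficiency (%); the third factor dominates by design
            return 85.0 + 1.0 * x[0] + 2.0 * x[1] + 5.0 * x[2] + rng.normal(0, 0.5)

        y = np.array([run_assay(x) for x in design])
        # main effect of a factor = mean response at high minus mean at low
        for name, col in zip(factors, design.T):
            effect = y[col == 1].mean() - y[col == -1].mean()
            print(f"{name}: effect = {effect:+.1f} % efficiency")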

  3. Surface roughness optimization of polyamide-6/nanoclay nanocomposites using artificial neural network: genetic algorithm approach.

    PubMed

    Moghri, Mehdi; Madic, Milos; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

    During the past decade, polymer nanocomposites have attracted considerable research and development investment worldwide. One of the key factors affecting the quality of machined polymer nanocomposite products is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions that achieve enhanced machining performance. The objective of this paper is to develop a predictive model, using a combined design of experiments and artificial intelligence approach, for the optimization of surface roughness in the milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, the application of a genetic algorithm (GA) for ANN training is considered an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636
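
    The optimization phase can be sketched as follows: a real-coded GA searches the milling-parameter space for the minimum of a roughness surrogate. The quadratic surrogate below stands in for the paper's trained ANN; the bounds, rates, and coefficients are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        bounds = np.array([[1000.0, 4000.0],   # spindle speed (rpm), assumed
                           [50.0,   300.0]])   # feed rate (mm/min), assumed

        def roughness(x):
            # mock surrogate Ra(speed, feed): falls with speed, grows with feed
            s, f = x
            return 2.0 - 4e-4 * s + 6e-3 * f + 5e-8 * (s - 3000.0) ** 2

        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
        for gen in range(100):
            fit = np.array([roughness(ind) for ind in pop])
            parents = pop[np.argsort(fit)[:20]]            # truncation selection
            kids = []
            for _ in range(20):
                a, b = parents[rng.integers(0, 20, 2)]
                w = rng.random()
                child = w * a + (1 - w) * b                # blend crossover
                child += rng.normal(0, [30.0, 3.0])        # Gaussian mutation
                kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
            pop = np.vstack([parents, kids])
        best = pop[np.argmin([roughness(ind) for ind in pop])]
        print("optimal parameters ≈", np.round(best, 1))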

  4. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of a statistical design of experiments (DOE) approach as a general guideline for probe optimization, focusing more specifically on the design optimization of label-free hydrolysis probes designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: the distance between the primer and the mediator probe cleavage site; the dimer stability of the MP and the target sequence (influenza B virus); and the dimer stability of the mediator and the universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% through changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in the optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times. PMID:27077046

  5. Surface Roughness Optimization of Polyamide-6/Nanoclay Nanocomposites Using Artificial Neural Network: Genetic Algorithm Approach

    PubMed Central

    Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

    During the past decade, polymer nanocomposites have attracted considerable research and development investment worldwide. One of the key factors affecting the quality of machined polymer nanocomposite products is surface roughness. To obtain high-quality products and reduce machining costs, it is very important to determine the optimal machining conditions that achieve enhanced machining performance. The objective of this paper is to develop a predictive model, using a combined design of experiments and artificial intelligence approach, for the optimization of surface roughness in the milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of the milling parameters (spindle speed and feed rate) and nanoclay (NC) content using an artificial neural network (ANN). As the present study deals with a relatively small number of data points obtained from a full factorial design, the application of a genetic algorithm (GA) for ANN training is considered an appropriate approach for developing an accurate and robust ANN model. In the optimization phase, a GA is used in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters that minimize surface roughness for each PA-6 nanocomposite. PMID:24578636

  6. Real-time PCR probe optimization using design of experiments approach

    PubMed Central

    Wadle, S.; Lehnert, M.; Rubenwolf, S.; Zengerle, R.; von Stetten, F.

    2015-01-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of a statistical design of experiments (DOE) approach as a general guideline for probe optimization, focusing more specifically on the design optimization of label-free hydrolysis probes designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: the distance between the primer and the mediator probe cleavage site; the dimer stability of the MP and the target sequence (influenza B virus); and the dimer stability of the mediator and the universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% through changes to this input factor. With an optimal design configuration, a detection limit of 3–14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7–11 copies/10 μl reaction detected in the optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times. PMID:27077046

  7. Accounting for the tongue-and-groove effect using a robust direct aperture optimization approach

    SciTech Connect

    Salari, Ehsan; Men Chunhua; Romeijn, H. Edwin

    2011-03-15

    Purpose: The tongue-and-groove effect due to the multileaf collimator architecture in intensity-modulated radiation therapy (IMRT) has traditionally been deferred to the leaf sequencing stage. The authors propose a new direct aperture optimization method for IMRT treatment planning that explicitly incorporates dose calculation inaccuracies due to the tongue-and-groove effect into the treatment plan optimization stage. Methods: The authors avoid having to accurately estimate the dosimetric effects of the tongue-and-groove architecture by using lower and upper bounds on the dose distribution delivered to the patient. They then develop a model that yields a treatment plan that is robust with respect to the corresponding dose calculation inaccuracies. Results: Tests on a set of ten clinical head-and-neck cancer cases demonstrate the effectiveness of the new method in developing robust treatment plans with tight dose distributions in targets and critical structures. This is contrasted with the very loose bounds on the dose distribution that are obtained by solving a traditional treatment plan optimization model that ignores tongue-and-groove effects in the treatment planning stage. Conclusions: A robust direct aperture optimization approach is proposed to account for the dosimetric inaccuracies caused by the tongue-and-groove effect. The experiments validate the ability of the proposed approach to design robust treatment plans regardless of the exact consequences of the tongue-and-groove architecture.
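
    A toy version of the bounding idea is sketched below: per-aperture lower and upper dose matrices bracket the tongue-and-groove uncertainty, and a linear program requires the prescription to hold in the worst case (the lowest possible target dose still meets the minimum, the highest possible organ-at-risk dose stays below the maximum). All matrices and dose levels are synthetic assumptions.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        n_apertures, n_target, n_oar = 8, 15, 10
        D_lo_t = rng.uniform(0.8, 1.0, (n_target, n_apertures))   # target, lower bound
        D_lo_o = rng.uniform(0.05, 0.25, (n_oar, n_apertures))    # OAR, lower bound
        D_hi_o = D_lo_o * rng.uniform(1.0, 1.15, D_lo_o.shape)    # OAR, upper bound
        d_min_target, d_max_oar = 60.0, 20.0

        # minimize total aperture weight subject to worst-case feasibility
        A_ub = np.vstack([-D_lo_t, D_hi_o])
        b_ub = np.concatenate([-d_min_target * np.ones(n_target),
                               d_max_oar * np.ones(n_oar)])
        res = linprog(c=np.ones(n_apertures), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * n_apertures)
        print("feasible:", res.success)
        print("aperture weights:", np.round(res.x, 2))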

  8. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have both sufficient stiffness and large porosity, two properties that interact, since larger porosity lowers mechanical performance. This paper seeks the maximum-stiffness architecture under a specific volume-fraction constraint using a topology optimization approach; that is, maximum porosity can be achieved for predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and evaluated in compression tests. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were approximately 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process is a viable route for developing titanium implant materials that balance porosity and mechanical stiffness. PMID:23988713

  9. Optimal contribution selection applied to the Norwegian and the North-Swedish cold-blooded trotter - a feasibility study.

    PubMed

    Olsen, H F; Meuwissen, T; Klemetsdal, G

    2013-06-01

    The aim of this study was to examine how to apply optimal contribution selection (OCS) in the Norwegian and the North-Swedish cold-blooded trotter and to give practical recommendations for the future. OCS was implemented using the software Gencont with overlapping generations and selected few but young sires, as these turn over the generations faster and are thus less related to the mare candidates. In addition, a number of Swedish sires were selected, as they were less related to the selection candidates. We concluded that implementing OCS to select sires is feasible (there is no selection on mares), and we recommend that the number of available sire candidates be continuously updated because of, among other things, deaths and geldings. In addition, considering only sire candidates with phenotypes above the year-class average would allow selection candidates from many year classes to be included and would circumvent the current limitation on the number of selection candidates in Gencont (approx. 3000). The results showed that the mare candidates can well be those mated the previous year. OCS will dynamically recruit young stallions and manage the culling or renewal of annual breeding permits for previously approved stallions. For the annual mating proportion per sire, a constraint corresponding to the maximum number of matings a sire can perform naturally is recommended. PMID:23679942
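
    Conceptually, OCS maximizes genetic merit subject to a cap on average relatedness; the sketch below (a generic formulation solved with SciPy, not Gencont) selects sire contributions summing to the sire-side half of the gene pool, with a synthetic relationship matrix and synthetic breeding values.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n = 12                                   # candidate sires
        ebv = rng.normal(0.0, 1.0, n)            # estimated breeding values (mock)
        B = rng.normal(0.0, 0.1, (n, n))
        A = np.eye(n) + B @ B.T                  # mock SPD relationship matrix
        theta = 0.08                             # ceiling on average relatedness

        res = minimize(
            lambda c: -c @ ebv,                  # maximize genetic merit
            x0=np.full(n, 0.5 / n),
            constraints=[{"type": "eq", "fun": lambda c: c.sum() - 0.5},
                         {"type": "ineq", "fun": lambda c: theta - c @ A @ c}],
            bounds=[(0.0, 0.5)] * n, method="SLSQP")
        print("selected sires:", np.flatnonzero(res.x > 0.01))
        print("expected genetic gain:", round(-res.fun, 3))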

  10. Pectin extraction from quince (Cydonia oblonga) pomace applying alternative methods: effect of process variables and preliminary optimization.

    PubMed

    Brown, Valeria Anahí; Lozano, Jorge E; Genovese, Diego Bautista

    2014-03-01

    The objectives of this study were to introduce alternative methods into the process of pectin extraction from quince pomace, to determine the effect of selected process variables (factors) on the obtained pectin, and to perform a preliminary optimization of the process. A fractional factorial experimental design was applied with six factors: quince pomace pretreatment (washing vs blanching), drying method (hot air vs LPSSD), acid extraction conditions (pH, temperature, and time), and pectin extract concentration method (vacuum evaporation vs ultrafiltration). The effects of these factors and their interactions on pectin yield (Y: 0.2-34.2 mg/g), GalA content (44.5-76.2%), and DM (47.5-90.9%) were determined. For all three responses, extraction pH was the main effect, but it was involved in two- and three-factor interactions. Regarding the alternative methods, LPSSD was required for maximum Y and GalA, and ultrafiltration for maximum GalA and DM. Response models were used to predict the optimum process conditions (quince blanching, pomace drying by LPSSD, acid extraction at pH 2.20, 80 °C, 3 h, and concentration under vacuum) to simultaneously maximize Y (25.2 mg/g), GalA (66.3%), and DM (66.4%).

  11. CNS Multiparameter Optimization Approach: Is it in Accordance with Occam's Razor Principle?

    PubMed

    Raevsky, Oleg A

    2016-04-01

    A detailed analysis of the possibility of using the Multiparameter Optimization approach (MPO) for CNS/non-CNS classification of drugs was carried out. This work has shown that MPO descriptors are able to describe only part of chemical transport in the CNS connected with transmembrane diffusion. Hence the "intuitive" CNS MPO approach with arbitrary selection of descriptors and calculations of score functions, search of thresholds of classification, and absence of any chemometric procedures, leads to rather modest accuracy of CNS/non-CNS classification models. PMID:27491918

  12. Comparison of penalty functions on a penalty approach to mixed-integer optimization

    NASA Astrophysics Data System (ADS)

    Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2016-06-01

    In this paper, we present a comparative study involving several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem obtained by adding a particular penalty term to the objective function. A penalty function based on the `erf' function is proposed. The continuous nonlinear optimization problems are sequentially solved by the population-based firefly algorithm. Preliminary numerical experiments are carried out in order to analyze the quality of the produced solutions when compared with other penalty functions available in the literature.
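
    A minimal sketch of such a penalty reformulation is given below: integrality of a selected variable is enforced by a smooth erf-based term that vanishes exactly at integers, and a sequence of continuous penalized problems is solved with an increasing penalty weight. The specific penalty form and the use of SciPy (rather than the paper's firefly algorithm) are assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import erf

        def f(x):
            # toy bMINLP objective: x[0] is continuous, x[1] must end up integer
            return (x[0] - 0.7) ** 2 + (x[1] - 2.4) ** 2

        def penalty(x):
            # smooth erf-based term, zero if and only if x[1] is an integer
            return erf(2.0 * np.sin(np.pi * x[1]) ** 2)

        x = np.array([0.0, 0.0])
        for mu in (0.5, 5.0, 50.0):          # sequence of continuous problems
            x = minimize(lambda z: f(z) + mu * penalty(z), x).x
        print("solution:", np.round(x, 3))   # expect roughly (0.7, 2.0)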

  13. A Novel Synthesis of Computational Approaches Enables Optimization of Grasp Quality of Tendon-Driven Hands

    PubMed Central

    Inouye, Joshua M.; Kutch, Jason J.; Valero-Cuevas, Francisco J.

    2013-01-01

    We propose a complete methodology to find the full set of feasible grasp wrenches and the corresponding wrench-direction-independent grasp quality for a tendon-driven hand with arbitrary design parameters. Monte Carlo simulations on two representative designs combined with multiple linear regression identified the parameters with the greatest potential to increase this grasp metric. This synthesis of computational approaches now enables the systematic design, evaluation, and optimization of tendon-driven hands. PMID:23335864

  14. An optimal control approach to pilot/vehicle analysis and Neal-Smith criteria

    NASA Technical Reports Server (NTRS)

    Bacon, B. J.; Schmidt, D. K.

    1984-01-01

    The approach of Neal and Smith was merged with advances in pilot modeling by means of optimal control techniques. While confirming the findings of Neal and Smith, a methodology that explicitly includes the pilot's objective in attitude tracking was developed. More importantly, the method yields the required system bandwidth along with a better pilot model directly applicable to closed-loop analysis of systems of any order.

  15. Developing and Applying Green Building Technology in an Indigenous Community: An Engaged Approach to Sustainability Education

    ERIC Educational Resources Information Center

    Riley, David R.; Thatcher, Corinne E.; Workman, Elizabeth A.

    2006-01-01

    Purpose: This paper aims to disseminate an innovative approach to sustainability education in construction-related fields in which teaching, research, and service are integrated to provide a unique learning experience for undergraduate students, faculty members, and community partners. Design/methodology/approach: The paper identifies the need for…

  16. Bridging the Gap between Basic and Applied Research by an Integrative Research Approach

    ERIC Educational Resources Information Center

    Stark, Robin; Mandl, Heinz

    2007-01-01

    The discussion of the gap between theory and practice has a long tradition in educational psychology and especially in research on learning and instruction. Starting with a short analysis of more or less elaborated approaches focusing on this problem, a complex procedure called "integrative research approach", specialized in reducing the gap…

  17. A Pragmatic Approach to Applied Ethics in Sport and Related Physical Activity.

    ERIC Educational Resources Information Center

    Zeigler, Earle F.

    Arguing that there is still no single, noncontroversial foundation on which the world's present multi-structure of ethics can be built, this paper examines a scientific ethics approach. It is postulated that in North American culture, the approach to instruction in ethics for youth is haphazard at best. Society does not provide an adequate means…

  18. Investigation of Multi-Criteria Decision Consistency: A Triplex Approach to Optimal Oilfield Portfolio Investment Decisions

    NASA Astrophysics Data System (ADS)

    Qaradaghi, Mohammed

    techniques that can provide more flexibility and inclusiveness in the decision making process, such as Multi-Criteria Decision Making (MCDM) methods. However, it can be observed that the MCDM literature: 1) is primarily focused on suggesting certain MCDM techniques to specific problems without providing sufficient evidence for their selection, 2) is inadequate in addressing MCDM in E&P portfolio selection and prioritization compared with other fields, and 3) does not address prioritizing brownfields (i.e., developed oilfields). This research study aims at addressing the above drawbacks through combining three MCDM methods (i.e., AHP, PROMETHEE and TOPSIS) into a single decision making tool that can support optimal oilfield portfolio investment decisions by helping determine the share of each oilfield of the total development resources allocated. Selecting these methods is reinforced by a pre-deployment and post-deployment validation framework. In addition, this study proposes a two-dimensional consistency test to verify the output coherence or prioritization stability of the MCDM methods in comparison with an intuitive approach. Nine scenarios representing all possible outcomes of the internal and external consistency tests are further proposed to reach a conclusion. The methodology is applied to a case study of six major oilfields in Iraq to generate percentage shares of each oilfield of a total production target that is in line with Iraq's aspiration to increase oil production. However, the methodology is intended to be applicable to other E&P portfolio investment prioritization scenarios by taking the specific contextual characteristics into consideration.
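
    Of the three methods combined (AHP, PROMETHEE, TOPSIS), TOPSIS is the most compact to illustrate; the sketch below ranks mock oilfields by closeness to the ideal solution. The decision matrix, weights, and criteria are invented and do not reflect the Iraqi case study.

        import numpy as np

        X = np.array([[9.0, 3.1, 0.7],       # rows: oilfields; cols: criteria
                      [6.5, 2.4, 0.9],
                      [8.2, 1.8, 0.6],
                      [5.1, 2.9, 0.8]])
        w = np.array([0.5, 0.3, 0.2])        # criterion weights (sum to 1)
        benefit = np.array([True, True, False])  # third column is a cost criterion

        V = w * X / np.linalg.norm(X, axis=0)    # weighted normalized matrix
        ideal = np.where(benefit, V.max(0), V.min(0))
        anti = np.where(benefit, V.min(0), V.max(0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)
        print("ranking (best first):", np.argsort(-closeness))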

  19. Optimizing a four-props support using the integrative design approach

    NASA Astrophysics Data System (ADS)

    Gwiazda, A.; Foit, K.; Banaś, W.; Sękala, A.; Monica, Z.; Topolska, S.

    2016-08-01

    A modern approach to the design of technical means requires taking into consideration the integration of different sources of data and knowledge and of various design methodologies; the importance of an integrative approach is therefore growing. Integration itself can be understood as the link between these different methodological solutions. A further problem concerns the optimization of technical means against the full range of design requirements. These issues form the basis for a design approach that uses integration as the foundation for the constructional optimization of the designed technical mean. It rests, first, on the integration of the three main subsystems of a technical mean: the structural, the drive, and the control subsystem. Second, it includes the integration of three aspects of a construction: its geometrical, material, and assembly characteristics. One area of application of the proposed integrative design approach is the construction of mechanized mining supports. Different mining support systems are characterized by different sets of advantages and disadvantages, and the design process must account for the working conditions of mining supports, which are closely linked to the geological characteristics of the mined beds. A mechanized mining support can be treated as the logical union of the three constructional subsystems mentioned above: the structural subsystem includes, among others, the canopy, rear shield, and foot pieces; the drive subsystem includes the hydraulic props and their equipment; and the control subsystem includes the system of hydraulic valves and their parameters.

  20. Nanocarriers for optimizing the balance between interfollicular permeation and follicular uptake of topically applied clobetasol to minimize adverse effects.

    PubMed

    Mathes, C; Melero, A; Conrad, P; Vogt, T; Rigo, L; Selzer, D; Prado, W A; De Rossi, C; Garrigues, T M; Hansen, S; Guterres, S S; Pohlmann, A R; Beck, R C R; Lehr, C-M; Schaefer, U F

    2016-02-10

    The treatment of various hair disorders has become a central focus of good dermatologic patient care as it affects men and women all over the world. For many inflammatory-based scalp diseases, glucocorticoids are an essential part of treatment, even though they are known to cause systemic as well as local adverse effects when applied topically. Therefore, efficient targeting and avoidance of these side effects are of utmost importance. Optimizing the balance between drug release, interfollicular permeation, and follicular uptake may allow minimizing these adverse events and simultaneously improve drug delivery, given that one succeeds in targeting a sustained release formulation to the hair follicle. To test this hypothesis, three types of polymeric nanocarriers (nanospheres, nanocapsules, lipid-core nanocapsules) for the potent glucocorticoid clobetasol propionate (CP) were prepared. They all exhibited a sustained release of drug, as was desired. The particles were formulated as a dispersion and hydrogel and (partially) labeled with Rhodamine B for quantification purposes. Follicular uptake was investigated using the differential stripping method and was found highest for nanocapsules in dispersion after application of massage. Moreover, the active ingredient (CP) as well as the nanocarrier (Rhodamine B labeled polymer) recovered in the hair follicle were measured simultaneously, revealing an equivalent uptake of both. In contrast, only negligible amounts of CP could be detected in the hair follicle when applied as free drug in solution or hydrogel, regardless of any massage. Skin permeation experiments using heat-separated human epidermis mounted in Franz diffusion cells revealed equivalently reduced transdermal permeability for all nanocarriers in comparison to application of the free drug. Combining these results, nanocapsules formulated as an aqueous dispersion and applied by massage appear to be a good candidate to maximize follicular targeting and minimize drug

  1. Nanocarriers for optimizing the balance between interfollicular permeation and follicular uptake of topically applied clobetasol to minimize adverse effects.

    PubMed

    Mathes, C; Melero, A; Conrad, P; Vogt, T; Rigo, L; Selzer, D; Prado, W A; De Rossi, C; Garrigues, T M; Hansen, S; Guterres, S S; Pohlmann, A R; Beck, R C R; Lehr, C-M; Schaefer, U F

    2016-02-10

    The treatment of various hair disorders has become a central focus of good dermatologic patient care as it affects men and women all over the world. For many inflammatory-based scalp diseases, glucocorticoids are an essential part of treatment, even though they are known to cause systemic as well as local adverse effects when applied topically. Therefore, efficient targeting and avoidance of these side effects are of utmost importance. Optimizing the balance between drug release, interfollicular permeation, and follicular uptake may allow minimizing these adverse events and simultaneously improve drug delivery, given that one succeeds in targeting a sustained release formulation to the hair follicle. To test this hypothesis, three types of polymeric nanocarriers (nanospheres, nanocapsules, lipid-core nanocapsules) for the potent glucocorticoid clobetasol propionate (CP) were prepared. They all exhibited a sustained release of drug, as was desired. The particles were formulated as a dispersion and hydrogel and (partially) labeled with Rhodamine B for quantification purposes. Follicular uptake was investigated using the differential stripping method and was found highest for nanocapsules in dispersion after application of massage. Moreover, the active ingredient (CP) as well as the nanocarrier (Rhodamine B labeled polymer) recovered in the hair follicle were measured simultaneously, revealing an equivalent uptake of both. In contrast, only negligible amounts of CP could be detected in the hair follicle when applied as free drug in solution or hydrogel, regardless of any massage. Skin permeation experiments using heat-separated human epidermis mounted in Franz diffusion cells revealed equivalently reduced transdermal permeability for all nanocarriers in comparison to application of the free drug. Combining these results, nanocapsules formulated as an aqueous dispersion and applied by massage appear to be a good candidate to maximize follicular targeting and minimize drug

  2. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks ''Is that alternative better or worse than this one?'' -not ''HOW MUCH better or worse is that alternative to this one?'' The answer to the latter question requires precise characterization of the uncertainty--with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so could be useful as a reference standard against which the efficiency and robustness of other methods can be compared--analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
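
    The core ordinal move can be sketched as below (an illustration, not the report's algorithm): to decide a step in the design space, we only ask which of two designs wins a majority of noisy paired comparisons, adding samples only while the vote is too close to call.

        import numpy as np

        rng = np.random.default_rng(5)

        def noisy_objective(x):
            # noise is large relative to local objective differences
            return (x - 3.0) ** 2 + rng.normal(0.0, 2.0)

        def prefer(a, b, max_batches=6, batch=10, margin=0.1):
            # True if design a looks better than b; sample size grows adaptively
            wins = total = 0
            for _ in range(max_batches):
                wins += sum(noisy_objective(a) < noisy_objective(b)
                            for _ in range(batch))
                total += batch
                frac = wins / total
                if abs(frac - 0.5) > margin:   # vote is decisive, stop early
                    break
            return frac > 0.5

        x, step = 0.0, 0.5
        for _ in range(30):
            trial = x + step * rng.choice([-1.0, 1.0])
            if prefer(trial, x):               # ordinal decision only
                x = trial
        print("estimated optimum near:", round(x, 2))   # true optimum at 3.0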

  3. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans
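
    The second step can be pictured with a toy trajectory builder: given noncoplanar anchor points in (gantry, couch) space, a greedy nearest-neighbor ordering produces a short continuous tour; in the paper a combinatorial optimization model plays this role and delivery time caps the tour length. The angles below are invented.

        import numpy as np

        anchors = np.array([[0, 0], [80, 20], [200, -30], [310, 10], [150, 40]],
                           dtype=float)        # (gantry, couch) anchor points
        order = [0]
        remaining = set(range(1, len(anchors)))
        while remaining:
            last = anchors[order[-1]]
            nxt = min(remaining, key=lambda i: np.linalg.norm(anchors[i] - last))
            order.append(nxt)
            remaining.remove(nxt)
        length = sum(np.linalg.norm(anchors[order[k + 1]] - anchors[order[k]])
                     for k in range(len(order) - 1))
        print("visit order:", order, "trajectory length:", round(length, 1), "deg")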

  4. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories.

    PubMed

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans

  5. An Approach to Streaming Video Segmentation With Sub-Optimal Low-Rank Decomposition.

    PubMed

    Li, Chenglong; Lin, Liang; Zuo, Wangmeng; Wang, Wenzhong; Tang, Jin

    2016-05-01

    This paper investigates how to perform robust and efficient video segmentation while suppressing the effects of data noises and/or corruptions, and an effective approach is introduced to this end. First, a general algorithm, called sub-optimal low-rank decomposition (SOLD), is proposed to pursue the low-rank representation for video segmentation. Given the data matrix formed by supervoxel features of an observed video sequence, SOLD seeks a sub-optimal solution by making the matrix rank explicitly determined. In particular, the representation coefficient matrix with the fixed rank can be decomposed into two sub-matrices of low rank, and then we iteratively optimize them with closed-form solutions. Moreover, we incorporate a discriminative replication prior into SOLD based on the observation that small-size video patterns tend to recur frequently within the same object. Second, based on SOLD, we present an efficient inference algorithm to perform streaming video segmentation in both unsupervised and interactive scenarios. More specifically, the constrained normalized-cut algorithm is adopted by incorporating the low-rank representation with other low level cues and temporal consistent constraints for spatio-temporal segmentation. Extensive experiments on two public challenging data sets VSB100 and SegTrack suggest that our approach outperforms other video segmentation approaches in both accuracy and efficiency.
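
    The fixed-rank alternation at the heart of SOLD can be illustrated with a generic alternating least-squares factorization: with the rank r set explicitly, the matrix is kept factored as U @ V and each factor is updated in closed form while the other is held fixed. This omits the full SOLD objective and its discriminative replication prior.

        import numpy as np

        rng = np.random.default_rng(6)
        X = rng.normal(size=(60, 8)) @ rng.normal(size=(8, 40))   # rank-8 data
        X += 0.05 * rng.normal(size=X.shape)                      # plus noise
        r = 8
        U = rng.normal(size=(60, r))
        for _ in range(30):
            V = np.linalg.lstsq(U, X, rcond=None)[0]        # closed-form V update
            U = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T  # closed-form U update
        err = np.linalg.norm(X - U @ V) / np.linalg.norm(X)
        print(f"relative reconstruction error: {err:.3f}")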

  6. Optimization and enhancement of H&E stained microscopical images by applying bilinear interpolation method on lab color mode

    PubMed Central

    2014-01-01

    Background: Hematoxylin & Eosin (H&E) is a widely employed technique in pathology and histology to distinguish nuclei and cytoplasm in tissues by staining them in different colors. This procedure helps to ease the diagnosis by enhancing contrast through digital microscopes. However, microscopic digital images obtained from this technique usually suffer from uneven lighting, i.e. poor Koehler illumination. Several off-the-shelf methods particularly established to correct this problem, along with some popular general commercial tools, have been examined to find out a robust solution. Methods: First, the characteristics of uneven lighting in pathological images obtained from the H&E technique are revealed, and then how the quality of these images can be improved by employing a bilinear interpolation based approach applied on the channels of Lab color mode is explored, without losing any essential detail, especially for the color information of nuclei (hematoxylin stained sections). Second, an approach to enhance the nuclei details that are a fundamental part of diagnosis and crucially needed by the pathologists who work with digital images is demonstrated. Results: Merits of the proposed methodology are substantiated on sample microscopic images. The results show that the proposed methodology not only remedies the deficiencies of H&E microscopical images, but also enhances delicate details. Conclusions: Non-uniform illumination problems in H&E microscopical images can be corrected without compromising crucial details that are essential for revealing the features of tissue samples. PMID:24502223
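
    The illumination-correction step can be sketched as follows: estimate the slowly varying background of the lightness channel on a coarse grid, interpolate it bilinearly back to full resolution, and divide it out. A synthetic vignetted channel stands in for the L channel of a Lab-converted H&E image; the grid size and values are assumptions.

        import numpy as np

        h, w = 240, 320
        yy, xx = np.mgrid[0:h, 0:w]
        vignette = 1.0 - 0.5 * ((yy / h - 0.5) ** 2 + (xx / w - 0.5) ** 2)
        L = 0.7 * vignette                      # unevenly lit "L channel"

        def bilinear_background(img, grid=8):
            # coarse block means, bilinearly upsampled to the original size
            gh, gw = img.shape[0] // grid, img.shape[1] // grid
            c = img[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw).mean((1, 3))
            gy = np.linspace(0, grid - 1, img.shape[0])
            gx = np.linspace(0, grid - 1, img.shape[1])
            y0 = np.clip(gy.astype(int), 0, grid - 2)
            x0 = np.clip(gx.astype(int), 0, grid - 2)
            fy = (gy - y0)[:, None]
            fx = (gx - x0)[None, :]
            return ((1 - fy) * (1 - fx) * c[y0][:, x0]
                    + (1 - fy) * fx * c[y0][:, x0 + 1]
                    + fy * (1 - fx) * c[y0 + 1][:, x0]
                    + fy * fx * c[y0 + 1][:, x0 + 1])

        corrected = L / bilinear_background(L)  # flat-field style correction
        print("lightness CV before:", round(float(L.std() / L.mean()), 4),
              "after:", round(float(corrected.std() / corrected.mean()), 4))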

  7. Fuel moisture content estimation: a land-surface modelling approach applied to African savannas

    NASA Astrophysics Data System (ADS)

    Ghent, D.; Spessa, A.; Kaduk, J.; Balzter, H.

    2009-04-01

    Despite the importance of fire to the global climate system, in terms of emissions from biomass burning, ecosystem structure and function, and changes to surface albedo, current land-surface models do not adequately estimate key variables affecting fire ignition and propagation. Fuel moisture content (FMC) is considered one of the most important of these variables (Chuvieco et al., 2004). Biophysical models, with appropriate plant functional type parameterisations, are the most viable option to adequately predict FMC over continental scales at high temporal resolution. However, the complexity of plant-water interactions, and the variability associated with short-term climate changes, means it is one of the most difficult fire variables to quantify and predict. Our work attempts to resolve this issue using a combination of satellite data and biophysical modelling applied to Africa. The approach we take is to represent live FMC as a surface dryness index; expressed as the ratio between the Normalised Difference Vegetation Index (NDVI) and land-surface temperature (LST). It has been argued in previous studies (Sandholt et al., 2002; Snyder et al., 2006), that this ratio displays a statistically stronger correlation to FMC than either of the variables, considered separately. In this study, simulated FMC is constrained through the assimilation of remotely sensed LST and NDVI data into the land-surface model JULES (Joint-UK Land Environment Simulator). Previous modelling studies of fire activity in Africa savannas, such as Lehsten et al. (2008), have reported significant levels of uncertainty associated with the simulations. This uncertainty is important because African savannas are among some of the most frequently burnt ecosystems and are a major source of greenhouse trace gases and aerosol emissions (Scholes et al., 1996). Furthermore, regional climate model studies indicate that many parts of the African savannas will experience drier and warmer conditions in future

  8. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  9. Exploring the Dynamics of Policy Interaction: Feedback among and Impacts from Multiple, Concurrently Applied Policy Approaches for Promoting Collaboration

    ERIC Educational Resources Information Center

    Fuller, Boyd W.; Vu, Khuong Minh

    2011-01-01

    The prisoner's dilemma and stag hunt games, as well as the apparent benefits of collaboration, have motivated governments to promote more frequent and effective collaboration through a variety of policy approaches. Sometimes, multiple kinds of policies are applied concurrently, and yet little is understood about how these policies might interact…

  10. Characteristics of Computational Thinking about the Estimation of the Students in Mathematics Classroom Applying Lesson Study and Open Approach

    ERIC Educational Resources Information Center

    Promraksa, Siwarak; Sangaroon, Kiat; Inprasitha, Maitree

    2014-01-01

    The objectives of this research were to study and analyze the characteristics of computational thinking about the estimation of the students in mathematics classroom applying lesson study and open approach. Members of target group included 4th grade students of 2011 academic year of Choomchon Banchonnabot School. The Lesson plan used for data…

  11. Benefits of collaborative learning for environmental management: applying the integrated systems for knowledge management approach to support animal pest control.

    PubMed

    Allen, W; Bosch, O; Kilvington, M; Oliver, J; Gilbert, M

    2001-02-01

    Resource management issues continually change over time in response to coevolving social, economic, and ecological systems. Under these conditions adaptive management, or "learning by doing," offers an opportunity for more proactive and collaborative approaches to resolving environmental problems. In turn, this will require the implementation of learning-based extension approaches alongside more traditional linear technology transfer approaches within the area of environmental extension. In this paper the Integrated Systems for Knowledge Management (ISKM) approach is presented to illustrate how such learning-based approaches can be used to help communities develop, apply, and refine technical information within a larger context of shared understanding. To outline how this works in practice, we use a case study involving pest management. Particular attention is paid to the issues that emerge as a result of multiple stakeholder involvement within environmental problem situations. Finally, the potential role of the Internet in supporting and disseminating the experience gained through ongoing adaptive management processes is examined.

  12. Private pediatric neuropsychology practice multimodal treatment of ADHD: an applied approach.

    PubMed

    Beljan, Paul; Bree, Kathleen D; Reuter, Alison E F; Reuter, Scott D; Wingers, Laura

    2014-01-01

    As neuropsychologists and psychologists specializing in the assessment and treatment of pediatric mental health concerns, one of the most prominent diagnoses we encounter is attention-deficit hyperactivity disorder (ADHD). Following a pediatric neuropsychological evaluation, parents often request recommendations for treatment. This article addresses our approach to the treatment of ADHD from the private practice perspective. We will review our primary treatment methodology as well as integrative and alternative treatment approaches.

  13. A small perturbation based optimization approach for the frequency placement of high aspect ratio wings

    NASA Astrophysics Data System (ADS)

    Goltsch, Mandy

    Design denotes the transformation of an identified need to its physical embodiment in a traditionally iterative approach of trial and error. Conceptual design plays a prominent role, but an almost infinite number of possible solutions at the outset of design necessitates fast evaluations. The corresponding practice of empirical equations and low-fidelity analyses becomes obsolete in the light of novel concepts. Ever increasing system complexity and resource scarcity mandate new approaches to adequately capture system characteristics. Contemporary concerns in atmospheric science and homeland security created an operational need for unconventional configurations. Unmanned long endurance flight at high altitudes offers a unique showcase for the exploration of new design spaces and the incidental deficit of conceptual modeling and simulation capabilities. Structural and aerodynamic performance requirements necessitate light weight materials and high aspect ratio wings, resulting in distinct structural and aeroelastic response characteristics that stand in close correlation with natural vibration modes. The present research effort revolves around the development of an efficient and accurate optimization algorithm for high aspect ratio wings subject to natural frequency constraints. Foundational cornerstones are beam dimensional reduction and modal perturbation redesign. Local and global analyses inherent to the former suggest corresponding levels of local and global optimization. The present approach departs from this suggestion. It introduces local-level surrogate models to enable a methodology that consists of multi-level analyses feeding into a single-level optimization. The innovative heart of the new algorithm originates in small perturbation theory. A sequence of small perturbation solutions allows the optimizer to make incremental movements within the design space. It enables a directed search that is free of costly gradients. System matrices are decomposed
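
    The small-perturbation ingredient can be illustrated with the standard first-order eigenvalue update for K·φ = λ·M·φ: a small stiffness redesign δK shifts an eigenvalue by approximately φᵀδKφ/(φᵀMφ), so frequencies (ω = √λ) can be stepped without re-solving the full eigenproblem. The matrices below are random stand-ins, not a wing model.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 6
        B = rng.normal(size=(n, n))
        K = B @ B.T + n * np.eye(n)          # SPD stiffness matrix (stand-in)
        M = np.eye(n)                        # unit mass matrix for simplicity
        lam, phi = np.linalg.eigh(K)         # generalized problem with M = I
        dK = 0.01 * np.diag(rng.random(n))   # small stiffness redesign

        phi0, lam0 = phi[:, 0], lam[0]       # fundamental mode
        dlam = phi0 @ dK @ phi0 / (phi0 @ M @ phi0)   # first-order update
        exact = np.linalg.eigh(K + dK)[0][0]
        print("predicted eigenvalue:", round(lam0 + dlam, 5))
        print("exact eigenvalue:    ", round(exact, 5))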

  14. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
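
    The decomposition step can be sketched on a toy looped network: a shortest-distance tree T rooted at the source is built with Dijkstra's algorithm, and every pipe not in T is a chord in Ω. The network and edge weights below are invented.

        import heapq

        edges = {("S", "A"): 3, ("S", "B"): 2, ("A", "B"): 1,
                 ("A", "C"): 4, ("B", "C"): 5, ("C", "D"): 2, ("B", "D"): 7}
        adj = {}
        for (u, v), wgt in edges.items():
            adj.setdefault(u, []).append((v, wgt))
            adj.setdefault(v, []).append((u, wgt))

        # Dijkstra from the source; remember each node's tree parent
        dist, parent = {"S": 0}, {}
        pq = [(0, "S")]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, wgt in adj[u]:
                if d + wgt < dist.get(v, float("inf")):
                    dist[v], parent[v] = d + wgt, u
                    heapq.heappush(pq, (d + wgt, v))

        tree = {frozenset((v, parent[v])) for v in parent}
        chords = [e for e in edges if frozenset(e) not in tree]
        print("tree links:", sorted(tuple(sorted(t)) for t in tree))
        print("chords Ω:", chords)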

  15. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background: The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics. Methods: We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone. Results: Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884
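
    The coverage idea behind the (α,β)-k-Feature Set model can be sketched greedily: every pair of samples from different classes must be distinguished by at least α selected features. The exact problem is solved with combinatorial solvers; the greedy cover below on mock binarized data is only an illustration.

        import itertools
        import numpy as np

        rng = np.random.default_rng(7)
        X = rng.integers(0, 2, size=(10, 15))    # samples x binarized genes
        y = np.array([0] * 5 + [1] * 5)
        alpha = 2

        pairs = [(i, j) for i, j in itertools.combinations(range(len(y)), 2)
                 if y[i] != y[j]]
        need = {p: alpha for p in pairs}         # remaining coverage per pair
        selected = []
        while any(need.values()):
            def gain(f):
                return sum(1 for (i, j), k in need.items()
                           if k > 0 and X[i, f] != X[j, f])
            candidates = [f for f in range(X.shape[1]) if f not in selected]
            if not candidates:
                break
            best = max(candidates, key=gain)
            if gain(best) == 0:
                break                            # remaining pairs not coverable
            selected.append(best)
            for (i, j) in pairs:
                if X[i, best] != X[j, best] and need[(i, j)] > 0:
                    need[(i, j)] -= 1
        print("selected feature panel:", selected)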

  16. On the practical convergence of coda-based correlations: a window optimization approach

    NASA Astrophysics Data System (ADS)

    Chaput, J.; Clerc, V.; Campillo, M.; Roux, P.; Knox, H.

    2016-02-01

    We present a novel optimization approach to improve the convergence of interstation coda correlation functions towards the medium's empirical Green's function. For two stations recording a series of impulsive events in a multiply scattering medium, we explore the impact of coda window selection through a Markov chain Monte Carlo scheme, with the aim of generating a gather of correlation functions that is the most coherent and symmetric over events, thus recovering intuitive elements of the interstation Green's function without any nonlinear post-processing techniques. This approach is tested here on a 2-D acoustic finite-difference model, where a much improved correlation function is obtained, as well as on a database of small impulsive icequakes recorded on Erebus Volcano, Antarctica, where similarly robust results are shown. The average coda solutions, as deduced from the posterior probability distributions of the optimization, are further representative of the scattering strength of the medium, with stronger scattering resulting in a slightly delayed overall coda sampling. The recovery of singly scattered arrivals in the coda of correlation functions is also shown to be possible through this approach, and surface-wave reflections from the outer craters of Erebus Volcano were mapped in this fashion. We also note that, due to the improvement of correlation functions over subsequent events, this approach can further be used to improve the resolution of passive temporal monitoring.
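
    A minimal sketch of the window-optimization idea, assuming a Metropolis-style chain over the coda window's start and length and a score that rewards coherence across events plus causal/acausal symmetry of the stack. The objective, step sizes, temperature, and the corr_fn interface are illustrative assumptions, not the paper's exact scheme.

        import numpy as np

        def window_score(gather):
            # Coherence of each event's correlation with the stack, plus
            # causal/acausal symmetry of the stack (both illustrative choices).
            stack = gather.mean(axis=0)
            coherence = np.mean([np.corrcoef(g, stack)[0, 1] for g in gather])
            symmetry = np.corrcoef(stack, stack[::-1])[0, 1]
            return coherence + symmetry

        def mcmc_window_search(coda, corr_fn, n_iter=2000, temp=0.05, seed=0):
            # coda: (n_events, n_samples) records; corr_fn windows them and
            # returns the gather of interstation correlation functions.
            rng = np.random.default_rng(seed)
            n = coda.shape[1]
            start, win_len = n // 10, n // 2
            current = window_score(corr_fn(coda[:, start:start + win_len]))
            chain = []
            for _ in range(n_iter):
                s = int(np.clip(start + rng.integers(-50, 51), 0, n - 100))
                w = int(np.clip(win_len + rng.integers(-50, 51), 100, n - s))
                proposal = window_score(corr_fn(coda[:, s:s + w]))
                # Metropolis acceptance on the score difference
                if proposal > current or rng.random() < np.exp((proposal - current) / temp):
                    start, win_len, current = s, w, proposal
                chain.append((start, win_len, current))
            return chain   # posterior samples of window start/length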

  17. Optimal reservoir operation considering the water quality issues: A stochastic conflict resolution approach

    NASA Astrophysics Data System (ADS)

    Kerachian, Reza; Karamouz, Mohammad

    2006-12-01

    In this study, an algorithm combining a water quality simulation model and a deterministic/stochastic conflict resolution technique is developed for determining optimal reservoir operating rules. As different decision makers and stakeholders are involved in reservoir operation, the Nash bargaining theory is used to resolve the existing conflict of interests. The utility functions of the proposed models are developed on the basis of the reliability of the water supply to downstream demands, water storage, and the quality of the withdrawn water. The expected value of the Nash product is considered as the objective function of the stochastic model, which can incorporate the inherent uncertainty of reservoir inflow. A water quality simulation model is also developed to simulate the thermal stratification cycle and the reservoir discharge quality through a selective withdrawal structure. The optimization models are solved using a new variant of the genetic algorithm called the varying chromosome length genetic algorithm (VLGA). In this algorithm the chromosome length is sequentially increased to provide a good initial solution for the final traditional GA-based optimization model. The proposed stochastic optimization model can also reduce the computational burden of previously proposed models such as stochastic dynamic programming (SDP) by reducing the number of state transitions in each stage. The proposed models, which are called VLGAQ and SVLGAQ, are applied to the 15-Khordad Reservoir in the central part of Iran. The results show that the proposed models can reduce the salinity of allocated water to different water demands as well as the salinity buildup in the reservoir.
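
    The VLGA idea of sequentially increasing the chromosome length can be sketched as follows: evolve a coarse rule vector, then repeatedly double its resolution, seeding each refined stage with the upsampled population from the previous one. The fitness function, operators, and doubling schedule below are assumptions for illustration (the user-supplied fitness must accept rule vectors of any length, and final_len is assumed to be start_len times a power of two).

        import numpy as np

        def vlga(fitness, final_len, start_len=4, pop_size=40, gens=100, seed=0):
            # Evolve a coarse rule vector first; each stage doubles the
            # chromosome length and is seeded with the previous population.
            rng = np.random.default_rng(seed)
            length = start_len
            P = rng.random((pop_size, length))
            while True:
                for _ in range(gens):
                    fit = np.array([fitness(ind) for ind in P])    # minimized
                    elite = P[np.argsort(fit)[: pop_size // 2]]
                    mates = elite[rng.integers(len(elite), size=(pop_size, 2))]
                    mask = rng.random((pop_size, length)) < 0.5    # uniform crossover
                    P = np.where(mask, mates[:, 0], mates[:, 1])
                    P = np.clip(P + rng.normal(0, 0.05, P.shape), 0, 1)
                    P[0] = elite[0]                                # elitism
                if length >= final_len:
                    return P[0]
                P = np.repeat(P, 2, axis=1)   # each gene split into two copies
                length *= 2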

  18. Conceptual design optimization of rectilinear building frames: A knapsack problem approach

    NASA Astrophysics Data System (ADS)

    Sharafi, Pezhman; Teh, Lip H.; Hadi, Muhammad N. S.

    2015-10-01

    This article presents an automated technique for preliminary layout (conceptual design) optimization of rectilinear, orthogonal building frames in which the shape of the building plan, the number of bays and the size of unsupported spans are variables. It adopts the knapsack problem as the applied combinatorial optimization problem, and describes how the conceptual design optimization problem can be generally modelled as the unbounded multi-constraint multiple knapsack problem. It discusses some special cases, which can be modelled more efficiently as the single knapsack problem, the multiple-choice knapsack problem or the multiple knapsack problem. A knapsack contains sub-rectangles that define the floor plan and the location of columns. Particular conditions or preferences for the conceptual design can be incorporated as constraints on the knapsacks and/or sub-rectangles. A bi-objective knapsack problem is defined with the aim of obtaining a conceptual design having minimum cost and maximum plan regularity (minimum structural eccentricity). A multi-objective ant colony algorithm is formulated to solve the combinatorial optimization problem. A numerical example is included to demonstrate the application of the present method and the robustness of the algorithm.
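
    The building block behind these formulations is the classic 0/1 knapsack recursion. A minimal dynamic-programming sketch (not the article's multi-constraint multiple-knapsack model, and not its ant colony solver) is shown below; the example numbers are arbitrary.

        def knapsack(values, weights, capacity):
            # dp[c] is the best value achievable with capacity c using the
            # items seen so far; keep[i][c] records whether item i was taken.
            dp = [0] * (capacity + 1)
            keep = [[False] * (capacity + 1) for _ in values]
            for i, (v, w) in enumerate(zip(values, weights)):
                for c in range(capacity, w - 1, -1):   # descending: each item once
                    if dp[c - w] + v > dp[c]:
                        dp[c] = dp[c - w] + v
                        keep[i][c] = True
            # backtrack to recover the chosen items
            chosen, c = [], capacity
            for i in range(len(values) - 1, -1, -1):
                if keep[i][c]:
                    chosen.append(i)
                    c -= weights[i]
            return dp[capacity], chosen[::-1]

        # e.g., sub-rectangles with "value" = regularity gain, "weight" = cost
        print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))
        # -> (220, [1, 2])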

  19. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839
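
    The flavour of solving the optimization and simulation problems simultaneously can be seen in a toy equality-constrained quadratic program, where the KKT system is solved through its Schur complement. This is a generic illustration of the Schur-complement mechanics, not the paper's reverse-Schur algorithm or its boundary-integral electrostatics.

        import numpy as np

        # Minimize 0.5 x'Qx - b'x subject to linear "simulation" constraints
        # Ax = c, via the Schur complement of the KKT system.
        def constrained_qp(Q, b, A, c):
            Qinv_b = np.linalg.solve(Q, b)
            Qinv_At = np.linalg.solve(Q, A.T)
            S = A @ Qinv_At                            # Schur complement A Q^-1 A'
            lam = np.linalg.solve(S, A @ Qinv_b - c)   # multipliers first
            x = Qinv_b - Qinv_At @ lam                 # then the design variables
            return x, lam

        rng = np.random.default_rng(1)
        Q = np.eye(4) * 2.0
        b = rng.standard_normal(4)
        A = rng.standard_normal((2, 4))
        c = rng.standard_normal(2)
        x, lam = constrained_qp(Q, b, A, c)
        print(np.allclose(A @ x, c))                   # constraints satisfied -> True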

  20. Systematic analysis of protein–detergent complexes applying dynamic light scattering to optimize solutions for crystallization trials

    SciTech Connect

    Meyer, Arne; Hussein, Rana; Brognaro, Hevila

    2015-01-01

    Application of in situ dynamic light scattering to solutions of protein–detergent complexes permits characterization of these complexes in samples as small as 2 µl in volume. Detergents are widely used for the isolation and solubilization of membrane proteins to support crystallization and structure determination. Detergents are amphiphilic molecules that form micelles once the characteristic critical micelle concentration (CMC) is reached, and can solubilize membrane proteins by forming micelles around them. The results are presented of a study of micelle formation observed by in situ dynamic light-scattering (DLS) analyses performed on selected detergent solutions using a newly designed advanced hardware device. DLS was initially applied in situ to detergent samples with a total volume of approximately 2 µl. When measured with DLS, pure detergents show a monodisperse radial distribution in water at concentrations exceeding the CMC. A series of n-alkyl-β-D-maltopyranosides, from n-hexyl to n-tetradecyl, were used in the investigations. The results obtained verify that the application of DLS in situ is capable of distinguishing differences in the hydrodynamic radii of micelles formed by detergents differing in length by only a single CH2 group in their aliphatic tails. Subsequently, DLS was applied to investigate the distribution of hydrodynamic radii of membrane proteins and selected water-insoluble proteins in the presence of detergent micelles. The results confirm that stable protein–detergent complexes were prepared for (i) bacteriorhodopsin and (ii) FetA in complex with a ligand as examples of transmembrane proteins. A fusion of maltose-binding protein and the duck hepatitis B virus X protein was added to this investigation as an example of a non-membrane-associated protein with low water solubility. The increased solubility of this protein in the presence of detergent could be monitored, as well as the progress of proteolytic
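
    DLS converts a measured translational diffusion coefficient into a hydrodynamic radius through the Stokes-Einstein relation; a one-function sketch is given below, assuming the viscosity of water at 25°C.

        from math import pi

        def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
            """D in m^2/s, T in kelvin, eta in Pa*s (water at 25 C); returns metres."""
            k_B = 1.380649e-23   # Boltzmann constant, J/K
            return k_B * T / (6 * pi * eta * D)

        # a micelle diffusing at 6e-11 m^2/s has R_h of roughly 4 nm
        print(hydrodynamic_radius(6e-11) * 1e9, "nm")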

  1. The flux-coordinate independent approach applied to X-point geometries

    SciTech Connect

    Hariri, F.; Hill, P.; Ottaviani, M.; Sarazin, Y.

    2014-08-15

    A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries.

  2. A Dyadic Approach: Applying a Developmental-Conceptual Model to Couples Coping with Chronic Illness

    ERIC Educational Resources Information Center

    Checton, Maria G.; Magsamen-Conrad, Kate; Venetis, Maria K.; Greene, Kathryn

    2015-01-01

    The purpose of the present study was to apply Berg and Upchurch's developmental-conceptual model toward a better understanding of how couples cope with chronic illness. Specifically, a model was hypothesized in which proximal factors (relational quality), dyadic appraisal (illness interference), and dyadic coping (partner support) influence…

  3. A Transfer Learning Approach for Applying Matrix Factorization to Small ITS Datasets

    ERIC Educational Resources Information Center

    Voß, Lydia; Schatten, Carlotta; Mazziotti, Claudia; Schmidt-Thieme, Lars

    2015-01-01

    Machine Learning methods for Performance Prediction in Intelligent Tutoring Systems (ITS) have proven their efficacy; specific methods, e.g. Matrix Factorization (MF), however suffer from the lack of available information about new tasks or new students. In this paper we show how this problem could be solved by applying Transfer Learning (TL),…
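
    A minimal sketch of matrix factorization for performance prediction, with the transfer-learning idea indicated via the V_fixed argument: latent task factors learned on a data-rich tutoring system can be frozen and reused when fitting students of a smaller system. The update rule and dimensions are illustrative assumptions, not the paper's method (whose abstract is truncated here).

        import numpy as np

        def factorize(R, mask, k=8, steps=500, lr=0.01, reg=0.1, seed=0, V_fixed=None):
            # R: student-by-task performance matrix; mask marks observed cells.
            # Passing V_fixed freezes the task factors (the transfer setting).
            rng = np.random.default_rng(seed)
            U = rng.normal(0, 0.1, (R.shape[0], k))
            V = V_fixed if V_fixed is not None else rng.normal(0, 0.1, (R.shape[1], k))
            for _ in range(steps):
                E = mask * (R - U @ V.T)          # error on observed cells only
                U += lr * (E @ V - reg * U)
                if V_fixed is None:
                    V += lr * (E.T @ U - reg * V)
            return U, V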

  4. Understanding the Conceptual Development Phase of Applied Theory-Building Research: A Grounded Approach

    ERIC Educational Resources Information Center

    Storberg-Walker, Julia

    2007-01-01

    This article presents a provisional grounded theory of conceptual development for applied theory-building research. The theory described here extends the understanding of the components of conceptual development and provides generalized relations among the components. The conceptual development phase of theory-building research has been widely…

  5. Fluid Intelligence as a Predictor of Learning: A Longitudinal Multilevel Approach Applied to Math

    ERIC Educational Resources Information Center

    Primi, Ricardo; Ferrao, Maria Eugenia; Almeida, Leandro S.

    2010-01-01

    The association between fluid intelligence and inter-individual differences was investigated using multilevel growth curve modeling applied to data measuring intra-individual improvement on math achievement tests. A sample of 166 students (88 boys and 78 girls), ranging in age from 11 to 14 (M = 12.3, SD = 0.64), was tested. These individuals took…

  6. Applying Research to Teacher Education: The University of Utah's Collaborative Approach. First Year Preliminary Report.

    ERIC Educational Resources Information Center

    Driscoll, Amy

    In 1983, the National Institute of Education funded the Far West Laboratory for Educational Research and Development to conduct a study, Applying Research to Teacher Education (ARTE) Research Utilization in Elementary Teacher Education (RUETE). The ARTE:RUETE study's purpose is to develop preservice instruction incorporating current research…

  7. Ice particle mass-dimensional parameter retrieval and uncertainty analysis using an Optimal Estimation framework applied to in situ data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuocan; Mace, Jay; Avalone, Linnea; Wang, Zhien

    2015-04-01

    The extreme variability of ice particle habits in precipitating clouds affects our understanding of these cloud systems in every aspect (e.g., radiative transfer, dynamics, precipitation rate) and contributes substantially to the uncertainties in the model representation of related processes. Ice particle mass-dimensional power-law relationships, M = a*D^b, are commonly assumed in models and retrieval algorithms, while very little knowledge exists regarding the uncertainties of these M-D parameters in real-world situations. In this study, we apply Optimal Estimation (OE) methodology to infer the ice particle mass-dimensional relationship from ice particle size distributions and bulk water contents independently measured on board the University of Wyoming King Air during the Colorado Airborne Multi-Phase Cloud Study (CAMPS). We also utilize W-band radar reflectivity obtained on the same platform (King Air), offering a further constraint on this ill-posed problem (Heymsfield et al. 2010). In addition to the values of the retrieved M-D parameters, the associated uncertainties are conveniently acquired in the OE framework, within the limitations of assumed Gaussian statistics. We find, given the constraints provided by the bulk water measurement and in situ radar reflectivity, that the relative uncertainty of the mass-dimensional power-law prefactor (a) is approximately 80% and the relative uncertainty of the exponent (b) is 10-15%. With this level of uncertainty, the forward-model uncertainty in radar reflectivity would be on the order of 4 dB, or a factor of approximately 2.5 in ice water content. The implication of this finding is that inferences of bulk water from either remote or in situ measurements of particle spectra cannot be more certain than this when the mass-dimensional relationships are not known a priori, which is almost never the case.
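
    The retrieval machinery here is standard Rodgers-style optimal estimation: a Gauss-Newton iteration that balances the measurement misfit against a Gaussian prior, with the posterior covariance supplying the quoted uncertainties. A generic sketch follows; the state would be, e.g., (log a, b) of M = a*D^b, and F and K are user-supplied forward model and Jacobian (not the authors' code).

        import numpy as np

        def oe_retrieval(y, F, K, x_a, S_a, S_e, n_iter=10):
            # y: measurements; F(x): forward model; K(x): Jacobian;
            # (x_a, S_a): prior mean and covariance; S_e: noise covariance.
            x = x_a.copy()
            Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
            for _ in range(n_iter):
                Ki = K(x)
                A = Sa_inv + Ki.T @ Se_inv @ Ki
                g = Ki.T @ Se_inv @ (y - F(x)) - Sa_inv @ (x - x_a)
                x = x + np.linalg.solve(A, g)          # Gauss-Newton MAP step
            S_hat = np.linalg.inv(Sa_inv + K(x).T @ Se_inv @ K(x))
            return x, S_hat     # posterior covariance gives the uncertainties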

  8. Multi-objective optimization approach for cost management during product design at the conceptual phase

    NASA Astrophysics Data System (ADS)

    Durga Prasad, K. G.; Venkata Subbaiah, K.; Narayana Rao, K.

    2014-03-01

    Effective cost management during the conceptual design phase of a product is essential to develop a product with minimum cost and desired quality. The integration of the methodologies of quality function deployment (QFD), value engineering (VE) and target costing (TC) can be applied to the continuous improvement of any product during product development. To optimize customer satisfaction and the total cost of a product, a mathematical model is established in this paper. This model integrates QFD, VE and TC under a multi-objective optimization framework. A case study on a domestic refrigerator is presented to show the performance of the proposed model. Goal programming is adopted to attain the goals of maximum customer satisfaction and minimum cost of the product.
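
    Goal programming of this kind can be posed as a linear program over deviation variables. The sketch below uses illustrative coefficients (not the paper's QFD/VE/TC model): a satisfaction goal of at least 12 and a cost goal of at most 10, penalizing the satisfaction shortfall and the cost excess.

        from scipy.optimize import linprog

        # variables: [x1, x2, d1m, d1p, d2m, d2p]; d1m/d1p and d2m/d2p are the
        # under/over deviations from the satisfaction and cost goals.
        c = [0, 0, 1, 0, 0, 1]          # penalize shortfall d1m and excess d2p
        A_eq = [[3, 2, 1, -1, 0, 0],    # 3*x1 + 2*x2 + d1m - d1p = 12
                [2, 4, 0, 0, 1, -1]]    # 2*x1 + 4*x2 + d2m - d2p = 10
        b_eq = [12, 10]
        res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 5)] * 2 + [(0, None)] * 4)
        x1, x2 = res.x[:2]
        print(x1, x2, res.fun)          # res.fun == 0 means both goals are met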

  9. Quantitative Systems Pharmacology Approaches Applied to Microphysiological Systems (MPS): Data Interpretation and Multi-MPS Integration

    PubMed Central

    Yu, J; Cilfone, NA; Large, EM; Sarkar, U; Wishnok, JS; Tannenbaum, SR; Hughes, DJ; Lauffenburger, DA; Griffith, LG; Stokes, CL; Cirit, M

    2015-01-01

    Our goal in developing Microphysiological Systems (MPS) technology is to provide an improved approach for more predictive preclinical drug discovery via a highly integrated experimental/computational paradigm. Success will require quantitative characterization of MPSs and mechanistic analysis of experimental findings sufficient to translate resulting insights from in vitro to in vivo. We describe herein a systems pharmacology approach to MPS development and utilization that incorporates more mechanistic detail than traditional pharmacokinetic/pharmacodynamic (PK/PD) models. A series of studies illustrates diverse facets of our approach. First, we demonstrate two case studies focused on a single MPS, the liver/immune MPS: a PK data analysis and an inflammation response. Building on the single-MPS modeling, a theoretical investigation of a four-MPS interactome then provides a quantitative way to consider several pharmacological concepts, such as absorption, distribution, metabolism, and excretion, in the design of multi-MPS interactome operation and experiments. PMID:26535159

  10. Frontolateral Approach Applied to Sellar Region Lesions: A Retrospective Study in 79 Patients

    PubMed Central

    Liu, Hao-Cheng; Wu, Zhen; Wang, Liang; Xiao, Xin-Ru; Li, Da; Jia, Wang; Zhang, Li-Wei; Zhang, Jun-Ting

    2016-01-01

    Background: Various surgical approaches for the removal of sellar region lesions have previously been described. This study aimed to evaluate the reliability and safety of the frontolateral approach (FLA) for removing sellar region lesions. Methods: We present a retrospective study of 79 patients with sellar region lesions who were admitted and operated on using the FLA from August 2011 to August 2015 in the Department of Neurosurgery of Beijing Tian Tan Hospital. We classified the FLA into three types, compared the FLA types to the areas of lesion invasion, and analyzed operative bleeding volume, gross total resection (GTR) rate, visual outcome, and mortality. Results: Seventy-nine patients were followed up from 2.9 to 50.3 months, with a mean follow-up of 20.5 months. There were 42 cases of meningiomas, 25 cases of craniopharyngiomas, and 12 cases of pituitary adenomas. The mean follow-up Karnofsky Performance Scale score was 90.4. GTR was achieved in 75 patients (94.9%). Two patients (2.5%) had tumor recurrence. No patients died perioperatively or during short-term follow-up. Three patients (3.8%) with craniopharyngioma died 10, 12, and 23 months, respectively, after surgery. The operative bleeding volume in this study was no greater than that of other approaches to the sellar region (P = 0.783). In this study, 35 patients (44.3%) had visual improvement after surgery, 38 patients (48.1%) remained unchanged, and three patients (3.8%) worsened. Conclusions: The FLA was an effective approach in the treatment of sellar region lesions, with good preservation of visual function. The FLA classification enabled tailored craniotomies for each patient according to the anatomic site of tumor invasion. This study found that the FLA had outcomes similar to other surgical approaches for sellar region lesions. PMID:27364792

  11. Characterization of remarkable floods in France, a transdisciplinary approach applied on generalized floods of January 1910

    NASA Astrophysics Data System (ADS)

    Boudou, Martin; Lang, Michel; Vinet, Freddy; Coeur, Denis

    2014-05-01

    emphasize one flood typology or one flood dynamic (for example, flash floods are often over-represented relative to slow-dynamic floods in existing databases). Thus, the selected criteria have to provide a general overview of flooding risk in France by integrating all typologies: storm surges, torrential floods, floods caused by rising groundwater levels, etc. The methodology developed for the evaluation grid is inspired by several scientific works on historical hydrology (Bradzil, 2006; Benito et al., 2004) and on the classification of extreme floods (Kundzewics et al., 2013; Garnier, 2005). The referenced information is mainly drawn from investigations carried out for the PFRA (archives, local data), from internet databases on flooding disasters, and from a complementary bibliography (for instance the work of Maurice Pardé, a geographer who extensively documented French floods during the 20th century). The proposed classification relies on three main axes. Each axis is associated with a set of criteria, each criterion carrying a score (from 0.5 to 4 points), which together yield a final remarkability score. • The flood intensity, characterizing the flood's hazard level. It is composed of the submersion duration (important for valorizing floods with slow dynamics, such as flooding from groundwater), the return period of the event's peak discharge, and the presence of factors that significantly increase the hazard level (dyke breaks, log jams, sediment transport…). • The flood severity, which covers economic damages, social and political repercussions, media coverage of the event, the number of fatalities, and eventual flood-warning failures. Analyzing the flood's consequences is essential in order to evaluate the vulnerability of society at the date of the disaster. • The spatial extension of the flood, which contributes complementary information to the first two axes. The evaluation grid was tested and applied on the sample of 176 remarkable events. Around twenty events (from 1856 to 2010) come out with a high remarkability rate

  12. Stochastic approach to reconstruction of dynamical systems: optimal model selection criterion

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E. M.; Feigin, A. M.

    2011-12-01

    Most known observable systems are complex and high-dimensional, which makes exact long-term forecasting of their behavior impossible. The stochastic approach to the reconstruction of such systems offers hope of describing the important qualitative features of their behavior in a low-dimensional way, while all other dynamics is modelled as a stochastic disturbance. This report is devoted to the application of Bayesian evidence to optimal stochastic model selection when reconstructing the evolution operator of an observed system. The idea of Bayesian evidence is to find a compromise between the predictiveness of the model and the quality of its fit to the data. We represent the evolution operator of the investigated system in the form of a random dynamical system comprising deterministic and stochastic parts, both parameterized by an artificial neural network. We then use the Bayesian evidence criterion to estimate the optimal complexity of the model, i.e., both the number of parameters and the dimension corresponding to the most probable model given the data. We demonstrate on a number of model examples that a model with a non-uniformly distributed stochastic part (which corresponds to non-Gaussian perturbations of the evolution operator) is optimal in the general case. Further, we show that a simple stochastic model can be the most preferred for reconstructing the evolution operator underlying complex observed dynamics, even in the case of a deterministic high-dimensional system. The workability of the suggested approach for modeling and forecasting real measured geophysical dynamics is investigated.
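
    In practice the Bayesian evidence is often approximated by a Laplace (Gaussian) integral around the posterior mode, which is one plausible way to score competing stochastic models; the helper below is a generic sketch of that approximation, not the authors' exact computation.

        import numpy as np

        def log_evidence_laplace(log_posterior, theta_hat, hessian):
            # Laplace approximation: log p(D) ~ log p(D, theta_hat)
            # + (k/2) log(2*pi) - 0.5 log det H, with H the Hessian of the
            # negative log-posterior at the mode theta_hat.
            k = len(theta_hat)
            _, logdet = np.linalg.slogdet(hessian)
            return log_posterior(theta_hat) + 0.5 * k * np.log(2 * np.pi) - 0.5 * logdet

        # Model selection: fit stochastic models of increasing complexity and
        # keep the one with the largest approximate evidence, which balances
        # goodness of fit against the volume of parameter space used.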

  13. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
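
    The final step of any such method is an exact non-dominated filter over the enumerated candidates; given a sufficiently complete candidate set (e.g., the k best solutions of each single-objective problem, with k chosen per the paper's conditions), the filter returns the true front rather than an approximation. A minimal bi-objective sketch:

        def pareto_front(solutions):
            # Exact non-dominated filtering, both objectives minimized.
            front = []
            for s in solutions:
                dominated = any(d[0] <= s[0] and d[1] <= s[1] and d != s
                                for d in solutions)
                if not dominated:
                    front.append(s)
            return sorted(set(front))

        routes = [(4, 9), (5, 7), (6, 6), (7, 7), (5, 8), (9, 3)]
        print(pareto_front(routes))   # [(4, 9), (5, 7), (6, 6), (9, 3)]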

  14. Off-line determination of the optimal number of iterations of the robust anisotropic diffusion filter applied to denoising of brain MR images.

    PubMed

    Ferrari, Ricardo J

    2013-02-01

    Although anisotropic diffusion filters have been used extensively and with great success in medical image denoising, one limitation of this iterative approach, when used in fully automatic medical image processing schemes, is that the quality of the resulting denoised image is highly dependent on the number of iterations of the algorithm. Too many iterations may excessively blur the edges of the anatomical structures, while too few may not be enough to remove the undesirable noise. In this work, a mathematical model is proposed to automatically determine the number of iterations of the robust anisotropic diffusion filter applied to the problem of denoising three common types of human brain magnetic resonance (MR) images (T1-weighted, T2-weighted and proton density). The model is determined off-line by means of the maximization of the mean structural similarity index, which is used in this work as a metric for quantitative assessment of the processed images obtained after each iteration of the algorithm. After determining the model parameters, the optimal number of iterations of the algorithm is easily determined without requiring any extra computation time. The proposed method was tested on 3D synthetic and clinical human brain MR images, and the results of qualitative and quantitative evaluation have shown its effectiveness. PMID:23124813
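
    The off-line procedure can be sketched as follows: run a Perona-Malik-type diffusion, score each iterate by SSIM against a noise-free reference, and record the iteration count that maximizes the score (the paper then fits a model to such optima). The conduction function, step size, and periodic boundary handling below are simplifying assumptions.

        import numpy as np
        from skimage.metrics import structural_similarity

        def perona_malik_step(img, kappa=15.0, dt=0.2):
            # One anisotropic-diffusion step with exponential conduction;
            # np.roll gives periodic boundaries, acceptable for a sketch.
            diffs = [np.roll(img, s, axis) - img
                     for s, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1))]
            return img + dt * sum(np.exp(-(g / kappa) ** 2) * g for g in diffs)

        def best_iteration(noisy, reference, max_iter=50):
            # Diffuse, score every iterate against the ground truth, and
            # return the iteration count that maximizes the mean SSIM.
            img, scores = noisy.astype(float), []
            for _ in range(max_iter):
                img = perona_malik_step(img)
                scores.append(structural_similarity(
                    reference.astype(float), img,
                    data_range=float(img.max() - img.min())))
            return int(np.argmax(scores)) + 1, max(scores)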

  15. An iterative approach to the optimal co-design of linear control systems

    NASA Astrophysics Data System (ADS)

    Jiang, Yu; Wang, Yebin; Bortoff, Scott A.; Jiang, Zhong-Ping

    2016-04-01

    This paper investigates the optimal co-design of both physical plants and control policies for a class of continuous-time linear control systems. The optimal co-design of a specific linear control system is commonly formulated as a nonlinear non-convex optimisation problem (NNOP) and solved by using iterative techniques, where the plant parameters and the control policy are updated iteratively and alternately. This paper proposes a novel iterative approach to solve the NNOP, where the plant parameters are updated by solving a standard semi-definite programming problem, with non-convexity no longer involved. The proposed system design is generally less conservative in terms of system performance than the conventional system-equivalence-based design, although the range of applicability is slightly reduced. A practical optimisation algorithm is proposed to compute a sub-optimal solution ensuring system stability, and the convergence of the algorithm is established. The effectiveness of the proposed algorithm is illustrated by its application to the optimal co-design of a physical load-positioning system.

  16. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.

    PubMed

    Vrabie, Draguna; Lewis, Frank

    2009-04-01

    In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such control structure is unlike any standard form of controllers previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449
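
    For the linear-quadratic special case, the policy iteration underlying such algorithms is Kleinman's: policy evaluation solves a Lyapunov equation, policy improvement updates the gain. The paper's contribution is performing the evaluation from measured data without knowing the internal dynamics; the offline sketch below only shows the iteration that the online scheme emulates.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def lqr_policy_iteration(A, B, Q, R, K0, n_iter=20):
            # K0 must stabilize A - B @ K0 for the Lyapunov equations to
            # have positive-definite solutions.
            K = K0
            for _ in range(n_iter):
                Ac = A - B @ K
                # policy evaluation: Ac' P + P Ac + Q + K' R K = 0
                P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
                # policy improvement
                K = np.linalg.solve(R, B.T @ P)
            return K, P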

  17. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    PubMed

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. PMID:23706414

  19. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
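
    The weighted binary matrix sampling idea can be sketched generically: sample binary inclusion rows from per-variable weights, score each sub-model on held-out data, and re-estimate the weights from the best fraction so that the variable space shrinks. The linear model, hold-out split, and thresholds below are illustrative assumptions rather than VISSA's exact settings.

        import numpy as np

        def wbms_select(X, y, n_models=200, n_rounds=10, keep=0.1, seed=0):
            # New weights = appearance frequency of each variable in the
            # best-performing fraction of sampled sub-models.
            rng = np.random.default_rng(seed)
            n, p = X.shape
            tr, te = slice(0, int(0.7 * n)), slice(int(0.7 * n), n)
            w = np.full(p, 0.5)
            for _ in range(n_rounds):
                draws = rng.random((n_models, p)) < w     # binary matrix
                errs = np.full(n_models, np.inf)
                for i, row in enumerate(draws):
                    if row.any():
                        beta, *_ = np.linalg.lstsq(X[tr, row], y[tr], rcond=None)
                        errs[i] = np.sqrt(np.mean((X[te, row] @ beta - y[te]) ** 2))
                best = draws[np.argsort(errs)[: int(n_models * keep)]]
                w = best.mean(axis=0)                     # variable space shrinks
            return np.flatnonzero(w > 0.5)

        rng = np.random.default_rng(1)
        X = rng.standard_normal((120, 20))
        y = X[:, 2] - 2 * X[:, 7] + 0.1 * rng.standard_normal(120)
        print(wbms_select(X, y))          # should typically recover columns 2 and 7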

  20. A Developmental Approach to Helping: The Epigenetic Model Applied to the Period of Early Childhood

    ERIC Educational Resources Information Center

    Dawson, Susan H.

    1973-01-01

    The article describes application of the epigenetic model to work with children in the period of early childhood development. The focus is placed on verbal learning. Projects wherein disadvantaged children gain in verbal skills through supervised mother-child interactions are described. The response of families to this approach suggests important…

  1. Applying DOE's Graded Approach for assessing radiation impacts to non-human biota at the INL.

    PubMed

    Morris, Randall C

    2006-01-01

    In July 2002, The US Department of Energy (DOE) released a new technical standard entitled A Graded Approach for Evaluating Radiation Doses to Aquatic and Terrestrial Biota. DOE facilities are annually required to demonstrate that routine radioactive releases from their sites are protective of non-human receptors and sites are encouraged to use the Graded Approach for this purpose. Use of the Graded Approach requires completion of several preliminary steps, to evaluate the degree to which the site environmental monitoring program is appropriate for evaluating impacts to non-human biota. We completed these necessary activities at the Idaho National Laboratory (INL) using the following four tasks: (1) develop conceptual models and evaluate exposure pathways; (2) define INL evaluation areas; (3) evaluate sampling locations and media; (4) evaluate data gaps. All of the information developed in the four steps was incorporated, data sources were identified, departures from the Graded Approach were justified, and a step-by-step procedure for biota dose assessment at the INL was specified. Finally, we completed a site-wide biota dose assessment using the 2002 environmental surveillance data and an offsite assessment using soil and surface water data collected since 1996. These assessments demonstrated the environmental concentrations of radionuclides measured on and near the INL do not present significant risks to populations of non-human biota.

  2. An Activity and Theory for Applying Human Systems Approach to Industrial Arts.

    ERIC Educational Resources Information Center

    Mietus, Walter S.

    A human systems approach that emphasizes knowing the parts of a phenomenon, their order, and particularly their interactions needs to be adopted by industrial arts. A student-based theoretical framework that incorporates systems and subsystems in industrial arts has been presented by Donald Maley. The theoretical base includes 10 organismic…

  3. Applying Adverse Outcome Pathways (AOPs) to support Integrated Approaches to Testing and Assessment (IATA workshop report)

    EPA Science Inventory

    Chemical regulation is challenged by the large number of chemicals requiring assessment for potential human health and environmental impacts. Current approaches are too resource intensive in terms of time, money and animal use to evaluate all chemicals under development or alread...

  4. What Are the Learning Approaches Applied by Undergraduate Students in English Process Writing Based on Gender?

    ERIC Educational Resources Information Center

    Veloo, Arsaythamby; Krishnasamy, Hariharan N.; Harun, Hana Mulyani

    2015-01-01

    The purpose of this study is to determine gender differences and type of learning approaches among Universiti Utara Malaysia (UUM) undergraduate students in English writing performance. The study involved 241 (32.8% male & 67.2% female) undergraduate students of UUM who were taking the Process Writing course. This study uses a Two-Factor Study…

  5. Applying Form-Focused Approaches to L2 Vocabulary Instruction through Video Podcasts

    ERIC Educational Resources Information Center

    Marefat, Fahimeh; Hassanzadeh, Mohammad

    2016-01-01

    Since its inception, form-focused instruction (FFI) has been associated with grammar, with only a handful of studies examining its potential for vocabulary development (e.g., Laufer, 2006). Meanwhile, there has been an unresolved dispute between the two approaches of Focus on Form (FonF) and Focus on Forms (FonFs) in terms of their degree of…

  6. Applying a Competency- and Problem-Based Approach for Learning Compiler Design

    ERIC Educational Resources Information Center

    Khoumsi, Ahmed; Gonzalez-Rubio, Ruben

    2006-01-01

    Our department has redesigned its electrical engineering and computer engineering programs completely by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning compiler design.

  7. Applying the Recovery Approach to the Interface between Mental Health and Child Protection Services

    ERIC Educational Resources Information Center

    Duffy, Joe; Davidson, Gavin; Kavanagh, Damien

    2016-01-01

    There is a range of theoretical approaches which may inform the interface between child protection and adult mental health services. These theoretical perspectives tend to be focused on either child protection or mental health with no agreed integrating framework. The interface continues to be identified, in research, case management reviews and…

  8. An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.

    2004-01-01

    This paper discusses an accurate and efficient method for focal-plane survey that was used for the Spitzer Space Telescope. The approach is based on a high-order, 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters into a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages over estimating the engineering and science parameters separately. The resulting focal-plane survey approach is applicable to a diverse range of science instruments, such as imaging cameras, spectroscopy slits, and scanning-type arrays alike. The paper summarizes results from applying the IPF Kalman filter to calibrating the Spitzer Space Telescope focal plane, containing the MIPS, IRAC, and IRS science instrument arrays.
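
    At its core the IPF filter is a (high-order) Kalman filter over a stacked state; a generic predict/update step is sketched below, with comments indicating how engineering and science parameters would share one state vector. The 37-state structure itself is not reproduced here.

        import numpy as np

        def kalman_step(x, P, z, H, R, F=None, Q=None):
            # For a focal-plane survey, x would stack engineering parameters
            # (alignment angles, gyro drift) with science parameters (plate
            # scale, distortion coefficients); z holds centroid measurements.
            if F is not None:                        # predict
                x = F @ x
                P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ (z - H @ x)                  # update state
            P = (np.eye(len(x)) - K @ H) @ P         # update covariance
            return x, P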

  9. Further developments in the controlled growth approach for optimal structural synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, P.

    1982-01-01

    It is pointed out that the use of nonlinear programming methods in conjunction with finite element and other discrete analysis techniques has provided a powerful tool in the domain of optimal structural synthesis. The present investigation is concerned with new strategies which comprise an extension to the controlled growth method considered by Hajela and Sobieski-Sobieszczanski (1981). That method proposed an approach wherein the standard nonlinear programming (NLP) methodology of working with a very large number of design variables was replaced by a sequence of smaller optimization cycles, each involving a single 'dominant' variable. The current investigation outlines some new features. Attention is given to a modified cumulative constraint representation which is defined in both the feasible and infeasible domains of the design space. Other new features relate to the evaluation of the 'effectiveness measure' on which the choice of the dominant variable and the linking strategy is based.

  10. Optimizing distance image quality of an aspheric multifocal intraocular lens using a comprehensive statistical design approach.

    PubMed

    Hong, Xin; Zhang, Xiaoxiao

    2008-12-01

    The AcrySof ReSTOR intraocular lens (IOL) is a multifocal lens with state-of-the-art apodized diffractive technology, and is indicated for visual correction of aphakia secondary to removal of cataractous lenses in adult patients with or without presbyopia who desire near, intermediate, and distance vision with increased spectacle independence. The multifocal design results in some reduction of optical contrast, which may be improved by reducing spherical aberration. A novel, patent-pending approach was undertaken to investigate the optical performance of aspheric lens designs. Simulated eyes drawn from normal human distributions were corrected with different lens designs in a Monte Carlo simulation that allowed for variability in multiple surgical parameters (e.g., positioning error, biometric variation). The Monte Carlo optimized results indicated that a lens spherical aberration of -0.10 µm provided optimal distance image quality.
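
    A hedged sketch of the Monte Carlo portion of such a design study: draw pseudo-eyes with normally distributed surgical and biometric parameters and rank candidate designs by mean image quality. The distributions, their parameters, and iq_metric are placeholders for illustration, not the patented procedure.

        import numpy as np

        def rank_designs(iq_metric, designs, n_eyes=10000, seed=0):
            # iq_metric(design, decenter, tilt, corneal_sa) is a hypothetical
            # user-supplied optical model returning an image-quality score.
            rng = np.random.default_rng(seed)
            decenter = rng.normal(0.0, 0.3, n_eyes)      # mm (illustrative)
            tilt = rng.normal(0.0, 2.0, n_eyes)          # degrees (illustrative)
            corneal_sa = rng.normal(0.27, 0.10, n_eyes)  # micrometres (illustrative)
            scores = {d: np.mean([iq_metric(d, dc, t, sa)
                                  for dc, t, sa in zip(decenter, tilt, corneal_sa)])
                      for d in designs}
            return max(scores, key=scores.get), scores

        # e.g., candidate lens spherical aberrations, in micrometres:
        # rank_designs(my_iq_model, designs=[0.0, -0.05, -0.10, -0.20])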

  11. Development and optimization of the activated charcoal suspension composition based on a mixture design approach.

    PubMed

    Ronowicz, Joanna; Kupcewicz, Bogumiła; Pałkowski, Łukasz; Krysiński, Jerzy

    2015-03-01

    In this study, a new drug product containing activated charcoal was designed and developed. The excipient levels in the pharmaceutical formulation were optimized using a mixture design approach. The adsorption power of the activated charcoal suspension was selected as the critical quality attribute influencing the efficacy of medical treatment. Significant prognostic models (p<0.05) were obtained to describe in detail the interrelations between excipient levels and the adsorption power of the formulation. Liquid flavour had a critical impact on the adsorption power of the suspension. Formulations containing the largest amount of liquid flavour showed the lowest adsorption power. Sorbitol was not adsorbed onto activated charcoal as strongly as liquid flavour. A slight increase in the content of carboxymethylcellulose sodium led to a marked decrease in adsorption power. The obtained mathematical models and response surfaces allowed selection of the optimal composition of excipients in the final drug product.

  12. Implementing nonprojective measurements via linear optics: An approach based on optimal quantum-state discrimination

    SciTech Connect

    Loock, Peter van; Nemoto, Kae; Munro, William J.; Raynal, Philippe; Luetkenhaus, Norbert

    2006-06-15

    We discuss the problem of implementing generalized measurements [positive operator-valued measures (POVMs)] with linear optics, either based upon a static linear array or including conditional dynamics. In our approach, a given POVM shall be identified as a solution to an optimization problem for a chosen cost function. We formulate a general principle: the implementation is only possible if a linear-optics circuit exists for which the quantum mechanical optimum (minimum) is still attainable after dephasing the corresponding quantum states. The general principle enables us, for instance, to derive a set of necessary conditions for the linear-optics implementation of the POVM that realizes the quantum mechanically optimal unambiguous discrimination of two pure nonorthogonal states. This extends our previous results on projection measurements and the exact discrimination of orthogonal states.

  13. Quasiparticle mass enhancement approaching optimal doping in a high-Tc superconductor

    DOE PAGES Beta

    Ramshaw, B. J.; Sebastian, S. E.; McDonald, R. D.; Day, J.; Tan, B. S.; Zhu, Z.; Betts, J. B.; Liang, Ruixing; Bonn, D. A.; Hardy, W. N.; et al

    2015-03-26

    In the quest for superconductors with higher transition temperatures (Tc), one emerging motif is that electronic interactions favorable for superconductivity can be enhanced by fluctuations of a broken-symmetry phase. Recent experiments have suggested the existence of the requisite broken-symmetry phase in the high-Tc cuprates, but the impact of such a phase on the ground-state electronic interactions has remained unclear. Here, we used magnetic fields exceeding 90 tesla to access the underlying metallic state of the cuprate YBa2Cu3O6+δ over a wide range of doping, and observed magnetic quantum oscillations that reveal a strong enhancement of the quasiparticle effective mass toward optimal doping. This mass enhancement results from increasing electronic interactions approaching optimal doping, and suggests a quantum critical point at a hole doping of pcrit ≈ 0.18.

  14. Practical Framework for an Electron Beam Induced Current Technique Based on a Numerical Optimization Approach

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Hideshi; Soeda, Takeshi

    2015-03-01

    A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, due to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the constant acquisition of high contrast images has become possible, improving the reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.

  15. Ensemble-based air quality forecasts: A multimodel approach applied to ozone

    NASA Astrophysics Data System (ADS)

    Mallet, Vivien; Sportisse, Bruno

    2006-09-01

    The potential of ensemble techniques to improve ozone forecasts is investigated. Ensembles with up to 48 members (models) are generated using the modeling system Polyphemus. Members differ in their physical parameterizations, their numerical approximations, and their input data. Each model is evaluated during 4 months (summer 2001) over Europe with hundreds of stations from three ozone-monitoring networks. We found that several linear combinations of models have the potential to drastically increase the performances of model-to-data comparisons. Optimal weights associated with each model are not robust in time or space. Forecasting these weights therefore requires relevant methods, such as selection of adequate learning data sets, or specific learning algorithms. Significant performance improvements are accomplished by the resulting forecasted combinations. A decrease of about 10% of the root-mean-square error is obtained on ozone daily peaks. Ozone hourly concentrations show stronger improvements.
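
    The optimal linear combination over a learning period reduces to a least-squares fit of model forecasts to observations, as sketched below; forecasting the weights themselves (the harder problem the abstract raises) would wrap this in a moving learning window or another learning algorithm.

        import numpy as np

        def optimal_weights(forecasts, obs):
            # forecasts: (n_times, n_models); obs: (n_times,).  The returned
            # weights minimize the RMSE of the weighted sum over the period.
            w, *_ = np.linalg.lstsq(forecasts, obs, rcond=None)
            return w

        # Re-learn on a trailing window before each forecast, since (as the
        # study notes) the weights are not robust in time or space.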

  16. Sample site selection for tracer studies applying a unidirectional circulatory approach

    SciTech Connect

    Layman, D.K.; Wolfe, R.R.

    1987-08-01

    The optimal arterial or venous sites for infusion and sampling during isotopic tracer studies have not been established. This study determined the relationship of plasma and tissue enrichment (E) when isotopes were infused into an artery and sampled from a vein (av mode) or infused into a vein and sampled from an artery (va mode). Adult dogs were given primed constant infusions of [3-13C]lactate, [1-13C]leucine, and 14C-labeled bicarbonate. Simultaneous samples were drawn from the vena cava, aortic arch, and breath. Tissue samples were removed from skeletal muscle, liver, kidney, and gut. Breath samples were analyzed for 14CO2 by liquid scintillation counting, and plasma isotopic enrichments of [13C]lactate, [13C]leucine, and alpha-[13C]ketoisocaproate (KIC) were determined by gas chromatography-mass spectrometry. In the va mode, the plasma E for lactate and leucine were 30-40% above tissue E. The av mode provided an accurate reflection of tissue E for lactate, which equilibrates rapidly with tissues, and a reasonable estimate for leucine, which exchanges more slowly. The isotopic enrichment of plasma KIC more directly reflected tissue leucine E than did plasma leucine E, and KIC enrichment was insensitive to sampling site. We also evaluated theoretically a circulatory model that predicts venous isotopic enrichments when the va mode is used. We conclude that the av mode is optimal, but that the problems arising from use of the va mode can be overcome by use of a metabolic product (i.e., KIC) or by calculation of venous specific activity with our circulatory model.

  17. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives are discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm; the distributed parameters/controls can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule, and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse-grid stochastic collocation FEM approach. This allows for decoupled, moderately high-dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems are described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical

  18. Design Space Approach for Preservative System Optimization of an Anti-Aging Eye Fluid Emulsion.

    PubMed

    Lourenço, Felipe Rebello; Francisco, Fabiane Lacerda; Ferreira, Márcia Regina Spuri; Andreoli, Terezinha De Jesus; Löbenberg, Raimar; Bou-Chacra, Nádia

    2015-01-01

    The use of preservatives must be optimized in order to ensure the efficacy of an antimicrobial system as well as the product safety. Despite the wide variety of preservatives, the synergistic or antagonistic effects of their combinations are not well established, and this is still an issue in the development of pharmaceutical and cosmetic products. The purpose of this paper was to establish a design space using a simplex-centroid approach to achieve the lowest effective concentration of three preservatives (methylparaben, propylparaben, and imidazolidinyl urea) and EDTA for an emulsion cosmetic product. Twenty-two formulae of emulsion differing only by imidazolidinyl urea (A: 0.00 to 0.30% w/w), methylparaben (B: 0.00 to 0.20% w/w), propylparaben (C: 0.00 to 0.10% w/w) and EDTA (D: 0.00 to 0.10% w/w) concentrations were prepared. They were tested alone and in binary, ternary and quaternary combinations. Aliquots of these formulae were inoculated with several microorganisms. An electrochemical method was used to determine microbial burden immediately after inoculation and after 2, 4, 8, 12, 24, 48, and 168 h. An optimization strategy was used to obtain the concentrations of preservatives and EDTA resulting in a preservative system most effective against all microorganisms simultaneously. The use of preservatives and EDTA in combination has the advantage of exhibiting a potential synergistic effect against a wider spectrum of microorganisms. Based on graphic and optimization strategies, we proposed a new formula containing a quaternary combination (A: 55%; B: 30%; C: 5% and D: 10% w/w), which complies with the specification of a conventional challenge test. A design space approach was successfully employed in the optimization of concentrations of preservatives and EDTA in an emulsion cosmetic product. PMID:26517141
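
    A simplex-centroid design in k mixture components consists of all 2^k - 1 blends that mix a non-empty subset of components in equal proportions; the generator below reproduces the 15-run layout that a four-factor study of this kind starts from (before scaling to the stated concentration ranges).

        from itertools import combinations

        def simplex_centroid(k):
            # One design point per non-empty subset of the k components,
            # with equal proportions over the subset's members.
            pts = []
            for r in range(1, k + 1):
                for subset in combinations(range(k), r):
                    p = [0.0] * k
                    for j in subset:
                        p[j] = 1.0 / r
                    pts.append(p)
            return pts

        print(len(simplex_centroid(4)))   # 15 runs for 4 components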

  19. Experiments with ROPAR, an approach for probabilistic analysis of the optimal solutions' robustness

    NASA Astrophysics Data System (ADS)

    Marquez, Oscar; Solomatine, Dimitri

    2016-04-01

    Robust optimization is defined as the search for solutions and performance results which remain reasonably unchanged when exposed to uncertain conditions, such as natural variability in input variables, parameter drifts during operation time, model sensitivities, and others [1]. In the present study we follow the approach named ROPAR (multi-objective robust optimization allowing for explicit analysis of robustness; see the online publication [2]). Its main idea is: (a) sampling the vectors of uncertain factors; (b) solving the MOO problem for each of them, obtaining multiple Pareto sets; (c) analysing the statistical properties (distributions) of the subsets of these Pareto sets corresponding to different conditions (e.g., based on constraints formulated for the values of the objective functions or of other system variables); (d) selecting the robust solutions. The paper presents the results of experiments with two case studies: 1) a benchmark function, ZDT1 (with an uncertain factor), often used in algorithm comparisons, and 2) a problem of drainage network rehabilitation that uses the SWMM hydrodynamic model (the rainfall is assumed to be an uncertain factor). This study is partly supported by the FP7 European Project WeSenseIt Citizen Water Observatory (http://wesenseit.eu/) and by CONACYT (Mexico's National Council of Science and Technology), which supports the PhD study of the first author. References: [1] H. G. Beyer and B. Sendhoff. "Robust optimization - A comprehensive survey." Comput. Methods Appl. Mech. Engrg., 2007: 3190-3218. [2] D. P. Solomatine (2012). An approach to multi-objective robust optimization allowing for explicit analysis of robustness (ROPAR). UNESCO-IHE. Online publication. Web: https://www.unesco-ihe.org/sites/default/files/solomatine-ropar.pdf
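
    Steps (a)-(d) translate into a short driver loop: sample the uncertain factor, solve the MOO problem per sample, and keep the decisions that appear on (almost) every sampled Pareto front. The sample_factor/solve_moo interface and the 90% membership rule below are illustrative assumptions, not the published procedure.

        from collections import Counter
        import numpy as np

        def ropar(sample_factor, solve_moo, n_samples=100, seed=0):
            # solve_moo(theta) is assumed to return one Pareto set as a list
            # of (decision, objectives) pairs, with hashable decisions.
            rng = np.random.default_rng(seed)
            membership = Counter()
            for _ in range(n_samples):
                theta = sample_factor(rng)          # one uncertainty realization
                for decision, _objectives in solve_moo(theta):
                    membership[decision] += 1
            # robust solutions: present on nearly every sampled front
            return [d for d, c in membership.items() if c >= 0.9 * n_samples]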
