Sample records for inexact optimization model

  1. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. The inexact Newton method is particularly worth noting in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop, tailoring and optimizing both linear solvers for faster convergence rates. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without sacrificing too much of its convergence rate. Specifically, we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied by a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence for the tested molecules. In particular, the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers tailored to its special requirements. PMID:24723843
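
    The record above hinges on pairing an outer Newton iteration with a loosely converged inner linear solve. Below is a minimal, hypothetical sketch of that inexact Newton pattern (not the paper's solver): the forcing term eta lets the inner conjugate gradient loop stop early; the cg rtol keyword assumes SciPy >= 1.12 (older releases call it tol).

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def inexact_newton(F, jac_vec, x0, eta=0.1, tol=1e-8, max_outer=50):
        """Newton iteration whose linear systems are solved only approximately."""
        x = x0.copy()
        for _ in range(max_outer):
            r = F(x)
            if np.linalg.norm(r) < tol:
                break
            J = LinearOperator((x.size, x.size), matvec=lambda v: jac_vec(x, v))
            # Inner loop: solve J dx = -r only to relative tolerance eta
            # (the forcing term), which is what makes the Newton step inexact.
            dx, _ = cg(J, -r, rtol=eta)
            x = x + dx
        return x

    # Hypothetical test problem: componentwise cube roots, F(x) = x**3 - b.
    b = np.array([1.0, 8.0, 27.0])
    x = inexact_newton(lambda x: x**3 - b, lambda x, v: 3.0 * x**2 * v,
                       x0=np.ones(3))
    print(x)   # approximately [1, 2, 3]
    ```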

  2. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    NASA Astrophysics Data System (ADS)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs, including transportation, fuel consumption and greenhouse gas emission costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes can be connected by several types of arcs, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure, and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  3. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), the mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties presented as discrete intervals in the objective function, variables and left-hand-side constraints, and with fuzziness in the right-hand-side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions for the whole growth period under uncertainty, so that more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by specifying different confidence levels and preference parameters, which also reflects the interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. A comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former can reflect more of the complexities and uncertainties arising in practical applications. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.

  4. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems, owing to its simplicity, low memory storage, and good convergence properties. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches in given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used, and that it performs better under the inexact line search than under the exact line search.
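
    Several of these records rely on an inexact (Wolfe-type) line search inside a CG loop, so here is a minimal hypothetical sketch of that structure. The NRM1 coefficient itself is not given in the record, so a standard PR+ formula stands in for it; scipy.optimize.line_search enforces the Wolfe conditions.

    ```python
    import numpy as np
    from scipy.optimize import line_search

    def cg_wolfe(f, grad, x0, tol=1e-6, max_iter=200):
        """Nonlinear CG with an inexact (strong Wolfe) line search."""
        x = x0.copy()
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = line_search(f, grad, x, d)[0]   # satisfies the Wolfe conditions
            if alpha is None:                       # search failed: restart descent
                d, alpha = -g, 1e-4
            x = x + alpha * d
            g_new = grad(x)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ stand-in
            d = -g_new + beta * d
            g = g_new
        return x

    # Hypothetical test: a simple convex quadratic, minimum at the origin.
    print(cg_wolfe(lambda x: (x**2).sum(), lambda x: 2.0 * x, np.array([3.0, -4.0])))
    ```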

  5. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    PubMed

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management by coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs to the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which can help managers adjust production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.

  6. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for addressing the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution rather than a general normal distribution, avoiding possible deviations in solutions caused by unreasonable parameter assumptions. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show the characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
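
    The log-normal assumption matters because the chance constraint then has a closed-form deterministic equivalent. A minimal sketch under hypothetical numbers (not the paper's model): a constraint a @ x <= b with b ~ LogNormal(mu, sigma) held at probability alpha reduces to a @ x <= exp(mu + sigma * Phi^{-1}(1 - alpha)), which an ordinary LP solver can then handle.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import linprog

    mu, sigma, alpha = 2.0, 0.4, 0.9          # hypothetical distribution and level
    # P(a @ x <= b) >= alpha with b ~ LogNormal(mu, sigma) is equivalent to
    # a @ x <= exp(mu + sigma * Phi^{-1}(1 - alpha)).
    rhs = np.exp(mu + sigma * norm.ppf(1 - alpha))

    c = np.array([-5.0, -3.0])                # maximize benefits: negate for linprog
    A = np.array([[1.0, 2.0]])                # hypothetical resource-use coefficients
    res = linprog(c, A_ub=A, b_ub=[rhs], bounds=[(0, None)] * 2)
    print(res.x, -res.fun)                    # optimal allocation and system benefit
    ```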

  7. An inexact reverse logistics model for municipal solid waste management systems.

    PubMed

    Zhang, Yi Mei; Huang, Guo He; He, Li

    2011-03-01

    This paper proposed an inexact reverse logistics model for municipal solid waste management systems (IRWM). Waste managers, suppliers, industries and distributors were involved in strategic planning and operational execution through reverse logistics management. All parameters were assumed to be intervals in order to quantify the uncertainties in the optimization process and in the solutions of IRWM. To solve this model, a piecewise interval programming method was developed to deal with min-min functions in both the objectives and the constraints. The application of the model was illustrated through a classical municipal solid waste management case, in which two scenarios with different cost parameters for the landfill and the waste-to-energy (WTE) facility were analyzed. The IRWM could reflect the dynamic and uncertain characteristics of MSW management systems and could facilitate the generation of desired management plans. The model could be further advanced by incorporating stochastic or fuzzy parameters into its framework. Designing a multi-waste, multi-echelon, multi-uncertainty reverse logistics model for waste management networks would also be desirable. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    The conjugate gradient (CG) method is a line search algorithm best known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under an inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.

  9. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in the objective function are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) not only in the constraints but also in the objective function. Uncertainties expressed as a combination of intervals and random variables can also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness by examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; their solutions were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. Copyright © 2010 Elsevier Ltd. All rights reserved.
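
    A recurring building block in this family of models is interval-parameter linear programming, which is typically solved by splitting the interval model into two deterministic bounding submodels. The sketch below illustrates that idea only, with hypothetical coefficients; the full SI-IFTMILP algorithm also handles fuzzy and random components that are omitted here.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Interval costs and an interval treatment demand (all values hypothetical).
    c_lo, c_hi = np.array([4.0, 6.0]), np.array([5.0, 8.0])
    demand_lo, demand_hi = 10.0, 12.0
    A = np.array([[-1.0, -1.0]])            # x1 + x2 >= demand, written as <=

    # Optimistic submodel: low costs, low demand; pessimistic: high costs, high demand.
    f_lo = linprog(c_lo, A_ub=A, b_ub=[-demand_lo], bounds=[(0, None)] * 2)
    f_hi = linprog(c_hi, A_ub=A, b_ub=[-demand_hi], bounds=[(0, None)] * 2)
    print([f_lo.fun, f_hi.fun])             # interval-valued system cost, [40.0, 60.0]
    ```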

  10. An export coefficient based inexact fuzzy bi-level multi-objective programming model for the management of agricultural nonpoint source pollution under uncertainty

    NASA Astrophysics Data System (ADS)

    Cai, Yanpeng; Rong, Qiangqiang; Yang, Zhifeng; Yue, Wencong; Tan, Qian

    2018-02-01

    In this research, an export coefficient based inexact fuzzy bi-level multi-objective programming (EC-IFBLMOP) model was developed by integrating an export coefficient model (ECM), interval parameter programming (IPP) and fuzzy parameter programming (FPP) within a bi-level multi-objective programming framework. The proposed EC-IFBLMOP model can effectively deal with multiple uncertainties expressed as discrete intervals and fuzzy membership functions. Also, the complexities in agricultural systems, such as the cooperation and gaming relationship between decision makers at different levels, can be fully considered in the model. The developed model was then applied to identify the optimal land use patterns and BMP implementation levels for agricultural nonpoint source (NPS) pollution management in a subcatchment in the upper stream watershed of the Miyun Reservoir in north China. The results showed that desired optimal land use patterns and implementation levels of best management practices (BMPs) would be obtained; these represent the gaming result between the upper- and lower-level decision makers when the allowable discharge amounts of NPS pollutants are limited. Moreover, results corresponding to different decision scenarios could provide a set of decision alternatives for the upper- and lower-level decision makers to identify the most appropriate management strategy. The model has good applicability and can be effectively utilized for agricultural NPS pollution management.

  11. Waste management under multiple complexities: inexact piecewise-linearization-based fuzzy flexible programming.

    PubMed

    Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen

    2012-06-01

    To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both the capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which is in essence an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate IPFP's advantages, two alternative models are developed for comparison: a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and an IPFP model with all right-hand sides of the fuzzy constraints set to the corresponding interval numbers (IPFP3). The comparison between IPFP and IPFP2 indicates that the optimized waste amounts have similar patterns in both models; however, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP estimates the costs more accurately. The comparison between IPFP and IPFP3 indicates that their solutions differ significantly, and the decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following this first application to waste management, IPFP can potentially be applied to other environmental problems under multiple complexities. Copyright © 2012 Elsevier Ltd. All rights reserved.
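
    The claim that piecewise linearization captures EOS costs more accurately than a single regression line is easy to reproduce numerically. The sketch below uses a hypothetical concave cost curve and breakpoints, not the paper's data, and compares the worst-case approximation error of the two approaches.

    ```python
    import numpy as np

    q = np.linspace(1.0, 100.0, 400)            # waste flow range (hypothetical)
    cost = 12.0 * q**0.6                        # concave EOS cost curve (hypothetical)

    # Alternative 1: a single regression line over the whole range.
    slope, intercept = np.polyfit(q, cost, 1)
    err_linear = np.max(np.abs(cost - (slope * q + intercept)))

    # Alternative 2: piecewise linearization through a few breakpoints.
    breakpoints = np.array([1.0, 5.0, 15.0, 40.0, 100.0])
    err_piecewise = np.max(np.abs(cost - np.interp(q, breakpoints,
                                                   12.0 * breakpoints**0.6)))
    print(err_linear, err_piecewise)            # the piecewise error is far smaller
    ```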

  12. Inexact fuzzy-stochastic mixed-integer programming approach for long-term planning of waste management--Part A: methodology.

    PubMed

    Guo, P; Huang, G H

    2009-01-01

    In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating multiple uncertainties, expressed as intervals and dual probability distributions, within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their combinations; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.

  13. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Wei; Huang, Guo H. (huang@iseis.org), Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2

    2012-06-15

    Highlights: Inexact piecewise-linearization-based fuzzy flexible programming is proposed. It is the first application to waste management under multiple complexities. It tackles nonlinear economies-of-scale effects in interval-parameter constraints. It estimates costs more accurately than the linear-regression-based model. Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both the capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which is in essence an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate IPFP's advantages, two alternative models are developed for comparison: a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and an IPFP model with all right-hand sides of the fuzzy constraints set to the corresponding interval numbers (IPFP3). The comparison between IPFP and IPFP2 indicates that the optimized waste amounts have similar patterns in both models; however, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP estimates the costs more accurately. The comparison between IPFP and IPFP3 indicates that their solutions differ significantly, and the decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following this first application to waste management, IPFP can potentially be applied to other environmental problems under multiple complexities.

  14. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    PubMed

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in water resources systems, and traditional two-stage stochastic programming is risk-neutral, comparing random variables (e.g., total benefit) to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
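
    The conditional value-at-risk (CVaR) measure used here has a convenient linear-programming form due to Rockafellar and Uryasev, which is what makes it easy to embed in a two-stage model. Below is a minimal sketch on hypothetical scenario losses, not the paper's water-allocation model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    losses = np.array([2.0, 3.0, 5.0, 9.0])   # hypothetical second-stage penalty costs
    alpha = 0.75                              # confidence level
    N = losses.size

    # Variables [t, z_1..z_N]: minimize t + sum(z_s) / ((1 - alpha) * N)
    # subject to z_s >= losses_s - t and z_s >= 0 (Rockafellar-Uryasev).
    c = np.r_[1.0, np.full(N, 1.0 / ((1 - alpha) * N))]
    A_ub = np.c_[-np.ones(N), -np.eye(N)]     # losses_s - t - z_s <= 0
    res = linprog(c, A_ub=A_ub, b_ub=-losses,
                  bounds=[(None, None)] + [(0, None)] * N)
    print(res.fun)                            # CVaR_0.75 = 9.0, mean of the worst 25%
    ```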

  15. On the use of inexact, pruned hardware in atmospheric modelling

    PubMed Central

    Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.

    2014-01-01

    Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
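
    The pruned hardware itself is simulated in the paper via software emulation, and a crude version of that idea is easy to try: run the same Lorenz '96 tendency in full and in reduced precision and watch the trajectories diverge. The sketch below uses numpy's float16 as a hypothetical stand-in for inexact arithmetic; it is not the authors' emulator.

    ```python
    import numpy as np

    def lorenz96_step(x, F=8.0, dt=0.01):
        # One Euler step of dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F.
        dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
        return x + dt * dxdt

    rng = np.random.default_rng(0)
    x64 = rng.standard_normal(40) + 8.0            # double-precision reference state
    x16 = x64.astype(np.float16)                   # "inexact hardware" stand-in
    for _ in range(100):
        x64 = lorenz96_step(x64)
        x16 = lorenz96_step(x16).astype(np.float16)  # round back after every step
    print(np.max(np.abs(x64 - x16.astype(np.float64))))  # precision-induced divergence
    ```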

  16. An inexact chance-constrained programming model for water quality management in Binhai New Area of Tianjin, China.

    PubMed

    Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R

    2011-04-15

    In this study, an inexact chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. The method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and the applicability of the modeling process is thus highly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints on the water environmental capacity for pollutants. Trade-offs between system benefits and constraint-violation risks can also be tackled. The solutions are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low-precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact computation in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low-precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided it is restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.

  18. Optimized production planning model for a multi-plant cultivation system under uncertainty

    NASA Astrophysics Data System (ADS)

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  19. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods, and second at some refinements of particular questions such as the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, and the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows. First, a general framework is presented which covers the existing literature on optimality of adaptive schemes; the abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is needed neither to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
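
    The "marking parameters" mentioned above refer to the bulk-chasing (Dörfler) marking step of the usual solve-estimate-mark-refine loop. As a concrete illustration, here is a minimal sketch assuming squared local error indicators eta and a bulk parameter theta; it is not code from the paper.

    ```python
    import numpy as np

    def doerfler_marking(eta, theta=0.5):
        """Return indices of a minimal set M with sum_M eta^2 >= theta * sum eta^2."""
        order = np.argsort(eta**2)[::-1]             # largest indicators first
        cumulative = np.cumsum(eta[order]**2)
        k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
        return order[:k]

    eta = np.array([0.1, 0.7, 0.2, 0.5, 0.05])       # hypothetical error indicators
    print(doerfler_marking(eta))                     # -> [1]: one element suffices here
    ```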

  20. Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio

    2011-12-01

    This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).

  1. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization, owing to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition when the strong Wolfe-Powell inexact line search is used. Moreover, computational results show that the proposed method outperforms other existing CG methods.

  2. An image understanding system using attributed symbolic representation and inexact graph-matching

    NASA Astrophysics Data System (ADS)

    Eshera, M. A.; Fu, K.-S.

    1986-09-01

    A powerful image understanding system using a semantic-syntactic representation scheme consisting of attributed relational graphs (ARGs) is proposed for the analysis of the global information content of images. A multilayer graph transducer scheme performs the extraction of ARG representations from images, with ARG nodes representing the global image features, and the relations between features represented by the attributed branches between corresponding nodes. An efficient dynamic programming technique is employed to derive the distance between two ARGs and the inexact matching of their respective components. Noise, distortion and ambiguity in real-world images are handled through modeling in the transducer mapping rules and through the appropriate cost of error-transformation for the inexact matching of the representation. The system is demonstrated for the case of locating objects in a scene composed of complex overlapped objects, and the case of target detection in noisy and distorted synthetic aperture radar image.

  3. Overcoming the Power Wall by Exploiting Application Inexactness and Emerging COTS Architectural Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagan, Mike; Schlachter, Jeremy; Yoshii, Kazutomo

    Abstract—Energy and power consumption are major limitations to the continued scaling of computing systems. Inexactness, where the quality of the solution can be traded for energy savings, has been proposed as a counterintuitive approach to overcoming those limitations. In the past, however, inexactness has necessitated highly customized or specialized hardware. In order to move away from customization, earlier work [4] showed that by treating the precision of the computation as the parameter to trade for inexactness, weather prediction and PageRank could both yield energy savings through reduced precision while preserving application quality. However, this required representations of numbers that were not readily available on commercial off-the-shelf (COTS) processors. In this paper, we extend the notion of trading precision for energy savings to the COTS world. We provide a model and analyze the opportunities and behavior of all three IEEE-compliant precision values available on COTS processors: (i) double, (ii) single, and (iii) half. Through measurements and a limit study, we show that the energy savings in going from double to half precision can exceed a factor of four, largely due to memory and cache effects.
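
    The double/single/half comparison is easy to reproduce at the accuracy (rather than energy) end of the trade-off. A minimal sketch, assuming only numpy: the same sum is accumulated in each IEEE precision and compared against an extended-precision reference.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)
    exact = np.sum(x.astype(np.longdouble))             # extended-precision reference
    for dtype in (np.float64, np.float32, np.float16):
        approx = np.sum(x.astype(dtype), dtype=dtype)   # accumulate in that precision
        print(dtype.__name__, float(abs(approx - exact)))
    ```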

  4. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
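
    As a concrete illustration of the structure described above, here is a minimal hypothetical sketch, not Lewis and Torczon's algorithm: an augmented-Lagrangian outer loop whose bound-constrained subproblem is minimized by a derivative-free compass (pattern) search, with the inner iteration stopped by the pattern size rather than by a derivative-based test.

    ```python
    import numpy as np

    def compass_search(f, x, step=0.5, min_step=1e-4):
        """Derivative-free inner solver; stops when the pattern size is small."""
        while step > min_step:
            improved = False
            for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
                y = np.clip(x + step * d, 0.0, None)   # respect simple bounds x >= 0
                if f(y) < f(x):
                    x, improved = y, True
                    break
            if not improved:
                step *= 0.5                            # shrink the pattern
        return x

    def augmented_lagrangian(f, h, x, lam=0.0, mu=10.0, outer=20):
        """Minimize f subject to h(x) = 0 by successive inexact minimizations."""
        for _ in range(outer):
            L = lambda y: f(y) + lam * h(y) + 0.5 * mu * h(y) ** 2
            x = compass_search(L, x)
            lam += mu * h(x)                           # first-order multiplier update
        return x

    # Hypothetical test: min x0^2 + x1^2 subject to x0 + x1 = 1 -> (0.5, 0.5).
    print(augmented_lagrangian(lambda x: x @ x, lambda x: x[0] + x[1] - 1.0,
                               np.array([1.0, 0.0])))
    ```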

  5. A Hybrid Interval-Robust Optimization Model for Water Quality Management.

    PubMed

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-05-01

    In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.

  6. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  7. Solving constrained inverse problems for waveform tomography with Salvus

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Afanasiev, M.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.

    2016-12-01

    Finding a good balance between flexibility and performance is often difficult within domain-specific software projects. To achieve this balance, we introduce Salvus: an open-source high-order finite element package built upon PETSc and Eigen, that focuses on large-scale full-waveform modeling and inversion. One of the key features of Salvus is its modular design, based on C++ mixins, that separates the physical equations from the numerical discretization and the mathematical optimization. In this presentation we focus on solving inverse problems with Salvus and discuss (i) dealing with inexact derivatives resulting, e.g., from lossy wavefield compression, (ii) imposing additional constraints on the model parameters, e.g., from effective medium theory, and (iii) integration with a workflow management tool. We present a feasible-point trust-region method for PDE-constrained inverse problems that can handle inexactly computed derivatives. The level of accuracy in the approximate derivatives is controlled by localized error estimates to ensure global convergence of the method. Additional constraints on the model parameters are typically cheap to compute without the need for further simulations. Hence, including them in the trust-region subproblem introduces only a small computational overhead, but ensures feasibility of the model in every iteration. We show examples with homogenization constraints derived from effective medium theory (i.e. all fine-scale updates must upscale to a physically meaningful long-wavelength model). Salvus has a built-in workflow management framework to automate the inversion with interfaces to user-defined misfit functionals and data structures. This significantly reduces the amount of manual user interaction and enhances reproducibility which we demonstrate for several applications from the laboratory to global scale.

  8. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions that are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared with methods that rely on exact linear solves, indicating the tremendous potential of the 'inexact solve' concept. Finally, the scheme that generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
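
    The core of the shift-invert idea is to run Lanczos on the operator (A - sigma*I)^{-1}, whose inner solves can be done only approximately. Below is a minimal hypothetical sketch for a standard symmetric problem (B = I), far simpler than the paper's ISIL/IJOCC scheme; MINRES serves as the loosely converged inner solver since the shifted matrix is indefinite, and the rtol keyword assumes SciPy >= 1.12.

    ```python
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import minres

    def inexact_si_lanczos(A, sigma, m=20, inner_rtol=1e-4):
        """Shift-invert Lanczos with loosely converged (inexact) inner solves."""
        n = A.shape[0]
        shifted = (A - sigma * identity(n)).tocsr()
        V = np.zeros((n, m + 1))
        alpha, beta = np.zeros(m), np.zeros(m)
        v = np.random.default_rng(1).standard_normal(n)
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w, _ = minres(shifted, V[:, j], rtol=inner_rtol)  # inexact inner solve
            alpha[j] = V[:, j] @ w
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)          # full reorthogonalization
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
        return sigma + 1.0 / np.linalg.eigvalsh(T)            # Ritz values near sigma

    A = diags(np.arange(1.0, 101.0))         # diagonal test matrix, spectrum 1..100
    lam = inexact_si_lanczos(A, sigma=25.3)
    print(np.sort(lam[np.argsort(np.abs(lam - 25.3))[:3]]))   # approx [24, 25, 26]
    ```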

  9. An inexact risk management model for agricultural land-use planning under water shortage

    NASA Astrophysics Data System (ADS)

    Li, Wei; Feng, Changchun; Dai, Chao; Li, Yongping; Li, Chunhui; Liu, Ming

    2016-09-01

    Water resources availability has a significant impact on agricultural land-use planning, especially in water shortage areas such as North China. The random nature of available water resources and other uncertainties in an agricultural system present risks for land-use planning and may lead to undesirable decisions or potential economic loss. In this study, an inexact risk management model (IRM) was developed for supporting agricultural land-use planning and risk analysis under water shortage. The IRM model was formulated by incorporating a conditional value-at-risk (CVaR) constraint into an inexact two-stage stochastic programming (ITSP) framework, and can be used to control uncertainties expressed not only as probability distributions but also as discrete intervals. The measure of risk on the second-stage penalty cost was incorporated into the model so that the trade-off between system benefit and extreme expected loss could be analyzed. The developed model was applied to a case study in the Zhangweinan River Basin, a typical agricultural region facing serious water shortage in North China. Solutions of the IRM model showed that the obtained first-stage land-use target values could be used to reflect decision-makers' opinions on the long-term development plan. The confidence level α and the maximum acceptable risk loss β could be used to reflect decision-makers' preferences towards system benefit and risk control. The results indicated that the IRM model is useful for reflecting decision-makers' attitudes toward risk aversion and can help seek cost-effective agricultural land-use planning strategies under complex uncertainties.

  10. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic-range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional, the latter being employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides very satisfactory results. In the case of inexact knowledge of the sources' position, it can additionally give useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.

  11. A Hybrid Interval–Robust Optimization Model for Water Quality Management

    PubMed Central

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-01-01

    Abstract In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval–robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements. PMID:23922495

  12. An enhanced export coefficient based optimization model for supporting agricultural nonpoint source pollution mitigation under uncertainty.

    PubMed

    Rong, Qiangqiang; Cai, Yanpeng; Chen, Bing; Yue, Wencong; Yin, Xin'an; Tan, Qian

    2017-02-15

    In this research, an export coefficient based dual inexact two-stage stochastic credibility-constrained programming (ECDITSCCP) model was developed by integrating an improved export coefficient model (ECM), interval linear programming (ILP), fuzzy credibility-constrained programming (FCCP) and a fuzzy expected value equation within a general two-stage programming (TSP) framework. The proposed ECDITSCCP model can effectively address multiple uncertainties expressed as random variables, fuzzy numbers, and pure and dual intervals. Also, the model can provide a direct linkage between pre-regulated management policies and the associated economic implications. Moreover, solutions under multiple credibility levels can be obtained, providing potential decision alternatives for decision makers. The proposed model was then applied to identify optimal land use structures for agricultural NPS pollution mitigation in a representative upstream subcatchment of the Miyun Reservoir watershed in north China. Optimal solutions of the model were successfully obtained, indicating desired land use patterns and nutrient discharge schemes that achieve maximum agricultural system benefit under a limited discharge permit. The numerous results under multiple credibility levels could also provide policy makers with several options, helping to strike an appropriate balance between system benefits and pollution mitigation. The developed ECDITSCCP model can effectively address the uncertain information in agricultural systems and shows great applicability to land use adjustment for agricultural NPS pollution mitigation. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Sensor fault diagnosis of singular delayed LPV systems with inexact parameters: an uncertain system approach

    NASA Astrophysics Data System (ADS)

    Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc

    2018-01-01

    In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on some parameters that are measurable in real time. The case of inexact parameter measurements, which is close to real situations, is considered. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including the sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and the uncertainty induced by the inexactly measured parameters. The error dynamics and the original system constitute an uncertain system because of the inconsistencies between the real and measured values of the parameters. Robust estimation of the system states and the faults is then achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.

  14. Regulation of Renewable Energy Sources to Optimal Power Flow Solutions Using ADMM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers power distribution systems featuring renewable energy sources (RESs), and develops a distributed optimization method to steer the RES output powers to solutions of AC optimal power flow (OPF) problems. The design of the proposed method leverages suitable linear approximations of the AC power flow equations, and is based on the Alternating Direction Method of Multipliers (ADMM). Convergence of the RES-inverter output powers to solutions of the OPF problem is established under suitable conditions on the stepsize as well as on the mismatches between the commanded setpoints and the actual RES output powers. In a broad sense, the methods and results proposed here are also applicable to other distributed optimization problem setups with ADMM and inexact dual updates.
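
    ADMM itself reduces to three short update steps, which the sketch below illustrates on a toy problem: a quadratic tracking objective split from box constraints standing in for inverter capacity limits. This is a generic consensus-ADMM sketch with hypothetical data, not the paper's OPF formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 4))         # hypothetical sensitivity/tracking matrix
    b = rng.standard_normal(8)              # hypothetical tracking targets
    lo, hi = -0.5, 0.5                      # hypothetical inverter capacity limits
    rho = 1.0                               # ADMM stepsize

    x = z = u = np.zeros(4)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(100):
        # x-update: unconstrained quadratic minimization.
        x = np.linalg.solve(AtA + rho * np.eye(4), Atb + rho * (z - u))
        # z-update: projection onto the capacity box.
        z = np.clip(x + u, lo, hi)
        # Scaled dual update.
        u = u + x - z
    print(z)                                # feasible output-power setpoints
    ```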

  15. Guaranteed estimation of solutions to Helmholtz transmission problems with uncertain data from their indirect noisy observations

    NASA Astrophysics Data System (ADS)

    Podlipenko, Yu. K.; Shestopalov, Yu. V.

    2017-09-01

    We investigate the guaranteed estimation problem for linear functionals of solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of the equations entering the statements of the transmission problems, and the statistical characteristics of the observation errors, are supposed to be unknown and to belong to certain sets. It is shown that the optimal linear mean-square estimates of the above-mentioned functionals and the estimation errors are expressed via solutions to systems of transmission problems of a special type. The results and techniques can be applied in the analysis and estimation of solutions to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of wave diffraction on transparent bodies.

  16. Regulation of Renewable Energy Sources to Optimal Power Flow Solutions Using ADMM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yijian; Hong, Mingyi; Dall'Anese, Emiliano

    This paper considers power distribution systems featuring renewable energy sources (RESs), and develops a distributed optimization method to steer the RES output powers to solutions of AC optimal power flow (OPF) problems. The design of the proposed method leverages suitable linear approximations of the AC power flow equations, and is based on the Alternating Direction Method of Multipliers (ADMM). Convergence of the RES-inverter output powers to solutions of the OPF problem is established under suitable conditions on the stepsize as well as on the mismatches between the commanded setpoints and the actual RES output powers. In a broad sense, the methods and results proposed here are also applicable to other distributed optimization problem setups with ADMM and inexact dual updates.

  17. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting CCUS management systems under random circumstances. The major advantage of the ECCP model is that it treats random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of the real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  18. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling, such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.

  19. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE PAGES

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    2016-01-01

    This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
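
    A minimal sketch of the recycling idea named above, under simplifying assumptions: a plain (unweighted) POD basis is extracted from earlier solution snapshots by SVD, and the Galerkin solution in that subspace warm-starts the next conjugate-gradient solve. The paper's goal-oriented POD weights and hybrid three-stage solver are not reproduced.

        import numpy as np
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        n = 200
        Q = rng.standard_normal((n, n))
        A = Q @ Q.T + n * np.eye(n)          # SPD matrix, fixed across the sequence of solves

        # Snapshots: solutions from earlier right-hand sides in the sequence.
        snapshots = np.column_stack([np.linalg.solve(A, rng.standard_normal(n)) for _ in range(5)])
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        U = U[:, :3]                          # truncated POD basis

        b = rng.standard_normal(n)            # new right-hand side
        x0 = U @ np.linalg.solve(U.T @ A @ U, U.T @ b)   # Galerkin solution in the POD subspace

        iters = {"warm": 0, "cold": 0}
        def count(key):
            def cb(_): iters[key] += 1
            return cb

        cg(A, b, x0=x0, callback=count("warm"))
        cg(A, b, callback=count("cold"))
        print(iters)   # the POD start vector typically reduces the CG iteration count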

  20. A Study on Gröbner Basis with Inexact Input

    NASA Astrophysics Data System (ADS)

    Nagasaka, Kosaku

    The Gröbner basis is one of the most important tools in recent symbolic algebraic computation. However, computing a Gröbner basis for a given polynomial ideal is not easy, and it is not numerically stable if the polynomials have inexact coefficients. In this paper, we study what we should compute as a Gröbner basis with inexact coefficients, and we introduce a naive method to compute a Gröbner basis by reduced row echelon form for the ideal generated by a given polynomial set having a priori errors on its coefficients.
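
    The sensitivity that motivates the paper can be seen even in exact arithmetic: perturbing a single coefficient by 10^-4 changes the structure of the resulting basis. The sketch below uses SymPy's `groebner` routine on a two-generator toy ideal; it illustrates the instability, not the author's reduced-row-echelon-form method.

        from sympy import symbols, groebner, Rational

        x, y = symbols('x y')

        # Exact ideal: the second generator factors as y*(x - 1).
        exact = [x**2 - 1, x*y - y]
        # The same generators with one coefficient perturbed by 1e-4, kept
        # rational so Buchberger's algorithm still runs in exact arithmetic.
        perturbed = [x**2 - 1, x*y - (1 + Rational(1, 10**4)) * y]

        print(groebner(exact, x, y, order='lex').exprs)      # {x**2 - 1, x*y - y}
        print(groebner(perturbed, x, y, order='lex').exprs)  # structurally different: y joins the basis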

  1. Effects of hierarchical structures and insulating liquid media on adhesion

    NASA Astrophysics Data System (ADS)

    Yang, Weixu; Wang, Xiaoli; Li, Hanqing; Song, Xintao

    2017-11-01

    Effects of hierarchical structures and insulating liquid media on adhesion are investigated through a numerical adhesive contact model established in this paper, in which hierarchical structures are considered by introducing the height distribution into the surface gap equation, and media are taken into account through the Hamaker constant in the Lifshitz-Hamaker approach. Computational methods such as the inexact Newton method, the bi-conjugate gradient stabilized (Bi-CGSTAB) method and the fast Fourier transform (FFT) technique are employed to obtain the adhesive force. It is shown that the hierarchical structured surface exhibits excellent anti-adhesive properties compared with flat, micro-structured or nano-structured surfaces. The adhesion force depends more on the sizes of the nanostructures than on those of the microstructures, and the optimal ranges of nanostructure pitch and maximum height for small adhesion force are presented. Insulating liquid media effectively decrease the adhesive interaction, and 1-bromonaphthalene exhibits the smallest adhesion force among the five selected media. In addition, effects of hierarchical structures with optimal sizes on reducing adhesion are more obvious than those of the selected insulating liquid media.

  2. Model Identification in Time-Series Analysis: Some Empirical Results.

    ERIC Educational Resources Information Center

    Padia, William L.

    Model identification of time-series data is essential to valid statistical tests of intervention effects. Model identification is, at best, inexact in the social and behavioral sciences where one is often confronted with small numbers of observations. These problems are discussed, and the results of independent identifications of 130 social and…

  3. A Unified Fault-Tolerance Protocol

    NASA Technical Reports Server (NTRS)

    Miner, Paul; Geser, Alfons; Pike, Lee; Maddalon, Jeffrey

    2004-01-01

    Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.
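
    A minimal sketch of the middle-value-select voting primitive underlying such protocols, under the usual assumption that more than 2f values are available to mask up to f Byzantine-faulty inputs; the function name and recovery rule below are illustrative, not the unified protocol itself.

        def mid_value_select(values, f):
            """Fault-tolerant middle value select: with at most `f` arbitrarily
            faulty inputs among `values`, discard the f smallest and f largest
            readings and return a middle value of the survivors, which is then
            bracketed by readings from non-faulty sources."""
            if len(values) <= 2 * f:
                raise ValueError("need more than 2f values to mask f faults")
            survivors = sorted(values)[f:len(values) - f]
            return survivors[len(survivors) // 2]

        # Five clock readings, one Byzantine-faulty outlier (f = 1):
        print(mid_value_select([10.01, 10.02, 9.99, 10.00, 734.5], f=1))  # -> 10.01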

  4. Situation resolution with context-sensitive fuzzy relations

    NASA Astrophysics Data System (ADS)

    Jakobson, Gabriel; Buford, John; Lewis, Lundy

    2009-05-01

    Context plays a significant role in situation resolution by intelligent agents (human or machine), affecting how situations are recognized, interpreted, acted upon or predicted. Many definitions and formalisms for the notion of context have emerged in various research fields including psychology, economics and computer science (computational linguistics, data management, control theory, artificial intelligence and others). In this paper we examine the role of context in situation management, particularly how to resolve situations that are described using fuzzy (inexact) relations among their components. We propose a language for describing context-sensitive inexact constraints and an algorithm for interpreting relations using inexact (fuzzy) computations.

  5. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large-scale behaviour, provided the inexactness is restricted to the smaller scales. By contrast, results from the Lorenz '96 simulations are superior when the small scales are calculated on an emulated stochastic processor rather than parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations, allowing higher resolution models to be run at the same computational cost.
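
    A crude emulation of the bit-flip fault model described above can be written in a few lines: reinterpret float64 values as 64-bit integers and flip each bit independently with a small probability. This sketch is an illustrative assumption about the fault injector, not the authors' emulator.

        import numpy as np

        def flip_random_bits(x, p, rng):
            """Flip each of the 64 bits of every float64 entry of `x`
            independently with probability `p` (a crude stochastic-processor
            fault model)."""
            bits = x.copy().view(np.uint64)
            for k in range(64):
                mask = rng.random(bits.shape) < p
                bits[mask] ^= np.uint64(1) << np.uint64(k)
            return bits.view(np.float64)

        rng = np.random.default_rng(0)
        state = np.linspace(0.0, 1.0, 8)
        faulty = flip_random_bits(state, p=1e-3, rng=rng)
        print(np.abs(faulty - state))  # mostly tiny errors; rare high-bit flips are large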

  6. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    PubMed

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10^9 $, was obtained, as well as the corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be reduced and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management, as not only does it allow local decision makers to optimize land use allocation, but it can also help to answer how to accomplish land use changes.
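
    The interval-parameter idea at the core of such inexact frameworks can be illustrated with a toy linear program: solving an optimistic and a pessimistic submodel yields an interval-valued objective. The coefficients and bounds below are invented for illustration; the fuzzy two-stage machinery of the actual model is omitted.

        from scipy.optimize import linprog

        # Toy interval-parameter LP: maximize benefit c*x with c in [c_lo, c_hi],
        # subject to a resource constraint a*x <= r with a in [a_lo, a_hi].
        c_lo, c_hi = [3.0, 5.0], [4.0, 6.0]     # interval net benefits of two land uses
        a_lo, a_hi = [1.0, 2.0], [1.5, 2.5]     # interval erosion coefficients
        r = 10.0                                 # tolerable erosion amount

        def submodel(c, a):
            # linprog minimizes, so negate the benefit coefficients.
            res = linprog(c=[-ci for ci in c], A_ub=[a], b_ub=[r], bounds=[(0, 6), (0, 6)])
            return -res.fun

        f_hi = submodel(c_hi, a_lo)   # optimistic bound: high benefits, low erosion rates
        f_lo = submodel(c_lo, a_hi)   # pessimistic bound
        print(f"objective interval: [{f_lo:.2f}, {f_hi:.2f}]")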

  7. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    NASA Astrophysics Data System (ADS)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10^9 $, was obtained, as well as the corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be reduced and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management, as not only does it allow local decision makers to optimize land use allocation, but it can also help to answer how to accomplish land use changes.

  8. GHG emission control and solid waste management for megacities with inexact inputs: a case study in Beijing, China.

    PubMed

    Lu, Hongwei; Sun, Shichao; Ren, Lixia; He, Li

    2015-03-02

    This study advances an integrated MSW management model under inexact input information for the city of Beijing, China. The model is capable of simultaneously generating MSW management policies, performing GHG emission control, and addressing system uncertainty. Results suggest that: (1) a management strategy with minimal system cost can be obtained even when suspension of certain facilities becomes unavoidable, through specific increments of the remaining ones; (2) expansion of facilities depends only on actual needs, rather than enabling the full usage of existing facilities, although it may prove to be a costly proposition; (3) adjustment of the waste-stream diversion ratio directly leads to a change in GHG emissions from different disposal facilities. Results are also obtained from the comparison of the model with a conventional one without GHG emissions consideration. It is indicated that (1) the model would reduce the net system cost by [45, 61]% (i.e., [3173, 3520] million dollars) and mitigate GHG emissions by [141, 179]% (i.e., [76, 81] million tons); (2) increased waste would be diverted to integrated waste management facilities to prevent excessive CH4 emissions from the landfills.

  9. An inexact multistage fuzzy-stochastic programming for regional electric power system management constrained by environmental quality.

    PubMed

    Fu, Zhenghui; Wang, Han; Lu, Wentao; Guo, Huaicheng; Li, Wei

    2017-12-01

    The electric power system spans different fields and disciplines, encompassing the economic, energy, and environment systems, and uncertainty within this compound system is an inevitable problem. Therefore, an inexact multistage fuzzy-stochastic programming (IMFSP) method was developed for regional electric power system management constrained by environmental quality. A model which combined interval-parameter programming, multistage stochastic programming, and fuzzy probability distributions was built to reflect the uncertain information and dynamic variation in the case study, and scenarios under different credibility degrees were considered. For all scenarios under consideration, corrective actions were allowed to be taken dynamically in accordance with the pre-regulated policies and the uncertainties in reality. The results suggest that the methodology is applicable for handling the uncertainty of regional electric power management systems and helping decision makers establish an effective development plan.

  10. Dynamic analysis for solid waste management systems: an inexact multistage integer programming approach.

    PubMed

    Li, Yongping; Huang, Guohe

    2009-03-01

    In this study, a dynamic analysis approach based on an inexact multistage integer programming (IMIP) model is developed for supporting municipal solid waste (MSW) management under uncertainty. Techniques of interval-parameter programming and multistage stochastic programming are incorporated within an integer-programming framework. The developed IMIP can deal with uncertainties expressed as probability distributions and interval numbers, and can reflect the dynamics in terms of decisions for waste-flow allocation and facility-capacity expansion over a multistage context. Moreover, the IMIP can be used for analyzing various policy scenarios that are associated with different levels of economic consequences. The developed method is applied to a case study of long-term waste-management planning. The results indicate that reasonable solutions have been generated for binary and continuous variables. They can help generate desired decisions of system-capacity expansion and waste-flow allocation with a minimized system cost and maximized system reliability.

  11. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
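
    A generic sketch of the inexact (truncated) Newton iteration named above: the Newton system H d = -g is solved by conjugate gradients only until the residual drops below a forcing tolerance eta*||g||. The quadratic test problem and the fixed forcing term are illustrative assumptions; the PDE-constrained and adaptive-eigenspace parts of the paper are not reproduced.

        import numpy as np

        def truncated_newton(grad, hess, x, eta=0.1, n_outer=20):
            """Inexact Newton: solve H d = -g by CG only until
            ||H d + g|| <= eta * ||g|| (the forcing condition)."""
            for _ in range(n_outer):
                g = grad(x)
                if np.linalg.norm(g) < 1e-10:
                    break
                H = hess(x)
                d = np.zeros_like(x); r = -g.copy(); p = r.copy()
                tol = eta * np.linalg.norm(g)
                while np.linalg.norm(r) > tol:              # truncated CG inner loop
                    Hp = H @ p
                    alpha = (r @ r) / (p @ Hp)
                    d += alpha * p
                    r_new = r - alpha * Hp
                    p = r_new + ((r_new @ r_new) / (r @ r)) * p
                    r = r_new
                x = x + d
            return x

        # Quadratic test problem f(x) = 0.5 x^T A x - b^T x:
        A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
        x = truncated_newton(lambda x: A @ x - b, lambda x: A, np.zeros(3))
        print(x, A @ x - b)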

  12. On the analysis of incoherent scatter radar data from non-thermal ionospheric plasma - Effects of measurement noise and an inexact theory

    NASA Astrophysics Data System (ADS)

    Suvanto, K.

    1990-07-01

    Statistical inversion theory is employed to estimate parameter uncertainties in incoherent scatter radar studies of non-Maxwellian ionospheric plasma. Measurement noise and the inexact nature of the plasma model are considered as potential sources of error. In most of the cases investigated here, it is not possible to determine electron density, line-of-sight ion and electron temperatures, ion composition, and two non-Maxwellian shape factors simultaneously. However, if the molecular ion velocity distribution is highly non-Maxwellian, all these quantities can sometimes be retrieved from the data. This theoretical result supports the validity of the only successful non-Maxwellian, mixed-species fit discussed in the literature. A priori information on one of the parameters, e.g., the electron density, often reduces the parameter uncertainties significantly and makes composition fits possible even if the six-parameter fit cannot be performed. However, small (less than 0.5) non-Maxwellian shape factors remain difficult to distinguish.

  13. Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Tang, Zhenmin; Liu, Qing

    2017-05-01

    Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with the weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm and different weights are assigned to the singular values, so the rank function can be approximated more accurately. In addition, the Lq norm is further incorporated into WSPQ to model different noises and improve robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.

  14. Scenario analysis of carbon emissions' anti-driving effect on Qingdao's energy structure adjustment with an optimization model, Part II: Energy system planning and management.

    PubMed

    Wu, C B; Huang, G H; Liu, Z P; Zhen, J L; Yin, J G

    2017-03-01

    In this study, an inexact multistage stochastic mixed-integer programming (IMSMP) method was developed for supporting regional-scale energy system planning (ESP) associated with multiple uncertainties presented as discrete intervals, probability distributions and their combinations. An IMSMP-based energy system planning (IMSMP-ESP) model was formulated for Qingdao to demonstrate its applicability. Solutions which can provide optimal patterns of energy resource generation, conversion, transmission, allocation and facility capacity expansion schemes have been obtained. The results can help local decision makers generate cost-effective energy system management schemes and achieve a comprehensive tradeoff between economic objectives and environmental requirements. Moreover, taking the CO2 emissions scenarios mentioned in Part I into consideration, the anti-driving effect of carbon emissions on energy structure adjustment was studied based on the developed model and scenario analysis. Several suggestions can be drawn from the results: (a) to ensure the smooth realization of low-carbon and sustainable development, appropriate price control and fiscal subsidy on high-cost energy resources should be considered by the decision makers; (b) compared with coal, natural gas utilization should be strongly encouraged in order to ensure that Qingdao can reach its peak carbon emissions value in 2020; (c) to guarantee Qingdao's power supply security in the future, the construction of new power plants should be emphasised instead of enhancing the transmission capacity of grid infrastructure.

  15. A globally convergent LCL method for nonlinear optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedlander, M. P.; Saunders, M. A.; Mathematics and Computer Science

    2005-01-01

    For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form 'minimize an augmented Lagrangian function subject to linearized constraints.' Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.

  16. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed; in particular, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.

  17. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    NASA Astrophysics Data System (ADS)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    A nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. The method is widely used due to its simplicity, and it is known to satisfy the sufficient descent condition and to possess global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, derived by employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of number of iterations and central processing unit (CPU) time using MATLAB software with an Intel Core i7-3470 CPU processor. Numerical experimental results show that the new βk converges rapidly compared with classical CG methods.
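
    For context, a generic nonlinear CG loop with a strong Wolfe line search looks like the sketch below; SciPy's `line_search` enforces the strong Wolfe conditions. The classical Fletcher-Reeves coefficient is used as a stand-in, since the paper's new βk formula is not given in the abstract.

        import numpy as np
        from scipy.optimize import line_search

        def rosenbrock(x):
            return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

        def rosenbrock_grad(x):
            return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                             200 * (x[1] - x[0]**2)])

        x = np.array([-1.2, 1.0])
        g = rosenbrock_grad(x)
        d = -g
        for k in range(200):
            # scipy's line_search enforces the strong Wolfe conditions.
            alpha = line_search(rosenbrock, rosenbrock_grad, x, d, gfk=g)[0]
            if alpha is None:          # line search failed; fall back to a small gradient step
                d, alpha = -g, 1e-3
            x = x + alpha * d
            g_new = rosenbrock_grad(x)
            if np.linalg.norm(g_new) < 1e-6:
                break
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves; the paper proposes a new beta
            d = -g_new + beta * d
            g = g_new
        print(k, x)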

  18. Inexact adaptive Newton methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertiger, W.I.; Kelsey, F.J.

    1985-02-01

    The Inexact Adaptive Newton method (IAN) is a modification of the Adaptive Implicit Method (AIM) with improved Newton convergence. Both methods simplify the Jacobian at each time step by zeroing coefficients in regions where saturations are changing slowly. The methods differ in how the diagonal block terms are treated. On test problems with up to 3,000 cells, IAN consistently saves approximately 30% of the CPU time when compared to the fully implicit method. AIM shows similar savings on some problems, but takes as much CPU time as the fully implicit method on other test problems due to poor Newton convergence.

  19. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    NASA Astrophysics Data System (ADS)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. 
The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts to previous work on MPC for ramp metering where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and apply the inexact interior point method to this nonlinear non-convex ramp metering problem.

  20. Workflows for Full Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  1. Fast Multilevel Solvers for a Class of Discrete Fourth Order Parabolic Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Bin; Chen, Luoping; Hu, Xiaozhe

    2016-03-05

    In this paper, we study fast iterative solvers for the solution of fourth order parabolic equations discretized by mixed finite element methods. We propose to use the consistent mass matrix in the discretization and use the lumped mass matrix to construct efficient preconditioners. We provide eigenvalue analysis for the preconditioned system and estimate the convergence rate of the preconditioned GMRes method. Furthermore, we show that these preconditioners only need to be solved inexactly by optimal multigrid algorithms. Our numerical examples indicate that the proposed preconditioners are very efficient and robust with respect to both discretization parameters and diffusion coefficients. We also investigate the performance of multigrid algorithms with either collective smoothers or distributive smoothers when solving the preconditioner systems.
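
    The mechanism of handing an (inexactly applied) preconditioner to a Krylov solver can be sketched with SciPy: the preconditioner is passed to GMRES as a `LinearOperator` whose `matvec` applies the approximate solve. A simple diagonal (lumped-mass-like) preconditioner on an invented test matrix stands in for the paper's multigrid cycles.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import gmres, LinearOperator

        n = 400
        # 1-D Laplacian plus a strongly varying reaction term (invented test system).
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) + diags(np.linspace(0.1, 100.0, n))
        b = np.ones(n)

        d = A.diagonal()                                     # "lumped" diagonal of A
        M = LinearOperator((n, n), matvec=lambda v: v / d)   # apply the preconditioner (inexact solve)

        it = {'plain': 0, 'prec': 0}
        gmres(A, b, callback=lambda r: it.__setitem__('plain', it['plain'] + 1))
        gmres(A, b, M=M, callback=lambda r: it.__setitem__('prec', it['prec'] + 1))
        print(it)   # the preconditioned solve needs far fewer iterations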

  2. Global convergence of inexact Newton methods for transonic flow

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1990-01-01

    In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.

  3. Land Resources Allocation Strategies in an Urban Area Involving Uncertainty: A Case Study of Suzhou, in the Yangtze River Delta of China

    NASA Astrophysics Data System (ADS)

    Lu, Shasha; Guan, Xingliang; Zhou, Min; Wang, Yang

    2014-05-01

    A large number of mathematical models have been developed to support land resource allocation decisions and land management needs; however, few of them can address various uncertainties that exist in relation to many factors presented in such decisions (e.g., land resource availabilities, land demands, land-use patterns, and social demands, as well as ecological requirements). In this study, a multi-objective interval-stochastic land resource allocation model (MOISLAM) was developed for tackling uncertainty that presents as discrete intervals and/or probability distributions. The developed model improves upon the existing multi-objective programming and inexact optimization approaches. The MOISLAM not only considers economic factors, but also involves food security and eco-environmental constraints; it can, therefore, effectively reflect various interrelations among different aspects in a land resource management system. Moreover, the model can also help examine the reliability of satisfying (or the risk of violating) system constraints under uncertainty. In this study, the MOISLAM was applied to a real case of long-term urban land resource allocation planning in Suzhou, in the Yangtze River Delta of China. Interval solutions associated with different risk levels of constraint violation were obtained. The results are considered useful for generating a range of decision alternatives under various system conditions, and thus helping decision makers to identify a desirable land resource allocation strategy under uncertainty.

  4. Airfoil optimization for unsteady flows with application to high-lift noise reduction

    NASA Astrophysics Data System (ADS)

    Rumpfkeil, Markus Peer

    The use of steady-state aerodynamic optimization methods in the computational fluid dynamic (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach since the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and last but not least the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shock tubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.

  5. A new convergence analysis and perturbation resilience of some accelerated proximal forward-backward algorithms with errors

    NASA Astrophysics Data System (ADS)

    Reem, Daniel; De Pierro, Alvaro

    2017-04-01

    Many problems in science and engineering involve, as part of their solution process, the consideration of a separable function which is the sum of two convex functions, one of them possibly non-smooth. Recently a few works have discussed inexact versions of several accelerated proximal methods aiming at solving this minimization problem. This paper shows that inexact versions of a method of Beck and Teboulle (the fast iterative shrinkage-thresholding algorithm, FISTA) preserve, in a Hilbert space setting, the same (non-asymptotic) rate of convergence under some assumptions on the decay rate of the error terms. The notion of inexactness discussed here seems to be rather simple but, interestingly, when comparing to related works, closely related decay rates of the error terms yield closely related convergence rates. The derivation sheds some light on the somewhat mysterious origin of some parameters which appear in various accelerated methods. A consequence of the analysis is that the accelerated method is perturbation resilient, making it suitable, in principle, for the superiorization methodology. By taking this into account, we re-examine the superiorization methodology and significantly extend its scope. This work was supported by FAPESP 2013/19504-9. The second author was also supported by CNPq grant 306030/2014-4.
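
    A small experiment in the spirit of this analysis: FISTA for the lasso, with an artificial error of size err0/k^decay injected into each proximal step. Consistent with the abstract's message, a sufficiently fast decay leaves the iterates essentially unaffected. The error model and all parameters are illustrative assumptions.

        import numpy as np

        def inexact_fista(A, b, lam=0.1, n_iter=300, err0=1e-2, decay=2.5, seed=0):
            """FISTA for min 0.5||Ax-b||^2 + lam||x||_1, with an artificial
            error of size err0/k**decay added to each proximal step to mimic
            inexact prox evaluations."""
            rng = np.random.default_rng(seed)
            L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the smooth gradient
            x = y = np.zeros(A.shape[1]); t = 1.0
            for k in range(1, n_iter + 1):
                g = A.T @ (A @ y - b)
                z = y - g / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)        # prox step
                x_new = x_new + (err0 / k**decay) * rng.standard_normal(x_new.shape)  # inexactness
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)                      # momentum
                x, t = x_new, t_new
            return x

        A = np.random.default_rng(1).standard_normal((40, 15))
        x_true = np.zeros(15); x_true[[2, 7]] = [3.0, -1.5]
        print(np.round(inexact_fista(A, A @ x_true), 2))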

  6. Low-Rank Matrix Recovery Approach for Clutter Rejection in Real-Time IR-UWB Radar-Based Moving Target Detection

    PubMed Central

    Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom

    2016-01-01

    The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background and by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in UWB Radar-based moving target detection. Robust PCA models are criticized for being batch-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated without changing its size as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods demonstrate the robustness and efficiency of RPCA over classic PCA and the commonly used exponential averaging method. PMID:27598159
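
    A minimal inexact-ALM (IALM) loop for the RPCA decomposition D = L + S alternates singular value thresholding for the low-rank part, elementwise shrinkage for the sparse part, and a multiplier update. The sketch below uses a fixed penalty mu and synthetic data; the paper's overlapping-windows pipeline and radar preprocessing are not included.

        import numpy as np

        def shrink(X, tau):
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def rpca_ialm(D, lam=None, mu=None, n_iter=100):
            """Minimal inexact ALM for D = L + S with L low-rank, S sparse."""
            m, n = D.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * m * n / np.abs(D).sum()
            S = np.zeros_like(D); Y = np.zeros_like(D)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * shrink(s, 1.0 / mu)) @ Vt        # singular value thresholding
                S = shrink(D - L + Y / mu, lam / mu)      # elementwise shrinkage
                Y = Y + mu * (D - L - S)                  # dual (multiplier) update
            return L, S

        rng = np.random.default_rng(0)
        L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 80))    # static "background"
        S0 = np.zeros((60, 80)); S0[rng.random((60, 80)) < 0.05] = 10.0      # sparse "target"
        L, S = rpca_ialm(L0 + S0)
        print(np.linalg.norm(L - L0) / np.linalg.norm(L0))   # small relative recovery error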

  7. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics.

    PubMed

    Sharma, Harshita; Alekseychuk, Alexander; Leskovsky, Peter; Hellwich, Olaf; Anand, R S; Zerbe, Norman; Hufnagl, Peter

    2012-10-04

    Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.
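
    For a flavor of inexact graph matching, NetworkX's generic graph-edit-distance routine can compare two small graphs standing in for region adjacency graphs; the paper's tree-search matcher with its prognosis function is a far more specialized (and faster) variant.

        import networkx as nx

        # Two small graphs standing in for region adjacency graphs of images.
        G1 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])
        G2 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])

        # Inexact matching: the minimum number of edit operations (node/edge
        # insertions, deletions, substitutions) turning G1 into G2.
        print(nx.graph_edit_distance(G1, G2))   # -> 2.0 (add node d, add edge c-d)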

  8. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    PubMed Central

    2012-01-01

    Background Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. Methods The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. Results The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. Conclusion The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923. PMID:23035717

  9. Low-power, high-speed 1-bit inexact Full Adder cell designs applicable to low-energy image processing

    NASA Astrophysics Data System (ADS)

    Zareei, Zahra; Navi, Keivan; Keshavarziyan, Peiman

    2018-03-01

    In this paper, three novel low-power and high-speed 1-bit inexact Full Adder cell designs are presented, based on current mode logic in 32 nm carbon nanotube field effect transistor technology for the first time. Circuit-level figures of merit, i.e. power, delay and power-delay product, as well as the application-level metric of error distance, are considered to assess the efficiency of the proposed cells over their counterparts. The effect of voltage scaling and temperature variation on the proposed cells is studied using the HSPICE tool. Moreover, using the MATLAB tool, the peak signal-to-noise ratio of the proposed cells is evaluated in an image-processing application referred to as a motion detector. Simulation results confirm the efficiency of the proposed cells.
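
    The error-distance metric mentioned above can be computed by exhaustive simulation of a 1-bit adder truth table. The sketch uses a textbook approximation (carry kept exact, sum taken as the complement of the carry), which is an illustrative stand-in for the paper's transistor-level designs.

        from itertools import product

        def exact_fa(a, b, cin):
            s = a ^ b ^ cin
            cout = (a & b) | (a & cin) | (b & cin)   # majority function
            return s, cout

        def approx_fa(a, b, cin):
            # A classic approximation: keep the carry exact, take sum = NOT carry.
            # (Wrong only for inputs 000 and 111.)
            _, cout = exact_fa(a, b, cin)
            return 1 - cout, cout

        # Error distance: mean |value(approx) - value(exact)| over all inputs,
        # where value = 2*cout + s.
        ed = 0
        for a, b, cin in product([0, 1], repeat=3):
            s_e, c_e = exact_fa(a, b, cin)
            s_a, c_a = approx_fa(a, b, cin)
            ed += abs((2 * c_a + s_a) - (2 * c_e + s_e))
        print("mean error distance:", ed / 8)   # 2 faulty rows -> 2/8 = 0.25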

  10. Temporal Order in Periodically Driven Spins in Star-Shaped Clusters

    NASA Astrophysics Data System (ADS)

    Pal, Soham; Nishad, Naveen; Mahesh, T. S.; Sreejith, G. J.

    2018-05-01

    We experimentally study the response of star-shaped clusters of initially unentangled N = 4, 10, and 37 nuclear spin-1/2 moments to an inexact π-pulse sequence and show that an Ising coupling between the center and the satellite spins results in robust period-2 magnetization oscillations. The period is stable against bath effects, but the amplitude decays with a timescale that depends on the inexactness of the pulse. Simulations reveal a semiclassical picture in which the rigidity of the period is due to a randomizing effect of the Larmor precession under the magnetization of surrounding spins. The timescales with stable periodicity increase with net initial magnetization, even in the presence of perturbations, indicating a robust temporally ordered phase for large systems with finite magnetization per spin.

  11. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  12. Bounding filter - A simple solution to lack of exact a priori statistics.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Weiss, I. M.

    1972-01-01

    Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information (the signal and noise spectral densities for the Wiener filter; the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering) is inexactly known, the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. The estimator therefore obtains a bound on the actual error covariance, which is otherwise not available, and also prevents apparent divergence.
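
    The bounding idea can be demonstrated on a scalar system: a Kalman filter run with deliberately inflated noise covariances computes an error variance that upper-bounds the empirical error variance produced under the true (unknown) statistics. All numbers below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        phi, q_true, r_true = 0.95, 0.5, 1.0     # true (unknown to the filter) statistics
        q_bnd, r_bnd = 1.0, 2.0                  # conservative bounds used by the filter

        n_steps, n_runs = 200, 500
        err2 = np.zeros(n_steps)
        for _ in range(n_runs):
            x, xh, p = 0.0, 0.0, 1.0
            for k in range(n_steps):
                x = phi * x + rng.normal(scale=np.sqrt(q_true))     # true state
                z = x + rng.normal(scale=np.sqrt(r_true))           # measurement
                # Kalman recursion with the *bounding* covariances:
                p = phi * p * phi + q_bnd
                kgain = p / (p + r_bnd)
                xh = phi * xh + kgain * (z - phi * xh)
                p = (1 - kgain) * p
                err2[k] += (x - xh) ** 2
        print("filter-computed variance :", round(p, 3))
        print("empirical error variance :", round(err2[-1] / n_runs, 3))  # smaller, i.e. bounded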

  13. Inexact trajectory planning and inverse problems in the Hamilton–Pontryagin framework

    PubMed Central

    Burnett, Christopher L.; Holm, Darryl D.; Meier, David M.

    2013-01-01

    We study a trajectory-planning problem whose solution path evolves by means of a Lie group action and passes near a designated set of target positions at particular times. This is a higher-order variational problem in optimal control, motivated by potential applications in computational anatomy and quantum control. Reduction by symmetry in such problems naturally summons methods from Lie group theory and Riemannian geometry. A geometrically illuminating form of the Euler–Lagrange equations is obtained from a higher-order Hamilton–Pontryagin variational formulation. In this context, the previously known node equations are recovered with a new interpretation as Legendre–Ostrogradsky momenta possessing certain conservation properties. Three example applications are discussed as well as a numerical integration scheme that follows naturally from the Hamilton–Pontryagin principle and preserves the geometric properties of the continuous-time solution. PMID:24353467

  14. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable with, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.

  15. 7 CFR 352.15 - Caution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... taking other measures prescribed under the provisions in this part, it should be understood that inexactness or carelessness may result in injury or damage. It should also be understood by the owners that...

  16. Epigraph: A Vaccine Design Tool Applied to an HIV Therapeutic Vaccine and a Pan-Filovirus Vaccine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, James; Yoon, Hyejin; Yusim, Karina

    Epigraph is an efficient graph-based algorithm for designing vaccine antigens to optimize potential T-cell epitope (PTE) coverage. Functionally, Epigraph vaccine antigens are similar to Mosaic vaccines, which have demonstrated effectiveness in preliminary HIV non-human primate studies. In contrast to the Mosaic algorithm, Epigraph is substantially faster and, in restricted cases, provides a mathematically optimal solution. Furthermore, Epigraph has new features that enable enhanced vaccine design flexibility. These features include the ability to exclude rare epitopes from a design, to optimize population coverage based on inexact epitope matches, and to apply the code to both aligned and unaligned input sequences. Epigraph was developed to provide practical design solutions for two outstanding vaccine problems. The first of these is a personalized approach to a therapeutic T-cell HIV vaccine that would provide antigens with an excellent match to an individual's infecting strain, intended to contain or clear a chronic infection. The second is a pan-filovirus vaccine, with the potential to protect against all known viruses in the Filoviridae family, including ebolaviruses. A web-based interface to run the Epigraph tool suite is available (http://www.hiv.lanl.gov/content/sequence/EPIGRAPH/epigraph.html).

  17. Epigraph: A Vaccine Design Tool Applied to an HIV Therapeutic Vaccine and a Pan-Filovirus Vaccine

    DOE PAGES

    Theiler, James; Yoon, Hyejin; Yusim, Karina; ...

    2016-10-05

    Epigraph is an efficient graph-based algorithm for designing vaccine antigens to optimize potential T-cell epitope (PTE) coverage. Functionally, Epigraph vaccine antigens are similar to Mosaic vaccines, which have demonstrated effectiveness in preliminary HIV non-human primate studies. In contrast to the Mosaic algorithm, Epigraph is substantially faster and, in restricted cases, provides a mathematically optimal solution. Furthermore, Epigraph has new features that enable enhanced vaccine design flexibility. These features include the ability to exclude rare epitopes from a design, to optimize population coverage based on inexact epitope matches, and to apply the code to both aligned and unaligned input sequences. Epigraph was developed to provide practical design solutions for two outstanding vaccine problems. The first of these is a personalized approach to a therapeutic T-cell HIV vaccine that would provide antigens with an excellent match to an individual's infecting strain, intended to contain or clear a chronic infection. The second is a pan-filovirus vaccine, with the potential to protect against all known viruses in the Filoviridae family, including ebolaviruses. A web-based interface to run the Epigraph tool suite is available (http://www.hiv.lanl.gov/content/sequence/EPIGRAPH/epigraph.html).

  18. Optimal groundwater security management policies by control of inexact health risks under dual uncertainty in slope factors.

    PubMed

    Lu, Hongwei; Li, Jing; Ren, Lixia; Chen, Yizhong

    2018-05-01

    Groundwater remediation is a complicated system with time-consuming and costly challenges, which should be carefully controlled by appropriate groundwater management. This study develops an integrated optimization method for groundwater remediation management regarding cost, contamination distribution and health risk under multiple uncertainties. The integration of health risk into groundwater remediation optimization management is capable of not only adequately considering the influence of health risk on optimal remediation strategies, but also simultaneously completing remediation optimization design and risk assessment. A fuzzy chance-constrained programming approach is presented to handle multiple uncertain properties in the process of health risk assessment. The capabilities and effectiveness of the developed method are illustrated through an application to a naphthalene-contaminated case in Anhui, China. Results indicate that (a) the pump-and-treat remediation system leads to low naphthalene contamination but high remediation cost for a short-time remediation, while natural attenuation significantly affects naphthalene removal from groundwater for a long-time remediation; (b) the weighting coefficients have significant influences on the remediation cost and on the performance for both naphthalene concentrations and health risks; (c) an increased level of slope factor (sf) for naphthalene corresponds to strategies with higher environmental benefits and lower economic sacrifice. The developed method could be simultaneously beneficial for public health and environmental protection. Decision makers could obtain the most appropriate remediation strategies according to their specific requirements with high flexibility of economic, environmental, and risk concerns.

  19. A duality theorem-based algorithm for inexact quadratic programming problems: Application to waste management under uncertainty

    NASA Astrophysics Data System (ADS)

    Kong, X. M.; Huang, G. H.; Fan, Y. R.; Li, Y. P.

    2016-04-01

    In this study, a duality theorem-based algorithm (DTA) for inexact quadratic programming (IQP) is developed for municipal solid waste (MSW) management under uncertainty. It improves upon the existing numerical solution method for IQP problems. A comparison between DTA and the derivative algorithm (DAM) shows that DTA provides better solutions with lower computational complexity; in particular, DTA does not require identifying the uncertain relationship between the objective function and decision variables, which the solution process of DAM requires. The developed method is applied to a case study of MSW management and planning. The results indicate that reasonable solutions have been generated for supporting long-term MSW management and planning. They provide additional information and enable managers to identify desired MSW management policies with minimized cost under uncertainty.
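
    For intuition, the classical two-step treatment of an interval quadratic program, which the DTA improves upon, solves a deterministic submodel at each end of the interval coefficients to bracket the optimal cost. A minimal sketch in Python (the cost data and demand constraint are hypothetical, and this is the generic two-step idea, not the DTA itself):

        import numpy as np
        from scipy.optimize import minimize

        def solve_qp(c, q, demand):
            # min sum_i (c_i x_i + q_i x_i^2)  s.t.  sum(x) >= demand, x >= 0
            res = minimize(lambda x: c @ x + q @ x ** 2,
                           x0=np.full(c.size, demand / c.size),
                           constraints=[{"type": "ineq",
                                         "fun": lambda x: x.sum() - demand}],
                           bounds=[(0, None)] * c.size)
            return res.fun

        c_lo, c_hi = np.array([2.0, 3.0]), np.array([4.0, 5.0])  # interval unit costs
        q = np.array([0.05, 0.02])
        # solving at the optimistic and pessimistic bounds brackets the optimal cost
        print([solve_qp(c_lo, q, 100.0), solve_qp(c_hi, q, 100.0)])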

  20. Globally convergent techniques in nonlinear Newton-Krylov

    NASA Technical Reports Server (NTRS)

    Brown, Peter N.; Saad, Youcef

    1989-01-01

    Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
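
    A minimal Jacobian-free sketch of this class of methods: the Newton direction is taken from a small Krylov subspace (truncated GMRES acting on finite-difference Jacobian-vector products), and a backtracking linesearch supplies the globalization. The Bratu-type test residual is a hypothetical stand-in for the nonlinear system:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def F(u):
            # residual of the Bratu-type problem u'' + exp(u) = 0, u(0) = u(1) = 0
            h2 = 1.0 / (u.size + 1) ** 2
            up = np.concatenate(([0.0], u, [0.0]))
            return (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h2 + np.exp(u)

        def newton_krylov(u, inner=10, tol=1e-10, max_outer=30):
            for _ in range(max_outer):
                r = F(u)
                rn = np.linalg.norm(r)
                if rn < tol:
                    break
                # Jacobian-vector products by finite differences: no explicit Jacobian
                Jv = LinearOperator((u.size, u.size),
                                    matvec=lambda v, u=u, r=r: (F(u + 1e-7 * v) - r) / 1e-7)
                # inexact Newton: the step comes from a subspace of small dimension
                s, _ = gmres(Jv, -r, restart=inner, maxiter=1)
                # backtracking linesearch as the global strategy
                t = 1.0
                while np.linalg.norm(F(u + t * s)) > (1.0 - 1e-4 * t) * rn and t > 1e-4:
                    t *= 0.5
                u = u + t * s
            return u

        u = newton_krylov(np.zeros(50))
        print(np.linalg.norm(F(u)))   # converges to a small residual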

  1. Automation for Air Traffic Control: The Rise of a New Discipline

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Tobias, Leonard (Technical Monitor)

    1997-01-01

    The current debate over the concept of Free Flight has renewed interest in automated conflict detection and resolution in the enroute airspace. An essential requirement for effective conflict detection is accurate prediction of trajectories. Trajectory prediction is, however, an inexact process which accumulates errors that grow in proportion to the length of the prediction time interval. Using a model of prediction errors for the trajectory predictor incorporated in the Center-TRACON Automation System (CTAS), a computationally fast algorithm for computing conflict probability has been derived. Furthermore, a method of conflict resolution has been formulated that minimizes the average cost of resolution, when cost is defined as the increment in airline operating costs incurred in flying the resolution maneuver. The method optimizes the trade-off between early resolution, with lower maneuver costs but higher prediction error, and late resolution, with higher maneuver costs but lower prediction error. The method determines both the time to initiate the resolution maneuver and the characteristics of the resolution trajectory so as to minimize the cost of the resolution. Several computational examples relevant to the design of a conflict probe that can support user-preferred trajectories in the enroute airspace are presented.

  4. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
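
    The gradient-consistency pitfall is easy to reproduce on a toy problem. In the sketch below, two hypothetical linear blocks stand in for the energy and Stokes equations; the fully coupled adjoint reproduces a finite-difference gradient, while an adjoint that drops the coupling block, mimicking a one-way coupled (staggered) solver, does not:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        # toy two-way coupled linear "energy" and "flow" problems:
        #   K_T T - D u = m   (energy, forced by the basal heat flux m)
        #   K_u u - C T = 0   (flow, driven by temperature)
        K_T = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        K_u = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        C = 0.5 * rng.standard_normal((n, n))
        D = 0.5 * rng.standard_normal((n, n))
        d = rng.standard_normal(n)               # synthetic "observations" of u

        def solve_forward(m):
            A = np.block([[K_T, -D], [-C, K_u]])
            Tu = np.linalg.solve(A, np.r_[m, np.zeros(n)])
            return Tu[:n], Tu[n:]

        def J(m):
            _, u = solve_forward(m)
            return 0.5 * np.sum((u - d) ** 2)

        def grad_full(m):
            _, u = solve_forward(m)
            # fully coupled adjoint: transpose of the coupled forward operator
            At = np.block([[K_T.T, -C.T], [-D.T, K_u.T]])
            pq = np.linalg.solve(At, np.r_[np.zeros(n), -(u - d)])
            return -pq[:n]

        def grad_oneway(m):
            _, u = solve_forward(m)
            # one-way coupled adjoint: the D coupling block is dropped
            q = np.linalg.solve(K_u.T, -(u - d))
            p = np.linalg.solve(K_T.T, C.T @ q)
            return -p

        m = rng.standard_normal(n)
        g_fd = np.array([(J(m + 1e-6 * e) - J(m - 1e-6 * e)) / 2e-6 for e in np.eye(n)])
        print(np.linalg.norm(grad_full(m) - g_fd))    # tiny: consistent gradient
        print(np.linalg.norm(grad_oneway(m) - g_fd))  # O(1): inconsistent gradient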

  5. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    NASA Astrophysics Data System (ADS)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

    A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time through the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
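
    The core of such an interval-transformation analysis can be sketched in a few lines: draw a realization of every interval-valued criterion, aggregate, and tally which alternative wins. The alternatives, intervals, and weights below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)
        # rows = management alternatives; columns = criteria (e.g., cost,
        # concentration, risk), each given as an [lo, hi] interval
        lo = np.array([[1.0, 0.2, 0.10], [1.4, 0.1, 0.05], [0.8, 0.3, 0.20]])
        hi = np.array([[1.6, 0.4, 0.30], [1.8, 0.2, 0.15], [1.5, 0.5, 0.40]])
        weights = np.array([0.5, 0.3, 0.2])   # all criteria are "smaller is better"

        wins = np.zeros(lo.shape[0])
        for _ in range(10000):
            sample = rng.uniform(lo, hi)      # one Monte Carlo realization
            scores = sample @ weights         # weighted-sum aggregation
            wins[np.argmin(scores)] += 1

        print("selection frequency per alternative:", wins / wins.sum())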

  6. Close coupling of pre- and post-processing vision stations using inexact algorithms

    NASA Astrophysics Data System (ADS)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported on using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists, and changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod is developed together with the Piecewise Error Compensation Algorithm, incorporating inexact algorithms, i.e., fuzzy logic, neural networks and neuro-fuzzy techniques. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system produces excellent results, much better than a human operator.

  7. On Exact and Inexact Differentials and Applications

    ERIC Educational Resources Information Center

    Cortez, L. A. B.; de Oliveira, E. Capelas

    2017-01-01

    Considering the important role played by mathematical derivatives in the study of physical-chemical processes, this paper discusses the different possibilities and formulations of this concept and its application. In particular, in Chemical Thermodynamics, we study exact differentials associated with the so-called state functions and inexact…

  8. A novel beamformer design method for medical ultrasound. Part I: Theory.

    PubMed

    Ranganathan, Karthik; Walker, William F

    2003-01-01

    The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time-consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
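
    In the CW case the linear-algebra formulation reduces to ordinary least squares: stack the array response over field points into a matrix and solve for the weights minimizing the sum squared error against a goal pattern. A minimal far-field sketch (array geometry, goal beam, and all parameters are hypothetical):

        import numpy as np

        c, f0 = 1540.0, 5e6                   # sound speed (m/s), frequency (Hz)
        lam = c / f0
        x = (np.arange(64) - 31.5) * lam / 2  # 64 elements, half-wavelength pitch
        theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
        A = np.exp(1j * 2 * np.pi / lam * np.outer(np.sin(theta), x))

        goal = (np.abs(theta) < np.deg2rad(1.5)).astype(float)  # narrow mainlobe

        # minimum sum-squared-error weights: argmin_w ||A w - goal||^2
        w, *_ = np.linalg.lstsq(A, goal, rcond=None)
        pattern = np.abs(A @ w)
        sidelobe = pattern[np.abs(theta) > np.deg2rad(5)].max() / pattern.max()
        print("peak sidelobe (dB):", 20 * np.log10(sidelobe))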

  9. Bigger than a Breadbox; Lighter than a Heavy Heart

    ERIC Educational Resources Information Center

    Hill, Robert V.

    2009-01-01

    Inexact measurements can have devastating effects in sciences where precision is of paramount importance. In contrast, morphological sciences rely heavily on description, comparison, and estimation to make meaningful inferences about the structure of humans and other animals. A review of the 1918 edition of "Gray's Anatomy" shows that the tendency…

  10. Learning Robust and Discriminative Subspace With Low-Rank Constraints.

    PubMed

    Li, Sheng; Fu, Yun

    2016-11-01

    In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
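
    The workhorse inside such formulations is the inexact augmented Lagrange multiplier (ALM) iteration. A compact sketch of its canonical use, splitting a data matrix into low-rank plus sparse parts by alternating singular-value and soft thresholding (this is the generic inexact ALM for robust PCA, not the full SRRS with its supervised regularization term):

        import numpy as np

        def ialm_rpca(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
            # inexact ALM for:  min ||L||_* + lam * ||S||_1   s.t.  D = L + S
            m, n = D.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            spec = np.linalg.norm(D, 2)
            Y = D / max(spec, np.abs(D).max() / lam)    # dual variable init
            mu = mu or 1.25 / spec
            S = np.zeros_like(D)
            for _ in range(max_iter):
                # L-update: singular value thresholding of (D - S + Y/mu)
                U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # S-update: elementwise soft thresholding of (D - L + Y/mu)
                T = D - L + Y / mu
                S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
                Y += mu * (D - L - S)                   # dual ascent
                mu *= rho           # "inexact": one sweep per multiplier update
                if np.linalg.norm(D - L - S) < tol * np.linalg.norm(D):
                    break
            return L, S

        D = np.outer(np.arange(30.0), np.ones(20))      # rank-1 data ...
        D[3, 4] += 10.0                                 # ... plus one gross error
        L, S = ialm_rpca(D)
        print(abs(S[3, 4]) > 5.0)   # True: the spike lands in the sparse part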

  11. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    PubMed

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the role of documentation and coding as factors of economic success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and to find operative strategies to improve efficiency as well as strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow for medical documentation, coding, and data control was developed. A workflow incorporating regular assessment of documentation and coding quality is required under the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  12. Alignment of Tractograms As Graph Matching.

    PubMed

    Olivetti, Emanuele; Sharmin, Nusrat; Avesani, Paolo

    2016-01-01

    The white matter pathways of the brain can be reconstructed as 3D polylines, called streamlines, through the analysis of diffusion magnetic resonance imaging (dMRI) data. The whole set of streamlines is called tractogram and represents the structural connectome of the brain. In multiple applications, like group-analysis, segmentation, or atlasing, tractograms of different subjects need to be aligned. Typically, this is done with registration methods, that transform the tractograms in order to increase their similarity. In contrast with transformation-based registration methods, in this work we propose the concept of tractogram correspondence, whose aim is to find which streamline of one tractogram corresponds to which streamline in another tractogram, i.e., a map from one tractogram to another. As a further contribution, we propose to use the relational information of each streamline, i.e., its distances from the other streamlines in its own tractogram, as the building block to define the optimal correspondence. We provide an operational procedure to find the optimal correspondence through a combinatorial optimization problem and we discuss its similarity to the graph matching problem. In this work, we propose to represent tractograms as graphs and we adopt a recent inexact sub-graph matching algorithm to approximate the solution of the tractogram correspondence problem. On tractograms generated from the Human Connectome Project dataset, we report experimental evidence that tractogram correspondence, implemented as graph matching, provides much better alignment than affine registration and comparable if not better results than non-linear registration of volumes.
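
    A crude sketch of the relational idea: describe each streamline by its sorted distances to all others in its own tractogram, then solve a linear assignment between the two sets of profiles. This is a cheap stand-in for the inexact graph matching used in the paper, which optimizes a quadratic graph-matching objective; the distance matrices are assumed precomputed:

        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.spatial.distance import cdist

        def relational_correspondence(DA, DB):
            # DA (nA x nA), DB (nB x nB): streamline-to-streamline distance
            # matrices of two tractograms
            k = min(DA.shape[1], DB.shape[1])
            profA = np.sort(DA, axis=1)[:, :k]   # relational profile per streamline
            profB = np.sort(DB, axis=1)[:, :k]
            cost = cdist(profA, profB)           # discrepancy between profiles
            rows, cols = linear_sum_assignment(cost)
            return dict(zip(rows.tolist(), cols.tolist()))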

  13. Interior-Point Methods for Linear Programming: A Challenge to the Simplex Method

    DTIC Science & Technology

    1988-07-01

    subsequently found that the method was first proposed by Dikin in 1967 [6]. Search directions are generated by the same system (5). Any hint of quadratic...1982). Inexact Newton methods, SIAM Journal on Numerical Analysis 19, 400-408. [6] I. I. Dikin (1967). Iterative solution of problems of linear and

  14. Harvard Education Letter. Volume 25, Number 2, March-April 2009

    ERIC Educational Resources Information Center

    Chauncey, Caroline, Ed.

    2009-01-01

    "Harvard Education Letter" is published bimonthly by the Harvard Graduate School of Education. This issue of "Harvard Education Letter" contains the following articles: (1) Money and Motivation: New Initiatives Rekindle Debate over the Link between Rewards and Student Achievement (David McKay Wilson); (2) An Inexact Science: What Are the Technical…

  15. Assessment and Learning without Grades? Motivations and Concerns with Implementing Gradeless Learning in Higher Education

    ERIC Educational Resources Information Center

    McMorran, Chris; Ragupathi, Kiruthika; Luo, Simei

    2017-01-01

    The relationship between assessment and learning in higher education often comes down to a single thing: a grade. Despite widespread criticism of grades as inexact tools, whose overemphasis undermines student learning and negatively affects student well-being, they continue to be the norm in the assessment of student learning. This paper analyses…

  16. Decision-Making Behavioral Change in Students Connected With Their Participation in a First Theoretical Management Course.

    ERIC Educational Resources Information Center

    Badley, Tommy D.

    The educational discipline of Management is an inexact science. Because of the difficulty in measuring the effects of the academic study of Management on decision-making and related behaviors, a commercial instrument, the "How Supervise?" questionnaire, was tested as a possible means of measuring "before and after treatment" behaviors. The…

  17. An on-line equivalent system identification scheme for adaptive control. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1984-01-01

    A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state-space realizations from the inexact, multivariate transfer functions which result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that when speed of adaptation and plant stability are not critical, the proposed schemes converge to enhance system performance.

  18. Fuzzy robust credibility-constrained programming for environmental management and planning.

    PubMed

    Zhang, Yimei; Huang, Guohe

    2010-06-01

    In this study, a fuzzy robust credibility-constrained programming (FRCCP) approach is developed and applied to the planning of waste management systems. It incorporates the concepts of credibility-based chance-constrained programming and robust programming within an optimization framework. The developed method can reflect uncertainties presented as possibilistic distributions through fuzzy membership functions. Fuzzy credibility constraints are transformed into their crisp equivalents at different credibility levels, and ordinary fuzzy inclusion constraints are converted to robust deterministic constraints by setting α-cut levels. The FRCCP method can provide different system costs under different credibility levels (λ). The sensitivity analyses show that the operating cost of the landfill is a critical parameter: any factor that would induce cost fluctuation during landfill operation deserves serious observation and analysis by management. With FRCCP, useful solutions can be obtained to support decision making for long-term planning of solid waste management systems. The method could be further enhanced by incorporating inexact analysis into its framework, and it can also be applied to other environmental management problems.
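
    For intuition, the crisp equivalent of a single credibility constraint with a triangular fuzzy right-hand side has a closed form (a standard credibility-theory result, sketched here with a hypothetical fuzzy landfill capacity; not necessarily the paper's exact formulation):

        def credibility_rhs(a, b, c, lam):
            # deterministic equivalent K(lam) such that, for a triangular fuzzy
            # availability xi = (a, b, c) with a <= b <= c,
            #   Cr{ g(x) <= xi } >= lam   <=>   g(x) <= K(lam)
            if lam <= 0.5:
                return (1.0 - 2.0 * lam) * c + 2.0 * lam * b
            return (2.0 - 2.0 * lam) * b + (2.0 * lam - 1.0) * a

        # e.g., fuzzy landfill capacity (80, 100, 120) kt at credibility 0.9:
        print(credibility_rhs(80.0, 100.0, 120.0, 0.9))  # 84.0, tighter as lam grows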

  19. High-performance 3D compressive sensing MRI reconstruction.

    PubMed

    Kim, Daehyun; Trzasko, Joshua D; Smelyanskiy, Mikhail; Haider, Clifton R; Manduca, Armando; Dubey, Pradeep

    2010-01-01

    Compressive Sensing (CS) is a nascent sampling and reconstruction paradigm that describes how sparse or compressible signals can be accurately approximated using many fewer samples than traditionally believed. In magnetic resonance imaging (MRI), where scan duration is directly proportional to the number of acquired samples, CS has the potential to dramatically decrease scan time. However, the computationally expensive nature of CS reconstructions has so far precluded their use in routine clinical practice; instead, more easily generated but lower-quality images continue to be used. We investigate the development and optimization of a proven inexact quasi-Newton CS reconstruction algorithm on several modern parallel architectures, including CPUs, GPUs, and Intel's Many Integrated Core (MIC) architecture. Our optimized baseline implementation on a quad-core Core i7 is able to reconstruct a 256 × 160 × 80 volume of the neurovasculature from an 8-channel, 10× undersampled data set within 56 seconds, which is already a significant improvement over existing implementations. The latest six-core Core i7 reduces the reconstruction time further to 32 seconds. Moreover, we show that the CS algorithm benefits from modern throughput-oriented architectures. Specifically, our CUDA-based implementation on an NVIDIA GTX480 reconstructs the same dataset in 16 seconds, while Intel's Knights Ferry (KNF) of the MIC architecture reduces the time to 12 seconds. Such a level of performance allows the neurovascular dataset to be reconstructed within a clinically viable time.

  20. Moving object detection via low-rank total variation regularization

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Chen, Qian; Shao, Na

    2016-09-01

    Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the ℓ1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly related and can be described by a low-rank matrix; the low-rank part can also absorb background motion, e.g., periodic and random perturbations. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers against the highly related background scenes. Instead of the ℓ1-penalty, we exploit the total variation of the foreground: by minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the ℓ1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.

  1. A comparison of helicopter-borne electromagnetic systems for hydrogeologic studies

    USGS Publications Warehouse

    Bedrosian, Paul A.; Schamper, Cyril; Auken, Esben

    2016-01-01

    The increased application of airborne electromagnetic surveys to hydrogeological studies is driving a demand for data that can consistently be inverted for accurate subsurface resistivity structure from the near surface to depths of several hundred metres. We present an evaluation of three commercial airborne electromagnetic systems over two test blocks in western Nebraska, USA. The selected test blocks are representative of shallow and deep alluvial aquifer systems with low groundwater salinity and an electrically conductive base of aquifer. The aquifer units show significant lithologic heterogeneity and include both modern and ancient river systems. We compared the various data sets to one another and inverted resistivity models to borehole lithology and to ground geophysical models. We find distinct differences among the airborne electromagnetic systems as regards the spatial resolution of models, the depth of investigation, and the ability to recover near-surface resistivity variations. We further identify systematic biases in some data sets, which we attribute to incomplete or inexact calibration or compensation procedures.

  2. The Value of Weather Forecast in Irrigation

    NASA Astrophysics Data System (ADS)

    Cai, X.; Wang, D.

    2007-12-01

    This paper studies irrigation scheduling (when and how much water to apply during the crop growing season) in the Havana Lowlands region, Illinois, using meteorological, agronomic and agricultural production data from 2002. In this study a hydrologic-agronomic simulation is coupled with an optimization algorithm to search for the optimal irrigation schedule under various weather forecast horizons. The economic profit of irrigated corn from an optimized schedule is compared to that from the actual schedule, which is adopted from a previous study. Extended and reliable climate prediction and weather forecasts are found to be significantly valuable: if a forecast horizon is long enough to include the critical crop growth stage, in which crop yield bears the maximum loss over all stages, much economic loss can be avoided. Climate predictions of one to two months, which can cover the critical period, might be even more beneficial during a dry year. The other purpose of this paper is to analyze farmers' behavior in irrigation scheduling by comparing the "actual" schedule to the "optimized" ones; the ultimate goal of such optimization is to provide information to farmers so that they may modify their behavior. In practice, farmers' decisions may not follow an optimal irrigation schedule due to various factors such as natural conditions, policies, farmers' habits and empirical knowledge, and the uncertain or inexact information they receive. This study finds that identifying the crop growth stage with the most severe water stress is critical for irrigation scheduling. For the case study site in 2002, farmers' response to water stress was found to be late; they did not respond appropriately even to a major rainfall just three days ahead, which might be due to either an unreliable weather forecast or the farmers disregarding it.

  3. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
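
    The inner loop of such a scheme can be written compactly as a regularized Gauss-Newton update whose normal equations are solved only approximately by a few conjugate-gradient iterations. A generic sketch (the operator G, regularizer L, and data below are placeholders, not the CSEM operators):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def gauss_newton_step(J, r, m, lam, L, inner=20):
            # normal equations (J^T J + lam L^T L) dm = J^T r - lam L^T L m,
            # solved approximately: truncated CG yields the inexact GN step
            n = m.size
            A = LinearOperator((n, n),
                               matvec=lambda v: J.T @ (J @ v) + lam * (L.T @ (L @ v)))
            b = J.T @ r - lam * (L.T @ (L @ m))
            dm, _ = cg(A, b, maxiter=inner)
            return m + dm

        rng = np.random.default_rng(0)
        G = rng.standard_normal((40, 12))            # toy linear forward operator
        m_true = np.linspace(0.0, 1.0, 12)
        d = G @ m_true
        L = np.eye(12) - np.eye(12, k=1)             # first-difference regularizer
        m0 = np.zeros(12)
        m1 = gauss_newton_step(G, d - G @ m0, m0, 1e-3, L)
        print(np.linalg.norm(G @ m1 - d))            # residual after one step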

  4. The permuted generator hypothesis for the origin of a genetic code

    NASA Technical Reports Server (NTRS)

    Folsome, C.

    1977-01-01

    Protocells had no known means of ensuring that their randomly collected proteins would be duplicated. A possible, albeit inexact, mechanism for protein synthesis in a primitive t-RNA is presented, whereby an oligonucleotide (12 units) in a circular configuration is able to align a generator site with amino acid discriminator sites. In this way, unique anticodons could be specified for each site and replication could occur.

  5. Biometric Data Safeguarding Technologies Analysis and Best Practices

    DTIC Science & Technology

    2011-12-01

    fuzzy vault” scheme proposed by Juels and Sudan. The scheme was designed to encrypt data such that it could be unlocked by similar but inexact matches... designed transform functions. Multifactor Key Generation Multifactor key generation combines a biometric with one or more other inputs, such as a...cooperative, off-angle iris images.  Since the commercialized system is designed for images acquired from a specific, paired acquisition system

  6. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In the era of high-throughput DNA sequencing, high-performance analysis of DNA sequences is of great importance; computer-supported DNA analysis remains a time-intensive task. In this paper we explore the potential of a new in-memory database technology using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance was compared between HANA and MySQL using the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which indicates high potential in the new in-memory concepts and should encourage further development of DNA analysis procedures in the future.
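
    The distinction between the two search modes is easy to state in code: exact search is substring lookup, while inexact search tolerates a bounded number of mismatches. A naive sketch for illustration (production aligners such as BWA use FM-index-based backward search rather than this linear scan):

        def exact_matches(read, ref):
            hits, i = [], ref.find(read)
            while i != -1:
                hits.append(i)
                i = ref.find(read, i + 1)
            return hits

        def inexact_matches(read, ref, max_mismatch=2):
            # naive Hamming-distance scan over every alignment position
            k = len(read)
            return [i for i in range(len(ref) - k + 1)
                    if sum(a != b for a, b in zip(read, ref[i:i + k])) <= max_mismatch]

        ref = "ACGTACGTTAGCACGTAACG"
        print(exact_matches("ACGT", ref))        # [0, 4, 12]
        print(inexact_matches("ACGA", ref, 1))   # positions within one mismatch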

  7. Detection of imminent vein graft occlusion: what is the optimal surveillance program?

    PubMed

    Tinder, Chelsey N; Bandyk, Dennis F

    2009-12-01

    The prediction of infrainguinal vein bypass failure remains an inexact judgment. Patient demographics, technical factors, and vascular laboratory graft surveillance testing are helpful in identifying a high-risk graft cohort. The optimal surveillance program to detect the bypass at risk for imminent occlusion continues to be developed, but required elements are known and include clinical assessment for new or changed limb ischemia symptoms, measurement of ankle and/or toe systolic pressure, and duplex ultrasound imaging of the bypass graft. Duplex ultrasound assessment of bypass hemodynamics may be the most accurate method to detect imminent vein graft occlusion. The finding of low graft flow during intraoperative assessment or at a scheduled surveillance study predicts failure; if associated with an occlusive lesion, a graft revision can prolong patency. The most common abnormality producing graft failure is conduit stenosis caused by myointimal hyperplasia, and the majority can be repaired by an endovascular intervention. The frequency of testing to detect the failing bypass should be individualized to the patient, the type of arterial bypass, and prior duplex ultrasound scan findings. The focus of surveillance is on identification of the low-flow arterial bypass and timely repair of detected critical stenosis, defined by duplex velocity spectra criteria of a peak systolic velocity >300 cm/s and a peak systolic velocity ratio across the stenosis >3.5, correlating with a >70% diameter-reducing stenosis. When conducted appropriately, a graft surveillance program should result in an unexpected graft failure rate of <3% per year.

  8. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  9. A.I.-based real-time support for high performance aircraft operations

    NASA Technical Reports Server (NTRS)

    Vidal, J. J.

    1985-01-01

    Artificial intelligence (AI) based software and hardware concepts are applied to the handling of system malfunctions during flight tests. A representation of malfunction procedure logic using Boolean normal forms is presented. The representation facilitates the automation of malfunction procedures and provides easy testing of the embedded rules; it also forms a potential basis for a parallel implementation in logic hardware. The extraction of logic control rules from dynamic simulation, and their adaptive revision after partial failure, are examined using a simplified 2-dimensional aircraft model with a controller that adaptively extracts control rules for directional thrust to satisfy a navigational goal without exceeding pre-established position and velocity limits. Failure recovery (rule adjusting) is examined after partial actuator failure. While this experiment was performed with primitive aircraft and mission models, it illustrates an important paradigm and provides complexity extrapolations for the proposed extraction of expertise from simulation, as discussed. The use of relaxation and inexact reasoning in expert systems was also investigated.

  10. Exact simulation of max-stable processes.

    PubMed

    Dombry, Clément; Engelke, Sebastian; Oesting, Marco

    2016-06-01

    Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.

  11. Effects of strain and quantum confinement in optically pumped nuclear magnetic resonance in GaAs: Interpretation guided by spin-dependent band structure calculations

    DOE PAGES

    Wood, R. M.; Saha, D.; McCarthy, L. A.; ...

    2014-10-29

    A combined experimental-theoretical study of optically pumped NMR (OPNMR) has been performed on a GaAs/Al0.1Ga0.9As quantum well film with thermally induced biaxial strain. The photon energy dependence of the Ga-71 OPNMR signal was recorded at magnetic fields of 4.9 and 9.4 T at a temperature of 4.8-5.4 K. The data were compared to the nuclear spin polarization calculated from differential absorption to spin-up and spin-down states of the conduction band using a modified Pidgeon-Brown model. Reasonable agreement between theory and experiment is obtained, facilitating assignment of features in the OPNMR energy dependence to specific interband transitions. Despite the approximations made in the quantum-mechanical model and the inexact correspondence between the experimental and calculated observables, the results provide insight into how effects of strain and quantum confinement are manifested in OPNMR signals.

  12. Comprehensive Analysis of Migration Pathways (CAMP): Contaminant Migration Pathways at Confined Dredged Material Disposal Facilities

    DTIC Science & Technology

    1990-09-01

    fluctuating flow rates make such approximations relatively inexact. Accuracy of mass loading estimates can be improved by increasing the number of...Engler, Patin, and Theriot 1988; Patin and Baylot 1989). 26. Effluent flow from an upland CDF is highest when large quantities of water are being...dispersion in conjunction with lower flows during placement of mechanical dredged material. Interactions between sediment and water that occur during

  13. Zero-Sum Matrix Game with Payoffs of Dempster-Shafer Belief Structures and Its Applications on Sensors

    PubMed Central

    Deng, Xinyang; Jiang, Wen; Zhang, Jiandong

    2017-01-01

    The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, the payoffs received by players may be inexact or uncertain, which requires that the matrix game model be able to represent and deal with imprecise payoffs. To meet this requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. A decomposition method is then proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, to address the possibly computation-intensive nature of the proposed decomposition method, a Monte Carlo simulation approach is presented as an alternative solution. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is applied, for illustration, to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. PMID:28430156
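
    For crisp payoffs, the value of a zero-sum matrix game is a linear program; with belief-structure payoffs one can, for instance, solve the games induced by the lower and upper bounds of each focal interval to obtain an interval-valued game value (a simplification of the paper's decomposition, shown here only to fix ideas):

        import numpy as np
        from scipy.optimize import linprog

        def game_value(A):
            # value and optimal mixed strategy of the row player, via LP
            m, n = A.shape
            c = np.r_[np.zeros(m), -1.0]              # maximize v
            A_ub = np.c_[-A.T, np.ones(n)]            # v <= x^T A_j for all columns j
            A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # strategy sums to one
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * m + [(None, None)])
            return res.x[-1], res.x[:m]

        v, x = game_value(np.array([[3.0, -1.0], [-2.0, 4.0]]))
        print(v, x)   # value 1.0 with row strategy [0.6, 0.4]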

  14. Error-in-variables models in calibration

    NASA Astrophysics Data System (ADS)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
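
    Among the conventional frequentist techniques alluded to, orthogonal distance regression is a standard way to fit a calibration function when both stimulus and response carry measurement error. A minimal sketch with synthetic data and assumed uncertainties:

        import numpy as np
        from scipy import odr

        rng = np.random.default_rng(1)
        true_x = np.linspace(0.0, 10.0, 15)
        x_obs = true_x + rng.normal(0.0, 0.2, true_x.size)   # stimulus with error
        y_obs = 1.8 * true_x + 0.5 + rng.normal(0.0, 0.3, true_x.size)

        model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
        data = odr.RealData(x_obs, y_obs, sx=0.2, sy=0.3)    # errors on both axes
        fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
        print(fit.beta, fit.sd_beta)   # slope/intercept and their standard errors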

  15. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or «proxies»), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy responses into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and risk of overfitting, we follow a leave-one-out cross validation. The definition of a stopping criterion is also central to an automated construction; we use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and the performance of the method is assessed. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional error modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A. H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).
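
    The basic construction (run both solvers on a small learning set, regress exact responses on proxy responses, then correct proxy-only predictions) fits in a few lines. The toy responses below are invented and chosen so that a low-order polynomial map recovers the relation essentially exactly:

        import numpy as np

        rng = np.random.default_rng(3)
        exact = lambda k: 0.3 + 1.5 * np.log(k) + 0.2 * np.log(k) ** 2  # "exact" solver
        proxy = lambda k: np.log(k)                                     # cheap proxy

        k_train = rng.uniform(1.0, 4.0, 12)   # learning set: both solvers are run
        coef = np.polyfit(proxy(k_train), exact(k_train), deg=2)  # proxy -> exact map

        k_new = rng.uniform(1.0, 4.0, 1000)   # new realizations: proxy cost only
        corrected = np.polyval(coef, proxy(k_new))
        print(np.max(np.abs(corrected - exact(k_new))))   # ~1e-14 by construction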

  16. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    ...sources can create errors in digital circuits. These effects can be simulated using Simulation Program with Integrated Circuit Emphasis (SPICE) or...compute summary statistics. 4.1 Circuit Simulations Noisy analog circuits can be simulated in SPICE or Cadence SpectreTM software via noisy voltage

  17. Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms

    DTIC Science & Technology

    2010-01-01

    The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]. There are several studies of this type of...vkρ(vi−1). It can be shown that the above BB gradient projection direction is always a descent direction. The R-linear convergence of the BB method has...KKT solution) of the inexact pricing algorithm for MISO interference channel. The latter is interesting since the convergence of the original pricing

  18. Fuzziness In Approximate And Common-Sense Reasoning In Knowledge-Based Robotics Systems

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1987-10-01

    Fuzzy functions, a major key to inexact reasoning, are described as they are applied to the fuzzification of robot co-ordinate systems. Linguistic-variables, a means of labelling ranges in fuzzy sets, are used as computationally pragmatic means of representing spatialization metaphors, themselves an extraordinarily rich basis for understanding concepts in orientational terms. Complex plans may be abstracted and simplified in a system which promotes conceptual planning by means of the orientational representation.
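
    As a concrete illustration of fuzzification and linguistic variables over a robot coordinate, a triangular membership function assigns each crisp position partial membership in several linguistic terms at once (the terms and ranges are invented for the sketch):

        def tri(x, a, b, c):
            # triangular membership function on a fuzzified coordinate axis
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        # hypothetical linguistic variable "lateral position"
        terms = {"left": (-2.0, -1.0, 0.0),
                 "center": (-0.5, 0.0, 0.5),
                 "right": (0.0, 1.0, 2.0)}
        x = 0.3
        print({name: round(tri(x, *abc), 2) for name, abc in terms.items()})
        # -> partial memberships in "center" (0.4) and "right" (0.3) at once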

  19. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    PubMed

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to foresee the three-dimensional arrangement of a protein's atoms from its sequence. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends in protein folding simulations from both perspectives, hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of soft computing technique.

  20. An approximate block Newton method for coupled iterations of nonlinear solvers: Theory and conjugate heat transfer applications

    NASA Astrophysics Data System (ADS)

    Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.

    2009-12-01

    A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
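
    A scalar toy problem shows the behavior the paper describes. Two hypothetical black-box modules are coupled through interface values; the fixed-point (block Gauss-Seidel) sweep falls into a two-cycle, while a Newton iteration on the interface residual, built from black-box evaluations only, converges:

        # toy stand-ins for two single-physics solvers coupled through scalar
        # interface values: module 1 returns x given y, module 2 returns y given x
        g1 = lambda y: 1.0 + 0.1 * y ** 3
        g2 = lambda x: -2.0 * x

        def block_gauss_seidel(x, iters=30):
            for _ in range(iters):
                x = g1(g2(x))     # oscillates: the coupling is too strong
            return x

        def block_newton(x, tol=1e-12, eps=1e-7, iters=30):
            # Newton on the interface residual h(x) = g1(g2(x)) - x, with a
            # finite-difference derivative from black-box evaluations only
            h = lambda z: g1(g2(z)) - z
            for _ in range(iters):
                if abs(h(x)) < tol:
                    break
                x -= h(x) / ((h(x + eps) - h(x)) / eps)
            return x

        print(block_gauss_seidel(0.0))   # stuck in a two-cycle, not converged
        print(block_newton(0.0))         # ~0.712, the coupled solution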

  1. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  2. Inverse methods-based estimation of plate coupling in a plate motion model governed by mantle flow

    NASA Astrophysics Data System (ADS)

    Ratnaswamy, V.; Stadler, G.; Gurnis, M.

    2013-12-01

    Plate motion is primarily controlled by buoyancy (slab pull), which arises at convergent plate margins where oceanic plates undergo deformation near the seismogenic zone. Yielding within subducting plates, lateral variations in viscosity, and the strength of seismic coupling between plate margins likely exert an important control on plate motion. Here, we wish to infer the inter-plate coupling for different subduction zones, and we formulate the inference as a PDE-constrained optimization problem in which the cost functional is the misfit in plate velocities, constrained by the nonlinear Stokes equations. The inverse models have well resolved slabs, plates, and plate margins, in addition to a power-law rheology with yielding in the upper mantle. A Newton method is used to solve the nonlinear Stokes equations with viscosity bounds, and we infer plate boundary strength using an inexact Gauss-Newton method with a backtracking line search. Each inverse model is applied to two simple 2-D scenarios (each with three subduction zones), one with back-arc spreading and one without. For each case we examine the sensitivity of the inversion to the amount of surface velocity data used: 1) the full surface velocity data and 2) surface velocity data simplified to a single scalar average (the 2-D equivalent of an Euler pole) for each plate. We can recover plate boundary strength in each case, even in the presence of highly nonlinear flow with extreme variations in viscosity. Additionally, we ascribe an uncertainty to each plate's velocity and perform an uncertainty quantification (UQ) through the Hessian of the misfit in plate velocities. We find that as plate boundaries become strongly coupled, the uncertainty in the inferred plate boundary strength decreases. For very weak, uncoupled subduction zones, the uncertainty of the inferred plate margin strength increases, since there is little sensitivity between plate margin strength and plate velocity. This result is significant because it implies we can infer which plate boundaries are more coupled (seismically) from a realistic dynamic model of plates and mantle flow.
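
    The inexact Gauss-Newton strategy named above can be sketched compactly: solve the Gauss-Newton normal equations only loosely with a Krylov method, then backtrack along the step. The Python sketch below applies that recipe to a made-up exponential-fitting problem; the forcing rule, tolerances, and test problem are illustrative assumptions, not the mantle-flow implementation described in the abstract.

    import numpy as np

    # Toy nonlinear least-squares problem: fit y = a * exp(-b * t) to data.
    t = np.linspace(0.0, 4.0, 40)
    rng = np.random.default_rng(0)
    y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)

    def residual(m):
        a, b = m
        return a * np.exp(-b * t) - y

    def jacobian(m):
        a, b = m
        e = np.exp(-b * t)
        return np.column_stack([e, -a * t * e])

    def misfit(m):
        r = residual(m)
        return 0.5 * r @ r

    def cg(A_mul, rhs, tol, maxit):
        """Inexact solve of A p = rhs by (truncated) conjugate gradients."""
        p = np.zeros_like(rhs)
        r = rhs.copy()
        d = r.copy()
        rr = r @ r
        for _ in range(maxit):
            Ad = A_mul(d)
            alpha = rr / (d @ Ad)
            p += alpha * d
            r -= alpha * Ad
            rr_new = r @ r
            if np.sqrt(rr_new) < tol * np.linalg.norm(rhs):
                break
            d = r + (rr_new / rr) * d
            rr = rr_new
        return p

    m = np.array([1.0, 0.5])              # initial guess
    for k in range(20):
        r = residual(m)
        J = jacobian(m)
        g = J.T @ r                       # gradient of the misfit
        if np.linalg.norm(g) < 1e-8:
            break
        # Inexact Gauss-Newton step: solve (J^T J) p = -g only loosely,
        # with a forcing tolerance that tightens as the gradient shrinks.
        eta = min(0.5, np.sqrt(np.linalg.norm(g)))
        step = cg(lambda v: J.T @ (J @ v), -g, tol=eta, maxit=50)
        # Backtracking (Armijo) line search; g @ step < 0 for a CG step.
        alpha, f0 = 1.0, misfit(m)
        while misfit(m + alpha * step) > f0 + 1e-4 * alpha * (g @ step):
            alpha *= 0.5
        m = m + alpha * step

    print("recovered (a, b):", m)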

  3. Regional dust storm modeling for health services: The case of valley fever

    NASA Astrophysics Data System (ADS)

    Sprigg, William A.; Nickovic, Slobodan; Galgiani, John N.; Pejanovic, Goran; Petkovic, Slavko; Vujadinovic, Mirjam; Vukovic, Ana; Dacic, Milan; DiBiase, Scott; Prasad, Anup; El-Askary, Hesham

    2014-09-01

    On 5 July 2011, a massive dust storm struck Phoenix, Arizona (USA), raising concerns for increased cases of valley fever (coccidioidomycosis, or, cocci). A quasi-operational experimental airborne dust forecast system predicted the event and provides model output for continuing analysis in collaboration with public health and air quality communities. An objective of this collaboration was to see if a signal in cases of valley fever in the region could be detected and traced to the storm - an American haboob. To better understand the atmospheric life cycle of cocci spores, the DREAM dust model (also herein, NMME-DREAM) was modified to simulate spore emission, transport and deposition. Inexact knowledge of where cocci-causing fungus grows, the low resolution of cocci surveillance and an overall active period for significant dust events complicate analysis of the effect of the 5 July 2011 storm. In the larger context of monthly to annual disease surveillance, valley fever statistics, when compared against PM10 observation networks and modeled airborne dust concentrations, may reveal a likely cause and effect. Details provided by models and satellites fill time and space voids in conventional approaches to air quality and disease surveillance, leading to land-atmosphere modeling and remote sensing that clearly mark a path to advance valley fever epidemiology, surveillance and risk avoidance.

  4. Privacy Enhancements for Inexact Biometric Templates

    NASA Astrophysics Data System (ADS)

    Ratha, Nalini; Chikkerur, Sharat; Connell, Jonathan; Bolle, Ruud

    Traditional authentication schemes utilize tokens or depend on some secret knowledge possessed by the user for verifying his or her identity. Although these techniques are widely used, they have several limitations. Both token- and knowledge-based approaches cannot differentiate between an authorized user and an impersonator having access to the tokens or passwords. Biometrics-based authentication schemes overcome these limitations while offering usability advantages in the area of password management. However, despite its obvious advantages, the use of biometrics raises several security and privacy concerns.

  5. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by experts. The problem formulation is presented, and two solution strategies are given: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  6. Nd:YAG-laser-based time-domain reflectometry measurements of the intrinsic reflection signature from PMMA fiber splices

    NASA Astrophysics Data System (ADS)

    Lawson, Christopher M.; Michael, Robert R., Jr.; Dressel, Earl M.; Harmony, David W.

    1991-12-01

    Optical time domain reflectometry (OTDR) measurements have been performed on polished polymethylmethacrylate (PMMA) plastic fiber splices. After the dominant splice reflection sources due to surface roughness, inexact index matching, and fiber core misalignment were eliminated, an intrinsic OTDR signature 3 - 8 dB above the Rayleigh backscatter floor remained with all tested fibers. This minimum splice reflectivity exhibits characteristics that are consistent with sub-surface polymer damage and can be used for detection of PMMA fiber splices.

  7. Bigger than a breadbox; lighter than a heavy heart.

    PubMed

    Hill, Robert V

    2009-01-01

    Inexact measurements can have devastating effects in sciences where precision is of paramount importance. In contrast, morphological sciences rely heavily on description, comparison, and estimation to make meaningful inferences about the structure of humans and other animals. A review of the 1918 edition of Gray's Anatomy shows that the tendency to approximate was as marked nearly a century ago as it is today. Occasionally, objects of comparison may themselves become obsolete, necessitating changes in descriptive terminology. Copyright 2009 American Association of Anatomists.

  8. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
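
    As a concrete illustration of scale-dependent dimension measurement, the box-counting estimator below computes a fractal dimension for a binary image by regressing occupied-box counts against box size on log-log axes. This is a generic textbook estimator, not the ICAMS implementation; the box sizes and test image are arbitrary choices.

    import numpy as np

    def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
        counts = []
        for s in sizes:
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append((blocks.sum(axis=(1, 3)) > 0).sum())  # occupied boxes
        # Slope of log(count) vs log(1/size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Example: a filled square should have dimension close to 2.
    img = np.zeros((128, 128), dtype=bool)
    img[32:96, 32:96] = True
    print("estimated dimension:", box_count_dimension(img, (2, 4, 8, 16)))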

  9. Civil Engineering Applications of Ground Penetrating Radar Recent Advances @ the ELEDIA Research Center

    NASA Astrophysics Data System (ADS)

    Salucci, Marco; Tenuti, Lorenza; Nardin, Cristina; Oliveri, Giacomo; Viani, Federico; Rocca, Paolo; Massa, Andrea

    2014-05-01

    The application of non-destructive testing and evaluation (NDT/NDE) methodologies in civil engineering has attracted growing interest in recent years because of its potential impact in several different scenarios. As a consequence, Ground Penetrating Radar (GPR) technologies have been widely adopted as an instrument for the inspection of the structural stability of buildings and for the detection of cracks and voids. In this framework, the development and validation of GPR algorithms and methodologies represents one of the most active research areas within the ELEDIA Research Center of the University of Trento. In particular, great efforts have been devoted to the development of inversion techniques based on the integration of deterministic and stochastic search algorithms with multi-focusing strategies. These approaches proved to be effective in mitigating the effects of both the nonlinearity and the ill-posedness of microwave imaging problems, which are the well-known issues arising in GPR inverse scattering formulations. In particular, a regularized multi-resolution approach based on the Inexact Newton Method (INM) has recently been applied to subsurface prospecting, showing a remarkable advantage over a single-resolution implementation [1]. Moreover, the use of multi-frequency or frequency-hopping strategies to exploit the information coming from GPR data collected in the time domain and transformed into its frequency components has been proposed as well. In this framework, the effectiveness of the multi-resolution multi-frequency techniques has been proven on synthetic data generated with numerical models such as GprMax [2]. The application of inversion algorithms based on Bayesian Compressive Sampling (BCS) to GPR is also currently under investigation, in order to exploit their capability to provide satisfactory reconstructions in the presence of single and multiple sparse scatterers [3][4]. Furthermore, multi-scaling approaches exploiting level-set-based optimization have been developed for the qualitative reconstruction of multiple and disconnected homogeneous scatterers [5]. Finally, the real-time detection and classification of subsurface scatterers has been investigated by means of learning-by-examples (LBE) techniques, such as Support Vector Machines (SVM) [6]. Acknowledgment - This work was partially supported by COST Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar'. References [1] M. Salucci, D. Sartori, N. Anselmi, A. Randazzo, G. Oliveri, and A. Massa, 'Imaging Buried Objects within the Second-Order Born Approximation through a Multiresolution Regularized Inexact-Newton Method', 2013 International Symposium on Electromagnetic Theory (EMTS), (Hiroshima, Japan), May 20-24, 2013 (invited). [2] A. Giannopoulos, 'Modelling ground penetrating radar by GprMax', Construct. Build. Mater., vol. 19, no. 10, pp. 755-762, 2005. [3] L. Poli, G. Oliveri, P. Rocca, and A. Massa, "Bayesian compressive sensing approaches for the reconstruction of two-dimensional sparse scatterers under TE illumination," IEEE Trans. Geosci. Remote Sensing, vol. 51, no. 5, pp. 2920-2936, May 2013. [4] L. Poli, G. Oliveri, and A. Massa, "Imaging sparse metallic cylinders through a Local Shape Function Bayesian Compressive Sensing approach," Journal of the Optical Society of America A, vol. 30, no. 6, pp. 1261-1272, 2013. [5] M. Benedetti, D. Lesselier, M. Lambert, and A. Massa, "Multiple shapes reconstruction by means of multi-region level sets," IEEE Trans. Geosci.
Remote Sensing, vol. 48, no. 5, pp. 2330-2342, May 2010. [6] L. Lizzi, F. Viani, P. Rocca, G. Oliveri, M. Benedetti and A. Massa, "Three-dimensional real-time localization of subsurface objects - From theory to experimental validation," 2009 IEEE International Geoscience and Remote Sensing Symposium, vol. 2, pp. II-121-II-124, 12-17 July 2009.

  10. Inexact Socio-Dynamic Modeling of Groundwater Contamination Management

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Zhang, X.

    2015-12-01

    Groundwater contamination may alter the behavior of the public, for example through adaptation to the contamination event. Conversely, social behaviors may affect groundwater contamination and the associated risk levels, for example by changing the amount of contaminated groundwater that is ingested. Decisions should therefore consider not only the contamination itself, but also social attitudes toward such contamination events. These decisions are inherently associated with uncertainty, such as the subjective judgement of decision makers and their implicit knowledge when choosing whether to continue supplying water, or to reduce the amount supplied, under a contamination scenario. A socio-dynamic model based on the theories of information-gap and fuzzy sets is being developed to represent the social behaviors arising from groundwater contamination; it is applied to a synthetic problem designed around typical groundwater remediation sites, where the effects of social behaviors on decisions are investigated and analyzed. Different uncertainties, including deep uncertainty and vague/ambiguous uncertainty, are addressed effectively and in an integrated way. The results can provide scientifically defensible decision support for groundwater management in the face of contamination.

  11. Characterization of the reference wave in a compact digital holographic camera.

    PubMed

    Park, I S; Middleton, R J C; Coggrave, C R; Ruiz, P D; Coupland, J M

    2018-01-01

    A hologram is a recording of the interference between an unknown object wave and a coherent reference wave. Providing the object and reference waves are sufficiently separated in some region of space and the reference beam is known, a high-fidelity reconstruction of the object wave is possible. In traditional optical holography, high-quality reconstruction is achieved by careful reillumination of the holographic plate with the exact same reference wave that was used at the recording stage. To reconstruct high-quality digital holograms the exact parameters of the reference wave must be known mathematically. This paper discusses a technique that obtains the mathematical parameters that characterize a strongly divergent reference wave that originates from a fiber source in a new compact digital holographic camera. This is a lensless design that is similar in principle to a Fourier hologram, but because of the large numerical aperture, the usual paraxial approximations cannot be applied and the Fourier relationship is inexact. To characterize the reference wave, recordings of quasi-planar object waves are made at various angles of incidence using a Dammann grating. An optimization process is then used to find the reference wave that reconstructs a stigmatic image of the object wave regardless of the angle of incidence.

  12. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
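
    To make the solver family concrete, the sketch below implements the inexact augmented Lagrangian method for the classic low-rank plus sparse decomposition min ||L||_* + λ||S||_1 subject to D = L + S. It illustrates the generic IALM machinery named in the abstract, not the paper's RMTL formulation (which couples trace-norm and l1,2-norm terms across tasks); the parameter choices follow common robust-PCA defaults and are assumptions.

    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: prox of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def shrink(X, tau):
        """Soft thresholding: prox of tau * l1 norm."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def ialm_rpca(D, lam=None, tol=1e-7, max_iter=500):
        m, n = D.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual init
        mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
        L = np.zeros_like(D); S = np.zeros_like(D)
        for _ in range(max_iter):
            L = svt(D - S + Y / mu, 1.0 / mu)        # low-rank update
            S = shrink(D - L + Y / mu, lam / mu)     # sparse update
            R = D - L - S                            # constraint residual
            Y += mu * R                              # dual ascent
            mu = min(rho * mu, 1e7)                  # inexact penalty growth
            if np.linalg.norm(R, 'fro') < tol * np.linalg.norm(D, 'fro'):
                break
        return L, S

    # Tiny usage example: recover a rank-2 matrix corrupted by sparse spikes.
    rng = np.random.default_rng(1)
    L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
    S0 = np.zeros((60, 60))
    idx = rng.random((60, 60)) < 0.05
    S0[idx] = 10 * rng.standard_normal(idx.sum())
    L, S = ialm_rpca(L0 + S0)
    print("rank of recovered L:", np.linalg.matrix_rank(L, tol=1e-6))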

  13. Enhancement of PET Images

    NASA Astrophysics Data System (ADS)

    Davis, Paul B.; Abidi, Mongi A.

    1989-05-01

    PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, present PET technology does not provide the necessary image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise, present in the form of streaks, caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal here is to enhance these dominant features to allow for automatic computer interpretation and classification of PET images by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown both analytically and through computer simulation that they exhibit consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.

  14. Molecular Simulation of the Free Energy for the Accurate Determination of Phase Transition Properties of Molecular Solids

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Lisal, Martin; Brennan, John

    2015-06-01

    Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of the difficulty or inaccuracy of simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long runs to seek out what frequently results in an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations totaling 30-50 ns. Approved for public release. Distribution is unlimited.

  15. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Ripple-Carry RCA Ripple-Carry Adder RF Radio Frequency RMS Root-Mean-Square SEU Single Event Upset SIPI Signal and Image Processing Institute SNR...correctness, where 0.5 < p < 1, and a probability (1−p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk...utilized in the Apollo Guidance Computer is the three input NOR Gate. . . At the time that the decision was made to use integrated circuits, the

  17. Proceedings of the Image Understanding Workshop (18th) Held in Cambridge, Massachusetts on 6-8 April 1988. Volume 2

    DTIC Science & Technology

    1988-04-01

    solution to a specific problem, e.g., solving the visual obstacle avoidance... information. There is thus a biological motivation for investigating this particular, practically motivated aspect of the general problem. ...the image, known as the optical flow, does not necessarily correspond to the 2-D motion... "inexact" vision [Thom86]. The obvious motivation stems from the fact that an obstacle in relative motion... a = X tan α cos α, b = −Z tan β sin α (1)

  18. Non-negative infrared patch-image model: Robust target-background separation via partial sum minimization of singular values

    NASA Astrophysics Data System (ADS)

    Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun

    2017-03-01

    To further enhance small targets and simultaneously suppress heavy clutter, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which provides a more accurate background estimation and eliminates almost all of the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which not only further removes undesirable components but also accelerates the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments demonstrate that the proposed model offers a significant improvement over the other nine competitive methods in terms of both clutter-suppression performance and convergence rate.
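
    The two modifications named above can be sketched as proximal operators, under assumed notation: (i) the proximal step for the partial sum of singular values thresholds only the singular values beyond the N largest, and (ii) the sparse (target) update adds a projection onto the non-negative orthant. Parameter names are illustrative, not the paper's.

    import numpy as np

    def partial_svt(X, tau, N):
        """Keep the N leading singular values intact; soft-threshold the rest."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_new = s.copy()
        s_new[N:] = np.maximum(s[N:] - tau, 0.0)
        return U @ np.diag(s_new) @ Vt

    def nonneg_shrink(X, tau):
        """Soft-threshold, then project onto the non-negative orthant."""
        return np.maximum(X - tau, 0.0)

    # Within an IALM loop (cf. the sketch after record 12 above), these
    # replace the plain SVT and shrinkage steps:
    #   B = partial_svt(D - T + Y/mu, 1/mu, N)   # background patch-image
    #   T = nonneg_shrink(D - B + Y/mu, lam/mu)  # non-negative target image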

  19. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem studied here, the truth model uses gravity with spherical, J2 and J4 terms plus a standard exponential atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
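
    One common reading of the construction above is a weighted least-squares covariance rescaled by the average weighted residual, so that unmodeled error sources inflate the covariance. The sketch below contrasts the two matrices on an invented linear measurement model with a deliberate unmodeled bias; this is an illustrative interpretation, not the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 40, 3
    A = rng.standard_normal((m, n))            # measurement partials
    x_true = np.array([1.0, -2.0, 0.5])
    sigma = 0.1
    # Observations carry both the assumed noise and an unmodeled bias,
    # mimicking "errors from other sources" that the theory ignores.
    y = A @ x_true + sigma * rng.standard_normal(m) + 0.05

    W = np.eye(m) / sigma**2                   # weights from assumed noise only
    P_theory = np.linalg.inv(A.T @ W @ A)      # traditional covariance
    x_hat = P_theory @ A.T @ W @ y             # weighted least-squares estimate

    r = y - A @ x_hat                          # measurement residuals
    J_avg = (r @ W @ r) / m                    # average weighted residual index
    P_emp = J_avg * P_theory                   # empirical covariance (scaled)

    print("theoretical sigmas:", np.sqrt(np.diag(P_theory)))
    print("empirical  sigmas:", np.sqrt(np.diag(P_emp)))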

  20. Improved pupil dilation with medication-soaked pledget sponges.

    PubMed

    Weddle, Celeste; Thomas, Nikki; Dienemann, Jacqueline

    2013-08-01

    Use of multiple preoperative drops for pupil dilation has been shown to be inexact, to delay surgery, and to cause dissatisfaction among perioperative personnel. This article reports on an evidence-based, quality improvement project to locate and appraise research on improved effectiveness and efficiency of mydriasis (ie, pupillary dilation), and the subsequent implementation of a pledget-sponge procedure for pupil dilation at one ambulatory surgery center. Project leaders used an evidence-based practice model to assess the problem, research options for improvement, define goals, and implement a pilot project to test the new dilation technique. Outcomes from the pilot project showed a reduced number of delays caused by poor pupil dilation and a decrease in procedure turnover time. The project team solicited informal feedback from preoperative nurses, which reflected increased satisfaction in preparing patients for cataract procedures. After facility administrators and surgeons accepted the procedure change, it was adopted for preoperative use for all patients undergoing cataract surgery at the ambulatory surgery center. Copyright © 2013 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  1. Using rule-based natural language processing to improve disease normalization in biomedical text.

    PubMed

    Kang, Ning; Singh, Bharat; Afzal, Zubair; van Mulligen, Erik M; Kors, Jan A

    2013-01-01

    In order for computers to extract useful information from unstructured text, a concept normalization system is needed to link relevant concepts in a text to sources that contain further information about the concept. Popular concept normalization tools in the biomedical field are dictionary-based. In this study we investigate the usefulness of natural language processing (NLP) as an adjunct to dictionary-based concept normalization. We compared the performance of two biomedical concept normalization systems, MetaMap and Peregrine, on the Arizona Disease Corpus, with and without the use of a rule-based NLP module. Performance was assessed for exact and inexact boundary matching of the system annotations with those of the gold standard and for concept identifier matching. Without the NLP module, MetaMap and Peregrine attained F-scores of 61.0% and 63.9%, respectively, for exact boundary matching, and 55.1% and 56.9% for concept identifier matching. With the aid of the NLP module, the F-scores of MetaMap and Peregrine improved to 73.3% and 78.0% for boundary matching, and to 66.2% and 69.8% for concept identifier matching. For inexact boundary matching, performances further increased to 85.5% and 85.4%, and to 73.6% and 73.3% for concept identifier matching. We have shown the added value of NLP for the recognition and normalization of diseases with MetaMap and Peregrine. The NLP module is general and can be applied in combination with any concept normalization system. Whether its use for concept types other than disease is equally advantageous remains to be investigated.

  2. Energy awareness for supercapacitors using Kalman filter state-of-charge tracking

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga

    2015-11-01

    Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
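
    A minimal sketch of the tracking idea, assuming a one-state capacitive model: the filter below propagates stored charge with a known load current and corrects it from noisy terminal-voltage readings. The paper's filter tracks the internal states of the three-branch equivalent circuit; the capacitance, noise levels, and load profile here are invented for illustration.

    import numpy as np

    C = 10.0            # farads (assumed nominal capacitance)
    dt = 1.0            # s
    q_true = 25.0       # coulombs, true initial charge
    i_load = -0.05      # A, constant discharge current (assumed known input)

    q_est, P = 20.0, 4.0     # initial state estimate and its variance
    Q, R = 1e-4, 1e-3        # process/measurement noise variances (assumed)

    rng = np.random.default_rng(2)
    for k in range(200):
        # True system: charge drains into the load; voltage measured noisily.
        q_true += i_load * dt
        v_meas = q_true / C + np.sqrt(R) * rng.standard_normal()

        # Predict: propagate charge with the known load current.
        q_est += i_load * dt
        P += Q

        # Update: H = d(v)/d(q) = 1/C for the capacitive measurement model.
        H = 1.0 / C
        K = P * H / (H * P * H + R)         # Kalman gain
        q_est += K * (v_meas - H * q_est)
        P *= (1.0 - K * H)

    print(f"true charge {q_true:.2f} C, estimate {q_est:.2f} C, var {P:.2e}")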

  3. An Analytic Approach to Modeling Land-Atmosphere Interaction: 1. Construct and Equilibrium Behavior

    NASA Astrophysics Data System (ADS)

    Brubaker, Kaye L.; Entekhabi, Dara

    1995-03-01

    A four-variable land-atmosphere model is developed to investigate the coupled exchanges of water and energy between the land surface and atmosphere and the role of these exchanges in the statistical behavior of continental climates. The land-atmosphere system is substantially simplified and formulated as a set of ordinary differential equations that, with the addition of random noise, are suitable for analysis in the form of the multivariate Itô equation. The model treats the soil layer and the near-surface atmosphere as reservoirs with storage capacities for heat and water. The transfers between these reservoirs are regulated by four states: soil saturation, soil temperature, air specific humidity, and air potential temperature. The atmospheric reservoir is treated as a turbulently mixed boundary layer of fixed depth. Heat and moisture advection, precipitation, and layer-top air entrainment are parameterized. The system is forced externally by solar radiation and the lateral advection of air and water mass. The remaining energy and water mass exchanges are expressed in terms of the state variables. The model development and equilibrium solutions are presented. Although comparisons between observed data and steady state model results are inexact, the model appears to do a reasonable job of partitioning net radiation into sensible and latent heat flux in appropriate proportions for bare-soil midlatitude summer conditions. Subsequent work will introduce randomness into the forcing terms to investigate the effect of water-energy coupling and land-atmosphere interaction on variability and persistence in the climatic system.

  4. Bitwise efficiency in chaotic models

    PubMed Central

    Düben, Peter; Palmer, Tim

    2017-01-01

    Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz’s prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit ‘double’ floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model. PMID:28989303
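
    The flavour of the experiment can be reproduced in a few lines: integrate Lorenz 1963 twice from the same state, once in 64-bit and once in 16-bit floating point, and watch the trajectories separate. This toy uses a forward-Euler step and says nothing about the information metric itself; the step size and run length are arbitrary.

    import numpy as np

    def l63_step(x, dt, dtype):
        sigma, rho, beta = dtype(10.0), dtype(28.0), dtype(8.0 / 3.0)
        dx = sigma * (x[1] - x[0])
        dy = x[0] * (rho - x[2]) - x[1]
        dz = x[0] * x[1] - beta * x[2]
        return (x + dt * np.array([dx, dy, dz], dtype=dtype)).astype(dtype)

    dt64, dt16 = np.float64(0.01), np.float16(0.01)
    x64 = np.array([1.0, 1.0, 1.0], dtype=np.float64)
    x16 = x64.astype(np.float16)
    for n in range(1, 1001):
        x64 = l63_step(x64, dt64, np.float64)
        x16 = l63_step(x16, dt16, np.float16)
        if n % 250 == 0:
            err = np.linalg.norm(x64 - x16.astype(np.float64))
            print(f"step {n:4d}: |x64 - x16| = {err:.3f}")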

  5. Bitwise efficiency in chaotic models

    NASA Astrophysics Data System (ADS)

    Jeffress, Stephen; Düben, Peter; Palmer, Tim

    2017-09-01

    Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz's prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit `double' floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model.

  6. P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems

    NASA Technical Reports Server (NTRS)

    Kang, Kab S.

    2002-01-01

    The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to multimaterial complex geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P1 nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning.

  7. Unraveling Diagnostic Uncertainty Surrounding Lyme Disease in Children with Neuropsychiatric Illness.

    PubMed

    Koster, Michael P; Garro, Aris

    2018-01-01

    Lyme disease is endemic in parts of the United States, including New England, the Atlantic seaboard, and Great Lakes region. The presentation has various manifestations, many of which can mimic psychiatric diseases in children. Distinguishing manifestations of Lyme disease from those of psychiatric illnesses is complicated by inexact diagnostic tests and misuse of these tests when they are not clinically indicated. This article aims to describe manifestations of Lyme disease in children with an emphasis on Lyme neuroborreliosis. Clinical scenarios will be presented and discussed. Finally, recommendations for clinical psychiatrists who encounter children with possible Lyme disease are presented. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Ballistics, Navigation and Motion Control of the S/C on Stages of the Phobos Surface Approaching and Landing

    NASA Astrophysics Data System (ADS)

    Akim, E. L.; Ruzskiy, E. G.; Shishov, V. A.; Stepaniants, V. A.; Tuchin, A. G.

    The main purpose of the Federal space program «Phobos-Grunt» project is to deliver samples of the soil of Mars's natural satellite Phobos to Earth. The mission is planned to launch in the 2009 window and to conclude in 2011. This work is devoted to problems connected with the development of the Phobos landing scheme. The major factors on which the landing strategy is based are the tasks of orbital mechanics, the restrictions imposed by the spacecraft (SC) onboard systems, and inexact knowledge of Phobos's kinematic parameters, hypsometry, and gravity field.

  9. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
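
    The unconditional machinery can be sketched as follows: treat the probability that a pair is discordant as the nuisance parameter, compute the exact tail probability of McNemar's statistic under the null for each nuisance value, and take the worst case. The Python below implements this plain maximized ("M") p-value; Lloyd's E+M refinement first converts the statistic into an estimated p-value ("E") before maximizing, which is omitted here. The grid resolution and example counts are arbitrary.

    import numpy as np
    from math import comb

    def mcnemar_stat(b, d):
        # z-statistic for H1: p10 > p01; zero when no pairs are discordant.
        return 0.0 if d == 0 else (2 * b - d) / np.sqrt(d)

    def m_pvalue(b_obs, c_obs, n, grid=200):
        t_obs = mcnemar_stat(b_obs, b_obs + c_obs)
        worst = 0.0
        # Nuisance: probability that a pair is discordant in either direction.
        for pi in np.linspace(1e-6, 0.5 - 1e-6, grid):
            total = 0.0
            for d in range(n + 1):          # number of discordant pairs
                pd = comb(n, d) * (2 * pi) ** d * (1 - 2 * pi) ** (n - d)
                # Given d discordant pairs, b ~ Binomial(d, 1/2) under H0.
                tail = sum(comb(d, b) * 0.5 ** d
                           for b in range(d + 1)
                           if mcnemar_stat(b, d) >= t_obs)
                total += pd * tail
            worst = max(worst, total)
        return worst

    # Example: 50 pairs, 12 discordant favouring treatment, 4 against.
    print("M p-value:", m_pvalue(12, 4, 50))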

  10. Intelligent fault isolation and diagnosis for communication satellite systems

    NASA Technical Reports Server (NTRS)

    Tallo, Donald P.; Durkin, John; Petrik, Edward J.

    1992-01-01

    Discussed here is a prototype diagnosis expert system to provide the Advanced Communication Technology Satellite (ACTS) System with autonomous diagnosis capability. The system, the Fault Isolation and Diagnosis EXpert (FIDEX) system, is a frame-based system that uses hierarchical structures to represent such items as the satellite's subsystems, components, sensors, and fault states. This overall frame architecture integrates the hierarchical structures into a lattice that provides a flexible representation scheme and facilitates system maintenance. FIDEX uses an inexact reasoning technique based on the incrementally acquired evidence approach developed by Shortliffe. The system is designed with a primitive learning ability through which it maintains a record of past diagnosis studies.
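
    The incremental evidence calculus attributed to Shortliffe can be illustrated with its core combination rule: each new piece of positive evidence closes a fraction of the remaining distance to certainty. A tiny sketch, assuming positive certainty factors only (the full calculus also handles negative and conflicting evidence, and nothing here is FIDEX's actual rule base):

    # Shortliffe-style incremental certainty-factor combination.
    def combine_cf(cf1, cf2):
        """Combine two certainty factors in [0, 1] for the same hypothesis."""
        return cf1 + cf2 * (1.0 - cf1)

    evidence = [0.6, 0.5, 0.3]   # CFs contributed by successive diagnostic rules
    cf = 0.0
    for e in evidence:
        cf = combine_cf(cf, e)
    print(f"accumulated certainty: {cf:.3f}")   # 0.8 after two rules, 0.86 after three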

  11. Exclusive ρ0 electroproduction on the proton at CLAS

    NASA Astrophysics Data System (ADS)

    Morrow, S. A.; Guidal, M.; Garçon, M.; Laget, J. M.; Smith, E. S.; Adams, G.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anghinolfi, M.; Asryan, G.; Audit, G.; Avakian, H.; Bagdasaryan, H.; Baillie, N.; Ball, J. P.; Baltzell, N. A.; Barrow, S.; Battaglieri, M.; Bedlinskiy, I.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Berman, B. L.; Biselli, A. S.; Blaszczyk, L.; Bonner, B. E.; Bookwalter, C.; Bouchigny, S.; Boiarinov, S.; Bradford, R.; Branford, D.; Briscoe, W. J.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Butuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Casey, L.; Cazes, A.; Chen, S.; Cheng, L.; Cole, P. L.; Collins, P.; Coltharp, P.; Cords, D.; Corvisiero, P.; Crabb, D.; Crannell, H.; Crede, V.; Cummings, J. P.; Dale, D.; Dashyan, N.; de Masi, R.; de Vita, R.; de Sanctis, E.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Deur, A.; Dhamija, S.; Dharmawardane, K. V.; Dhuga, K. S.; Dickson, R.; Didelez, J.-P.; Djalali, C.; Dodge, G. E.; Doughty, D.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Egiyan, K. S.; El Fassi, L.; Elouadrhiri, L.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Fersch, R.; Feuerbach, R. J.; Forest, T. A.; Fradi, A.; Gavalian, G.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Gohn, W.; Gordon, C. I. O.; Gothe, R. W.; Graham, L.; Griffioen, K. A.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Hardie, J.; Hassall, N.; Heddle, D.; Hersman, F. W.; Hicks, K.; Hleiqawi, I.; Holtrop, M.; Hourany, E.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Ito, M. M.; Jenkins, D.; Jo, H. S.; Johnstone, J. R.; Joo, K.; Juengst, H. G.; Kalantarians, N.; Keller, D.; Kellie, J. D.; Khandaker, M.; Khetarpal, P.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A. V.; Kossov, M.; Kramer, L. H.; Kubarovsky, V.; Kuhn, J.; Kuhn, S. E.; Kuleshov, S. V.; Kuznetsov, V.; Lachniet, J.; Langheinrich, J.; Lawrence, D.; Li, Ji; Livingston, K.; Lu, H. Y.; MacCormick, M.; Marchand, C.; Markov, N.; Mattione, P.; McAleer, S.; McCracken, M.; McKinnon, B.; McNabb, J. W. C.; Mecking, B. A.; Mehrabyan, S.; Melone, J. J.; Mestayer, M. D.; Meyer, C. A.; Mibe, T.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morand, L.; Moreno, B.; Moriya, K.; Moteabbed, M.; Mueller, J.; Munevar, E.; Mutchler, G. S.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niroula, M. R.; Niyazov, R. A.; Nozar, M.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Park, S.; Pasyuk, E.; Paterson, C.; Anefalos Pereira, S.; Philips, S. A.; Pierce, J.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Popa, I.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Procureur, S.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Rosner, G.; Rossi, P.; Rubin, P. D.; Sabatié, F.; Saini, M. S.; Salamanca, J.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schott, D.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Sharov, D.; Shvedunov, N. V.; Skabelin, A. V.; Smith, L. C.; Sober, D. I.; Sokhan, D.; Stavinsky, A.; Stepanyan, S. S.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strakovsky, I. I.; Strauch, S.; Taiuti, M.; Tedeschi, D. J.; Tkabladze, A.; Tkachenko, S.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Watts, D. P.; Weinstein, L. B.; Weygand, D. P.; Williams, M.; Wolin, E.; Wood, M. 
H.; Yegneswaran, A.; Yurov, M.; Zana, L.; Zhang, J.; Zhao, B.; Zhao, Z. W.

    2009-01-01

    The ep → e'pρ0 reaction has been measured using the 5.754 GeV electron beam of Jefferson Lab and the CLAS detector. This represents the largest ever set of data for this reaction in the valence region. Integrated and differential cross-sections are presented. The W, Q2 and t dependences of the cross-section are compared to theoretical calculations based on t-channel meson-exchange Regge theory, on the one hand, and on quark handbag diagrams related to Generalized Parton Distributions (GPDs) on the other hand. The Regge approach can describe at the ≈30% level most of the features of the present data, while the two GPD calculations presented in this article, which successfully reproduce the high-energy data, strongly underestimate the present data. The question is then raised whether this discrepancy originates from an incomplete or inexact way of modelling the GPDs or the associated hard scattering amplitude, or whether the GPD formalism is simply inapplicable in this region due to higher-twist contributions that are incalculable at present.

  12. The Buffer Diagnostic Prototype: A fault isolation application using CLIPS

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1994-01-01

    This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, combined with its lack of any automatic fault isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experimental knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.

  13. Statistical comparison of a hybrid approach with approximate and exact inference models for Fusion 2+

    NASA Astrophysics Data System (ADS)

    Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew

    2007-04-01

    One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian networks, and possibility theory, in the form of fuzzy logic systems, has recently been introduced to provide a rigorous framework for high-level inference. Previous research developed the theoretical basis and benefits of the hybrid approach, but a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit, has been lacking. The goal of this research, therefore, is to provide a statistical comparison of the accuracy and performance of the hybrid approach with those of pure Bayesian and fuzzy systems and of an inexact Bayesian system approximated using particle filtering. To accomplish this task, domain-specific models are developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. A rigorous statistical analysis of the performance results is then performed to quantify the benefit of hybrid inference over other fusion tools.

  14. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE PAGES

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...

    2017-06-21

    Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials, but different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.

  15. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.

    Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials, but different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.

  16. Implementation of the Jacobian-free Newton-Krylov method for solving the first-order ice sheet momentum balance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois

    2011-01-01

    We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases shows that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem, and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
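
    The JFNK idea is compact enough to sketch: the outer Newton iteration never forms a Jacobian; instead the Krylov solver receives Jacobian-vector products approximated by finite differences of the residual. The Python below applies that pattern to a toy 1-D nonlinear diffusion residual, not CISM's momentum balance; the forcing tolerance and perturbation size are common heuristics, and recent SciPy names the GMRES tolerance rtol (older releases call it tol).

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        # -d/dx( (1+u^2) du/dx ) = 1 on a uniform grid, u=0 at both ends.
        n = u.size
        h = 1.0 / (n + 1)
        up = np.concatenate(([0.0], u, [0.0]))
        k = 1.0 + 0.5 * (up[1:] + up[:-1]) ** 2      # face diffusivities
        flux = k * (up[1:] - up[:-1]) / h
        return -(flux[1:] - flux[:-1]) / h - 1.0

    def jfnk(u0, tol=1e-10, max_newton=30):
        u = u0.copy()
        for _ in range(max_newton):
            F = residual(u)
            nF = np.linalg.norm(F)
            if nF < tol:
                break
            eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
            Jv = LinearOperator(
                (u.size, u.size),
                matvec=lambda v: (residual(u + eps * v) - F) / eps)
            # Inexact Newton: loose GMRES tolerance tied to the residual norm
            # ("rtol" in recent SciPy; older versions call it "tol").
            du, info = gmres(Jv, -F, rtol=min(0.1, nF), maxiter=200)
            u = u + du
        return u

    u = jfnk(np.zeros(63))
    print("final residual norm:", np.linalg.norm(residual(u)))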

  17. [Partial replacement of the knee joint with patient-specific instruments and implants (ConforMIS iUni, iDuo)].

    PubMed

    Beckmann, J; Steinert, A; Zilkens, C; Zeh, A; Schnurr, C; Schmitt-Sody, M; Gebauer, M

    2016-04-01

    Knee arthroplasty is a successful standard procedure in orthopedic surgery; however, approximately 20% of patients are dissatisfied with the clinical results as they suffer pain and can no longer achieve their presurgery level of activity. According to the literature, the reasons are inexact fitting of the prosthesis or too few anatomically formed implants, resulting in less physiological kinematics of the knee joint. Reducing the number of dissatisfied patients and the corresponding number of revisions is an important goal considering the increasing need for artificial joints. In this context, patient-specific knee implants are an obvious alternative to conventional implants. For the first time, implants are now matched to the individual bone and not vice versa, to achieve the best possible individual situation and geometry; more structures (e.g. ligaments and bone) are preserved, and only those structures are replaced which were actually destroyed by arthrosis. In the authors' view, this represents an optimal and pioneering addition to conventional implants. Patient-specific implants and the instruments needed for correct alignment and fitting can be manufactured by virtual 3D reconstruction and 3D printing based on computed tomography (CT) scans. The portfolio covers medial as well as lateral unicondylar implants, medial as well as lateral bicompartmental implants (femorotibial and patellofemoral compartments), and cruciate ligament-preserving as well as cruciate ligament-substituting total knee replacements; however, it must be explicitly emphasized that the literature is sparse and no long-term data are available.

  18. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
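
    The weighting idea can be illustrated without the finite element machinery. The sketch below is a drastically simplified stand-in, assuming a scalar field, decoupled observations, and weights set from the noise variance; a real WLSFEM would additionally couple the field through the Navier-Stokes residual.

        import numpy as np

        # Blend a "numerical solution" u_model with sparse noisy observations d,
        # weighting each observation by confidence (here ~ 1/noise variance).
        rng = np.random.default_rng(0)
        n = 200
        u_model = np.sin(np.linspace(0, np.pi, n))          # model solution
        obs_idx = rng.choice(n, size=30, replace=False)     # sparse plane-like data
        d = u_model[obs_idx] + rng.normal(0, 0.05, 30)      # noisy measurements
        w = np.full(30, 1.0 / 0.05**2)                      # observation weights

        # Minimize ||u - u_model||^2 + sum_k w_k (u[obs_k] - d_k)^2; with the
        # simplifications above the normal equations decouple point by point.
        u = u_model.copy()
        u[obs_idx] = (u_model[obs_idx] + w * d) / (1.0 + w)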

  19. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, gave an improvement of up to 28% in CPU time over the fully implicit method.

  20. Social psychology as a natural kind

    PubMed Central

    Mitchell, Jason P.

    2010-01-01

    Although typically defined as the study of how people and groups interact, the field of social psychology comprises a number of disparate domains that make only indirect contributions to understanding interpersonal interaction, such as emotion, attitudes, and the self. Although these various phenomena may appear to have little in common, recent evidence suggests that the topics at the core of social psychology form a natural group of domains with a common functional neuroanatomy, centered on the medial prefrontal cortex. That self-referential, attitudinal, affective, and other social phenomena converge on this region may reflect their shared reliance on inexact and internally-generated estimates that differ from the more precise representations underlying other psychological phenomena. PMID:19427258

  1. Orthogonal Relations and Color Constancy in Dichromatic Colorblindness

    PubMed Central

    Pridmore, Ralph W.

    2014-01-01

    This paper employs uniform color space to analyze relations in dichromacy (protanopia, deuteranopia, tritanopia). Fifty percent or less of dichromats represent the classical reduction form of trichromacy, where one of three cones is inoperative but normal trichromatic color mixture such as complementary colors (pairs that mix white) are accepted by the dichromat, whose data can thus be plotted to CIE chromaticity spaces. The remaining dichromats comprise many and varied more-complex gene arrays from mutations, recombinations, etc. Though perhaps a minority, the three reductionist types provide a simple standard, in genotype and phenotype, to which the more complex remainder may be compared. Here, previously published data on dichromacy are plotted and analyzed in CIELUV uniform color space to find spatial relations in terms of color appearance space (e.g., hue angle). Traditional residual (seen) hues for protanopia and deuteranopia (both red–green colorblindness) are yellow and blue, but analysis indicates the protanopic residual hues are more greenish yellow and reddish blue than in tradition. Results for three illuminants (D65, D50, B) imply four principles in the spatial structure of dichromacy: (1) complementarity of confusion hue pairs and of residual hue pairs; (2) orthogonality of confusion locus and residual hues locus at their intersection with the white point, in each dichromatic type; (3) orthogonality of protanopic and tritanopic confusion loci; and (4) inverse relations between protanopic and tritanopic systems generally, such that one's confusion hues are the other's residual hues. Two of the three dichromatic systems do not represent components of normal trichromatic vision as sometimes thought but are quite different. Wavelength shifts between illuminants demonstrate chromatic adaptation correlates exactly with that in trichromatic vision. In theory these results clarify relations in and between types of dichromacy. They also apply in Munsell and CIELAB color spaces but inexactly to the degree they employ inexact complementarity. PMID:25211128

  2. Orthogonal relations and color constancy in dichromatic colorblindness.

    PubMed

    Pridmore, Ralph W

    2014-01-01

    This paper employs uniform color space to analyze relations in dichromacy (protanopia, deuteranopia, tritanopia). Fifty percent or less of dichromats represent the classical reduction form of trichromacy, where one of three cones is inoperative but normal trichromatic color mixture such as complementary colors (pairs that mix white) are accepted by the dichromat, whose data can thus be plotted to CIE chromaticity spaces. The remaining dichromats comprise many and varied more-complex gene arrays from mutations, recombinations, etc. Though perhaps a minority, the three reductionist types provide a simple standard, in genotype and phenotype, to which the more complex remainder may be compared. Here, previously published data on dichromacy are plotted and analyzed in CIELUV uniform color space to find spatial relations in terms of color appearance space (e.g., hue angle). Traditional residual (seen) hues for protanopia and deuteranopia (both red-green colorblindness) are yellow and blue, but analysis indicates the protanopic residual hues are more greenish yellow and reddish blue than in tradition. Results for three illuminants (D65, D50, B) imply four principles in the spatial structure of dichromacy: (1) complementarity of confusion hue pairs and of residual hue pairs; (2) orthogonality of confusion locus and residual hues locus at their intersection with the white point, in each dichromatic type; (3) orthogonality of protanopic and tritanopic confusion loci; and (4) inverse relations between protanopic and tritanopic systems generally, such that one's confusion hues are the other's residual hues. Two of the three dichromatic systems do not represent components of normal trichromatic vision as sometimes thought but are quite different. Wavelength shifts between illuminants demonstrate chromatic adaptation correlates exactly with that in trichromatic vision. In theory these results clarify relations in and between types of dichromacy. They also apply in Munsell and CIELAB color spaces but inexactly to the degree they employ inexact complementarity.

  3. Remote Sensing the Thermosphere's State Using Emissions From Carbon Dioxide and Nitric Oxide

    NASA Astrophysics Data System (ADS)

    Weimer, D. R.; Mlynczak, M. G.; Doornbos, E.

    2017-12-01

    Measurements of emissions from nitric oxide and carbon dioxide in the thermosphere have strong correlations with properties that are very useful to the determination of thermospheric densities. We have compared emissions measured with the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) satellite with neutral density measurements from the Challenging Mini-satellite Payload (CHAMP), the Gravity Recovery and Climate Experiment (GRACE), the Gravity field and steady-state Ocean Circulation Explorer (GOCE), and the three Swarm satellites, spanning a time period of over 15 years. It has been found that nitric oxide emissions match changes in the exospheric temperatures that have been derived from the densities through use of the Naval Research Laboratory Mass Spectrometer, Incoherent Scatter Radar Extended (NRLMSISE-00) thermosphere model. Similarly, our results indicate that the carbon dioxide emissions have annual and semiannual oscillations that correlate with changes in the amount of oxygen in the thermosphere, also determined by use of the NRLMSISE-00 model. These annual and semiannual variations are found to have irregular amplitudes and phases, which make them very difficult to accurately predict. Prediction of exospheric temperatures through the use of geomagnetic indices also tends to be inexact. Therefore, it would be possible and very useful to use measurements of the thermosphere's infrared emissions for real-time tracking of the thermosphere's state, so that more accurate calculations of the density may be obtained.

  4. Langevin equation versus kinetic equation: Subdiffusive behavior of charged particles in a stochastic magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.; Wang, H.; Misguich, J.H.

    1994-12-01

    The running diffusion coefficient D(t) is evaluated for a system of charged particles undergoing the effect of a fluctuating magnetic field and of their mutual collisions. The latter coefficient can be expressed either in terms of the mean square displacement (MSD) of a test particle, or in terms of a correlation between a fluctuating distribution function and the magnetic field fluctuation. In the first case a stochastic differential equation of Langevin type for the position of a test particle must be solved; the second problem requires the determination of the distribution function from a kinetic equation. Using suitable simplifications, both problems are amenable to exact analytic solution. The conclusion is that the equivalence of the two approaches is by no means automatically guaranteed. A new type of object, the "hybrid kinetic equation", is constructed: it automatically ensures the equivalence with the Langevin results. The same conclusion holds for the generalized Fokker-Planck equation. The Bhatnagar-Gross-Krook (BGK) model for the collisions yields a completely wrong result. A linear approximation to the hybrid kinetic equation yields an inexact behavior, but represents an acceptable approximation in the strongly collisional limit.
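
    In standard notation (an assumption here; the paper's exact conventions are not reproduced), the MSD route defines the running diffusion coefficient as

        D(t) = \frac{1}{2}\,\frac{d}{dt}\,\big\langle\, [x(t) - x(0)]^2 \,\big\rangle ,

    so that D(t) tends to a constant for normal diffusion, while subdiffusive behavior corresponds to D(t) decaying at long times.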

  5. Within-site variability in surveys of wildlife populations

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.; Sauer, John R.; Droege, Sam

    1994-01-01

    Most large-scale surveys of animal populations are based on counts of individuals observed during a sampling period, which are used as indexes to the population. The variability in these indexes not only reflects variability in population sizes among sites but also variability due to the inexactness of the counts. Repeated counts at survey sites can be used to document this additional source of variability and, in some applications, to mitigate its effects. We present models for evaluating the proportion of total variability in counts that is attributable to this within-site variability and apply them in the analysis of data from repeated counts on routes from the North American Breeding Bird Survey. We analyzed data on 98 species, obtaining estimates of these percentages, which ranged from 3.5 to 100% with a mean of 36.25%. For at least 14 of the species, more than half of the variation in counts was attributable to within-site sources. Counts for species with lower average counts had a higher percentage of within-site variability. We discuss the relative cost efficiency of replicating sites or initiating new sites for several objectives, concluding that it is frequently better to initiate new sites than to attempt to replicate existing sites.
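
    A rough sketch of the variance split described here, using a one-way random-effects (method-of-moments) decomposition on synthetic counts; this is a generic estimator, not the paper's models, and the data are illustrative.

        import numpy as np

        # counts[i, j]: j-th replicate count at survey site i (toy data)
        rng = np.random.default_rng(0)
        site_means = rng.lognormal(2.0, 0.8, size=50)             # among-site variation
        counts = rng.poisson(site_means[:, None], size=(50, 4))   # within-site noise

        n = counts.shape[1]
        within = counts.var(axis=1, ddof=1).mean()                # mean within-site variance
        among = counts.mean(axis=1).var(ddof=1) - within / n      # method-of-moments estimate
        pct_within = 100 * within / (within + among)
        print(f"within-site share of total variance: {pct_within:.1f}%")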

  6. FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry

    NASA Astrophysics Data System (ADS)

    Dai, Jisheng; Liu, An; Lau, Vincent K. N.

    2018-05-01

    This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.

  7. Endoscopy-coupled Raman spectroscopy for in vivo discrimination of inflammatory bowel disease

    NASA Astrophysics Data System (ADS)

    Pence, I. J.; Nguyen, Q. T.; Bi, X.; Herline, A. J.; Beaulieu, D. M.; Horst, S. N.; Schwartz, D. A.; Mahadevan-Jansen, A.

    2014-03-01

    Inflammatory bowel disease (IBD), including ulcerative colitis (UC) and Crohn's colitis (CC), affects nearly 2 million Americans, and the incidence is increasing worldwide. It has been established that UC and CC are distinct forms of IBD and require different medical care; however, the distinction made between UC and CC is based upon inexact clinical, radiological, endoscopic, and pathologic features. A diagnosis of indeterminate colitis occurs in up to 15% of patients when UC and CC features overlap and cannot be differentiated; in these patients, diagnosis relies on long-term follow-up, success or failure of existing treatment, and recurrence of the disease. Thus, there is a need for a tool that can improve the sensitivity and specificity of fast, accurate and automated diagnosis of IBD. Here we present colonoscopy-coupled, fiber-probe-based Raman spectroscopy as a novel in vivo diagnostic tool for IBD. This in vivo study of both healthy controls (NC, N=10) and diagnosed IBD patients with UC (N=15) and CC (N=26) aims to characterize spectral signatures of NC, UC, and CC. Samples are correlated with tissue pathology markers and endoscopic evaluation. Optimal collection parameters for detection have been identified based upon the new, application-specific instrument design. The collected spectra are processed and analyzed using multivariate statistical techniques to identify spectral markers and discriminate NC, UC, and CC. Development of spectral markers to discriminate disease type is a necessary first step in the development of real-time, accurate and automated in vivo detection of IBD during colonoscopy procedures.

  8. Assessing uncertainty in ecological systems using global sensitivity analyses: a case example of simulated wolf reintroduction effects on elk

    USGS Publications Warehouse

    Fieberg, J.; Jenkins, Kurt J.

    2005-01-01

    Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
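
    A compact sketch of first-order Sobol' index estimation by Monte Carlo (the Saltelli-style pick-freeze estimator); the model function is a toy stand-in, not the elk simulation.

        import numpy as np

        rng = np.random.default_rng(0)

        def model(x):
            # toy stand-in with interacting parameters
            return x[:, 0]**2 + 0.5 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

        N, d = 100_000, 3
        A = rng.uniform(size=(N, d))
        B = rng.uniform(size=(N, d))
        fA, fB = model(A), model(B)
        varY = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]              # "pick-freeze": swap column i only
            fABi = model(ABi)
            Si = np.mean(fB * (fABi - fA)) / varY   # first-order Sobol' index
            print(f"S_{i} = {Si:.3f}")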

  9. A definition of depletion of fish stocks

    USGS Publications Warehouse

    Van Oosten, John

    1949-01-01

    Attention was focused on the need of a common and better understanding of the term depletion as applied to the fisheries in order to eliminate, if possible, the existing inexactness of thought on the subject. Depletion has been confused at various times with at least ten different ideas associated with it but which, as has been pointed out, are not synonymous at all. In defining depletion we must recognize that the term represents a condition and must not be confounded with the cause (overfishing) that leads to this condition or with the symptoms that identify it. Depletion was defined as a reduction, through overfishing, in the level of abundance of the exploitable segment of a stock that prevents the realization of the maximum productive capacity.

  10. Preconditioned conjugate residual methods for the solution of spectral equations

    NASA Technical Reports Server (NTRS)

    Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.

    1986-01-01

    Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
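
    A rough Python stand-in for the idea of preconditioning a dense, ill-conditioned operator with an inexact finite-difference approximation. SciPy has no conjugate residual routine, so GMRES is used in its place, and the "spectral" matrix here is a toy (finite-difference Laplacian plus a dense perturbation), not an actual spectral discretization.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import gmres, spilu, LinearOperator

        n = 200
        main = 2.0 * np.ones(n); off = -1.0 * np.ones(n - 1)
        A_fd = sparse.diags([off, main, off], [-1, 0, 1], format='csc') * n**2

        v = np.linspace(0, 1, n)
        A = A_fd.toarray() + 0.1 * np.outer(v, v)   # dense, ill-conditioned system

        # Inexact finite-difference preconditioner: ILU of the sparse FD operator
        M_ilu = spilu(A_fd)
        M = LinearOperator((n, n), matvec=M_ilu.solve)

        b = np.ones(n)
        x, info = gmres(A, b, M=M)                  # GMRES as a conjugate-residual stand-in
        print(info, np.linalg.norm(A @ x - b))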

  11. Effect of Finite Computational Domain on Turbulence Scaling Law in Both Physical and Spectral Spaces

    NASA Technical Reports Server (NTRS)

    Hou, Thomas Y.; Wu, Xiao-Hui; Chen, Shiyi; Zhou, Ye

    1998-01-01

    The well-known translation between the power law of energy spectrum and that of the correlation function or the second order structure function has been widely used in analyzing random data. Here, we show that the translation is valid only in proper scaling regimes. The regimes of valid translation are different for the correlation function and the structure function. Indeed, they do not overlap. Furthermore, in practice, the power laws exist only for a finite range of scales. We show that this finite range makes the translation inexact even in the proper scaling regime. The error depends on the scaling exponent. The current findings are applicable to data analysis in fluid turbulence and other stochastic systems.
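
    For reference, the translation in question, in standard notation (the paper's point being that it holds only in the proper scaling regime and only approximately over a finite range):

        E(k) \propto k^{-p}
        \quad\Longleftrightarrow\quad
        S_2(r) = \big\langle |u(x+r) - u(x)|^2 \big\rangle \propto r^{\,p-1},
        \qquad 1 < p < 3 .

    For the Kolmogorov value p = 5/3 this gives the familiar S_2(r) ∝ r^{2/3}.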

  12. PySeqLab: an open source Python package for sequence labeling and segmentation.

    PubMed

    Allam, Ahmed; Krauthammer, Michael

    2017-11-01

    Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models, from (i) first-order to higher-order linear-chain CRFs, and (ii) first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters, such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
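
    To illustrate the decoding step mentioned above, here is a generic Viterbi decoder for a first-order linear chain in log space. This is a textbook sketch, not PySeqLab's API; the score matrices are assumed inputs.

        import numpy as np

        def viterbi(log_init, log_trans, log_emit):
            """Most likely state path for a first-order linear chain.
            log_init: (S,), log_trans: (S, S), log_emit: (T, S) log scores."""
            T, S = log_emit.shape
            delta = log_init + log_emit[0]           # best score ending in each state
            back = np.zeros((T, S), dtype=int)       # backpointers
            for t in range(1, T):
                scores = delta[:, None] + log_trans  # (prev state, next state)
                back[t] = np.argmax(scores, axis=0)
                delta = scores[back[t], np.arange(S)] + log_emit[t]
            path = [int(np.argmax(delta))]
            for t in range(T - 1, 0, -1):            # trace backpointers
                path.append(int(back[t][path[-1]]))
            return path[::-1]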

  13. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
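
    The effect of reduced precision is easy to demonstrate on the standard single-tier Lorenz '96 system (the paper's three-tier extension is not reproduced here); numpy's float16 serves as a stand-in for half-precision hardware, and the time step and run length are illustrative.

        import numpy as np

        def l96_rhs(x, F=8.0):
            # Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

        def integrate(x0, dt, steps, dtype):
            x = x0.astype(dtype)
            for _ in range(steps):                   # classical RK4 steps
                k1 = l96_rhs(x); k2 = l96_rhs(x + 0.5 * dt * k1)
                k3 = l96_rhs(x + 0.5 * dt * k2); k4 = l96_rhs(x + dt * k3)
                x = (x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)).astype(dtype)
            return x

        x0 = np.full(40, 8.0); x0[0] += 0.01         # perturbed rest state
        ref = integrate(x0, 0.05, 200, np.float64)   # double-precision run
        low = integrate(x0, 0.05, 200, np.float16)   # half-precision run
        print(np.sqrt(np.mean((ref - low.astype(np.float64))**2)))

    In this chaotic system the rounding differences grow with lead time, which is exactly why the trade-off between precision and resolution matters for forecast accuracy.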

  14. FSAW for REIT selection in multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Adawiyah, C. W. Rabiatul; Abdullah, Lazim

    2014-07-01

    Real Estate Investment Trusts (REITs) are profitable investments that have gained Malaysian investors' attention and can be a great way to increase net worth. Because many property trusts exist, investors face difficulty in determining the best property trust in which to invest their wealth. This high-risk investment makes investors cautious in choosing a platform for investing. In real-world situations, the data collected are inexact and ambiguous. Investment carries risk, whether low or high, and there are many factors that investors need to consider before investing in any property or trust. The aim of this paper is to identify the best criteria for REITs and to determine the best property trust to help investors gain profits on their investment. Four experts in the area of investment were randomly selected to assess and provide information regarding Malaysia's REITs. The decision makers were asked to rate the criteria for every alternative using linguistic variables; the five linguistic variables were used as input data to test the fuzzy simple additive weighting (FSAW) model. The study finds that transparency is the necessary criterion for each property trust, and this decision-making case makes it possible to test the FSAW model in the investment sector. The decision makers were also asked to evaluate the alternatives based on linguistic ranking variables, and the result shows that the best alternative in this case is Amanah Harta Tanah PNB. The ranking signifies the impact of the criteria and alternatives for investors, especially REIT investors.
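
    A minimal FSAW sketch with triangular fuzzy numbers; the linguistic scale, weights, and alternatives below are illustrative placeholders, not the paper's data, and centroid defuzzification is one common choice among several.

        import numpy as np

        # Triangular fuzzy numbers (l, m, u) for linguistic ratings
        scale = {'VL': (0.0, 0.0, 0.25), 'L': (0.0, 0.25, 0.5),
                 'M': (0.25, 0.5, 0.75), 'H': (0.5, 0.75, 1.0),
                 'VH': (0.75, 1.0, 1.0)}

        weights = np.array([scale[s] for s in ('H', 'VH', 'M')], float)  # criteria weights
        ratings = {'Trust A': ('M', 'H', 'VH'), 'Trust B': ('H', 'M', 'L')}

        def fsaw_score(labels):
            # Component-wise fuzzy weighted average on (l, m, u), then
            # centroid defuzzification: (l + m + u) / 3.
            tri = np.array([scale[s] for s in labels], float)
            agg = (tri * weights).sum(axis=0) / weights.sum(axis=0)
            return agg.mean()

        for name, labels in sorted(ratings.items(), key=lambda kv: -fsaw_score(kv[1])):
            print(name, round(fsaw_score(labels), 3))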

  15. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
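
    The split Bregman structure is easiest to see on the simplest TV problem, 1D denoising; the sketch below is that reduced case (not the CT reconstruction, whose forward operator is a projection matrix rather than the identity). The u-update is the linear solve that, per the abstract, may be done very inexactly.

        import numpy as np

        def tv_denoise_split_bregman(f, lam=0.1, mu=5.0, n_iter=50):
            """min_u 0.5||u - f||^2 + lam*||Du||_1 via split Bregman (1D sketch)."""
            n = len(f)
            D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # forward differences
            A = np.eye(n) + mu * D.T @ D                  # u-subproblem matrix
            d = np.zeros(n - 1); b = np.zeros(n - 1)
            u = f.copy()
            for _ in range(n_iter):
                # u-update: a linear solve (tolerant of inexact solvers)
                u = np.linalg.solve(A, f + mu * D.T @ (d - b))
                # d-update: soft-thresholding (shrinkage)
                Du = D @ u
                d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - lam / mu, 0.0)
                # Bregman update
                b = b + Du - d
            return u

        x = np.linspace(0, 1, 200)
        f = (x > 0.5).astype(float) + np.random.default_rng(0).normal(0, 0.1, 200)
        u = tv_denoise_split_bregman(f)     # recovers the step, suppresses noise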

  16. Using the ratio of the magnetic field to the analytic signal of the magnetic gradient tensor in determining the position of simple shaped magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Karimi, Kurosh; Shirzaditabar, Farzad

    2017-08-01

    The analytic signal of the magnitude of the magnetic field's components and its first derivatives has been employed for locating magnetic structures which can be considered as point-dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies in noisy settings with acceptable accuracy. The methods are also inexact in determining the depth of deep anomalies. In noisy cases and in places other than the poles, the maximum points of the magnitude of the magnetic vector components and Az are not located exactly above 3D bodies. Consequently, the horizontal location estimates of bodies are accompanied by errors. Here, the previous methods are altered and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new method, made resistant to noise by using a 'depths mean' approach, is introduced. Reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also well estimated. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline is accurate, in accordance with the result of the half-width method.
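
    For orientation, the standard 3D analytic-signal amplitude used in this kind of analysis (the paper's exact definitions may differ) is

        |A(x, y)| = \sqrt{ \left( \frac{\partial T}{\partial x} \right)^2
                         + \left( \frac{\partial T}{\partial y} \right)^2
                         + \left( \frac{\partial T}{\partial z} \right)^2 } ,

    where T is the total-field magnetic anomaly; its maxima tend to lie over the source edges or centers, which is what makes it attractive for locating dipole-like bodies.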

  17. OCT-based profiler for automating ocular surface prosthetic fitting (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Mujat, Mircea; Patel, Ankit H.; Maguluri, Gopi N.; Iftimia, Nicusor V.; Patel, Chirag; Agranat, Josh; Tomashevskaya, Olga; Bonte, Eugene; Ferguson, R. Daniel

    2016-03-01

    The use of a Prosthetic Replacement of the Ocular Surface Environment (PROSE) device is a revolutionary treatment for military patients that have lost their eyelids due to 3rd degree facial burns and for civilians who suffer from a host of corneal diseases. However, custom manual fitting is often a protracted painful, inexact process that requires multiple fitting sessions. Training for new practitioners is a long process. Automated methods to measure the complete corneal and scleral topology would provide a valuable tool for both clinicians and PROSE device manufacturers and would help streamline the fitting process. PSI has developed an ocular anterior-segment profiler based on Optical Coherence Tomography (OCT), which provides a 3D measure of the surface of the sclera and cornea. This device will provide topography data that will be used to expedite and improve the fabrication process for PROSE devices. OCT has been used to image portions of the cornea and sclera and to measure surface topology for smaller contact lenses [1-3]. However, current state-of-the-art anterior eye OCT systems can only scan about 16 mm of the eye's anterior surface, which is not sufficient for covering the sclera around the cornea. In addition, there is no systematic method for scanning and aligning/stitching the full scleral/corneal surface and commercial segmentation software is not optimized for the PROSE application. Although preliminary, our results demonstrate the capability of PSI's approach to generate accurate surface plots over relatively large areas of the eye, which is not currently possible with any other existing platform. Testing the technology on human volunteers is currently underway at Boston Foundation for Sight.

  18. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows could greatly reduce the computational requirements of RSRPCA. Second, the RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate negative effects of sampled anomaly pixels and that purifies the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The algorithm of inexact augmented Lagrange multiplier is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms four comparison methods both in detection performance and in computational time.
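
    The inexact augmented Lagrange multiplier (IALM) step used here is the classic RPCA solver; below is a generic sketch of that decomposition (following Lin et al.'s IALM scheme), not the paper's column-wise randomized variant.

        import numpy as np

        def rpca_ialm(M, max_iter=200, tol=1e-7):
            """Decompose M = L + S (low-rank + sparse) via inexact ALM."""
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))
            norm2 = np.linalg.norm(M, 2)              # spectral norm
            Y = M / max(norm2, np.abs(M).max() / lam) # dual variable init
            mu, rho = 1.25 / norm2, 1.5
            S = np.zeros_like(M)
            for _ in range(max_iter):
                # L-update: singular value thresholding
                U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
                # S-update: entrywise soft-thresholding
                T = M - L + Y / mu
                S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
                Z = M - L - S                         # primal residual
                Y = Y + mu * Z
                mu = rho * mu
                if np.linalg.norm(Z, 'fro') <= tol * np.linalg.norm(M, 'fro'):
                    break
            return L, S

    In the anomaly-detection setting, the nonzero columns (or entries) of S flag candidate anomaly pixels while L models the background.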

  19. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.

  20. 'Plans are useless'.

    PubMed

    Bland, Michael

    2013-01-01

    An essential element in crisis recovery is the protection and/or recovery of reputation. This calls for a crisis communications function that is of more than passing interest to the business continuity specialist and which presents two major challenges in this era of process-driven management: (1) it is an inexact science, more about common sense, psychology, empathy and 'playing it by ear' than about box ticking; (2) it does not lend itself to detailed, rigid plans, although some degree of planning is essential. This paper outlines a flexible approach that will help the crisis team to develop a workable communications plan that strikes a balance between being too detailed and too sketchy. It argues that the whole management team should be involved in developing the plan and sets a number of questions, which, on being answered, will help a realistic, achievable and effective plan to evolve.

  1. A radiogenic heating evolution model for cosmochemically Earth-like exoplanets

    NASA Astrophysics Data System (ADS)

    Frank, Elizabeth A.; Meyer, Bradley S.; Mojzsis, Stephen J.

    2014-11-01

    Discoveries of rocky worlds around other stars have inspired diverse geophysical models of their plausible structures and tectonic regimes. Severe limitations of observable properties require many inexact assumptions about key geophysical characteristics of these planets. We present the output of an analytical galactic chemical evolution (GCE) model that quantitatively constrains one of those key properties: radiogenic heating. Earth's radiogenic heat generation has evolved since its formation, and the same will apply to exoplanets. We have fit simulations of the chemical evolution of the interstellar medium in the solar annulus to the chemistry of our Solar System at the time of its formation and then applied the carbonaceous chondrite/Earth's mantle ratio to determine the chemical composition of what we term "cosmochemically Earth-like" exoplanets. Through this approach, predictions of exoplanet radiogenic heat productions as a function of age have been derived. The results show that the later a planet forms in galactic history, the less radiogenic heat it begins with; however, due to radioactive decay, today, old planets have lower heat outputs per unit mass than newly formed worlds. The long half-life of 232Th allows it to continue providing a small amount of heat in even the most ancient planets, while 40K dominates heating in young worlds. Through constraining the age-dependent heat production in exoplanets, we can infer that younger, hotter rocky planets are more likely to be geologically active and therefore able to sustain the crustal recycling (e.g. plate tectonics) that may be a requirement for long-term biosphere habitability. In the search for Earth-like planets, the focus should be made on stars within a billion years or so of the Sun's age.
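
    The age dependence follows directly from radioactive decay. A minimal sketch: half-lives and per-kilogram-of-isotope heat rates are standard textbook values, while the rock concentrations below are illustrative placeholders, not the paper's GCE output.

        import numpy as np

        # (half-life in Gyr, heat release in W per kg of isotope,
        #  concentration in kg of isotope per kg of rock -- illustrative)
        isotopes = {
            'U238':  (4.47,  9.46e-5, 31e-9 * 0.9928),
            'U235':  (0.704, 5.69e-4, 31e-9 * 0.0072),
            'Th232': (14.0,  2.64e-5, 124e-9),
            'K40':   (1.25,  2.92e-5, 310e-6 * 1.17e-4),
        }

        def heat_production(age_gyr):
            """Heat production (W per kg of rock) 'age_gyr' billion years ago."""
            H = 0.0
            for t_half, h, c in isotopes.values():
                # each isotope was more abundant in the past by exp(ln2 * t / t_half)
                H += h * c * np.exp(np.log(2.0) * age_gyr / t_half)
            return H

        for t in (0.0, 2.0, 4.5):
            print(f"{t} Gyr ago: {heat_production(t):.2e} W/kg")

    Running this reproduces the qualitative claims above: 40K dominates at early times because of its short half-life, while 232Th persists longest.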

  2. Variability And Uncertainty Analysis Of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.

    2006-12-01

    Characterization of uncertainty associated with groundwater quality models is often of critical importance, for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treats variability and uncertainty as a single random entity, and the generated samples are treated as crisp, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals through α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing uncertainty propagation of parameter values for estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
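
    The crisp LHS backbone is straightforward with scipy.stats.qmc; the fuzzification step shown after it is a crude stand-in (a fixed symmetric interval per sample, standing in for membership functions and α-cuts), and the parameter names and ranges are illustrative.

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=42)
        unit = sampler.random(n=100)                  # stratified samples in [0,1)^2

        # e.g. log10 hydraulic conductivity (m/s) and dispersivity (m)
        lo, hi = [-6.0, 0.5], [-4.0, 5.0]
        samples = qmc.scale(unit, lo, hi)

        # Fuzzification stand-in: attach a symmetric alpha-cut interval to each
        # sampled value to carry the cognitive (estimation-error) uncertainty.
        halfwidth = np.array([0.2, 0.3])
        intervals = np.stack([samples - halfwidth, samples + halfwidth], axis=-1)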

  3. Reduction of forecast uncertainty in the context of hydropower production: a case study for two catchments in Lac-St-Jean, Canada

    NASA Astrophysics Data System (ADS)

    Brisson, Cathy; Boucher, Marie-Amélie; Latraverse, Marco

    2014-05-01

    This research focuses on the improvement of streamflow forecasts for two subcatchments in the Lac-St-Jean area, a northern part of the province of Quebec in Canada. These two subcatchments, named Manouane and Passes-Dangereuses, are part of a bigger system, which comprises many reservoirs and six hydropower plants. This system is managed by Rio Tinto Alcan, an aluminium producer who needs this energy for its processes. Optimal management of the hydropower plants highly depends on the reliability of the inflow forecasts to the reservoirs and also on the reliability of observed streamflow. The latter are not directly measured, but rather deduced from the computation of a water balance. This water balance includes streamflow computation based on rating curves for river sections and upstream reservoirs and a modelling process using the CEQUEAU hydrological model (Morin et al., 1981). In addition, mostly during the winter, the model has to account for a transfer of water from the Lac Manouane reservoir to Passes-Dangereuses through the Bonnard channel. Winter flow through the Bonnard channel is controlled by a spillway, and represented in CEQUEAU by a transfer function and a fixed time delay (2 days). However, it is suspected that the evacuation function, as it is currently computed, is inaccurate. The main objective of this work is to reduce predictive uncertainty for the Lac Manouane and Passes-Dangereuses catchments, for the one-day-ahead horizon. This objective is twofold. First, the uncertainty related to the parameterization of the hydrological model had never been evaluated, and we investigated whether it is better to spatialize the calibration of the hydrological model. In its actual form, the calibration of CEQUEAU is based exclusively on the downstream outflow. There are, however, intermediate streamflow measurements available at an intermediate location. Our study shows that calibrating the model using streamflows at both locations (intermediate and downstream) leads to improved forecasts, as measured by the Nash-Sutcliffe efficiency criterion. The parameter sets thus determined best represent the phenomena of exchange and runoff in the watershed. Second, this study aims at reducing the uncertainty associated with the evacuation function for the Bonnard channel as well as the time delay related to this transfer. Instead of using a fixed 2-day time delay for the transfer, it was attempted to represent the channel in the hydrological model and compute the time delay from this model. The results show that hydrological modelling does not improve the results and that the 2-day time delay is adequate, especially for the first days after opening and a few days after closure of the gate. In addition, this research shows that the evacuation function of the Bonnard spillway is inexact for large streamflows; it is considered the main source of uncertainty for the prediction of inflows to the reservoirs. We also show that the evacuated streamflows can be successfully corrected by hydrological modelling. This case study shows that a careful revision of the inflow forecasting process for these important watersheds can help reduce predictive uncertainty. Although the application is specific to the Lac-St-Jean area, we believe that our experience could serve other users and water managers with similar issues regarding inflow uncertainty. Reference: Morin, G., J.-P. Fortin, J.-P. Lardeau, W. Sochanska and S. Paquette. 1981. Modèle CEQUEAU : Manuel d'utilisation. Rapport de recherche no R-93, INRS-Eau, Sainte-Foy.

  4. Stephen Hall Receives 2012 Walter Sullivan Award for Excellence in Science Journalism-Features: Citation

    NASA Astrophysics Data System (ADS)

    Pearson, Helen

    2013-01-01

    Stephen Hall, a freelance science writer and science-communication teacher, received the Walter Sullivan Award for Excellence in Science Journalism-Features at the AGU Fall Meeting Honors Ceremony, held on 5 December 2012 in San Francisco, Calif. Hall was honored for the article "At Fault?" published 15 September 2011 in Nature. The article examines the legal, personal, and political repercussions from a 2009 earthquake in L'Aquila, Italy, for seismologists who had attempted to convey seismic risk assessments to the public. The 6.3 magnitude quake devastated the medieval town and caused more than 300 deaths. Six scientists and one government official were subsequently convicted of manslaughter and sentenced to prison for inadequately assessing and mischaracterizing the risks to city residents, despite the inexact nature of seismic risk assessment. The Sullivan award is for work published with a deadline pressure of more than 1 week.

  5. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

    A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DF's) on a spherical surface. An improvement of this method, which takes the distance of source-DF's as a constant, is presented. It is pointed out that using a weight factor of signal strength is not the most ideal method because of the inexact inverse signal strength-distance relation and the inaccurate signal amplitude. An iterative calculation method is presented using the distance from the source to the DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4DF system are presented to show the improvement of location through use of the iterative method.
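
    A toy planar version of the iterative reweighting idea (the actual method works on a spherical surface): bearings from direction finders are fit by weighted least squares, then the weights are recomputed from the latest source-to-DF distances. The station geometry, noise level, and the d² weighting rationale are assumptions of this sketch, not the paper's algorithm.

        import numpy as np
        from scipy.optimize import minimize

        stations = np.array([[0.0, 0.0], [120.0, 0.0], [0.0, 120.0], [120.0, 120.0]])
        src_true = np.array([45.0, 70.0])
        rng = np.random.default_rng(1)
        bearings = np.arctan2(src_true[1] - stations[:, 1], src_true[0] - stations[:, 0])
        bearings += rng.normal(0.0, 0.02, len(stations))   # DF angular noise (rad)

        def cost(p, w):
            pred = np.arctan2(p[1] - stations[:, 1], p[0] - stations[:, 0])
            r = np.arctan2(np.sin(pred - bearings), np.cos(pred - bearings))  # wrap
            return np.sum(w * r**2)

        p = stations.mean(axis=0)            # initial guess
        w = np.ones(len(stations))
        for _ in range(5):                   # alternate: locate, then reweight
            p = minimize(cost, p, args=(w,)).x
            d = np.linalg.norm(stations - p, axis=1)
            w = d**2                         # angular error e displaces the fix by ~ d*e
        print(p)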

  6. Cumulative Risk Assessment: An Overview of Methodological Approaches for Evaluating Combined Health Effects from Exposure to Multiple Environmental Stressors

    PubMed Central

    Sexton, Ken

    2012-01-01

    Systematic evaluation of cumulative health risks from the combined effects of multiple environmental stressors is becoming a vital component of risk-based decisions aimed at protecting human populations and communities. This article briefly examines the historical development of cumulative risk assessment as an analytical tool, and discusses current approaches for evaluating cumulative health effects from exposure to both chemical mixtures and combinations of chemical and nonchemical stressors. A comparison of stressor-based and effects-based assessment methods is presented, and the potential value of focusing on viable risk management options to limit the scope of cumulative evaluations is discussed. The ultimate goal of cumulative risk assessment is to provide answers to decision-relevant questions based on organized scientific analysis; even if the answers, at least for the time being, are inexact and uncertain. PMID:22470298

  7. A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems

    NASA Technical Reports Server (NTRS)

    Baz, A.; Poh, S.

    1987-01-01

    A comparative study is presented between three active control algorithms which have proven to be successful in controlling the vibrations of large flexible systems. These algorithms are: the Independent Modal Space Control (IMSC), the Pseudo-inverse (PI), and the Modified Independent Modal Space Control (MIMSC). Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time-sharing strategy. Such a strategy favors the MIMSC over the IMSC method, which requires a large number of actuators to control an equal number of modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through the use of an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.

  8. Hydrology for Engineers, Geologists, and Environmental Professionals

    NASA Astrophysics Data System (ADS)

    Ince, Simon

    For people who are involved in the applied aspects of hydrology, it is refreshing to find a textbook that begins with a meaningful disclaimer, albeit in fine print on the back side of the frontispiece:“The present book and the accompanying software have been written according to the latest techniques in scientific hydrology. However, hydrology is at best an inexact science. A good book and a good computer software by themselves do not guarantee accurate or even realistic predictions. Acceptable results in the applications of hydrologic methods to engineering and environmental problems depend to a greater extend (sic) on the skills, logical assumptions, and practical experience of the user, and on the quantity and quality of long-term hydrologic data available. Neither the author nor the publisher assumes any responsibility or any liability, explicitly or implicitly, on the results or the consequences of using the information contained in this book or its accompanying software.”

  9. Violence risk prediction. Clinical and actuarial measures and the role of the Psychopathy Checklist.

    PubMed

    Dolan, M; Doyle, M

    2000-10-01

    Violence risk prediction is a priority issue for clinicians working with mentally disordered offenders. To review the current status of violence risk prediction research. Literature search (Medline). Key words: violence, risk prediction, mental disorder. Systematic/structured risk assessment approaches may enhance the accuracy of clinical prediction of violent outcomes. Data on the predictive validity of available clinical risk assessment tools are based largely on American and North American studies and further validation is required in British samples. The Psychopathy Checklist appears to be a key predictor of violent recidivism in a variety of settings. Violence risk prediction is an inexact science and as such will continue to provoke debate. Clinicians clearly need to be able to demonstrate the rationale behind their decisions on violence risk and much can be learned from recent developments in research on violence risk prediction.

  10. Animal health economics: an aid to decisionmaking on animal health interventions - case studies in the United States of America.

    PubMed

    Marsh, T L; Pendell, D; Knippenberg, R

    2017-04-01

    For animal disease events the outcomes and consequences often remain unclear or uncertain, including the expected changes in benefits (e.g. profit to firms, prices to consumers) and in costs (e.g. response, clean-up). Moreover, the measurement of changes in benefits and costs across alternative interventions used to control animal disease events may be inexact. For instance, the economic consequences of alternative vaccination strategies to mitigate a disease can vary in magnitude due to trade embargoes and other factors. The authors discuss the economic measurement of animal disease outbreaks and interventions and how measurement is used in private and public decision-making. Two illustrative case studies in the United States of America are provided: a hypothetical outbreak of foot and mouth disease in cattle, and the 2014-2015 outbreak of highly pathogenic avian influenza in poultry.

  11. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use, replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

  12. Padé approximant for normal stress differences in large-amplitude oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.

    2018-04-01

    Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
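
    To make the idea concrete, the following is a minimal sketch of replacing a truncated series with a [3,4] Padé approximant using SciPy's pade helper. The series used here, exp(-x), is only a stand-in: the abstract does not give the corotational Jeffreys coefficients, so everything below is illustrative rather than the paper's actual computation.

    ```python
    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    # First 8 Taylor coefficients of exp(-x) about x = 0 (degrees 0..7),
    # playing the role of a truncated shear-rate-amplitude expansion.
    an = [(-1.0) ** k / factorial(k) for k in range(8)]

    # [3,4] approximant: numerator of degree 3, denominator of degree 4.
    p, q = pade(an, 4, 3)

    x = 2.5  # outside the range where the truncated series is accurate
    series = sum(c * x ** k for k, c in enumerate(an))
    print(f"truncated series: {series:.6f}")
    print(f"[3,4] Pade      : {p(x) / q(x):.6f}")
    print(f"exact exp(-x)   : {np.exp(-x):.6f}")
    ```

    The approximant is built from exactly the same coefficients as the truncated series, yet stays close to the exact value well beyond the range where the series is useful, which is the effect the paper exploits across the Pipkin space.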

  13. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
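
    The inexact augmented Lagrange multiplier (IALM) solver the authors use is best known from robust PCA. The sketch below shows only that core low-rank-plus-sparse subproblem, D = L + S, with the paper's transfer-learning terms omitted; the parameter choices follow common RPCA defaults and are assumptions, not the paper's settings.

    ```python
    import numpy as np

    def soft_threshold(X, tau):
        """Elementwise shrinkage, the proximal step for the l1 term."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_shrink(X, tau):
        """Singular value shrinkage, the proximal step for the nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def ialm_rpca(D, lam=None, rho=1.5, tol=1e-7, max_iter=500):
        m, n = D.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = 1.25 / np.linalg.norm(D, 2)
        Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual init
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(max_iter):
            # "Inexact": each subproblem gets a single proximal pass.
            L = svd_shrink(D - S + Y / mu, 1.0 / mu)
            S = soft_threshold(D - L + Y / mu, lam / mu)
            R = D - L - S
            Y = Y + mu * R  # dual ascent on the constraint D = L + S
            mu *= rho
            if np.linalg.norm(R) / np.linalg.norm(D) < tol:
                break
        return L, S

    # Demo: a rank-4 matrix corrupted by sparse spikes is separated cleanly.
    rng = np.random.default_rng(0)
    L0 = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))
    S0 = np.zeros((50, 50))
    mask = rng.random((50, 50)) < 0.05
    S0[mask] = 10 * rng.standard_normal(mask.sum())
    L, S = ialm_rpca(L0 + S0)
    print("recovered rank:", np.linalg.matrix_rank(L, tol=1e-3))
    ```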

  14. Aeroacoustic Simulations of Tandem Cylinders with Subcritical Spacing

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.; Khorrami, Mehdi R.; Neuhart, Dan H.; Hutcheson, Florence V.; Brooks, Thomas F.; Stead, Daniel J.

    2008-01-01

    Tandem cylinders are being studied because they model a variety of component-level interactions of landing gear. The present effort is directed at the case of two identical cylinders with their centroids separated in the streamwise direction by 1.435 diameters. Experiments in the Basic Aerodynamic Research Tunnel and the Quiet Flow Facility at NASA Langley Research Center have provided an extensive experimental database of the nearfield flow and radiated noise. The measurements were conducted at a Mach number of 0.1285 and a Reynolds number of 1.66x10^5 based on the cylinder diameter. A trip was used on the upstream cylinder to ensure a fully turbulent flow separation and, hence, to simulate a major aspect of high Reynolds number flow. The parallel computational effort uses the three-dimensional Navier-Stokes solver CFL3D with a hybrid, zonal turbulence model that turns off the turbulence production term everywhere except in a narrow ring surrounding solid surfaces. The experiments exhibited an asymmetry in the surface pressure that was persistent despite attempts to eliminate it through small changes in the configuration. To model the asymmetry, the simulations were run with the cylinder configuration at a nonzero but small angle of attack. The computed results and experiments are in general agreement that vortex shedding for the spacing studied herein is weak relative to that observed at supercritical spacings. Although the shedding was subdued in the simulations, it was still more prominent than in the experiments. Overall, the simulation comparisons with measured near-field data and the radiated acoustics are reasonable, especially if one is concerned with capturing the trends relative to larger cylinder spacings. However, the flow details of the 1.435-diameter spacing have not been captured in full even though very fine grid computations have been performed. Some of the discrepancy may be associated with the simulation's inexact representation of the experimental configuration, but numerical and flow modeling errors are also likely contributors to the observed differences.

  15. Flexible parallel implicit modelling of coupled thermal-hydraulic-mechanical processes in fractured rocks

    NASA Astrophysics Data System (ADS)

    Cacace, Mauro; Jacquey, Antoine B.

    2017-09-01

    Theory and numerical implementation describing groundwater flow and the transport of heat and solute mass in fully saturated fractured rocks with elasto-plastic mechanical feedbacks are developed. In our formulation, fractures are considered as being of lower dimension than the hosting deformable porous rock, and we consider their hydraulic and mechanical apertures as scaling parameters to ensure continuous exchange of fluid mass and energy within the fracture-solid matrix system. The coupled system of equations is implemented in a new simulator code that makes use of a Galerkin finite-element technique. The code builds on a flexible, object-oriented numerical framework (MOOSE, Multiphysics Object Oriented Simulation Environment), which provides an extensive scalable parallel and implicit coupling to solve the multiphysics problem. The governing equations of groundwater flow, heat and mass transport, and rock deformation are solved in a weak sense (either by classical Newton-Raphson or by Jacobian-free inexact Newton-Krylov schemes) on an underlying unstructured mesh. Nonlinear feedbacks among the active processes are enforced by considering evolving fluid and rock properties that depend on the thermo-hydro-mechanical state of the system and the local structure, i.e. the degree of connectivity, of the fracture system. A suite of applications is presented to illustrate the flexibility and capability of the new simulator in addressing problems of increasing complexity occurring at different spatial scales (from centimetres to tens of kilometres) and temporal scales (from minutes to hundreds of years).
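
    The Jacobian-free inexact Newton-Krylov idea mentioned above is compact enough to sketch. In the toy below the Jacobian is never assembled: its action on a vector is approximated by finite differences, and GMRES is solved only to a loose tolerance (the "inexact" part). The test system and tolerances are illustrative assumptions, not the simulator's.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk(F, u, newton_tol=1e-8, eta=1e-2, max_newton=20):
        """Jacobian-free inexact Newton-Krylov for F(u) = 0."""
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < newton_tol:
                break
            eps = 1e-7 * (1.0 + np.linalg.norm(u))
            # Action of J(u) on v costs one extra residual evaluation.
            Jv = LinearOperator((u.size, u.size),
                                matvec=lambda v: (F(u + eps * v) - r) / eps)
            # Inexact: solve J du = -r only to relative tolerance eta
            # (rtol keyword per SciPy >= 1.12; older releases call it tol).
            du, _ = gmres(Jv, -r, rtol=eta)
            u = u + du
        return u

    # Toy nonlinear system, componentwise u^3 + u - 1 = 0.
    print(jfnk(lambda u: u ** 3 + u - 1.0, np.zeros(5)))
    ```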

  16. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement, which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.

  17. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library, for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement, which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.

  18. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and the target existence probability. The target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities back to the detector. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumption about the size and shape of small targets. Owing to the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.

  19. Fast and sensitive taxonomic classification for metagenomics with Kaiju

    PubMed Central

    Menzel, Peter; Ng, Kim Lee; Krogh, Anders

    2016-01-01

    Metagenomics emerged as an important field of research not only in microbial ecology but also for human health and disease, and metagenomic studies are performed on increasingly larger scales. While recent taxonomic classification programs achieve high speed by comparing genomic k-mers, they often lack sensitivity for overcoming evolutionary divergence, so that large fractions of the metagenomic reads remain unclassified. Here we present the novel metagenome classifier Kaiju, which finds maximum (in-)exact matches on the protein-level using the Burrows–Wheeler transform. We show in a genome exclusion benchmark that Kaiju classifies reads with higher sensitivity and similar precision compared with current k-mer-based classifiers, especially in genera that are underrepresented in reference databases. We also demonstrate that Kaiju classifies up to 10 times more reads in real metagenomes. Kaiju can process millions of reads per minute and can run on a standard PC. Source code and web server are available at http://kaiju.binf.ku.dk. PMID:27071849
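
    Kaiju's search layers greedy extension and substitution handling on top of an FM-index; the abstract does not spell out the data structures, so the sketch below shows only the underlying primitive, exact backward search with a Burrows-Wheeler transform, on a toy "protein" string.

    ```python
    def bwt_index(text):
        """Build the BWT of text plus the C table of smaller-symbol counts."""
        text += "$"  # sentinel, lexicographically smallest
        sa = sorted(range(len(text)), key=lambda i: text[i:])
        bwt = "".join(text[i - 1] for i in sa)
        C, total = {}, 0
        for c in sorted(set(text)):
            C[c] = total
            total += text.count(c)
        return bwt, C

    def occ(bwt, c, i):
        """Occurrences of symbol c in bwt[:i] (linear scan; toy only)."""
        return bwt[:i].count(c)

    def backward_search(bwt, C, pattern):
        """Count occurrences of pattern in the indexed text."""
        lo, hi = 0, len(bwt)
        for c in reversed(pattern):
            if c not in C:
                return 0
            lo = C[c] + occ(bwt, c, lo)
            hi = C[c] + occ(bwt, c, hi)
            if lo >= hi:
                return 0
        return hi - lo

    bwt, C = bwt_index("MKVLMKAMKV")       # toy protein sequence
    print(backward_search(bwt, C, "MKV"))  # -> 2
    ```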

  20. Fast and sensitive taxonomic classification for metagenomics with Kaiju.

    PubMed

    Menzel, Peter; Ng, Kim Lee; Krogh, Anders

    2016-04-13

    Metagenomics emerged as an important field of research not only in microbial ecology but also for human health and disease, and metagenomic studies are performed on increasingly larger scales. While recent taxonomic classification programs achieve high speed by comparing genomic k-mers, they often lack sensitivity for overcoming evolutionary divergence, so that large fractions of the metagenomic reads remain unclassified. Here we present the novel metagenome classifier Kaiju, which finds maximum (in-)exact matches on the protein-level using the Burrows-Wheeler transform. We show in a genome exclusion benchmark that Kaiju classifies reads with higher sensitivity and similar precision compared with current k-mer-based classifiers, especially in genera that are underrepresented in reference databases. We also demonstrate that Kaiju classifies up to 10 times more reads in real metagenomes. Kaiju can process millions of reads per minute and can run on a standard PC. Source code and web server are available at http://kaiju.binf.ku.dk.

  1. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
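
    MYCIN's combining rule for certainty factors is simple enough to show directly. The rule weights and class below are invented for illustration; the abstract does not give the system's actual rule base.

    ```python
    def combine_cf(cf1, cf2):
        """Combine two certainty factors in [-1, 1] per MYCIN's rule."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 <= 0 and cf2 <= 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Hypothetical example: motion and color rules support the class
    # "basketball", while the text rule weakly contradicts it.
    cf = 0.0
    for evidence in (0.7, 0.5, -0.2):
        cf = combine_cf(cf, evidence)
    print(f"combined certainty for 'basketball': {cf:.3f}")  # 0.812
    ```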

  2. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  3. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with the second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these inexact Newton schemes are presented. Efficiency gains as high as a factor of 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.

  4. Composite solvers for linear saddle point problems arising from the incompressible Stokes equations with highly heterogeneous viscosity structure

    NASA Astrophysics Data System (ADS)

    Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.

    2014-12-01

    Geophysical applications require efficient forward models for non-linear Stokes flow on high-resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem, which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator; these involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low-order, inf-sup stable mixed finite element spatial discretizations, which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with 'heavy' smoothers, which require solutions of large per-node sub-problems, well suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
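
    A dense toy version of the Schur complement approach is easy to state. The sketch below applies a block lower-triangular preconditioner to a Stokes-like saddle system; in a real solver the A-solve would be a multigrid V-cycle and S would only be approximated (e.g., by a viscosity-scaled pressure mass matrix), so the exact solves here are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(0)
    nu, npr = 40, 10                        # velocity and pressure unknowns
    A = np.diag(rng.uniform(1.0, 1e3, nu))  # SPD block with large "viscosity" spread
    B = rng.standard_normal((npr, nu))
    K = np.block([[A, B.T], [B, np.zeros((npr, npr))]])

    S = B @ np.linalg.solve(A, B.T)         # Schur complement (cheap: A is diagonal)

    def apply_prec(r):
        """Block lower-triangular preconditioner: A-solve, then S-solve."""
        ru, rp = r[:nu], r[nu:]
        u = np.linalg.solve(A, ru)
        p = np.linalg.solve(S, B @ u - rp)
        return np.concatenate([u, p])

    M = LinearOperator(K.shape, matvec=apply_prec)
    b = rng.standard_normal(nu + npr)
    x, info = gmres(K, b, M=M)
    print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - b))
    ```

    With exact sub-solves the preconditioned operator has a low-degree minimal polynomial, so GMRES converges in a handful of iterations; the trade-offs the abstract discusses come from replacing those sub-solves with cheap approximations.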

  5. Consequences of tidal interaction between disks and orbiting protoplanets for the evolution of multi-planet systems with architecture resembling that of Kepler 444

    NASA Astrophysics Data System (ADS)

    Papaloizou, J. C. B.

    2016-11-01

    We study the orbital evolution of multi-planet systems with masses in the terrestrial planet regime, induced through tidal interaction with a protoplanetary disk, assuming that this is the dominant mechanism for producing orbital migration and circularization. We develop a simple analytic model for a system that maintains consecutive pairs in resonance while undergoing orbital circularization and migration. This model enables migration times for each planet to be estimated once planet masses, circularization times and the migration time for the innermost planet are specified. We applied it to a system with the current architecture of Kepler 444, adopting a simple protoplanetary disk model and planet masses that yield migration times inversely proportional to the planet mass, as expected if they result from torques due to tidal interaction with the protoplanetary disk. Furthermore, the evolution time for the system as a whole is comparable to current protoplanetary disk lifetimes. In addition, we have performed a number of numerical simulations with input data obtained from this model. These indicate that although the analytic model is inexact, relatively small corrections to the estimated migration rates yield systems for which period ratios vary by a minimal extent. Because of relatively large deviations from exact resonance in the observed system, of up to 2%, the migration times obtained in this way indicate only weak convergent migration, such that a system for which the planets did not interact would contract by only ~1% while undergoing significant inward migration as a whole. We have also performed additional simulations to investigate conditions under which the system could undergo significant convergent migration before reaching its final state. These indicate that migration times would have to be significantly shorter, and resonances between planet pairs significantly closer, during such an evolutionary phase. Relative migration rates would then have to decrease, allowing period ratios to increase and become more distant from resonances as the system approached its final state in the inner regions of the protoplanetary disk.

  6. Couplerlib: a metadata-driven library for the integration of multiple models of higher and lower trophic level marine systems with inexact functional group matching

    NASA Astrophysics Data System (ADS)

    Beecham, Jonathan; Bruggeman, Jorn; Aldridge, John; Mackinson, Steven

    2016-03-01

    End-to-end modelling is a rapidly developing strategy for modelling in marine systems science and management. However, problems remain in the area of data matching and sub-model compatibility. A mechanism and novel interfacing system (Couplerlib) is presented whereby a physical-biogeochemical model (General Ocean Turbulence Model-European Regional Seas Ecosystem Model, GOTM-ERSEM) that predicts dynamics of the lower trophic level (LTL) organisms in marine ecosystems is coupled to a dynamic ecosystem model (Ecosim), which predicts food-web interactions among higher trophic level (HTL) organisms. Coupling is achieved by means of a bespoke interface, which handles the system incompatibilities between the models, and a more generic Couplerlib library, which uses metadata descriptions in extensible mark-up language (XML) to marshal data between groups, paying attention to functional group mappings and compatibility of units between models. In addition, within Couplerlib, models can be coupled across networks by means of socket mechanisms. As a demonstration of this approach, a food-web model (Ecopath with Ecosim, EwE) and a physical-biogeochemical model (GOTM-ERSEM) representing the North Sea ecosystem were joined with Couplerlib. The output from GOTM-ERSEM varies between years, depending on oceanographic and meteorological conditions. Although inter-annual variability was clearly present, there was always the tendency for an annual cycle consisting of a peak of diatoms in spring, followed by (less nutritious) flagellates and dinoflagellates through the summer, resulting in an early summer peak in the mesozooplankton biomass. Pelagic productivity, predicted by the LTL model, was highly seasonal, with little winter food for the higher trophic levels. The Ecosim model was originally based on the assumption of constant annual inputs of energy and, consequently, when coupled, pelagic species suffered population losses over the winter months. By contrast, benthic populations were more stable (although the benthic linkage modelled was purely at the detritus level, so this stability reflects the stability of the Ecosim model). The coupled model was used to examine long-term effects of environmental change, and showed the system to be nutrient-limited and relatively unaffected by forecast climate change, especially in the benthos. The stability of an Ecosim formulation for large higher trophic level food webs is discussed, and it is concluded that this kind of coupled model formulation is better suited to examining the effects of long-term environmental change than short-term perturbations.
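
    The abstract does not reproduce Couplerlib's XML schema, so the following is a hypothetical illustration of the metadata-driven idea: an invented mapping file declares which LTL groups feed which HTL groups and with what unit-conversion factor, and a small marshalling routine applies it.

    ```python
    import xml.etree.ElementTree as ET

    # Invented schema, in the spirit of (but not identical to) Couplerlib.
    MAPPING_XML = """
    <coupling from="GOTM-ERSEM" to="Ecosim">
      <group src="diatoms"         dst="zooplankton_prey" factor="0.001"/>
      <group src="flagellates"     dst="zooplankton_prey" factor="0.001"/>
      <group src="mesozooplankton" dst="pelagic_inverts"  factor="0.001"/>
    </coupling>
    """  # factor: e.g. mg C m-3 -> g C m-3

    def marshal(ltl_state, mapping_xml):
        """Aggregate LTL outputs into HTL input groups per the XML metadata."""
        htl_input = {}
        for g in ET.fromstring(mapping_xml).findall("group"):
            value = ltl_state.get(g.get("src"), 0.0) * float(g.get("factor"))
            htl_input[g.get("dst")] = htl_input.get(g.get("dst"), 0.0) + value
        return htl_input

    ltl = {"diatoms": 320.0, "flagellates": 140.0, "mesozooplankton": 55.0}
    print(marshal(ltl, MAPPING_XML))
    # {'zooplankton_prey': 0.46, 'pelagic_inverts': 0.055}
    ```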

  7. Iodinated Contrast Media and the Alleged "Iodine Allergy": An Inexact Diagnosis Leading to Inferior Radiologic Management and Adverse Drug Reactions.

    PubMed

    Böhm, Ingrid; Nairz, Knud; Morelli, John N; Keller, Patricia Silva Hasembank; Heverhagen, Johannes T

    2017-04-01

    Purpose: To test the hypothesis that the incomplete diagnosis "iodine allergy" is a potentially dangerous concept for patients under routine radiologic conditions. Materials and Methods: 300 patients with a history of an "iodine allergy" were retrospectively screened and compared with two age-, sex-, and procedure-matched groups of patients diagnosed either with a nonspecific "iodine contrast medium (ICM) allergy" or with an allergy to a specific ICM agent. For all groups, the clinical symptoms of the most recent adverse drug reaction (ADR), the prophylactic actions taken for subsequent imaging, and the ultimate outcome were recorded and analyzed. Results: The diagnosis "iodine allergy" was not otherwise specified in 84.3% of patients. For this group, in most cases, the symptoms of the previous ADRs were not documented. In contrast, the type of ADR was undocumented in only a minority of patients in the comparison groups. In the group of patients with an "iodine allergy", the percentage of unenhanced CT scans was greater than in the other two groups (36.7% vs. 28.7%/18.6%). ADRs following prophylactic measures were only observed in the "iodine allergy" group (OR 9.24, 95% CI 1.16-73.45; p < 0.04). Conclusion: These data confirm the hypothesis that the diagnosis "iodine allergy" is potentially dangerous and results in uncertainty in clinical management and sometimes even ineffective prophylactic measures. Key points: (1) The term "iodine allergy" is imprecise, because it designates allergies against different substance classes, such as disinfectants with complexed iodine and contrast media containing covalently bound iodine. (2) There is a clear correlation between the exactness of the diagnosis - from the alleged "iodine allergy" to "contrast media allergy" to naming the exact culprit CM - and the quality of documentation of the symptoms. (3) Management of patients diagnosed with "iodine allergy" was associated with uncertainty, leading to unenhanced scans and sometimes unnecessary prophylactic actions. (4) The term "iodine allergy" should be abandoned, because it is potentially dangerous and can decrease the quality of radiology exams.

  8. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process of finding the parameter (or parameters) that delivers an optimal value for an objective function. Seeking an optimal generic model for optimization is a computer science problem that numerous researchers have pursued. A generic model is one that can be applied to solve a wide variety of optimization problems. Using an object-oriented method, such a generic model for optimization was constructed. Two optimization methods, simulated annealing and hill climbing, were incorporated in constructing the model and then compared. The results showed that both methods reached the same objective function value, with the hill-climbing-based model consuming the shortest running time.
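
    Since the abstract compares hill climbing and simulated annealing, here is a minimal sketch of that comparison on a toy multimodal objective (the objective, step sizes, and cooling schedule are all invented for illustration).

    ```python
    import math
    import random

    def objective(x):
        return (x - 2.0) ** 2 + 3.0 * math.sin(3.0 * x)  # multimodal

    def hill_climb(x, step=0.1, iters=2000):
        for _ in range(iters):
            cand = x + random.uniform(-step, step)
            if objective(cand) < objective(x):  # accept improvements only
                x = cand
        return x

    def simulated_annealing(x, step=0.5, iters=2000, t0=5.0):
        for k in range(iters):
            t = t0 * (1 - k / iters) + 1e-9     # linear cooling schedule
            cand = x + random.uniform(-step, step)
            d = objective(cand) - objective(x)
            if d < 0 or random.random() < math.exp(-d / t):  # may go uphill
                x = cand
        return x

    random.seed(1)
    for name, algo in (("hill climbing", hill_climb),
                       ("simulated annealing", simulated_annealing)):
        x = algo(-4.0)  # start far from the global minimum
        print(f"{name:20s} x = {x:+.3f}  f(x) = {objective(x):+.3f}")
    ```

    Hill climbing only ever accepts improvements, so it is fast but can stall in the nearest local minimum; simulated annealing occasionally accepts worse moves early on, trading running time for a better chance at the global optimum, which is the running-time trade-off the paper measures.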

  9. Improving Mars-GRAM: Increasing the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    Extensively utilized for numerous mission applications, the Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model. In Monte Carlo mode, Mars-GRAM's perturbation modeling capability is used to perform high-fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM was found to be inexact when used during the Mars Science Laboratory (MSL) site selection process for sensitivity studies at MapYear=0 and large optical depth values such as tau=3. Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM) from the surface to 80 km altitude. With the MapYear parameter set to 0, Mars-GRAM utilizes results from an MGCM run with a fixed value of tau=3 at all locations for the entire year, and this use of MGCM with tau=3 yields imprecise atmospheric density and pressure at all altitudes. Density factor values have been determined for tau=0.3, 1 and 3 as a preliminary fix to this pressure-density problem. These factors adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. These density factors are fixed values for all latitudes and Ls and are included in Mars-GRAM Release 1.3. Work currently being done to derive better multipliers, by including variations with latitude and/or Ls through direct comparison of MapYear 0 output against TES limb data, will be highlighted in the presentation. The TES limb data utilized in this process have been validated by a comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS). This comparison study was undertaken for locations on Mars of varying latitudes, Ls, and LTST. The more precise density factors will be included in Mars-GRAM 2005 Release 1.4 and will thus improve the results of future sensitivity studies done for large optical depths.

  10. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing the modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity); this work instead aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing the computational cost of the optimization process.

  11. All-in-one model for designing optimal water distribution pipe networks

    NASA Astrophysics Data System (ADS)

    Aklog, Dagnachew; Hosoi, Yoshihiko

    2017-05-01

    This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.

  12. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  13. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared against the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for minimum cost, with the developed ES model embedded into this optimization model as a constraint, and GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
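
    As a rough illustration of the ensemble-surrogate idea, the sketch below fits two surrogates to a handful of "simulator" runs and minimizes their averaged prediction. GaussianProcessRegressor stands in for kriging, KernelRidge stands in for KELM, the toy function stands in for the SEAR simulator, and a plain mean replaces whatever weighting the authors used.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.kernel_ridge import KernelRidge

    def simulator(x):  # stand-in for the expensive SEAR simulation model
        return (x - 0.6) ** 2 + 0.05 * np.sin(25 * x)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, (20, 1))  # 20 "simulator" runs
    y = simulator(X).ravel()

    krg = GaussianProcessRegressor().fit(X, y)              # kriging stand-in
    kelm = KernelRidge(kernel="rbf", gamma=10.0).fit(X, y)  # KELM stand-in

    def ensemble(x):
        """Average the two surrogates' predictions at a scalar point x."""
        pt = np.atleast_2d(x)
        return 0.5 * (krg.predict(pt)[0] + kelm.predict(pt)[0])

    res = minimize_scalar(ensemble, bounds=(0.0, 1.0), method="bounded")
    print(f"surrogate optimum x = {res.x:.3f}; "
          f"true simulator value = {simulator(res.x):.4f}")
    ```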

  14. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared against the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for minimum cost, with the developed ES model embedded into this optimization model as a constraint, and GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  15. Optimal Appearance Model for Visual Tracking

    PubMed Central

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639

  16. Improving Navy Recruiting with the New Planned Resource Optimization Model With Experimental Design (PROM-WED)

    DTIC Science & Technology

    2017-03-01

    Improving Navy Recruiting with the New Planned Resource Optimization Model With Experimental Design (PROM-WED), by Allison R. Hogarth, March 2017 (thesis). The Navy has historically used a non-linear optimization model, the Planned Resource Optimization (PRO) model, to help inform decisions on the allocation of recruiting resources.

  17. Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted

    DTIC Science & Technology

    2016-09-16

    Simulation results, using both an optimized coil and a conventional coil, are generated using a finite element method (FEM) model. A 3D FEM model is used to analyze the performance of the optimized coil.

  18. Numerical optimization of Ignition and Growth reactive flow modeling for PAX2A

    NASA Astrophysics Data System (ADS)

    Baker, E. L.; Schimel, B.; Grantham, W. J.

    1996-05-01

    Variable metric nonlinear optimization has been successfully applied to the parameterization of unreacted and reacted-products thermodynamic equations of state and to reactive flow modeling of the HMX-based high explosive PAX2A. The NLQPEB nonlinear optimization program has recently been coupled to the LLNL-developed two-dimensional high-rate continuum modeling programs DYNA2D and CALE. The resulting program has the ability to optimize initial modeling parameters. This new optimization capability was used to optimally parameterize the Ignition and Growth reactive flow model against experimental manganin gauge records. The optimization varied the Ignition and Growth reaction rate model parameters in order to minimize the difference between the calculated and experimental pressure histories.
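
    Neither NLQPEB nor the DYNA2D/CALE coupling can be reproduced from the abstract, so the sketch below shows only the calibration pattern: adjust rate parameters to minimize the mismatch between a computed pressure history and gauge records, with SciPy's least_squares and a closed-form toy pressure curve as stand-ins.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0.0, 2.0, 100)  # time samples (arbitrary units)

    def pressure_model(params, t):
        """Toy two-parameter pressure history (illustrative, not I&G)."""
        growth, decay = params
        return 30.0 * (1.0 - np.exp(-growth * t)) * np.exp(-decay * t)

    # Synthetic "gauge records": true parameters plus measurement noise.
    true_params = (4.0, 0.8)
    rng = np.random.default_rng(0)
    gauge = pressure_model(true_params, t) + 0.3 * rng.standard_normal(t.size)

    def residuals(params):
        return pressure_model(params, t) - gauge

    fit = least_squares(residuals, x0=(1.0, 0.1),
                        bounds=([0.0, 0.0], [50.0, 10.0]))
    print("recovered rate parameters:", fit.x)  # close to (4.0, 0.8)
    ```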

  19. The Microbial Rosetta Stone Database: A compilation of global and emerging infectious microorganisms and bioterrorist threat agents

    PubMed Central

    Ecker, David J; Sampath, Rangarajan; Willett, Paul; Wyatt, Jacqueline R; Samant, Vivek; Massire, Christian; Hall, Thomas A; Hari, Kumar; McNeil, John A; Büchen-Osmond, Cornelia; Budowle, Bruce

    2005-01-01

    Background Thousands of different microorganisms affect the health, safety, and economic stability of populations. Many different medical and governmental organizations have created lists of the pathogenic microorganisms relevant to their missions; however, the nomenclature for biological agents on these lists and pathogens described in the literature is inexact. This ambiguity can be a significant block to effective communication among the diverse communities that must deal with epidemics or bioterrorist attacks. Results We have developed a database known as the Microbial Rosetta Stone. The database relates microorganism names, taxonomic classifications, diseases, specific detection and treatment protocols, and relevant literature. The database structure facilitates linkage to public genomic databases. This paper focuses on the information in the database for pathogens that impact global public health, emerging infectious organisms, and bioterrorist threat agents. Conclusion The Microbial Rosetta Stone is available at . The database provides public access to up-to-date taxonomic classifications of organisms that cause human diseases, improves the consistency of nomenclature in disease reporting, and provides useful links between different public genomic and public health databases. PMID:15850481

  20. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    The polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM approaches, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used directly to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests were carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.

  1. Interactive two-stage stochastic fuzzy programming for water resources management.

    PubMed

    Wang, S; Huang, G H

    2011-08-01

    In this study, an interactive two-stage stochastic fuzzy programming (ITSFP) approach has been developed by incorporating an interactive fuzzy resolution (IFR) method within an inexact two-stage stochastic programming (ITSP) framework. ITSFP can not only tackle dual uncertainties presented as fuzzy boundary intervals that exist in the objective function and the left- and right-hand sides of constraints, but also permit in-depth analyses of various policy scenarios that are associated with different levels of economic penalties when the promised policy targets are violated. A management problem in terms of water resources allocation has been studied to illustrate the applicability of the proposed approach. The results indicate that a set of solutions under different feasibility degrees has been generated for planning the water resources allocation. They can help the decision makers (DMs) conduct in-depth analyses of tradeoffs between economic efficiency and constraint-violation risk, as well as enable them to identify, in an interactive way, a desired compromise between the satisfaction degree of the goal and the feasibility of the constraints (i.e., the risk of constraint violation).

  2. Can we measure memes?

    PubMed

    McNamara, Adam

    2011-01-01

    Memes are the fundamental unit of cultural evolution and have been left on the periphery of cognitive neuroscience due to their inexact definition and the consequent presumption that they are impossible to measure. Here it is argued that although a precise definition of memes is rather difficult, it does not preclude highly controlled experiments studying the neural substrates of their initiation and replication. In this paper, memes are termed either internally or externally represented (i-memes/e-memes) according to whether they are represented as a neural substrate within the central nervous system or in some other form within our environment. It is argued that neuroimaging technology is now sufficiently advanced to image the connectivity profiles of i-memes and, critically, to measure changes to i-memes over time, i.e., as they evolve. It is argued that it is wrong to simply pass off memes as an alternative term for "stimulus" and "learnt associations", as this does not accurately account for the way in which natural stimuli may dynamically "evolve", as clearly observed in our cultural lives.

  3. Can We Measure Memes?

    PubMed Central

    McNamara, Adam

    2011-01-01

    Memes are the fundamental unit of cultural evolution and have been left on the periphery of cognitive neuroscience due to their inexact definition and the consequent presumption that they are impossible to measure. Here it is argued that although a precise definition of memes is rather difficult, it does not preclude highly controlled experiments studying the neural substrates of their initiation and replication. In this paper, memes are termed either internally or externally represented (i-memes/e-memes) according to whether they are represented as a neural substrate within the central nervous system or in some other form within our environment. It is argued that neuroimaging technology is now sufficiently advanced to image the connectivity profiles of i-memes and, critically, to measure changes to i-memes over time, i.e., as they evolve. It is argued that it is wrong to simply pass off memes as an alternative term for “stimulus” and “learnt associations”, as this does not accurately account for the way in which natural stimuli may dynamically “evolve”, as clearly observed in our cultural lives. PMID:21720531

  4. The Mixed Finite Element Multigrid Method for Stokes Equations

    PubMed Central

    Muzhinji, K.; Shateyi, S.; Motsa, S. S.

    2015-01-01

    The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers enhance good performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results, and we present numerical results to demonstrate the efficiency and robustness of the multigrid method and to confirm the theoretical results. PMID:25945361
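
    Of the smoothers listed, the inexact Uzawa iteration is the easiest to sketch. In the dense toy below, the "inexact" velocity solve is a single Jacobi (diagonal) sweep and the pressure is corrected with a small relaxation step; the matrices and parameters are invented, and a real smoother would act on the finite element hierarchy instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 30, 8
    D = np.diag(rng.uniform(5.0, 10.0, n))
    R = rng.standard_normal((n, n))
    A = D + 0.05 * (R + R.T)               # SPD, diagonally dominant block
    B = rng.standard_normal((m, n))
    f, g = rng.standard_normal(n), rng.standard_normal(m)

    Ahat_inv = np.diag(1.0 / np.diag(A))   # inexact A-solve: one Jacobi sweep
    alpha = 0.05                           # pressure relaxation step

    u, p = np.zeros(n), np.zeros(m)
    for _ in range(1000):
        u = u + Ahat_inv @ (f - A @ u - B.T @ p)  # approximate velocity update
        p = p + alpha * (B @ u - g)               # pressure correction

    residual = np.linalg.norm(np.concatenate([f - A @ u - B.T @ p,
                                              g - B @ u]))
    print(f"residual after 1000 inexact Uzawa sweeps: {residual:.2e}")
    ```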

  5. How fair are we? Italian students rate the ethics of geoscientists

    NASA Astrophysics Data System (ADS)

    Eva, Elena; Musacchio, Gemma; Piangiamore, Giovanna; Solarino, Stefano

    2015-04-01

    The social perception of the scientific world is sometimes biased by false or inexact information, and dissemination might even be driven by deliberate attempts at discredit. A few case studies in which the public was led to believe that business interests or even conspiracy motivated incomplete information from scientists about recent catastrophes will be discussed. In most cases a significant role was played by the limitations of knowledge on the matter, an underestimated scientific uncertainty, the reluctance to project future scenarios in the absence of complete scientific evidence, or simply misunderstandings and mistakes in the flow of information through the filter of the media and press. Public awareness of how ethics affect geoscientists in conducting their activities is therefore a major issue. In this study we analyze results from a questionnaire compiled by young Italian students (15-18 years) that rates their trust in scientists, the reliability of scientific communication, and ethics in the world of geoscience. Areas exposed to various levels of geohazards are taken into account. Results are discussed in the light of the role geoscientists play in society by providing information and supporting decision-making.

  6. Resident selection: how we are doing and why?

    PubMed

    Thordarson, David B; Ebramzadeh, Edward; Sangiorgio, Sophia N; Schnall, Stephen B; Patzakis, Michael J

    2007-06-01

    Selection of the best applicants for orthopaedic residency programs remains a difficult problem. Most quantifiable factors for residency selection evaluate test-taking ability and grades rather than other aspects, such as patient care, professionalism, moral reasoning, and integrity. Four current department members on our resident selection committee ranked four consecutive classes of orthopaedic residents interviewed for residency. We ranked incoming residents in order of best to least qualified and compared those rankings with rank lists by the same faculty on completion of residency. Rankings also were compared with the residents' United States Medical Licensing Examination (USMLE) Part I scores, American Board of Orthopaedic Surgery (ABOS) Part I scores, and fourth-year Orthopaedic-in-Training Examination (OITE) scores. We found fair or poor correlations between the residents' initial rankings, rankings on graduation, and their USMLE, ABOS, and OITE scores. The only relatively strong correlation found was between the OITE and ABOS scores. Despite the faculty's consensus regarding selection criteria, interviewers did not agree in their rankings of residents on graduation. Additional work is necessary to refine the inexact yet important science of selecting residency applicants.

  7. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
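
    The chance-constraint idea can be made concrete with a stub terminating simulation: choose the smallest resource level whose estimated probability of violating a performance target stays below a risk threshold. The queueing stand-in and all numbers below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def replicate(servers, n_rep=2000):
        """Terminating-simulation stub: makespan of a 100-job batch."""
        job_times = rng.exponential(1.0, (n_rep, 100))
        return job_times.sum(axis=1) / servers  # idealized parallel service

    deadline, alpha = 30.0, 0.05  # chance constraint: P(miss) <= alpha
    for servers in range(1, 10):
        p_miss = np.mean(replicate(servers) > deadline)  # MC estimate
        if p_miss <= alpha:
            print(f"minimum servers meeting the constraint: {servers} "
                  f"(estimated P(miss) = {p_miss:.3f})")
            break
    ```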

  8. A detailed comparison of optimality and simplicity in perceptual decision-making

    PubMed Central

    Shen, Shan; Ma, Wei Ji

    2017-01-01

    Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259

  9. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods - the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE - the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
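
    The common loop these methods share, fit a cheap surrogate to an initial design, optimize the surrogate, evaluate the true model at the proposed point, and refit, can be sketched as follows (an RBF interpolant stands in for the surrogate; the toy objective and all names are ours, not the ASMO code):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_model(x):
        # Stand-in for a costly dynamic-model evaluation.
        return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10.0 * x[0]))

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(10, 2))          # initial design
    y = np.array([expensive_model(x) for x in X])

    for _ in range(15):                              # adaptive refinement loop
        surrogate = RBFInterpolator(X, y)
        res = minimize(lambda x: surrogate(x[None, :])[0],
                       x0=X[np.argmin(y)], bounds=[(0.0, 1.0)] * 2)
        x_new = res.x
        if np.min(np.linalg.norm(X - x_new, axis=1)) < 1e-6:
            # Jitter to avoid duplicate interpolation points.
            x_new = np.clip(x_new + rng.normal(0.0, 0.01, 2), 0.0, 1.0)
        y_new = expensive_model(x_new)               # one true evaluation per loop
        X, y = np.vstack([X, x_new]), np.append(y, y_new)

    print("best point:", X[np.argmin(y)], "value:", y.min())
    ```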

  10. Research on the decision-making model of land-use spatial optimization

    NASA Astrophysics Data System (ADS)

    He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu

    2009-10-01

    Using the optimization results of landscape pattern and land use structure optimization as constraints on the CA simulation, a decision-making model of land-use spatial optimization is established by coupling the landscape pattern model with cellular automata, so as to realize land use quantitative and spatial optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.

  11. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county to supply water to Daan county. Second, the Latin hypercube sampling (LHS) method was used to collect data in the feasible region of the input variables, and a surrogate of the groundwater flow simulation model was developed using the regression kriging method. An optimization model was then established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy, providing an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
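
    A compact sketch of that pipeline, with Latin hypercube sampling and scikit-learn's Gaussian process regressor standing in for regression kriging, and a toy response in place of the groundwater flow model (all names are ours):

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor

    def flow_model(x):
        # Toy response standing in for the groundwater flow simulator
        # (e.g. average drawdown as a function of two pumping rates).
        return 5.0 + 3.0 * x[:, 0] ** 2 + np.sin(5.0 * x[:, 1])

    sampler = qmc.LatinHypercube(d=2, seed=0)
    X_train = sampler.random(n=40)      # LHS design over the unit cube
    X_test = sampler.random(n=10)       # held-out validation samples

    gp = GaussianProcessRegressor().fit(X_train, flow_model(X_train))
    pred = gp.predict(X_test)

    rel_err = np.abs(pred - flow_model(X_test)) / np.abs(flow_model(X_test))
    print("max relative error on validation set:", rel_err.max())
    ```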

  12. Visual prosthesis wireless energy transfer system optimal modeling.

    PubMed

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the energy supply problem of visual prostheses, and theoretical modeling of the system is a prerequisite for optimal energy transfer system design. On the basis of the ideal model of the wireless energy transfer system, and according to the visual prosthesis application conditions, the system model is optimized. In the optimized model, planar spiral coils are taken as the coupling devices between energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and in particular the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Simulation data of the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the parasitic capacitance of the inductance and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. Further comparison with experimental data verifies the validity and accuracy of the optimized model proposed in this paper. The optimized model has theoretical guiding significance for further research on wireless energy transfer systems and provides a more precise model reference for solving the power supply problem in visual prosthesis clinical applications.

  13. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper develops a proposal for a general optimization model for the flower industry, defined by using discrete simulation and nonlinear optimization; the mathematical models have been solved using the ProModel simulation tool and the GAMS optimization software. The paper defines the operations that constitute the production and marketing of the sector, with statistically validated data taken directly from each operation through field work; the discrete simulation model of the operations and the linear optimization model of the entire industry chain are then formulated. The model is solved with the tools described above, and the results are validated in a case study.

  14. Characterising bias in regulatory risk and decision analysis: An analysis of heuristics applied in health technology appraisal, chemicals regulation, and climate change governance.

    PubMed

    MacGillivray, Brian H

    2017-08-01

    In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, either because problem structures are ambiguous, reliable data is lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error - uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability to decision analysis, yet which may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only subject to strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, be based on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule-entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting "recovery schemes" to correct for known biases; and basing decision rules on clearly articulated values and evidence, rather than convention. Copyright © 2017. Published by Elsevier Ltd.

  15. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to a time series of raw physiological (heart rate) data. The physiological response to exercise has recently been modeled as a dynamical system; fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason, and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective, and easy to implement. The optimal parameters of the model, calculated by the optimization method for a particular athlete, are very important as they characterize the athlete's current condition. The present study applies ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
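
    The core ALOPEX idea is a correlation update: each parameter keeps moving in a direction that lowered the error and reverses otherwise, with annealed noise maintaining exploration. A generic ALOPEX-style sketch for least-squares fitting of a toy heart-rate response (illustrative only; not the ALOPEX IV variant used in the paper, and convergence depends on tuning):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    t = np.linspace(0.0, 10.0, 50)
    hr = 60.0 * (1.0 - np.exp(-0.5 * t)) + rng.normal(0.0, 1.0, t.size)

    def sse(p):
        # Toy exponential heart-rate response: amplitude p[0], rate p[1].
        return np.sum((hr - p[0] * (1.0 - np.exp(-p[1] * t))) ** 2)

    p = np.array([30.0, 1.0])          # initial guess
    step = np.array([0.5, 0.01])       # previous parameter step
    scale = np.array([0.5, 0.01])      # per-parameter noise scale
    e_prev = sse(p)
    for k in range(5000):
        e_new = sse(p + step)
        de = e_new - e_prev
        p, e_prev = p + step, e_new    # moves are always taken (no rejection)
        # Correlation rule: persist in a direction that lowered the error,
        # reverse otherwise; annealed noise keeps exploration alive.
        sigma = np.exp(-k / 1500.0)
        step = -0.9 * step * np.sign(de) + rng.normal(0.0, sigma, 2) * scale

    print("fitted (amplitude, rate):", p)
    ```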

  16. Optimal Control Inventory Stochastic With Production Deteriorating

    NASA Astrophysics Data System (ADS)

    Affandi, Pardi

    2018-01-01

    In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build a stochastic mathematical model of the inventory, assuming that the items are kept in the same store. The mathematical model of the inventory problem can be deterministic or stochastic; this research discusses how to build the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for deriving the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. We thus obtain the optimal production rate in a production inventory system in which items are subject to deterioration.
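
    In outline, the Pontryagin route runs as follows (a generic sketch in our own notation with quadratic costs, not the paper's exact model): for inventory level $I(t)$, production rate $u(t)$, deterioration rate $\theta$, and demand $d$,

    ```latex
    \begin{aligned}
    &\text{dynamics:}    && \dot I = u - d - \theta I,\\
    &\text{cost:}        && J = \int_0^T \big(h\,I^2 + c\,u^2\big)\,dt,\\
    &\text{Hamiltonian:} && H = h\,I^2 + c\,u^2 + \lambda\,(u - d - \theta I),\\
    &\text{adjoint:}     && \dot\lambda = -\partial H/\partial I = -2hI + \theta\lambda,\\
    &\text{optimality:}  && \partial H/\partial u = 2cu + \lambda = 0
                            \;\Rightarrow\; u^* = -\lambda/(2c).
    \end{aligned}
    ```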

  17. Blended near-optimal tools for flexible water resources decision making

    NASA Astrophysics Data System (ADS)

    Rosenberg, David

    2015-04-01

    State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the issues as modelled, and managers often seek near-optimal alternatives that address un-modelled or changing objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimality as performance within a tolerable deviation from the optimal objective function value, and identified a few maximally different alternatives that addressed select un-modelled issues. This paper presents new stratified Markov chain Monte Carlo sampling and parallel coordinate plotting tools that generate and communicate the structure and full extent of the near-optimal region of an optimization problem. Plot controls allow users to interactively explore the region features of most interest. Controls also streamline the process of eliciting un-modelled issues and updating the model formulation in response to elicited issues. Application to a single-objective water quality management problem at Echo Reservoir, Utah, identifies numerous and flexible practices to reduce the phosphorus load to the reservoir while maintaining close-to-optimal performance. Compared to MGA, the new blended tools generate more numerous alternatives faster, show the near-optimal region more fully, help elicit a larger set of un-modelled issues, and offer managers greater flexibility to cope in a changing world.
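
    The MGA-style construction is easy to state directly: given the optimal value f*, search for an alternative that is maximally different from the incumbent solution while keeping the objective within a tolerable deviation of f*. A minimal sketch on a toy objective (names and tolerance are ours):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def cost(x):
        # Stand-in management-cost objective (e.g. phosphorus control cost).
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + x[0] * x[1]

    best = minimize(cost, x0=np.zeros(2))
    f_star, x_star = best.fun, best.x

    # Near-optimal region: cost within 10% of the optimum (the MGA tolerance).
    within = {"type": "ineq", "fun": lambda x: 1.10 * f_star - cost(x)}

    # Maximally different alternative: maximize distance from the incumbent.
    alt = minimize(lambda x: -np.sum((x - x_star) ** 2),
                   x0=x_star + 0.1, constraints=[within], method="SLSQP")
    print("optimum:", x_star, "near-optimal alternative:", alt.x)
    ```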

  18. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed, using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as the initial condition to determine which alternative would yield the highest forecasting accuracy. To test forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and that particle swarm optimization is a good tool for solving parameter optimization problems. Moreover, an optimized model with an initial condition that performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.

  19. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. This task is complex due to the variety of optimization methods and software tools and the differing nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable parameters of the model. Convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum or values close to it, as well as the computational time necessary to reach them, and to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. Visual prosthesis wireless energy transfer system optimal modeling

    PubMed Central

    2014-01-01

    Background A wireless energy transfer system is an effective way to solve the energy supply problem of visual prostheses, and theoretical modeling of the system is a prerequisite for optimal energy transfer system design. Methods On the basis of the ideal model of the wireless energy transfer system, and according to the visual prosthesis application conditions, the system model is optimized. In the optimized model, planar spiral coils are taken as the coupling devices between energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and in particular the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Results Simulation data of the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the parasitic capacitance of the inductance and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. Further comparison with experimental data verifies the validity and accuracy of the optimized model proposed in this paper. Conclusions The optimized model has theoretical guiding significance for further research on wireless energy transfer systems and provides a more precise model reference for solving the power supply problem in visual prosthesis clinical applications. PMID:24428906

  1. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    PubMed Central

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
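
    For mixture models the components must be nonnegative and sum to one, so a projection-based PSO projects each particle back onto the probability simplex after every velocity update. A minimal sketch of that mechanic with a toy criterion (our own illustration, not the ProjPSO code):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def project_simplex(v):
        """Euclidean projection of v onto {x >= 0, sum(x) = 1}."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    def criterion(x):
        # Toy design criterion to minimize; favors balanced mixtures.
        return -np.prod(x)

    n, d = 30, 3
    pos = rng.dirichlet(np.ones(d), n)          # particles start on the simplex
    vel = np.zeros((n, d))
    pbest = pos.copy()
    pval = np.array([criterion(x) for x in pos])
    gbest = pbest[np.argmin(pval)]

    for _ in range(200):
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.array([project_simplex(p) for p in pos + vel])  # stay feasible
        val = np.array([criterion(x) for x in pos])
        improved = val < pval
        pbest[improved], pval[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pval)]

    print("best mixture:", gbest)   # ~ (1/3, 1/3, 1/3) for this criterion
    ```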

  2. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
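
    Reduced precision can be emulated in software by rounding the model state to a low-precision type at every time step. A crude sketch with the Lorenz '96 model and IEEE half precision (forward Euler and per-step casting are simplifications; the study used a proper emulation tool):

    ```python
    import numpy as np

    def lorenz96_tendency(x, forcing=8.0):
        # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def integrate(x, steps=500, dt=0.01, dtype=np.float64):
        x = x.astype(dtype)
        for _ in range(steps):
            x = x + dt * lorenz96_tendency(x)   # forward Euler for brevity
            x = x.astype(dtype)                 # round state to `dtype`
        return x.astype(np.float64)

    x0 = 8.0 + 0.01 * np.random.default_rng(4).standard_normal(40)
    full = integrate(x0, dtype=np.float64)
    half = integrate(x0, dtype=np.float16)      # emulated half precision
    print("RMS difference from rounding:",
          np.sqrt(np.mean((full - half) ** 2)))
    ```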

  3. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Wang, J

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules, and the solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters, each Pareto set containing 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning models using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
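
    Stripped of the evidential-reasoning machinery, the selection step amounts to scoring every Pareto point against pre-set preference rules and keeping the highest-scoring one. An illustrative utility-scoring sketch (simple weighted rules of our own devising, not the SMOLER utility calculus):

    ```python
    import numpy as np

    # Hypothetical Pareto set: columns are (sensitivity, specificity).
    pareto = np.array([[0.95, 0.60], [0.90, 0.72], [0.85, 0.80],
                       [0.78, 0.86], [0.70, 0.92]])

    def utility(sens, spec, w_sens=0.6, w_spec=0.4, floor=0.75):
        # Rule 1: weighted preference for sensitivity over specificity.
        # Rule 2: penalize solutions whose sensitivity drops below a floor.
        penalty = 0.5 if sens < floor else 0.0
        return w_sens * sens + w_spec * spec - penalty

    scores = np.array([utility(s, p) for s, p in pareto])
    print("selected solution:", pareto[np.argmax(scores)])
    ```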

  4. Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation.

    PubMed

    Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D

    2002-01-01

    This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely genetic programming (GP), is used to develop a process model solely from historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz. genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data, without invoking detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
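
    SPSA's appeal is that it estimates the gradient from just two function evaluations per iteration, regardless of dimension, by perturbing all parameters simultaneously with random signs. A generic SPSA sketch on a toy yield surface (standard gain sequences; not the GP-SPSA code):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def neg_yield(x):
        # Toy process model: yield peaks at the assumed optimum (0.8, 0.3).
        return -np.exp(-np.sum((x - np.array([0.8, 0.3])) ** 2))

    x = np.array([0.2, 0.9])          # initial operating conditions
    for k in range(1, 301):
        a_k = 0.2 / k ** 0.602        # standard SPSA gain sequences
        c_k = 0.1 / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.size)  # simultaneous perturbation
        g_hat = (neg_yield(x + c_k * delta) - neg_yield(x - c_k * delta)) \
                / (2.0 * c_k * delta)
        x = x - a_k * g_hat
    print("optimized operating point:", x)
    ```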

  5. AQUASOL: An efficient solver for the dipolar Poisson–Boltzmann–Langevin equation

    PubMed Central

    Koehl, Patrice; Delarue, Marc

    2010-01-01

    The Poisson–Boltzmann (PB) formalism is among the most popular approaches to modeling the solvation of molecules. It assumes a continuum model for water, leading to a dielectric permittivity that only depends on position in space. In contrast, the dipolar Poisson–Boltzmann–Langevin (DPBL) formalism represents the solvent as a collection of orientable dipoles with nonuniform concentration; this leads to a nonlinear permittivity function that depends both on the position and on the local electric field at that position. The differences in the assumptions underlying these two models lead to significant differences in the equations they generate. The PB equation is a second order, elliptic, nonlinear partial differential equation (PDE). Its response coefficients correspond to the dielectric permittivity and are therefore constant within each subdomain of the system considered (i.e., inside and outside of the molecules considered). While the DPBL equation is also a second order, elliptic, nonlinear PDE, its response coefficients are nonlinear functions of the electrostatic potential. Many solvers have been developed for the PB equation; to our knowledge, none of these can be directly applied to the DPBL equation. The methods they use may adapt to the difference; their implementations, however, are PBE-specific. We adapted the PBE solver originally developed by Holst and Saied [J. Comput. Chem. 16, 337 (1995)] to the problem of solving the DPBL equation. This solver uses a truncated Newton method with a multigrid preconditioner. Numerical evidence suggests that it converges for the DPBL equation and that the convergence is superlinear. It is found, however, to be slow and memory-intensive for problems commonly encountered in computational biology and computational chemistry. To circumvent these problems, we propose two variants: a quasi-Newton solver based on a simplified, inexact Jacobian, and an iterative self-consistent solver that is based directly on the PBE solver. While neither method is guaranteed to converge, numerical evidence suggests that they do and that their convergence is also superlinear. Both variants are significantly faster than the solver based on the exact Jacobian, with a much smaller memory footprint. All three methods have been implemented in a new code named AQUASOL, which is freely available. PMID:20151727

  6. AQUASOL: An efficient solver for the dipolar Poisson-Boltzmann-Langevin equation.

    PubMed

    Koehl, Patrice; Delarue, Marc

    2010-02-14

    The Poisson-Boltzmann (PB) formalism is among the most popular approaches to modeling the solvation of molecules. It assumes a continuum model for water, leading to a dielectric permittivity that only depends on position in space. In contrast, the dipolar Poisson-Boltzmann-Langevin (DPBL) formalism represents the solvent as a collection of orientable dipoles with nonuniform concentration; this leads to a nonlinear permittivity function that depends both on the position and on the local electric field at that position. The differences in the assumptions underlying these two models lead to significant differences in the equations they generate. The PB equation is a second order, elliptic, nonlinear partial differential equation (PDE). Its response coefficients correspond to the dielectric permittivity and are therefore constant within each subdomain of the system considered (i.e., inside and outside of the molecules considered). While the DPBL equation is also a second order, elliptic, nonlinear PDE, its response coefficients are nonlinear functions of the electrostatic potential. Many solvers have been developed for the PB equation; to our knowledge, none of these can be directly applied to the DPBL equation. The methods they use may adapt to the difference; their implementations, however, are PBE-specific. We adapted the PBE solver originally developed by Holst and Saied [J. Comput. Chem. 16, 337 (1995)] to the problem of solving the DPBL equation. This solver uses a truncated Newton method with a multigrid preconditioner. Numerical evidence suggests that it converges for the DPBL equation and that the convergence is superlinear. It is found, however, to be slow and memory-intensive for problems commonly encountered in computational biology and computational chemistry. To circumvent these problems, we propose two variants: a quasi-Newton solver based on a simplified, inexact Jacobian, and an iterative self-consistent solver that is based directly on the PBE solver. While neither method is guaranteed to converge, numerical evidence suggests that they do and that their convergence is also superlinear. Both variants are significantly faster than the solver based on the exact Jacobian, with a much smaller memory footprint. All three methods have been implemented in a new code named AQUASOL, which is freely available.
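
    The heart of such quasi-Newton variants is solving each Newton linear system only approximately. A generic inexact Newton sketch on a small toy nonlinear system, using a few hand-rolled conjugate-gradient steps as the loose inner solve (our own example, far simpler than the DPBL equation):

    ```python
    import numpy as np

    def make_A():
        return (np.diag(np.full(5, 4.0)) + np.diag(np.full(4, -1.0), 1)
                + np.diag(np.full(4, -1.0), -1))

    def F(u):                    # toy nonlinear system: A u + sinh(u) = 1
        return make_A() @ u + np.sinh(u) - 1.0

    def J(u):                    # Jacobian of F (symmetric positive definite)
        return make_A() + np.diag(np.cosh(u))

    def cg(A, b, iters):         # a few CG steps = inexact linear solve
        x, r = np.zeros_like(b), b.copy()
        p = r.copy()
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return x

    u = np.zeros(5)
    for it in range(20):         # inexact Newton outer loop
        u = u + cg(J(u), -F(u), iters=3)   # deliberately loose inner solve
        if np.linalg.norm(F(u)) < 1e-10:
            break
    print("residual:", np.linalg.norm(F(u)), "outer iterations:", it + 1)
    ```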

  7. Decision Support Modeling for Water-Use Planning With Sparse Data

    NASA Astrophysics Data System (ADS)

    Brainard, J. R.; Williams, A. A.; Tidwell, V. C.

    2006-12-01

    In the face of the combined impacts of drought and increased water demand in the arid southwest, water resource managers are more frequently being called upon to solve contentious water management issues. Such efforts are often confounded by sparse and inexact data. System dynamics provides a valuable modeling framework to explore pertinent uncertainties and test alternative system conceptualizations. In particular, the speed of simulations and the object-oriented modeling environment allow efficient investigation of system uncertainty. Through sensitivity studies, the limits of uncertain variables can be determined and the accuracy of the underlying conceptual model can be tested. Here we apply system dynamics modeling to a New Mexico Office of the State Engineer Active Water Resource Management Priority Basin in southwest New Mexico where water managers are faced with developing water allocation plans under drought conditions. The Mimbres River Basin, one of several closed north-south trending structural basins in southwestern New Mexico, has a total drainage area of 13,300 km2 (5,140 mi2), most of which overlies an economically significant aquifer comprised of Tertiary and Quaternary alluvium. The Mimbres River heads in the Black and Los Pinos Mountain Ranges, where elevations reach 3,051 m (10,011 ft) above sea level. The river is perennial in a segment of the upper reach where the basin is relatively narrow and where groundwater is limited. In this perennial reach, the Mimbres River is used extensively for agricultural irrigation, in which those with senior rights are downstream of those with junior rights. Domestic wells, most of which have junior rights, affect river flow through pumping from the accompanying alluvial aquifer. This modeling effort concentrates on the upper reach of the Mimbres River, where continued drought is expected to result in water shortages creating conflicts between water users. The objectives of this effort are 1) to develop a physically based model that can be used by water managers to explore the outcomes of various water management scenarios under a full spectrum of hydrologic conditions and social demands placed on the system, 2) to identify significant data gaps and uncertainties that impact the ability to make sound science-based water management decisions, and 3) to evaluate how uncertainties should be communicated and addressed in the planning process. The model uses physically based calculations to explore relationships between climate and other variables of interest, such as vegetative water consumption by riparian communities and agricultural crops, and to explore climate-driven impacts on surface and ground water availability.

  8. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary data sets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.

  9. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function with an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using a number of data points that grows only linearly with n, the number of parameters, and is updated over a sequence of trust regions. It thereby avoids the slow convergence of linear models, which use n+1 interpolation points, while retaining features of quadratic models, which need on the order of (n+1)(n+2)/2 interpolation points. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. A minimax solution provides a suitable initial point for the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  10. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.

  11. Improving the FLORIS wind plant model for compatibility with gradient-based optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Jared J.; Gebraad, Pieter MO; Ning, Andrew

    The FLORIS (FLOw Redirection and Induction in Steady-state) model, a parametric wind turbine wake model that predicts steady-state wake characteristics based on wind turbine position and yaw angle, was developed for optimization of control settings and turbine locations. This article provides details on changes made to the FLORIS model to make it more suitable for gradient-based optimization. Changes were made to remove discontinuities and to add curvature to regions of non-physical zero gradient. Exact gradients for the FLORIS model were obtained using algorithmic differentiation. A set of three case studies demonstrates that using exact gradients with gradient-based optimization reduces the number of function calls by several orders of magnitude. The case studies also show that adding curvature improves convergence behavior, allowing gradient-based optimization algorithms used with the FLORIS model to more reliably find better solutions to wind farm optimization problems.
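
    The "remove discontinuities, add curvature" idea can be illustrated generically: replace a hard cutoff, which gives an optimizer zero gradient almost everywhere, with a smooth blend that preserves the limits but restores a usable derivative. A toy illustration (our own functions, not the actual FLORIS modifications):

    ```python
    import numpy as np

    def deficit_hard(r):
        # Hard wake boundary: discontinuous at r = 1, zero gradient elsewhere,
        # so a gradient-based optimizer gets no signal to move a turbine.
        return np.where(r < 1.0, 0.3, 0.0)

    def deficit_smooth(r, k=10.0):
        # Logistic blend: same limits, but smooth with nonzero gradient
        # near the boundary, restoring curvature the optimizer can exploit.
        return 0.3 / (1.0 + np.exp(k * (r - 1.0)))

    r = np.linspace(0.0, 2.0, 9)
    print(deficit_hard(r))
    print(deficit_smooth(r))
    ```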

  12. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimal configuration model of the micro-grid with an improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, providing an important reference for multi-objective optimization of independent micro-grids.

  13. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature: the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set, and a unique determination of the most successful particle currently leading the swarm is not possible; only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region that ideally compromises all objectives, i.e. the region of highest curvature.
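
    The maximin fitness can be computed directly from the matrix of objective values: for each candidate, take the minimum objective-wise difference to a rival, then the maximum over rivals; negative values flag nondominated solutions, and the smallest value marks the leader. A small sketch in our own notation:

    ```python
    import numpy as np

    # Rows: candidate models; columns: misfit for each tomographic data set.
    objs = np.array([[0.2, 0.9],
                     [0.4, 0.4],
                     [0.9, 0.1],
                     [0.6, 0.7]])   # last row is dominated by row 2

    def maximin_fitness(objs):
        n = objs.shape[0]
        fit = np.empty(n)
        for i in range(n):
            others = np.delete(np.arange(n), i)
            # min over objectives of (f_i - f_j), then max over rivals j
            fit[i] = np.max(np.min(objs[i] - objs[others], axis=1))
        return fit

    fit = maximin_fitness(objs)
    print("maximin fitness:", fit)          # < 0 for nondominated candidates
    print("leader (smallest fitness):", np.argmin(fit))
    ```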

  14. Optimal harvesting for a predator-prey agent-based model using difference equations.

    PubMed

    Oremland, Matthew; Laubenbacher, Reinhard

    2015-03-01

    In this paper, a method known as Pareto optimization is applied to the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory: we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
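
    Cohen's weighted κ compares observed to chance-expected disagreement under a weight matrix. A compact sketch using quadratic weights for ordinal categories (the generic formula; the paper's exact weighting scheme is not specified here, so quadratic weights are an assumption):

    ```python
    import numpy as np

    def weighted_kappa(a, b, n_cat):
        """Cohen's weighted kappa with quadratic weights for ordinal ratings."""
        conf = np.zeros((n_cat, n_cat))
        for i, j in zip(a, b):                  # observed contingency table
            conf[i, j] += 1
        idx = np.arange(n_cat)
        w = (idx[:, None] - idx[None, :]) ** 2 / (n_cat - 1) ** 2  # weights
        expected = np.outer(conf.sum(1), conf.sum(0)) / conf.sum()
        return 1.0 - (w * conf).sum() / (w * expected).sum()

    abm = [0, 1, 2, 2, 3, 1, 0, 2]    # e.g. binned ABM outputs (hypothetical)
    eqs = [0, 1, 2, 3, 3, 1, 1, 2]    # binned equation-model outputs
    print("weighted kappa:", weighted_kappa(abm, eqs, n_cat=4))
    ```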

  15. Research on NC laser combined cutting optimization model of sheet metal parts

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    The optimization problem of NC laser combined cutting of sheet metal parts is taken as the research object in this paper. The problem comprises two parts: combined packing optimization and combined cutting path optimization. For combined packing optimization, a method combining a genetic algorithm, gravity-center NFP (no-fit polygon) placement, and geometric transformation is used to optimize the packing of sheet metal parts. For combined cutting path optimization, a mathematical model of cutting path optimization is established based on the part-cutting constraint rules of internal-contour priority and cross cutting. The model plays an important role in the optimization calculation of NC laser combined cutting.

  16. Optimization of response surface and neural network models in conjugation with desirability function for estimation of nutritional needs of methionine, lysine, and threonine in broiler chickens.

    PubMed

    Mehri, Mehran

    2014-07-01

    The optimization algorithm of a model may have significant effects on the final optimal values of nutrient requirements in poultry enterprises. In poultry nutrition, the optimal values of dietary essential nutrients are very important for feed formulation to optimize profit by minimizing feed cost and maximizing bird performance. This study was conducted to introduce a novel multi-objective optimization algorithm, the desirability function, for optimizing bird response models based on response surface methodology (RSM) and artificial neural networks (ANN). Growth databases on a central composite design (CCD) were used to construct the RSM and ANN models, and optimal values for three essential amino acids (lysine, methionine, and threonine) in broiler chicks from 3 to 16 d of age were reevaluated using the desirability function in both analytical approaches. Multi-objective optimization results showed that the most desirable function was obtained for the ANN-based model (D = 0.99), where the optimal levels of digestible lysine (dLys), digestible methionine (dMet), and digestible threonine (dThr) for maximum desirability were 13.2, 5.0, and 8.3 g/kg of diet, respectively. The optimal levels of dLys, dMet, and dThr in the RSM-based model were estimated at 11.2, 5.4, and 7.6 g/kg of diet, respectively. This research documented that the application of ANN in the broiler chicken model, along with a multi-objective optimization algorithm such as the desirability function, could be a useful tool for optimization of dietary amino acids in fractional factorial experiments, in which the use of the global desirability function may overcome the underestimation of dietary amino acids resulting from the RSM model. © 2014 Poultry Science Association Inc.
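
    The desirability approach maps each response onto a 0-1 desirability scale and maximizes the geometric mean of the individual desirabilities. A minimal sketch with Derringer-style linear desirabilities, using toy response surfaces in place of the fitted RSM/ANN models (all functions and limits are ours):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def gain(x):    # toy fitted response: body-weight gain (maximize)
        return 1.0 - (x[0] - 0.7) ** 2 - (x[1] - 0.4) ** 2

    def fcr(x):     # toy fitted response: feed conversion ratio (minimize)
        return 2.0 + (x[0] - 0.5) ** 2 + (x[1] - 0.6) ** 2

    def desirability(y, lo, hi, maximize=True):
        # Linear Derringer-style desirability, clipped to [0, 1].
        d = (y - lo) / (hi - lo) if maximize else (hi - y) / (hi - lo)
        return np.clip(d, 0.0, 1.0)

    def overall_D(x):
        d1 = desirability(gain(x), lo=0.0, hi=1.0, maximize=True)
        d2 = desirability(fcr(x), lo=2.0, hi=2.5, maximize=False)
        return (d1 * d2) ** 0.5     # geometric mean of desirabilities

    res = minimize(lambda x: -overall_D(x), x0=[0.5, 0.5],
                   bounds=[(0.0, 1.0)] * 2)
    print("optimal levels:", res.x, "D =", overall_D(res.x))
    ```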

  17. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  18. Finite burn maneuver modeling for a generalized spacecraft trajectory design and optimization system.

    PubMed

    Ocampo, Cesar

    2004-05-01

    The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework to facilitate the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with the use of controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that are generally position-, velocity-, mass-, and time-dependent. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in Copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.

  19. Existence and characterization of optimal control in mathematics model of diabetics population

    NASA Astrophysics Data System (ADS)

    Permatasari, A. H.; Tjahjana, R. H.; Udjiani, T.

    2018-03-01

    Diabetes is a chronic disease with a huge burden affecting individuals and the whole society. In this paper, we construct an optimal control mathematical model by applying a strategy to control the development of the diabetic population. The model considers the dynamics of people disabled by diabetes. Moreover, an optimal control approach is proposed in order to reduce the burden of pre-diabetes; control is implemented by preventing pre-diabetics from developing into diabetics with and without complications. The existence of optimal control and its characterization are discussed in this paper, with the optimal control characterized by applying the Pontryagin minimum principle. The results indicate that an optimal control exists for the optimization problem in the mathematical model of the diabetic population, and that the effect of the optimal control variable (prevention) is strongly affected by the number of healthy people.

  20. Nonlinear and parallel algorithms for finite element discretizations of the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Arteaga, Santiago Egido

    1998-12-01

    The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of pressure-space bases and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All of these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm this experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so that this approach is competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems. We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and the nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds numbers). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor in improving performance. Implementation details on a CM-5 are presented.

  1. Operations research investigations of satellite power stations

    NASA Technical Reports Server (NTRS)

    Cole, J. W.; Ballard, J. L.

    1976-01-01

    A systems model reflecting the design concepts of Satellite Power Stations (SPS) was developed. The model is of sufficient scope to include the interrelationships of the following major design parameters: the transportation to and between orbits; assembly of the SPS; and maintenance of the SPS. The systems model is composed of a set of equations that are nonlinear with respect to the system parameters and decision variables. The model determines a figure of merit from which alternative concepts concerning transportation, assembly, and maintenance of satellite power stations are studied. A hybrid optimization model was developed to optimize the system's decision variables. The optimization model consists of a random search procedure and the optimal-steepest descent method. A FORTRAN computer program was developed to enable the user to optimize nonlinear functions using the model. Specifically, the computer program was used to optimize Satellite Power Station system components.
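
    The hybrid scheme described (a random search to locate a promising region, followed by steepest descent for local refinement) is straightforward to reproduce for a generic smooth objective. The sketch below uses an assumed test function and finite-difference gradients; it illustrates the strategy rather than the original FORTRAN program.

      import numpy as np

      def f(x):
          # Assumed smooth test objective (Rosenbrock)
          return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

      def grad(x, eps=1e-6):
          # Central finite-difference gradient
          g = np.zeros_like(x)
          for i in range(x.size):
              e = np.zeros_like(x)
              e[i] = eps
              g[i] = (f(x + e) - f(x - e)) / (2 * eps)
          return g

      rng = np.random.default_rng(0)

      # Phase 1: random search over a box to find a promising start point
      candidates = rng.uniform(-2.0, 2.0, size=(500, 2))
      x = min(candidates, key=f)

      # Phase 2: steepest descent with backtracking line search
      for _ in range(5000):
          g = grad(x)
          if np.linalg.norm(g) < 1e-8:
              break
          a = 1.0
          while f(x - a * g) >= f(x) and a > 1e-12:
              a *= 0.5
          x = x - a * g

      print("optimum:", x, "objective:", f(x))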

  2. Advanced Modeling System for Optimization of Wind Farm Layout and Wind Turbine Sizing Using a Multi-Level Extended Pattern Search Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DuPont, Bryony; Cagan, Jonathan; Moriarty, Patrick

    This paper presents a system of modeling advances that can be applied in the computational optimization of wind plants. These modeling advances include accurate cost and power modeling, partial wake interaction, and the effects of varying atmospheric stability. To validate the use of this advanced modeling system, it is employed within an Extended Pattern Search (EPS)-Multi-Agent System (MAS) optimization approach for multiple wind scenarios. The wind farm layout optimization problem involves optimizing the position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, which increases the effective wind speed and resultant power at each turbine. The EPS-MAS optimization algorithm employs a profit objective, and an overarching search determines individual turbine positions, with a concurrent EPS-MAS determining the optimal hub height and rotor diameter for each turbine. Two wind cases are considered: (1) constant, unidirectional wind, and (2) three discrete wind speeds and varying wind directions, each of which has a probability of occurrence. Results show the advantages of applying the series of advanced models compared with a previous application of an EPS with less advanced models to wind farm layout optimization, and imply best practices for computational optimization of wind farms with improved accuracy.

  3. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil while achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
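
    The core idea, choosing finite-difference coefficients that minimize phase-velocity error over a band of wavenumbers rather than matching Taylor-series accuracy at zero, can be shown in one dimension. The sketch below fits second-derivative stencil coefficients by least squares over an assumed wavenumber band; it is a simplified, space-only analogue of the time-space optimization in the abstract.

      import numpy as np

      # Fit 1D second-derivative stencil coefficients so that the stencil's
      # Fourier symbol a0 + 2*sum_m a_m*cos(m*kappa) matches -kappa^2 over a
      # target band of normalized wavenumbers kappa = k*h (band is assumed).
      M = 4                                   # stencil half-width
      kappa = np.linspace(0.01, 2.0, 400)

      A = np.column_stack([np.ones_like(kappa)] +
                          [2 * np.cos(m * kappa) for m in range(1, M + 1)])
      coef, *_ = np.linalg.lstsq(A, -kappa**2, rcond=None)
      print("optimized coefficients:", coef)

      # Relative symbol error across the band (smaller means less dispersion)
      rel_err = (A @ coef + kappa**2) / kappa**2
      print("max relative error over band:", np.max(np.abs(rel_err)))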

  4. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.

  5. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model can reduce the prediction risk of a single model and improve prediction precision. The reference values of the geographical distribution of Chinese adults' QT dispersion were mapped precisely using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in this area can be estimated using the optimal weighted combinatorial model, and the reference value of QT dispersion of Chinese adults anywhere in China can be obtained from the geographical distribution map.
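
    One standard way to build such a combination is to choose weights that minimize in-sample squared error subject to the weights summing to one. The sketch below solves that small constrained least-squares problem through its KKT system; the predictions and observations are made-up numbers, and the study's actual weighting scheme may differ.

      import numpy as np

      # Columns: predictions from three single models (assumed toy data);
      # y: observed values.
      X = np.array([[10.2,  9.8, 10.5],
                    [11.1, 10.7, 11.4],
                    [ 9.6,  9.9,  9.4],
                    [12.0, 11.5, 12.3]])
      y = np.array([10.0, 11.0, 9.7, 11.9])

      # Minimize ||X w - y||^2 subject to sum(w) = 1 via the KKT system
      n = X.shape[1]
      K = np.zeros((n + 1, n + 1))
      K[:n, :n] = 2 * X.T @ X
      K[:n, n] = 1.0
      K[n, :n] = 1.0
      rhs = np.concatenate([2 * X.T @ y, [1.0]])
      w = np.linalg.solve(K, rhs)[:n]

      print("combination weights:", w)
      print("combined forecast:", X @ w)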

  6. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first-arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first-arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial-model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep-valley model, while shallow reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the range-front fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
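
    Simulated annealing, as used here, accepts occasional uphill moves with a temperature-controlled probability, which is what frees it from needing an accurate initial model. The following is a minimal generic sketch for an arbitrary misfit function; the joint seismic-gravity objective and its Pareto-based weighting are far more elaborate, and the toy misfit below is an assumption for illustration.

      import numpy as np

      def misfit(m):
          # Stand-in objective; a real joint inversion would sum weighted
          # seismic traveltime and gravity residuals here.
          return np.sum((m - np.array([2.0, -1.0, 0.5]))**2) + np.sin(5 * m).sum()**2

      rng = np.random.default_rng(1)
      m = rng.normal(size=3)            # no accurate initial model required
      E = misfit(m)
      T, Tmin, cool = 1.0, 1e-4, 0.995  # start temperature, stop, cooling rate

      best_m, best_E = m.copy(), E
      while T > Tmin:
          trial = m + T * rng.normal(size=m.size)   # step size shrinks with T
          E_t = misfit(trial)
          # Metropolis rule: always accept improvements, sometimes accept worse
          if E_t < E or rng.random() < np.exp(-(E_t - E) / T):
              m, E = trial, E_t
              if E < best_E:
                  best_m, best_E = m.copy(), E
          T *= cool

      print("best model:", best_m, "misfit:", best_E)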

  7. Optimal control of raw timber production processes

    Treesearch

    Ivan Kolenka

    1978-01-01

    This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...

  8. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  9. Optimization of life support systems and their systems reliability

    NASA Technical Reports Server (NTRS)

    Fan, L. T.; Hwang, C. L.; Erickson, L. E.

    1971-01-01

    The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem that has been considered, the procedure involves the establishment of a set of system equations (or a mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of system reliability of life support systems and subsystems; (7) modeling, simulation and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.

  10. [Extraction Optimization of Rhizome of Curcuma longa by Response Surface Methodology and Support Vector Regression].

    PubMed

    Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan

    2015-12-01

    To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa. On the basis of single-factor experiments, the ethanol concentration, the ratio of liquid to solid, and the microwave time were selected for further optimization. Support Vector Regression (SVR) and Central Composite Design-Response Surface Methodology (CCD) were used to design and establish models respectively, while Particle Swarm Optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The evaluation indicator, the sum of curcumin, demethoxycurcumin and bisdemethoxycurcumin determined by HPLC, was used. The optimal parameters of microwave-assisted extraction were as follows: ethanol concentration of 69%, ratio of liquid to solid of 21 : 1, and microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviation of yield was less than 1.2%.
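
    Particle swarm optimization, used above to tune the SVR model and search for its optimum, maintains a swarm of candidates attracted toward each particle's own best position and the swarm's best position. Below is a minimal generic PSO on an assumed objective (a stand-in for negative predicted yield over normalized extraction parameters); it is not the paper's coupled PSO-SVR code.

      import numpy as np

      def objective(x):
          # Assumed stand-in for negative predicted yield
          return np.sum((x - 0.7)**2)

      rng = np.random.default_rng(2)
      n_particles, dim, iters = 30, 3, 200
      w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights

      x = rng.uniform(0, 1, (n_particles, dim))   # positions in [0, 1]^3
      v = np.zeros_like(x)
      pbest = x.copy()
      pbest_val = np.apply_along_axis(objective, 1, x)
      g = pbest[np.argmin(pbest_val)]             # global best position

      for _ in range(iters):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = np.clip(x + v, 0.0, 1.0)
          vals = np.apply_along_axis(objective, 1, x)
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          g = pbest[np.argmin(pbest_val)]

      print("best point:", g, "value:", objective(g))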

  11. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model in order to reduce the computational burden, is the key to such studies. However, previous studies are generally based on a stand-alone surrogate model and rarely try to improve the approximation accuracy of the surrogate model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conduct a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
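
    The transformation described, from a linear-fractional yield objective to a higher-dimensional linear program, is the classical Charnes-Cooper substitution: to maximize c'v / d'v, set w = t*v with d'w = 1 and t >= 0, and maximize c'w under the correspondingly scaled constraints. The sketch below applies it to an assumed three-reaction toy network; it is an illustration of the transformation, not the paper's framework or its E. coli models.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: uptake v1 splits into product v2 and maintenance v3.
      # Maximize yield v2/v1  s.t.  v1 - v2 - v3 = 0,  v3 >= 0.2*v1,  0 <= v1 <= 10.
      # Charnes-Cooper: w = t*v with w1 = 1 (denominator normalized), t >= 0.
      c = [0.0, -1.0, 0.0, 0.0]                  # maximize w2 (linprog minimizes)
      A_eq = [[1.0, -1.0, -1.0, 0.0],            # S w = 0
              [1.0,  0.0,  0.0, 0.0]]            # d' w = 1
      b_eq = [0.0, 1.0]
      A_ub = [[1.0, 0.0,  0.0, -10.0],           # w1 <= 10 t   (i.e., v1 <= 10)
              [0.2, 0.0, -1.0,   0.0]]           # 0.2 w1 - w3 <= 0
      b_ub = [0.0, 0.0]
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * 4)
      w, t = res.x[:3], res.x[3]
      print("optimal yield:", w[1])              # value of c'v / d'v
      print("a yield-optimal flux vector v:", w / t)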

  13. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
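
    The twin-experiment idea is easy to demonstrate at toy scale: generate synthetic observations from a model run with a known parameter value, then recover that value by minimizing a misfit accumulated over the assimilation window. The sketch below uses an assumed scalar stand-in for the SST anomaly equation and a generic bounded optimizer in place of the adjoint-based 4D-Var machinery of the ICM.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def run_model(alpha, T0=0.5, n=100, dt=0.1):
          # Assumed toy SST anomaly equation: forcing of strength alpha, damping
          T = T0
          out = np.empty(n)
          for i in range(n):
              T += dt * (alpha * np.sin(0.3 * i) - 0.2 * T)
              out[i] = T
          return out

      alpha_true = 0.8                    # "truth" used to create observations
      obs = run_model(alpha_true)         # synthetic observations (twin setup)

      def J(alpha):
          # 4D-Var-style misfit summed over the assimilation window
          return np.sum((run_model(alpha) - obs)**2)

      result = minimize_scalar(J, bounds=(0.0, 2.0), method="bounded")
      print("recovered parameter:", result.x)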

  14. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial conditions are optimized are compared with experiments in which both the initial conditions and this additional model parameter are optimized. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  15. Optimal allocation in annual plants and its implications for drought response

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Smith, Matthew; Purves, Drew

    2015-04-01

    The concept of plant optimality refers to the plastic behaviour of plants that maximizes lifetime and offspring fitness. Optimality concepts have been used in vegetation models for a variety of processes, including stomatal conductance, leaf phenology and biomass allocation. Including optimality in vegetation models has the advantage of creating process-based models of relatively low complexity, in terms of parameter numbers, that are nevertheless capable of reproducing complex plant behaviour. We present a general model of plant growth for annual plants based on the hypothesis that plants allocate biomass to aboveground and belowground vegetative organs in order to maintain an optimal C:N ratio. The model also represents reproductive growth through a second optimality criterion, which states that plants flower when they reach peak nitrogen uptake. We apply this model to wheat and maize crops at 15 locations corresponding to FLUXNET cropland sites. The model parameters are constrained using a Bayesian fitting algorithm with eddy covariance data, satellite-derived vegetation indices (specifically the MODIS fAPAR product) and field-level crop yield data. We use the model to simulate the plant drought response under the assumption of plant optimality and show that the plants maintain unstressed total biomass levels under drought for a reduction in precipitation of up to 40%. Beyond that level the plant response stops being plastic and growth decreases sharply. This behaviour results simply from the optimal allocation criterion, as the model includes no explicit drought sensitivity component. Models that use plant optimality concepts are a useful tool for simulating plant response to stress without the addition of artificial thresholds and parameters.

  16. Combining Simulation and Optimization Models for Hardwood Lumber Production

    Treesearch

    G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman

    1991-01-01

    Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...

  17. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. In addition, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
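
    Harmony search, the heuristic driving the optimization layer above, improvises each new candidate from a memory of good solutions, with occasional pitch adjustment and occasional fully random components. Below is a minimal generic sketch with an assumed toy cost function standing in for the misfit between simulated and observed concentrations; it is not linked to MODFLOW/MT3DMS.

      import numpy as np

      def cost(x):
          # Stand-in for the misfit between simulated and observed data
          return np.sum((x - np.array([3.0, 1.0, 4.0]))**2)

      rng = np.random.default_rng(3)
      dim, lo, hi = 3, 0.0, 5.0
      hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.2, 5000

      # Harmony memory: rows are candidate solutions
      HM = rng.uniform(lo, hi, (hms, dim))
      costs = np.array([cost(h) for h in HM])

      for _ in range(iters):
          new = np.empty(dim)
          for j in range(dim):
              if rng.random() < hmcr:                  # draw from memory...
                  new[j] = HM[rng.integers(hms), j]
                  if rng.random() < par:               # ...with pitch adjustment
                      new[j] += bw * rng.uniform(-1, 1)
              else:                                    # or take a random value
                  new[j] = rng.uniform(lo, hi)
          new = np.clip(new, lo, hi)
          worst = np.argmax(costs)
          if cost(new) < costs[worst]:                 # replace the worst harmony
              HM[worst], costs[worst] = new, cost(new)

      print("best solution:", HM[np.argmin(costs)])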

  18. Optimization of autoregressive, exogenous inputs-based typhoon inundation forecasting models using a multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ouyang, Huei-Tau

    2017-07-01

    Three types of models for forecasting inundation levels during typhoons were optimized: the linear autoregressive model with exogenous inputs (LARX), the nonlinear autoregressive model with exogenous inputs with wavelet function (NLARX-W), and the nonlinear autoregressive model with exogenous inputs with sigmoid function (NLARX-S). Forecast performance was evaluated by three indices: the coefficient of efficiency, the error in peak water level, and the relative time shift. Historical typhoon data were used to establish water-level forecasting models that satisfy all three objectives. A multi-objective genetic algorithm was employed to search for the Pareto-optimal model set that satisfies all three objectives and to select the ideal models for the three indices. Findings showed that the optimized nonlinear models (NLARX-W and NLARX-S) outperformed the linear model (LARX). Among the nonlinear models, the optimized NLARX-W model achieved a more balanced performance on the three indices than the NLARX-S model and is recommended for inundation forecasting during typhoons.

  19. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ, based on portable near-infrared spectroscopy, the modeling and analysis methods for in-situ detection were studied with 66 rock core samples from the No. 2 well drilling of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. With 4 different modeling data optimization methods, namely principal component-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD, 2 modeling methods, partial least squares (PLS) and back propagation artificial neural network (BPANN), and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling data optimization method, and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for the two modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions of the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum data formats. Except when the reflectance spectra are used with the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.

  20. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk. The objective of the mean-variance model is to minimize portfolio risk while achieving a target rate of return, with variance used as the risk measure. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model with those of an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio gives better performance, as it achieves a higher performance ratio than the equally weighted portfolio.
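
    With short sales allowed, the mean-variance problem (minimize portfolio variance w'Σw subject to a target return and fully invested weights) reduces to a linear KKT system. The sketch below solves it for assumed toy returns and covariances and compares the result with the equally weighted portfolio, mirroring the study's comparison; taking the performance ratio as return divided by risk is an assumption about the metric used.

      import numpy as np

      # Assumed toy inputs: expected returns and covariance of three assets
      mu = np.array([0.08, 0.12, 0.10])
      Sigma = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.15, 0.03],
                        [0.01, 0.03, 0.12]])
      r_target = 0.10

      # KKT system for: min w' Sigma w  s.t.  mu'w = r_target, 1'w = 1
      n = len(mu)
      K = np.block([[2 * Sigma, mu[:, None], np.ones((n, 1))],
                    [mu[None, :], np.zeros((1, 2))],
                    [np.ones((1, n)), np.zeros((1, 2))]])
      rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])
      w = np.linalg.solve(K, rhs)[:n]

      ew = np.ones(n) / n                       # equally weighted benchmark
      for name, p in [("mean-variance", w), ("equal-weight", ew)]:
          ret, vol = mu @ p, np.sqrt(p @ Sigma @ p)
          print(name, "return:", round(ret, 4), "risk:", round(vol, 4),
                "return/risk:", round(ret / vol, 3))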

  1. Analysis of EnergyPlus for use in residential building energy optimization

    NASA Astrophysics Data System (ADS)

    Spencer, Justin S.

    This work explored the utility of EnergyPlus as a simulation engine for doing residential building energy optimization, with the objective of finding the modeling areas that require further development in EnergyPlus for residential optimization applications. This work was conducted primarily during 2006-2007, with publication occurring later in 2010. The assessments and recommendations apply to the simulation tool versions available in 2007. During this work, an EnergyPlus v2.0 (2007) input file generator was developed for use in BEopt 0.8.0.4 (2007). BEopt 0.8.0.4 is a residential Building Energy optimization program developed at the National Renewable Energy Laboratory in Golden, Colorado. Residential modeling capabilities of EnergyPlus v2.0 were scrutinized and tested. Modeling deficiencies were identified in a number of areas. These deficiencies were compared to deficiencies in the DOE2.2 V44E4(2007)/TRNSYS simulation engines. The highest priority gaps in EnergyPlus v2.0's residential modeling capability are in infiltration, duct leakage, and foundation modeling. Optimization results from DOE2.2 V44E4 and EnergyPlus v2.0 were analyzed to search for modeling differences that have a significant impact on optimization results. Optimal buildings at different energy savings levels were compared to look for biases. It was discovered that the EnergyPlus v2.0 optimizations consistently chose higher wall insulation levels than the DOE2.2 V44E4 optimizations. The points composing the optimal paths chosen by DOE2.2 V44E4 and EnergyPlus v2.0 were compared to look for points chosen by one optimization that were significantly different from the other optimal path. These outliers were compared to consensus optimal points to determine the simulation differences that cause disparities in the optimization results. The differences were primarily caused by modeling of window radiation exchange and HVAC autosizing.

  2. Vector-model-supported approach in prostate plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iteration number with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer a much shortened planning time and iteration number without compromising plan quality.

  3. Optimal Spatial Design of Capacity and Quantity of Rainwater Catchment Systems for Urban Flood Mitigation

    NASA Astrophysics Data System (ADS)

    Huang, C.; Hsu, N.

    2013-12-01

    This study incorporates Low-Impact Development (LID) rainwater catchment technology into the Storm Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation, and proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities, so that the reduction in flooding under a variety of design forms can be simulated by SWMM. Moreover, we calculate the net benefit, equal to the decrease in inundation loss minus the facility cost, and the best solution of the simulation method serves as the initial solution for the optimization model. In the optimization method, we first use the simulation results and a Back-Propagation Neural Network (BPNN) to develop a water-level simulation model of the urban drainage system to replace SWMM, whose graphical-user-interface operation is hard to couple with an optimization model and method. We then embed the BPNN-based simulation model into the developed optimization model, whose objective function minimizes the negative net benefit. Finally, we establish a tabu-search-based algorithm to optimize the planning solution. This study applies the developed method in Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model within the optimization model not only finds solutions 12.75% better than the simulation method, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% in historical flood events.

  4. NARMAX model identification of a palm oil biodiesel engine using multi-objective optimization differential evolution

    NASA Astrophysics Data System (ADS)

    Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin

    2017-09-01

    This paper presents black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm: minimizing the number of terms in a model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied to validate the candidate models obtained from the MOODE algorithm, leading to the selection of an optimal model.

  5. TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Gordon, J; Chetty, I

    2014-06-15

    Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: the tumor doubling time Td, the half-life of dying cells Tr, and the cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.

  6. A novel medical information management and decision model for uncertain demand optimization.

    PubMed

    Bi, Ya

    2015-01-01

    Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes it difficult to decide on the procurement volume accurately. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The aim of this work is to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the computational complexity of the model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.

  7. Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.

    PubMed

    Aprasoff, Jonathan; Donchin, Opher

    2012-04-01

    Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many have rejected that idea in favor of a forward model hypothesis. In theory, the forward model predicts the upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. While OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.

  8. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520

  10. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thereby achieve effective ECT. In this paper, we present a model-based optimization approach toward the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was the pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.

  11. Topography-based Flood Planning and Optimization Capability Development Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R.; Tasseff, Byron A.; Bent, Russell W.

    2014-02-26

    Globally, water-related disasters are among the most frequent and costly natural hazards. Flooding inflicts catastrophic damage on critical infrastructure and population, resulting in substantial economic and social costs. NISAC is developing LeveeSim, a suite of nonlinear and network optimization models, to predict optimal barrier placement to protect critical regions and infrastructure during flood events. LeveeSim currently includes a high-performance flood model to simulate overland flow, as well as a network optimization model to predict optimal barrier placement during a flood event. The LeveeSim suite models the effects of flooding in predefined regions. By manipulating a domain’s underlying topography, developers altered flood propagation to reduce detrimental effects in areas of interest. This numerical altering of a domain’s topography is analogous to building levees, placing sandbags, etc. To induce optimal changes in topography, NISAC used a novel application of an optimization algorithm to minimize flooding effects in regions of interest. To develop LeveeSim, NISAC constructed and coupled hydrodynamic and optimization algorithms. NISAC first implemented its existing flood modeling software to use massively parallel graphics processing units (GPUs), which allowed for the simulation of larger domains and longer timescales. NISAC then implemented a network optimization model to predict optimal barrier placement based on output from flood simulations. As proof of concept, NISAC developed five simple test scenarios, and optimized topographic solutions were compared with intuitive solutions. Finally, as an early validation example, barrier placement was optimized to protect an arbitrary region in a simulation of the historic Taum Sauk dam breach.

  12. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.

  13. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    NASA Astrophysics Data System (ADS)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. This article deals with the choice of the grid and computational domain parameters for the optimization of centrifugal compressor impellers using computational fluid dynamics. Finding and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational resources effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size, and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction in computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of 4. In addition, it is established that some parameters have a major impact on the modelling results.

  14. Multi-level optimization of a beam-like space truss utilizing a continuum model

    NASA Technical Reports Server (NTRS)

    Yates, K.; Gurdal, Z.; Thangjitham, S.

    1992-01-01

    A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.

  15. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation

    DTIC Science & Technology

    2016-09-01

    Approved for public release; distribution is unlimited. This thesis applies optimization modeling and discrete-event simulation to USMC inventory control. This construct can potentially provide an effective means of improving order management decisions. However ...

  16. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
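
    As a concrete instance of the mathematical programming techniques the review covers, a small conjunctive-use plan can be written as a linear program: meet a water demand from surface water and wells at minimum operational cost, subject to capacity limits. All numbers below are invented for illustration.

      import numpy as np
      from scipy.optimize import linprog

      # Decision variables: [surface water, well 1, well 2] deliveries (Mm^3)
      cost = [40.0, 55.0, 70.0]            # assumed unit costs per Mm^3
      demand = 120.0
      caps = [80.0, 50.0, 60.0]            # source capacity limits

      # Meet demand exactly at minimum cost (linprog minimizes by default)
      res = linprog(cost,
                    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
                    bounds=list(zip([0.0] * 3, caps)))
      print("allocation:", res.x, "total cost:", res.fun)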

  18. First-Order Frameworks for Managing Models in Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.

  19. A flexible, interactive software tool for fitting the parameters of neuronal models.

    PubMed

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  20. A flexible, interactive software tool for fitting the parameters of neuronal models

    PubMed Central

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool. PMID:25071540

  1. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that minimizes the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal weights differ across the component stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
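
    For illustration, the mean-variance problem described above (minimize portfolio variance subject to full investment and a target return) can be set up directly with off-the-shelf tools. The sketch below uses synthetic weekly returns in place of the FBMKLCI data, which are not reproduced in the record.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns, target_return):
    """Minimize w' S w  s.t.  sum(w) = 1,  w' mu >= target,  w >= 0."""
    mu = returns.mean(axis=0)                       # sample mean returns
    cov = np.cov(returns, rowvar=False)             # sample covariance matrix
    n = len(mu)
    cons = [
        {"type": "eq",   "fun": lambda w: w.sum() - 1.0},        # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - target_return},
    ]
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# Synthetic stand-in for 260 weeks of returns on 20 stocks
rng = np.random.default_rng(0)
weekly = rng.normal(0.002, 0.02, size=(260, 20))
print(min_variance_weights(weekly, target_return=0.002).round(3))
```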

  2. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, using the proposed model, we select the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level stays as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
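
    A minimal backward-induction sketch of this kind of stochastic dynamic program is given below. The suppliers, cost and demand distributions, and the quadratic reference-tracking penalty are illustrative assumptions, not the authors' actual data or model.

```python
import itertools
import numpy as np

T, REF = 4, 10                       # horizon, inventory reference point
levels = np.arange(0, 21)            # discretized inventory states
orders = np.arange(0, 16)            # admissible order quantities
demand_pmf = {6: 0.3, 8: 0.4, 10: 0.3}                          # random demand
supplier_cost = {"A": {2.0: 0.5, 3.0: 0.5}, "B": {2.5: 1.0}}    # unit-cost pmf per supplier

V = {T: {s: 0.0 for s in levels}}    # terminal value function
policy = {}
for t in reversed(range(T)):
    V[t] = {}
    for s in levels:
        best = (np.inf, None)
        for sup, q in itertools.product(supplier_cost, orders):
            exp_cost = 0.0           # expectation over cost and demand outcomes
            for c, pc in supplier_cost[sup].items():
                for d, pd in demand_pmf.items():
                    nxt = int(np.clip(s + q - d, levels[0], levels[-1]))
                    exp_cost += pc * pd * (c * q + (nxt - REF) ** 2 + V[t + 1][nxt])
            if exp_cost < best[0]:
                best = (exp_cost, (sup, q))
        V[t][s], policy[(t, s)] = best

print(policy[(0, 10)])   # optimal (supplier, order quantity) at t=0, inventory 10
```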

  3. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which has proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.

  4. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then yields a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  5. Environmental optimal control strategies based on plant canopy photosynthesis responses and greenhouse climate model

    NASA Astrophysics Data System (ADS)

    Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao

    2006-11-01

    The essential goal of intelligent greenhouse environment optimal control is to increase the grower's income and to save energy. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia and multiple time scales, which makes optimal control difficult, and model-based optimal control especially so. The optimal control problem of the plant environment in an intelligent greenhouse is therefore researched here. A hierarchical greenhouse environment control system was constructed. In the first level, data are measured and the actuators are controlled. In the second level, optimal setting points for the controlled climate variables in the greenhouse are calculated and chosen. Market analysis and planning are completed in the third level. This paper addresses the problem of choosing the optimal setting points. First, a model of plant canopy photosynthesis responses and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the daytime optimal goals were set according to the principle of maximal photosynthesis rate, while the nighttime goals were set by an energy-saving principle subject to conditions for good plant growth. The environment optimal control setting points were then computed by a genetic algorithm (GA). Comparison of the optimal results with data recorded in a real system shows that the method is reasonable and can achieve energy saving and the maximal photosynthesis rate in an intelligent greenhouse.

  6. Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. The resulting effect of the performance optimizing adaptive controller is to modify the initial reference model into a time-varying reference model which satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by the real-time solutions of the time-varying Riccati and Sylvester equations coupled with the least-squares parameter estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application of maneuver load alleviation control for a flexible aircraft.

  7. MPSalsa Version 1.5: A Finite Element Computer Program for Reacting Flow Problems: Part 1 - Theoretical Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.

    1999-01-01

    The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  8. Current Status of Legislation on Dietary Products for Sportspeople in a European Framework.

    PubMed

    Martínez-Sanz, José Miguel; Sospedra, Isabel; Baladía, Eduard; Arranz, Laura; Ortiz-Moncada, Rocío; Gil-Izquierdo, Angel

    2017-11-08

    The consumption of nutritional ergogenic aids is conditioned by laws/regulations, but standards/regulations vary between countries. The aim of this review is to explore legislative documents that regulate the use of nutritional ergogenic aids intended for sportspeople in a Spanish/European framework. A narrative review has been developed from official websites of Spanish (Spanish Agency of the Consumer, Food Safety, and Nutrition) and European (European Commission and European Food Safety Authority) bodies. A descriptive analysis of documents was performed. Eighteen legislative documents have been compiled in three sections: (1) Advertising of any type of food and/or product; (2) Composition, labeling, and advertising of foods; (3) Nutritional ergogenic aids. In spite of the existence of these legal documents, the regulation lacks guidance on the use/application of nutritional ergogenic aids for sportspeople. It is essential to prevent the introduction or dissemination of false, ambiguous, or inexact information and contents that induce an error in the receivers of the information. In this field, it is worth highlighting the roles of the European Food Safety Authority and the World Anti-Doping Agency, which provide information about consumer guidelines, prescribing practices, and recommendations for the prudent use of nutritional ergogenic aids.

  9. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the need to solve a boundary value Poisson equation inside the molecular interface and to compute the related interface jump conditions numerically. Moreover, the new MIB algorithm is computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, on which analytical solutions are available, and on a series of proteins of various sizes.

  10. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
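
    The inexact Newton strategy mentioned in this record (an outer Newton iteration whose linearized systems are solved only approximately by a Krylov method) can be sketched in a few lines. This is a generic Jacobian-free illustration, not TranAir's implementation; the forcing-term choice is one common heuristic.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def inexact_newton(F, u0, tol=1e-8, max_iter=50):
    """Jacobian-free inexact Newton: solve J s = -F(u) only loosely, with
    a forcing tolerance tied to the current residual (an Eisenstat-Walker-
    style choice; a sketch, not TranAir's actual code)."""
    u = u0.copy()
    for _ in range(max_iter):
        r = F(u)
        rnorm = np.linalg.norm(r)
        if rnorm < tol:
            break
        eps = 1e-7
        # Finite-difference Jacobian-vector products, no explicit Jacobian
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        eta = min(0.5, np.sqrt(rnorm))        # loose tolerance far from solution
        s, _ = gmres(Jv, -r, rtol=eta)        # 'rtol' keyword assumes SciPy >= 1.12
        u = u + s
    return u

# Tiny nonlinear test: u**3 + u - 1 = 0, componentwise
print(inexact_newton(lambda u: u**3 + u - 1.0, np.zeros(3)))
```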

  11. How compliant are dental practice Facebook pages with Australian health care advertising regulations? A Netnographic review.

    PubMed

    Holden, Acl; Spallek, H

    2018-03-01

    The National Law that regulates the dental and other health care professions in Australia sets out regulations that dictate how dental practices are to advertise. This study examines the extent to which the profession complies with these regulations and the potential impact that advertising may have upon professionalism. A Facebook search of 38 local government areas in Sydney, New South Wales, was carried out to identify dental practices that had pages on this social media site. A framework for assessment of compliance was developed using the regulatory guidelines and was used to conduct a netnographic review. Two hundred and sixty-six practice pages were identified from across the 38 regions. Of these pages, 71.05% were in breach of the National Law in their use of testimonials, 5.26% displayed misleading or false information, 4.14% displayed offers that had no clear terms and conditions or had inexact pricing, 19.55% had pictures or text that was likely to create unrealistic expectations of treatment benefit and 16.92% encouraged the indiscriminate and unnecessary utilization of health services. This study found that compliance with the National Law by the Facebook pages surveyed was poor. © 2017 Australian Dental Association.

  12. A scalable nonlinear fluid-structure interaction solver based on a Schwarz preconditioner with isogeometric unstructured coarse spaces in 3D

    NASA Astrophysics Data System (ADS)

    Kong, Fande; Cai, Xiao-Chuan

    2017-07-01

    Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, that are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.

  13. Preventing medico-legal issues in clinical practice

    PubMed Central

    Raveesh, Bevinahalli N.; Nayak, Ragavendra B.; Kumbar, Shivakumar F.

    2016-01-01

    The medical profession is considered to be one of the noblest professions in the world. The practice of medicine is capable of rendering noble service to humanity provided due care, sincerity, efficiency, and professional skill is observed by the doctors. However, today, the patient–doctor relationship has almost diminished its fiduciary character and has become more formal and structured. Doctors are no longer regarded as infallible and beyond questioning. Corporatization of health care has made it like any other business, and the medical profession is increasingly being guided by the profit motive rather than that of service. On the other hand, a well-publicized malpractice case can ruin the doctor's career and practice. The law, like medicine, is an inexact science. One cannot predict with certainty an outcome of cases many a time. It depends on the particular facts and circumstances of the case, and also the personal notions of the judge concerned who is hearing the case. The axiom “you learn from your mistakes” is too little honored in healthcare. The best way to handle medico-legal issues is by preventing them, and this article tries to enumerate the preventive measures in safeguarding the doctor against negligence suit. PMID:27891020

  14. Oil pipeline geohazard monitoring using optical fiber FBG strain sensors (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Salazar-Ferro, Andres; Mendez, Alexis

    2016-04-01

    Pipelines are naturally vulnerable to operational, environmental and man-made effects such as internal erosion and corrosion; mechanical deformation due to geophysical risks and ground movements; leaks from neglect and vandalism; as well as encroachments from nearby excavations or illegal intrusions. The actual detection and localization of incipient and advanced faults in pipelines is a very difficult, expensive and inexact task. Anything that operators can do to mitigate the effects of these faults will provide increased reliability, reduced downtime and maintenance costs, as well as increased revenues. This talk will review the on-line monitoring of an extensive network of oil pipelines in service in Colombia using optical fiber Bragg grating (FBG) strain sensors for the measurement of strains and bending caused by geohazard risks such as soil movements, landslides, settlements, flooding and seismic activity. The FBG sensors were mounted on the outside of the pipelines at discrete locations where geohazard risk was expected. The system has been in service for the past 3 years with over 1,000 strain sensors mounted. The technique has been reliable and effective in giving advanced warning of accumulated pipeline strains as well as possible ruptures.

  15. Portfolio of automated trading systems: complexity and learning set size issues.

    PubMed

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.

  16. Calculation of global carbon dioxide emissions: Review of emission factors and a new approach taking fuel quality into consideration

    NASA Astrophysics Data System (ADS)

    Hiete, Michael; Berner, Ulrich; Richter, Otto

    2001-03-01

    Anthropogenic carbon dioxide emissions resulting from fossil fuel consumption play a major role in the current debate on climate change. Carbon dioxide emissions are calculated on the basis of a carbon dioxide emission factor (CEF) for each type of fuel. Published CEFs are reviewed in this paper. It was found that for nearly all CEFs, fuel quality is not adequately taken into account. This is especially true in the case of the CEFs for coal. Published CEFs are often based on generalized assumptions and inexact conversions. In particular, conversions from gross calorific value to net calorific value were examined. A new method for determining CEFs as a function of calorific value (for coal, peat, and natural gas) and specific gravity (for crude oil) is presented that permits CEFs to be calculated for specific fuel qualities. A review of proportions of fossil fuels that remain unoxidized owing to incomplete combustion or inclusion in petrochemical products, etc., (stored carbon) shows that these figures need to be updated and checked for their applicability on a global scale, since they are mostly based on U.S. data.
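
    The basic accounting the review is concerned with can be illustrated with one line of arithmetic: emissions = fuel mass × net calorific value × CEF, with a gross-to-net calorific value conversion applied first. All numerical values below are illustrative assumptions, not figures from the paper.

```python
# Illustrative emission calculation (all values assumed for illustration)
fuel_mass_t = 1000.0                 # tonnes of hard coal
gcv_tj_per_t = 0.0285                # gross calorific value [TJ/t], assumed
ncv_tj_per_t = 0.95 * gcv_tj_per_t   # assumed gross-to-net conversion factor
cef_tco2_per_tj = 94.6               # quality-dependent CEF [t CO2/TJ], assumed

energy_tj = fuel_mass_t * ncv_tj_per_t
print(energy_tj * cef_tco2_per_tj)   # tonnes of CO2
```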

  17. Viscosity of thickened fluids that relate to the Australian National Standards.

    PubMed

    Karsten Hadde, Enrico; Ann Yvette Cichero, Julie; Michael Nicholson, Timothy

    2016-08-01

    In 2007, Australia published standardized terminology and definitions for three levels of thickened fluids used in the management of dysphagia. This study examined the thickness of the current Australian National Fluid Standards rheologically (i.e. viscosity, yield stress) and correlated these results with the "fork test", as described in the national standards. Clinicians who prescribe or work with thickened liquids and laypersons were recruited to categorize 15 different thickened fluids of known viscosities using the fork test. The mean apparent viscosity and the yield stress for each fluid category were calculated. Clear responses were obtained by both clinicians and laypersons for very thin fluids (< 90 mPa.s) and very thick fluids (> 1150 mPa.s), but large variations of responses were seen for intermediate viscosities. Measures of viscosity and yield stress were important in allocating liquids to different categories. Three bands of fluid viscosity with distinct intermediate band gaps and associated yield stress measures were clearly identifiable and are proposed as objective complements to the Australian National Standards. The "fork test" provides rudimentary information about both viscosity and yield stress, but is an inexact measure of both variables.

  18. Current Status of Legislation on Dietary Products for Sportspeople in a European Framework

    PubMed Central

    Arranz, Laura; Ortiz-Moncada, Rocío

    2017-01-01

    The consumption of nutritional ergogenic aids is conditioned by laws/regulations, but standards/regulations vary between countries. The aim of this review is to explore legislative documents that regulate the use of nutritional ergogenic aids intended for sportspeople in a Spanish/European framework. A narrative review has been developed from official websites of Spanish (Spanish Agency of the Consumer, Food Safety, and Nutrition) and European (European Commission and European Food Safety Authority) bodies. A descriptive analysis of documents was performed. Eighteen legislative documents have been compiled in three sections: (1) Advertising of any type of food and/or product; (2) Composition, labeling, and advertising of foods; (3) Nutritional ergogenic aids. In spite of the existence of these legal documents, the regulation lacks guidance on the use/application of nutritional ergogenic aids for sportspeople. It is essential to prevent the introduction or dissemination of false, ambiguous, or inexact information and contents that induce an error in the receivers of the information. In this field, it is worth highlighting the roles of the European Food Safety Authority and the World Anti-Doping Agency, which provide information about consumer guidelines, prescribing practices, and recommendations for the prudent use of nutritional ergogenic aids. PMID:29117104

  19. Constrained Low-Rank Learning Using Least Squares-Based Regularization.

    PubMed

    Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong

    2017-12-01

    Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
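
    The core step of the inexact augmented Lagrange multiplier (ALM) solvers named above is singular value thresholding. The sketch below applies it inside a generic inexact ALM loop for robust PCA, a standard member of this solver family; it illustrates the algorithm class, not the paper's specific constrained LRR objective.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*,
    the core step inside inexact ALM schemes for nuclear norm problems."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def ialm_rpca(D, lam=None, mu=1.0, rho=1.5, n_iter=50):
    """Generic inexact ALM sketch for  min ||L||_* + lam*||S||_1  s.t. L+S=D."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    L, S, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)                     # low-rank update
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)  # sparse update
        Y = Y + mu * (D - L - S)                              # dual ascent
        mu *= rho                                             # inexact: grow penalty
    return L, S

D = np.outer(np.arange(5.0), np.ones(6)); D[2, 3] += 5.0      # low rank + spike
L, S = ialm_rpca(D)
print(np.round(S, 1))
```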

  20. Medical negligence: Indian legal perspective.

    PubMed

    Agrawal, Amit

    2016-10-01

    A basic knowledge of how judicial forums deal with the cases relating to medical negligence is of absolute necessity for doctors. The need for such knowledge is more now than before in light of higher premium being placed by the Indian forums on the value of human life and suffering, and perhaps rightly so. Judicial forums, while seeking to identify delinquents and delinquency in the cases of medical negligence, actually aim at striking a careful balance between the autonomy of a doctor to make judgments and the rights of a patient to be dealt with fairly. In the process of adjudication, the judicial forums tend to give sufficient leeway to doctors and expressly recognize the complexity of the human body, inexactness of medical science, the inherent subjectivity of the process, genuine scope for error of judgment, and the importance of the autonomy of the doctors. The law does not prescribe the limits of high standards that can be adopted but only the minimum standard below which the patients cannot be dealt with. Judicial forums have also signaled an increased need of the doctors to engage with the patients during treatment, especially when the line of treatment is contested, has serious side effects and alternative treatments exist.

  1. The efficacy of using inventory data to develop optimal diameter increment models

    Treesearch

    Don C. Bragg

    2002-01-01

    Most optimal tree diameter growth models have arisen through either the conceptualization of physiological processes or the adaptation of empirical increment models. However, surprisingly little effort has been invested in the melding of these approaches even though it is possible to develop theoretically sound, computationally efficient optimal tree growth models...

  2. Co-Optimization of Electricity Transmission and Generation Resources for Planning and Policy Analysis: Review of Concepts and Modeling Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Ho, Jonathan; Hobbs, Benjamin F.

    2016-05-01

    The recognition of transmission's interaction with other resources has motivated the development of co-optimization methods to optimize transmission investment while simultaneously considering tradeoffs with investments in electricity supply, demand, and storage resources. For a given set of constraints, co-optimized planning models provide solutions that have lower costs than solutions obtained from decoupled optimization (transmission-only, generation-only, or iterations between them). This paper describes co-optimization and provides an overview of approaches to co-optimizing transmission options, supply-side resources, demand-side resources, and natural gas pipelines. In particular, the paper provides an up-to-date assessment of the present and potential capabilities of existing co-optimization tools, and it discusses needs and challenges for developing advanced co-optimization models.

  3. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  4. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
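
    A minimal illustration of the Pyomo dynamic-optimization extension (pyomo.dae) mentioned above is sketched below for a toy first-order system; the example model is made up for illustration, the availability of the IPOPT solver is an assumption, and the stochastic-programming extension is omitted for brevity.

```python
# Toy optimal control with pyomo.dae: minimize an integral cost for x' = -x + u
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           TransformationFactory, SolverFactory, minimize)
from pyomo.dae import ContinuousSet, DerivativeVar, Integral

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 1))
m.x = Var(m.t, initialize=1.0)
m.u = Var(m.t, bounds=(-2, 2), initialize=0.0)
m.dx = DerivativeVar(m.x, wrt=m.t)

m.ode = Constraint(m.t, rule=lambda m, t: m.dx[t] == -m.x[t] + m.u[t])
m.ic = Constraint(rule=lambda m: m.x[0] == 1.0)
m.cost = Integral(m.t, wrt=m.t, rule=lambda m, t: m.x[t]**2 + 0.1 * m.u[t]**2)
m.obj = Objective(expr=m.cost, sense=minimize)

# Discretize the continuous-time problem by orthogonal collocation, then solve
TransformationFactory('dae.collocation').apply_to(m, nfe=10, ncp=3)
SolverFactory('ipopt').solve(m)     # assumes IPOPT is installed
print(m.x[1].value)
```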

  5. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated by using the new flexible approximation models for the violated local constraints.

  6. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  7. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to the aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
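
    The expected-improvement criterion used to pick additional samples has the standard closed form sketched below. In the paper's setting the hybrid kriging/RBF surrogate would supply the predictive mean and standard deviation; here they are simply passed in as arrays.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimization, given a surrogate's predictive mean mu
    and standard deviation sigma at candidate points (textbook formula)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against division by zero
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Candidates with equal predicted mean are ranked by their uncertainty:
print(expected_improvement(np.array([1.0, 1.0]), np.array([0.1, 0.5]), f_best=1.2))
```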

  8. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  9. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  10. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorted genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model based on Kernel Extreme Learning Machine (KELM) is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, reducing a huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired fidelity level of the surrogate, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management problem in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those solutions obtained from a one-shot surrogate model and the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains solution quality equivalent to that of NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing computational burden by up to 94% of the compute time. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
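
    In kernel form, the KELM surrogate named above reduces to solving (K + I/C)·β = y for the output weights, i.e. kernel ridge regression. A minimal sketch follows, with an RBF kernel and synthetic data standing in for the groundwater simulator outputs.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine surrogate: output weights solve
    (K + I/C) beta = y. A sketch of the surrogate type named in the
    abstract, not the authors' code; hyperparameters are assumptions."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma
    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self
    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

X = np.random.default_rng(0).uniform(-2, 2, size=(40, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2       # synthetic response surface
print(KELM().fit(X, y).predict(X[:3]))
```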

  11. Network-optimized congestion pricing : a parable, model and algorithm

    DOT National Transportation Integrated Search

    1995-05-31

    This paper recites a parable, formulates a model and devises an algorithm for optimizing tolls on a road network. Such tolls induce an equilibrium traffic flow that is at once system-optimal and user-optimal. The parable introduces the network-wide c...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William E.; Siirola, John Daniel

    We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
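
    A minimal sketch of the components this record names, using Pyomo's pyomo.mpec module; the example model itself is made up for illustration.

```python
from pyomo.environ import ConcreteModel, Var, Objective, TransformationFactory
from pyomo.mpec import Complementarity, complements

m = ConcreteModel()
m.x = Var()
m.y = Var()
m.obj = Objective(expr=(m.x - 1) ** 2 + (m.y - 1) ** 2)
# Complementarity condition:  0 <= y  perpendicular to  y - x + 1 >= 0
m.compl = Complementarity(expr=complements(m.y >= 0, m.y - m.x + 1 >= 0))

# One of the transformations the record mentions: rewrite the
# complementarity condition as simple nonlinear constraints
TransformationFactory('mpec.simple_nonlinear').apply_to(m)
```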

  13. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant roles to invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, while its computational workload is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  14. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters directly from terrain properties, so that no parameter calibration would be needed; unfortunately, the uncertainties associated with deriving parameters in this way are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using a PSO algorithm, to test its competence and to improve its performance; second, to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adopting a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
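
    A basic PSO loop with the linearly decreasing inertia weight described above is sketched below; the arccosine schedule for the acceleration coefficients is omitted in favor of fixed coefficients, while the particle and iteration counts follow the paper's recommended values of 20 and 30.

```python
import numpy as np

def pso(f, lb, ub, n_particles=20, n_iter=30, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Basic PSO with linearly decreasing inertia weight (a generic sketch,
    not the paper's full improved algorithm)."""
    rng = np.random.default_rng(1)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                      # global best
    for k in range(n_iter):
        w = w_max - (w_max - w_min) * k / (n_iter - 1)   # linear decrease
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([f(p) for p in x])
        better = vals < pval
        pbest[better], pval[better] = x[better], vals[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy objective: minimum at (3, 3)
print(pso(lambda p: ((p - 3.0) ** 2).sum(), np.zeros(2), np.full(2, 10.0)))
```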

  15. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  16. Minimal time spiking in various ChR2-controlled neuron models.

    PubMed

    Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel

    2018-02-01

    We use conductance based neuron models, and the mathematical modeling of optogenetics to define controlled neuron models and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.

  17. Parametric optimal control of uncertain systems under an optimistic value criterion

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

    It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly, so the optimal feedback control may be a complicated time-varying function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered in order to simplify the expression of the optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
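
    To make the Riccati remark concrete, the sketch below (an illustrative two-state system, not the paper's turbofan model, with scipy assumed available) integrates the finite-horizon Riccati differential equation backward in time and recovers the time-varying LQ feedback gain.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Finite-horizon LQ: u = -R^{-1} B^T P(t) x needs the Riccati ODE
    #   -dP/dt = A^T P + P A - P B R^{-1} B^T P + Q,  P(T) = P_T,
    # which rarely has a closed form; integrate it numerically instead.
    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2); R = np.array([[1.0]]); PT = np.eye(2)
    T = 5.0

    def riccati_rhs(s, p):        # s = T - t turns it into a forward ODE
        P = p.reshape(2, 2)
        dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
        return dP.ravel()

    sol = solve_ivp(riccati_rhs, (0.0, T), PT.ravel(), dense_output=True)

    def gain(t):                  # time-varying feedback gain K(t)
        P = sol.sol(T - t).reshape(2, 2)
        return np.linalg.solve(R, B.T @ P)

    print(gain(0.0))              # optimal gain at the initial time
    ```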

  18. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    NASA Astrophysics Data System (ADS)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with a distributed PV cluster, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
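
    For readers unfamiliar with the reduction, the toy second-order cone program below (cvxpy assumed available, all data synthetic and unrelated to the paper) shows the general form into which such non-convex dispatch models are relaxed.

    ```python
    import numpy as np
    import cvxpy as cp

    # Generic SOCP: minimize c^T x  s.t.  ||A x + b||_2 <= d^T x + e, x >= 0.
    np.random.seed(0)
    n = 4
    x = cp.Variable(n)
    c = np.ones(n)
    A = np.random.randn(3, n); b = np.random.randn(3)
    d = np.array([1.0, 0.0, 0.0, 0.0]); e = 5.0

    constraints = [cp.SOC(d @ x + e, A @ x + b),  # cone: t >= ||u||_2
                   x >= 0]
    prob = cp.Problem(cp.Minimize(c @ x), constraints)
    prob.solve()                                  # conic solvers handle SOCPs
    print(prob.status, x.value)
    ```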

  19. Energy-saving management modelling and optimization for lead-acid battery formation process

    NASA Astrophysics Data System (ADS)

    Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.

    2017-11-01

    In this paper, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model is established with the objective of minimizing the formation electricity cost in a single period. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation is shown using the PSO algorithm to solve this mathematical model, and the proposed optimization strategy is shown to be effective and readily adoptable for energy saving and efficiency optimization in battery production industries.

  20. Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results

    NASA Astrophysics Data System (ADS)

    Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef

    2017-04-01

    The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results with a focus on the geodetic results.

  1. Exploring How Technology Growth Limits Impact Optimal Carbon dioxide Mitigation Pathways

    EPA Science Inventory

    Energy system optimization models prescribe the optimal mix of technologies and fuels for meeting energy demands over a time horizon, subject to energy supplies, demands, and other constraints. When optimizing, these models will, to the extent allowed, favor the least cost combin...

  2. Parametric geometric model and hydrodynamic shape optimization of a flying-wing structure underwater glider

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao

    2017-12-01

    Combining high-precision numerical analysis methods with optimization algorithms to explore a design space systematically has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of a high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiment (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing underwater glider increases by 9.1%.

  3. Postaudit of optimal conjunctive use policies

    USGS Publications Warehouse

    Nishikawa, Tracy; Martin, Peter; ,

    1998-01-01

    A simulation-optimization model was developed for the optimal management of the city of Santa Barbara's water resources during a drought; however, this model addressed only groundwater flow and not the advective-dispersive, density-dependent transport of seawater. Zero-m freshwater head constraints at the coastal boundary were used as surrogates for the control of seawater intrusion. In this study, the strategies derived from the simulation-optimization model using two surface water supply scenarios are evaluated using a two-dimensional, density-dependent groundwater flow and transport model. Comparisons of simulated chloride mass fractions are made between maintaining the actual pumping policies of the 1987-91 drought and implementing the optimal pumping strategies for each scenario. The results indicate that using 0-m freshwater head constraints allowed no more seawater intrusion than under actual 1987-91 drought conditions and that the simulation-optimization model yields least-cost strategies that deliver more water than under actual drought conditions while controlling seawater intrusion.

  4. Portfolio optimization by using linear programing models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, it is formulated as a linear programming model. The optimum solution of the linear program is then determined by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than that performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly when using linear programming models.
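
    A minimal sketch of this kind of linear programming formulation is given below, using a mean-absolute-deviation risk measure as a stand-in for the paper's absolute standard deviation and scipy's LP solver in place of the genetic algorithm; the return data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Absolute-deviation portfolio LP (Konno-Yamazaki style) on fake returns.
    rng = np.random.default_rng(1)
    T, n = 60, 4                                  # observations, assets
    r = 0.01 + 0.02 * rng.standard_normal((T, n))
    mu = r.mean(axis=0)
    target = float(mu.mean())                     # required expected return
    D = r - mu                                    # deviations from the mean

    # Variables z = [x (n weights), y (T deviation bounds)]; minimize mean(y).
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    A_ub = np.block([[ D, -np.eye(T)],            # y_t >= +D_t x
                     [-D, -np.eye(T)]])           # y_t >= -D_t x
    b_ub = np.zeros(2 * T)
    A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
    b_ub = np.append(b_ub, -target)               # expected return >= target
    A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]
    b_eq = [1.0]                                  # fully invested
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + T))   # long-only, y >= 0
    print(res.x[:n])                              # optimal weights
    ```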

  5. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially to optimizing signal flow in a security company. Simul8 software was selected as the modeling tool; it performs process modeling based on discrete event simulation and enables the creation of visual models of production and distribution processes.

  6. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.

  7. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption and protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of the offshore platform with low-order main modes, based on the mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex optimization model is difficult to solve directly, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
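
    As an assumption-laden analogue of the relaxation step (not the authors' formulation), the sketch below solves a standard state-feedback LMI with cvxpy, using a Frobenius-norm penalty on the gain variable as a crude stand-in for the input-limitation objective.

    ```python
    import numpy as np
    import cvxpy as cp

    # Classic convexification: find X > 0 and Z with
    #   A X + B Z + (A X + B Z)^T < 0, then K = Z X^{-1} stabilizes the plant.
    A = np.array([[0.0, 1.0], [2.0, -1.0]])      # toy unstable system
    B = np.array([[0.0], [1.0]])
    n, m = B.shape

    X = cp.Variable((n, n), symmetric=True)
    Z = cp.Variable((m, n))
    eps = 1e-3
    lmi = A @ X + B @ Z + (A @ X + B @ Z).T
    prob = cp.Problem(cp.Minimize(cp.norm(Z, 'fro')),   # mild input penalty
                      [X >> eps * np.eye(n), lmi << -eps * np.eye(n)])
    prob.solve(solver=cp.SCS)
    K = Z.value @ np.linalg.inv(X.value)                # stabilizing gain
    print(np.linalg.eigvals(A + B @ K))                 # closed-loop poles
    ```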

  8. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  9. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE PAGES

    Ding, Tao; Li, Cheng; Huang, Can; ...

    2017-01-09

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  10. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a smaller number of expensive RZWQM2 executions, which greatly improves computational efficiency.

  11. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a smaller number of expensive RZWQM2 executions, which greatly improves computational efficiency.
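
    The surrogate idea can be sketched compactly. In the toy below, a least-squares quadratic stands in for the paper's sparse-grid interpolant, differential evolution stands in for QPSO, and a cheap analytic function stands in for an RZWQM2 run.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def expensive_model(p):                       # stand-in for a simulator run
        return (p[0] - 0.3)**2 + 2.0 * (p[1] - 0.7)**2 + 0.1 * p[0] * p[1]

    rng = np.random.default_rng(2)
    P = rng.uniform(0.0, 1.0, (25, 2))            # 25 "simulator" evaluations
    y = np.array([expensive_model(p) for p in P])

    def features(p):                              # quadratic basis
        a, b = p[..., 0], p[..., 1]
        return np.stack([np.ones_like(a), a, b, a*a, b*b, a*b], axis=-1)

    coef, *_ = np.linalg.lstsq(features(P), y, rcond=None)
    surrogate = lambda p: features(np.asarray(p)) @ coef   # cheap to query

    res = differential_evolution(surrogate, [(0, 1), (0, 1)], seed=2)
    print(res.x, expensive_model(res.x))          # check against the true model
    ```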

  12. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a smaller number of expensive RZWQM2 executions, which greatly improves computational efficiency.

  13. Multiobjective Collaborative Optimization of Systems of Systems

    DTIC Science & Technology

    2005-06-01

    [Abstract not indexed; the record contains only table-of-contents and figure-list fragments referring to the HSC model and optimization description and code (Appendices K and L), related system-variable tables for the FPF data set, and the ITS and HSC model appendices.]

  14. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, tensor eigenvalue problems, etc. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.

  15. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large-scale level in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models: the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are optimal. The results have also illustrated the effectiveness of the proposed optimization method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Optimization of Profile and Material of Abrasive Water Jet Nozzle

    NASA Astrophysics Data System (ADS)

    Anand Bala Selwin, K. P.; Ramachandran, S.

    2017-05-01

    The objective of this work is to study the behaviour of the abrasive water jet nozzle with different profiles and materials. The Taguchi-Grey relational analysis optimization technique is used to optimize over the different materials and profiles. Initially, 3D models of the nozzle are built with different profiles by changing the tapered inlet angle of the nozzle. The different profile models are analysed with different materials and the results are optimized. The optimized results identify the profile and material that give the best wear and machining behaviour of the nozzle.

  17. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  18. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, the simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while being fast.
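
    Since the abstract does not spell out the ELM mechanics, the minimal sketch below (synthetic turning data, hypothetical feature choices) shows the core idea: a random hidden layer whose output weights are obtained analytically by least squares, which is what makes the modelling step so fast.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def elm_fit(X, y, n_hidden=40):
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
        b = rng.standard_normal(n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                            # hidden activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # analytic output layer
        return W, b, beta

    def elm_predict(model, X):
        W, b, beta = model
        return np.tanh(X @ W + b) @ beta

    # Hypothetical example: response vs. (speed, feed, depth of cut).
    X = rng.uniform(0, 1, (100, 3))
    y = 0.5 * X[:, 0] - 1.2 * X[:, 1] + 0.8 * X[:, 2]**2
    model = elm_fit(X, y)
    print(np.abs(elm_predict(model, X) - y).mean())       # training error
    ```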

  19. Stochastic Robust Mathematical Programming Model for Power System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  20. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  1. An optimization framework for measuring spatial access over healthcare networks.

    PubMed

    Li, Zihao; Serban, Nicoleta; Swann, Julie L

    2015-07-17

    Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however, these methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access. Questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared to the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system effects due to change based on congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful for targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models allow for differences in access between rural and urban areas.
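
    One way such an optimization model can be posed is as a capacitated transportation LP; the sketch below uses made-up demand, capacity, and distance data and is a generic analogue of the paper's models, not their exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Assign regional patient demand to capacitated providers, minimizing
    # total travel distance; the optimal flows (not raw catchments) then
    # describe realized access.
    demand = np.array([30.0, 50.0, 20.0])          # patients per region
    capacity = np.array([60.0, 45.0])              # provider capacities
    dist = np.array([[10.0, 40.0],                 # region-to-provider distances
                     [25.0, 15.0],
                     [50.0, 30.0]])
    n_reg, n_prov = dist.shape

    c = dist.ravel()                               # flow variables f[r, p]
    A_eq = np.zeros((n_reg, n_reg * n_prov))       # every region fully served
    for r in range(n_reg):
        A_eq[r, r * n_prov:(r + 1) * n_prov] = 1.0
    A_ub = np.zeros((n_prov, n_reg * n_prov))      # providers within capacity
    for p in range(n_prov):
        A_ub[p, p::n_prov] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
                  bounds=[(0, None)] * (n_reg * n_prov))
    print(res.x.reshape(n_reg, n_prov))            # optimal patient flows
    ```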

  2. Multiobjective optimization of low impact development stormwater controls

    NASA Astrophysics Data System (ADS)

    Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati

    2018-07-01

    Green infrastructure such as Low Impact Development (LID) controls is being employed to manage urban stormwater and restore predevelopment hydrological conditions, besides improving stormwater runoff quality. Since runoff generation and infiltration processes are nonlinear, there is a need to identify the optimal combination of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model is capable of performing multiobjective optimization, using SWMM simulations as a tool to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate low impact development (LID) stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario, and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized using the optimization-simulation model for five different implementation scenarios for each of the three storm events, with the objectives of minimizing peak flow in the stormsewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak runoff and the total runoff volume were found to reduce by 13% and 29%, respectively.

  3. Capacitated vehicle-routing problem model for scheduled solid waste collection and route optimization using PSO algorithm.

    PubMed

    Hannan, M A; Akhtar, Mahmuda; Begum, R A; Basri, H; Hussain, A; Scavino, Edgar

    2018-01-01

    Waste collection depends heavily on the route optimization problem, which involves a large amount of expenditure in terms of capital, labor, and variable operational costs. Thus, the more the waste collection route is optimized, the greater the reduction in these costs and in environmental effects. This study proposes a modified particle swarm optimization (PSO) algorithm in a capacitated vehicle-routing problem (CVRP) model to determine the best waste collection and route optimization solutions. In this study, threshold waste level (TWL) and scheduling concepts are applied in the PSO-based CVRP model under different datasets. The results obtained from different datasets show that the proposed algorithmic CVRP model provides the best waste collection and route optimization in terms of travel distance, total waste, waste collection efficiency, and tightness at 70-75% of TWL. The results for one week of scheduling show that 70% of TWL performs better than considering all nodes in terms of collected waste, distance, tightness, efficiency, fuel consumption, and cost. The proposed optimized model can serve as a valuable tool for waste collection and route optimization toward reducing socioeconomic and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  5. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP)- and Nonlinear Programming (NLP)-based formulations, to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close, and so we recommend the SDP formulation in practice. PMID:26949279

  6. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP)- and Nonlinear Programming (NLP)-based formulations, to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close, and so we recommend the SDP formulation in practice.
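
    A minimal instance of the SDP-style formulation is the classical continuous D-optimal design problem, sketched below in cvxpy for a quadratic regression model on a discretized design space (an illustrative setup, not one of the paper's case studies).

    ```python
    import numpy as np
    import cvxpy as cp

    # Continuous D-optimal design: weight candidate design points to
    # maximize log det of the information matrix (a concave problem).
    xs = np.linspace(-1.0, 1.0, 21)                 # discretized design space
    F = np.stack([np.ones_like(xs), xs, xs**2], 1)  # quadratic-model regressors
    w = cp.Variable(len(xs), nonneg=True)
    M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))
    prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
    prob.solve()
    keep = w.value > 1e-4
    print(xs[keep], w.value[keep])  # classically {-1, 0, 1}, weight 1/3 each
    ```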

  7. Optimality of profit-including prices under ideal planning.

    PubMed

    Samuelson, P A

    1973-07-01

    Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the "values" model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries.

  8. Optimality of Profit-Including Prices Under Ideal Planning

    PubMed Central

    Samuelson, Paul A.

    1973-01-01

    Although prices calculated by a constant percentage markup on all costs (nonlabor as well as direct-labor) are usually admitted to be more realistic for a competitive capitalistic model, the view is often expressed that, for optimal planning purposes, the “values” model of Marx's Capital, Volume I, is to be preferred. It is shown here that an optimal-control model that maximizes discounted social utility of consumption per capita and that ultimately approaches a steady state must ultimately have optimal pricing that involves equal rates of steady-state profit in all industries; and such optimal pricing will necessarily deviate from Marx's model of equal rates of surplus value (markups on direct-labor only) in all industries. PMID:16592102

  9. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the plates per stack. The nonlinear programming code COMPUTE was used to solve this model, in which the method of mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
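
    The direct-search component can be illustrated with a compact Hooke-Jeeves pattern search applied to a toy penalized objective; this is a generic sketch, not the COMPUTE code.

    ```python
    import numpy as np

    def explore(f, x, fx, step):
        """Exploratory move: probe +/- step along each coordinate."""
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy(); trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        return xe, fe

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=500):
        x = np.asarray(x0, float); fx = f(x)
        while step > tol and max_iter > 0:
            max_iter -= 1
            xe, fe = explore(f, x, fx, step)
            if fe < fx:                      # success: try a pattern move
                xp = xe + (xe - x); fp = f(xp)
                x, fx = (xp, fp) if fp < fe else (xe, fe)
            else:
                step *= shrink               # no improvement: shrink the step
        return x, fx

    # Quadratic penalty enforcing x0 + x1 >= 1 (mixed-penalty flavor).
    pen = lambda x: ((x[0] - 2)**2 + (x[1] - 1)**2
                     + 100 * max(0.0, 1 - x[0] - x[1])**2)
    print(hooke_jeeves(pen, np.zeros(2)))
    ```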

  10. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on the lung motion analysis of CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of the distances from the optimal point x to the lines connecting the corresponding points between In. and Ex. Using the nonlinear optimization, the optimal point was evaluated and compared between reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining the lung expansion. This lung motion analysis based on vector analysis and non-linear optimization shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
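
    The geometric fit reduces to minimizing a summed point-to-line distance. The sketch below rebuilds that objective with synthetic landmark pairs generated from a hypothetical balloon-type expansion and solves it with scipy; the recovered point should approach the expansion centre.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    center = np.array([0.0, 0.0, 5.0])              # hypothetical balloon center
    a = rng.standard_normal((20, 3)) * 3            # "expiration" landmarks
    b = a + (a - center) * 0.4                      # expansion away from center

    # Unit direction of each In.-Ex. line; every line passes through center.
    u = (b - a) / np.linalg.norm(b - a, axis=1, keepdims=True)

    def total_distance(x):
        w = x - a                                   # landmark-to-x vectors
        perp = w - np.sum(w * u, axis=1, keepdims=True) * u
        return np.linalg.norm(perp, axis=1).sum()   # summed point-to-line dist

    res = minimize(total_distance, x0=np.zeros(3), method='Nelder-Mead')
    print(res.x)                                    # should recover ~center
    ```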

  11. Combinatorial optimization in foundry practice

    NASA Astrophysics Data System (ADS)

    Antamoshkin, A. N.; Masich, I. S.

    2016-04-01

    A multicriteria mathematical model of foundry production capacity planning is suggested in the paper. The model is formulated in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the resulting problem.

  12. Pavement maintenance optimization model using Markov Decision Processes

    NASA Astrophysics Data System (ADS)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model, and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
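
    The dual-LP route can be sketched on a toy average-cost MDP (a synthetic three-state pavement chain with illustrative costs, not the Victorian data set): the LP optimizes stationary state-action frequencies, from which a maintenance policy is read off.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Dual LP over state-action frequencies x(s, a): minimize expected cost
    # subject to flow balance and normalization; policy pi(a|s) ~ x(s, a).
    # States: good/fair/poor condition; actions: keep (0) / repair (1).
    nS, nA = 3, 2
    P = np.zeros((nS, nA, nS))
    P[:, 0] = [[0.7, 0.3, 0.0],                    # keep: condition decays
               [0.0, 0.6, 0.4],
               [0.0, 0.0, 1.0]]
    P[:, 1] = [[1.0, 0.0, 0.0],                    # repair: back to good
               [0.9, 0.1, 0.0],
               [0.8, 0.2, 0.0]]
    cost = np.array([[0.0, 5.0],                   # illustrative costs:
                     [2.0, 6.0],                   # repairs are dear, but
                     [10.0, 8.0]])                 # so are poor roads

    c = cost.ravel()
    A_eq = np.zeros((nS + 1, nS * nA))
    for sp in range(nS):                           # flow balance per state
        for s in range(nS):
            for a in range(nA):
                A_eq[sp, s * nA + a] = float(sp == s) - P[s, a, sp]
    A_eq[nS, :] = 1.0                              # frequencies sum to one
    b_eq = np.append(np.zeros(nS), 1.0)

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
    x = res.x.reshape(nS, nA)                      # stationary frequencies
    print(x)                                       # read off pi(a|s) ~ x(s, a)
    ```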

  13. Dynamic malware containment under an epidemic model with alert

    NASA Astrophysics Data System (ADS)

    Zhang, Tianrui; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan

    2017-03-01

    Alerting at the early stage of malware invasion turns out to be an important complement to malware detection and elimination. This paper addresses the issue of how to dynamically contain the prevalence of malware at a lower cost, provided alerting is feasible. A controlled epidemic model with alert is established, and an optimal control problem based on the epidemic model is formulated. The optimality system for the optimal control problem is derived. The structure of an optimal control for the proposed optimal control problem is characterized under some conditions. Numerical examples show that the cost-efficiency of an optimal control strategy can be enhanced by adjusting the upper and lower bounds on admissible controls.

  14. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded to have the potential to improve the catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models are assumed to derive model parameters from the terrain properties directly, so there is no need to calibrate model parameters. However, unfortunately the uncertainties associated with this model derivation are very high, which impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using particle swarm optimization (PSO) algorithm and to test its competence and to improve its performances; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm could be used for the Liuxihe model parameter optimization effectively and could improve the model capability largely in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It also has been found that the appropriate particle number and the maximum evolution number of PSO algorithm used for the Liuxihe model catchment flood forecasting are 20 and 30 respectively.

  15. Optimality models in the age of experimental evolution and genomics.

    PubMed

    Bull, J J; Wang, I-N

    2010-09-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure--whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation--an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution.

  16. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  17. Multi-Objective Aerodynamic Optimization of the Streamlined Shape of High-Speed Trains Based on the Kriging Model.

    PubMed

    Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei

    2017-01-01

    Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
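
    A Kriging-style surrogate with cross-validation takes only a few lines with scikit-learn's Gaussian process regressor; in the sketch below the CFD responses are synthetic stand-ins and a simple random search replaces the paper's multi-objective optimizer.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.uniform(0, 1, (30, 2))                  # normalized shape parameters
    drag = 0.2 + 0.5 * (X[:, 0] - 0.4)**2 + 0.3 * X[:, 1]   # fake CFD output

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
    # Cross-validation as a cheap stand-in for the paper's accuracy check.
    print(cross_val_score(gp, X, drag, cv=5, scoring='r2').mean())
    gp.fit(X, drag)

    # Cheap surrogate queries, e.g. inside an optimization loop.
    cand = rng.uniform(0, 1, (1000, 2))
    pred = gp.predict(cand)
    print(cand[pred.argmin()], pred.min())          # best candidate by prediction
    ```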

  18. The application of artificial intelligence in the optimal design of mechanical systems

    NASA Astrophysics Data System (ADS)

    Poteralski, A.; Szczepanik, M.

    2016-11-01

    The paper is devoted to new computational techniques in mechanical optimization, where one tries to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past were incapable of giving a low-cost and complete solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with an application of bio-inspired methods, such as evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. The structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in an acoustics problem modeled by the MFS.

  19. Empty tracks optimization based on Z-Map model

    NASA Astrophysics Data System (ADS)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty tracks during machining. If these tracks are not optimized, the machining efficiency is seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combined with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map model simulation technique is then used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, and the resulting TSP is then solved by a genetic algorithm. This optimization method can handle not only simple structural parts but also complex structural parts, so as to effectively plan the empty tracks and greatly improve the processing efficiency.

  20. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  1. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  2. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    [Abstract not indexed; only report-documentation-page fragments survive for report AFRL-AFOSR-VA-TR-2015-0278, "Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models", Katya Scheinberg, grant FA9550-11-1-0239. Subject terms: optimization, derivative-free optimization, statistical machine learning.]

  3. Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz

    NASA Astrophysics Data System (ADS)

    Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao

    2018-05-01

    In this work, the genetic algorithm is introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives better fitting accuracy than with the preliminary, unoptimized parameters. In particular, the fitting accuracy of the Q value improves significantly after the optimization.

  4. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives; the core idea is sketched below. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the conflicting training data sets while simultaneously estimating parameter sets that performed well on each individual objective function. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization, and can be adapted to many problem types, including mixed binary and continuous variable types, bilevel optimization problems, and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
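
    A minimal sketch of the simulated-annealing-plus-Pareto-dominance idea, written in Python rather than Julia and with toy objectives standing in for conflicting training errors; this is an assumption-laden illustration, not the JuPOETs algorithm itself:

```python
import math
import random

# Two conflicting toy objectives; their Pareto set lies between 0 and 2.
def objectives(p):
    return (p ** 2, (p - 2.0) ** 2)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(1)
p, archive, T = 5.0, [], 1.0
for _ in range(2000):
    q = p + random.gauss(0.0, 0.3)                 # perturb the parameter
    fp, fq = objectives(p), objectives(q)
    worsening = sum(max(0.0, b - a) for a, b in zip(fp, fq))
    # Accept dominating moves; accept worsening moves with a Boltzmann
    # probability that shrinks as the temperature cools.
    if dominates(fq, fp) or random.random() < math.exp(-worsening / max(T, 1e-9)):
        p = q
        if not any(dominates(objectives(a), fq) for a in archive):
            archive = [a for a in archive if not dominates(fq, objectives(a))] + [q]
    T *= 0.999                                     # cooling schedule

print(sorted(round(a, 2) for a in archive)[:10])   # ensemble along the tradeoff
```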

  5. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company increases its profit. The cutting parameters affect total processing time, which in turn affects the production cost of the machining process. Besides affecting production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact of a CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden using eco-indicator 99. A numerical example shows the implementation of the model, solved using the OptQuest tool of Oracle Crystal Ball software. The optimization results indicate that the model can be used to tune the cutting parameters so as to minimize both the production cost and the environmental impact. A weighted-sum sketch of such a formulation follows.
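
    A hedged weighted-sum sketch of the two-objective structure described above. The cost and impact expressions and every coefficient are illustrative placeholders, not the paper's model (which is solved with OptQuest):

```python
import numpy as np
from scipy.optimize import minimize

# Toy production cost: machining time falls as speed and feed rise,
# while tool wear cost grows with speed.
def production_cost(x):
    v, f = x                                  # cutting speed (m/min), feed (mm/rev)
    machining_time = 500.0 / (v * f)
    tool_wear_cost = 1e-4 * v ** 1.5
    return 0.5 * machining_time + tool_wear_cost

# Toy eco-indicator-style burden score.
def environmental_impact(x):
    v, f = x
    return 0.02 * v / f

w = 0.7   # preference weight between cost and environmental impact
res = minimize(lambda x: w * production_cost(x) + (1 - w) * environmental_impact(x),
               x0=[150.0, 0.2],
               bounds=[(50.0, 300.0), (0.05, 0.5)])   # speed and feed bounds
v_opt, f_opt = res.x
print("optimal speed:", round(v_opt, 1), "optimal feed:", round(f_opt, 3))
```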

  6. NMR Parameters Determination through ACE Committee Machine with Genetic Implanted Fuzzy Logic and Genetic Implanted Neural Network

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin

    2015-06-01

    Free fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing nuclear magnetic resonance (NMR) logs. Unlike conventional well logs (CWLs), NMR logging is very expensive and time-consuming; therefore, the idea of synthesizing NMR logs from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. First, an artificial neural network (ANN) is optimized using a hybrid genetic algorithm-pattern search (GA-PS) technique; then fuzzy logic (FL) is optimized by means of GA-PS; and finally an alternating conditional expectation (ACE) model is constructed, using the concept of a committee machine, to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimizing the traditional ANN and FL models with the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results than any single model performing alone.

  7. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    NASA Astrophysics Data System (ADS)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of shipments of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. We also examine the sensitivity of raw-material ordering and production lot size to changes in ordering cost, transportation cost, and manufacturing setup cost. A pragmatic computation approach for operational situations is proposed to obtain an integer approximation of the solution. Finally, we give some numerical examples.

  8. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  9. A nonlinear bi-level programming approach for product portfolio management.

    PubMed

    Ma, Shuang

    2016-01-01

    Product portfolio management (PPM) is a critical decision-making activity for companies across various industries in today's competitive environment. Traditional studies of the PPM problem have been motivated by engineering feasibility and marketing, paying relatively little attention to competitors' actions and competitive relations, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg game-based leader-follower PPM problem and the competitive relations between the players. The primary contribution of this paper is a decision framework and an optimization model that leverage the PPM problems of leader and follower. A nonlinear, integer bi-level programming model is developed based on the decision framework, and a bi-level nested genetic algorithm is put forward to solve it. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization. The nested leader-follower structure is sketched below.
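
    A toy sketch of the Stackelberg structure: the paper uses a nested genetic algorithm, whereas this illustration substitutes a nested grid search to keep the leader-follower logic visible. Both payoff functions are invented placeholders:

```python
import numpy as np

# Leader (outer) and follower (inner) decision grids, e.g., price points.
leader_opts = np.linspace(0.0, 10.0, 101)
follower_opts = np.linspace(0.0, 10.0, 101)

def follower_profit(y, x):
    # Follower demand rises with the leader's price x (toy model).
    return y * (10.0 - y + 0.5 * x)

def leader_profit(x, y):
    return x * (12.0 - x + 0.3 * y)

best = None
for x in leader_opts:
    # Inner level: follower plays a best response to the leader's move.
    y_star = max(follower_opts, key=lambda y: follower_profit(y, x))
    # Outer level: leader evaluates its payoff under that best response.
    value = leader_profit(x, y_star)
    if best is None or value > best[0]:
        best = (value, x, y_star)

print("leader payoff, leader move, follower response:", best)
```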

  10. The optimization problems of CP operation

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Stepanova, E. L.; Maximov, A. S.

    2017-11-01

    The problem of enhancing the energy and economic efficiency of CP operation is an urgent one, and one of the main methods for solving it is optimization of CP operation. To solve the optimization problems of CP operation, Energy Systems Institute, SB of RAS, has developed software that makes it possible to perform optimization calculations of CP operation. The software is based on techniques and software tools for mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment have been developed in this work; they describe the processes that occur in the installations with sufficient accuracy. The developed models include steam turbine models (based on the checking calculation) that take account of all steam turbine compartments and the regeneration system, and that also enable calculations with regenerative heaters disconnected. The software integrates the technique for optimizing CP operating conditions into a common user interface. Optimization of CP operation often requires determining the minimum and maximum possible total useful electricity capacity of the plant at the set heat loads of consumers, i.e., the interval over which the CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC “Irkutskenergo”. The efficiency of operating-condition optimization and the possibility of determining the CP energy characteristics needed for optimizing power system operation are shown.

  11. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230

  13. An extended continuum model considering optimal velocity change with memory and numerical tests

    NASA Astrophysics Data System (ADS)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed that considers optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to explore how optimal velocity changes with memory affect velocity, density, and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow. Furthermore, the numerical results demonstrate that optimal velocity changes with memory avoid the disadvantages of relying on purely historical information, increasing the stability of traffic flow on the road and thereby reducing vehicles' energy consumption.

  14. Optimal plant nitrogen use improves model representation of vegetation response to elevated CO2

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Kern, Melanie; Engel, Jan; Zaehle, Sönke

    2017-04-01

    Existing global vegetation models often cannot accurately represent observed ecosystem behaviour under transient conditions such as elevated atmospheric CO2, a problem that can be attributed to an inflexibility in model representation of plant responses. Plant optimality concepts have been proposed as a solution to this problem as they offer a way to represent plastic plant responses in complex models. Here we present a novel, next generation vegetation model which includes optimal nitrogen allocation to and within the canopy as well as optimal biomass allocation between above- and belowground components in response to nutrient and water availability. The underlying hypothesis is that plants adjust their use of nitrogen in response to environmental conditions and nutrient availability in order to maximise biomass growth. We show that for two FACE (Free Air CO2 enrichment) experiments, the Duke forest and Oak Ridge forest sites, the model can better predict vegetation responses over the duration of the experiment when optimal processes are included. Specifically, under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration as well as increased biomass allocation to fine roots, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry predict a quick onset of N limitation.

  15. Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Purves, D. W.; Smith, M. J.

    2014-12-01

    Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.

  16. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
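
    The designs compared above all rest on the Fisher Information Matrix (FIM). As a hedged illustration, the sketch below compares two candidate sampling schedules for the Verhulst-Pearl logistic model under the D-optimality criterion (determinant of the FIM); all parameter values and designs are invented, and unit observation noise is assumed:

```python
import numpy as np

# Logistic growth model x(t) = K / (1 + (K/x0 - 1) * exp(-r t)).
K, r, x0 = 17.5, 0.7, 0.1
theta = np.array([K, r, x0])

def model(t, th):
    K, r, x0 = th
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def fim(times, th, eps=1e-6):
    """FIM = J^T J under unit noise, with sensitivities J from central differences."""
    J = np.empty((len(times), len(th)))
    for j in range(len(th)):
        d = np.zeros_like(th)
        d[j] = eps * th[j]
        J[:, j] = (model(times, th + d) - model(times, th - d)) / (2 * d[j])
    return J.T @ J

uniform = np.linspace(0.5, 15.0, 8)                       # spread-out design
clustered = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5])  # bunched design
for name, design in [("uniform", uniform), ("clustered", clustered)]:
    # Larger det(FIM) means smaller asymptotic parameter confidence volume.
    print(name, np.linalg.det(fim(design, theta)))
```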

  17. Optimization of Geothermal Well Placement under Geological Uncertainty

    NASA Astrophysics Data System (ADS)

    Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian

    2017-04-01

    Well placement optimization is critical to the commercial success of geothermal projects. However, uncertainty in geological parameters prohibits optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and development options to adjust (well location, pump rate, difference between production and injection temperature). Optimization traditionally means trying different development options on a single geological realization, yet many different interpretations of the subsurface are possible. Therefore, we optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology, based on a large number of geological realizations selected by experimental design, to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated, and response surfaces were constructed using polynomial proxy models that consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow identification of the optimal borehole locations given the mean response over the geological scenarios (i.e., maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. The training simulations are based on a comprehensive semi-synthetic data set from a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers interpretational uncertainty in the modeling workflow. The optimal choice of boreholes prolongs the time to cold-water breakthrough and allows higher pump rates and increased water production temperatures. The proxy idea is sketched below.
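
    A hedged sketch of the proxy idea: fit a polynomial response surface of heat extraction versus well location for each geological realization, then pick the location with the best mean response across realizations. The response function, noise, and realization count are synthetic placeholders:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
locations = np.linspace(0.0, 1.0, 25).reshape(-1, 1)  # candidate well positions

# One polynomial proxy per geological realization (20 realizations here).
proxies = []
for _ in range(20):
    peak = rng.uniform(0.3, 0.7)  # realization-specific sweet spot
    heat = np.exp(-((locations.ravel() - peak) ** 2) / 0.02) \
        + rng.normal(0.0, 0.05, 25)
    proxy = make_pipeline(PolynomialFeatures(4), LinearRegression())
    proxies.append(proxy.fit(locations, heat))

# Optimize the mean proxy response over a dense grid of locations.
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
mean_response = np.mean([p.predict(grid) for p in proxies], axis=0)
print("optimal location under uncertainty:", grid[mean_response.argmax(), 0])
```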

  18. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
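
    As a hedged illustration of a cost function with explicit magnitude and angular terms, the sketch below implements a τ-like metric; the weighting and exact functional form are assumptions, not the paper's definition:

```python
import numpy as np

def tau(B_model, B_obs, w_mag=1.0, w_ang=1.0):
    """Mean of a relative |B| error plus the angle between field vectors."""
    m_mod = np.linalg.norm(B_model, axis=1)
    m_obs = np.linalg.norm(B_obs, axis=1)
    mag_err = np.abs(m_mod - m_obs) / m_obs               # relative magnitude error
    cosang = np.sum(B_model * B_obs, axis=1) / (m_mod * m_obs)
    ang_err = np.arccos(np.clip(cosang, -1.0, 1.0))       # angular error (rad)
    return np.mean(w_mag * mag_err + w_ang * ang_err)

# Illustrative field vectors in nT; model input parameters would be tuned
# to minimize tau against the in situ observations.
B_obs = np.array([[20.0, 5.0, -3.0], [15.0, -2.0, 7.0]])
B_model = np.array([[19.0, 6.0, -2.5], [14.0, -1.0, 8.0]])
print(tau(B_model, B_obs))
```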

  20. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input-compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm suitable for implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model compare more favorably to the measured experimental results than those of the other simplified models evaluated.

  1. Modeling joint restoration strategies for interdependent infrastructure systems.

    PubMed

    Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems, which are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented; both consider failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure-component level by minimizing the economic loss from infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.

  2. Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.

    PubMed

    El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L

    2013-12-01

    This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) and the Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effects of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost; the A(2)O BNR system is more sensitive to variations in influent ammonia than to phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.

  3. [Temporal and spatial heterogeneity analysis of optimal values of sensitive parameters in ecological process models: the BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed using a simulated annealing algorithm combined with flux data to obtain monthly optimal values of the sensitive parameters at each site. We then constructed temporal, spatial, and combined temporal-spatial heterogeneity judgment indices to quantitatively analyze the heterogeneity of the optimal values of the sensitive parameters. The results showed that parameter sensitivity differed among vegetation types, but the selected sensitive parameters were largely consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. Sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger heterogeneity. In addition, under all three vegetation types the temporal heterogeneity of the optimal values showed a significant linear correlation with the spatial heterogeneity. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model can be classified so that different parameter strategies are adopted in practical application. These conclusions help in understanding the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.

  4. Multiobjective optimization and multivariable control of the beer fermentation process with the use of evolutionary algorithms.

    PubMed

    Andrés-Toro, B; Girón-Sierra, J M; Fernández-Blanco, P; López-Orozco, J A; Besada-Portas, E

    2004-04-01

    This paper describes empirical research on the modeling, optimization, and supervisory control of beer fermentation. Conditions in the laboratory were made as similar as possible to brewery industry conditions. Since mathematical models that consider realistic industrial conditions were not available, a new mathematical model involving industrial conditions was first developed. Batch fermentations are multiobjective dynamic processes that must be guided along optimal paths to obtain good results. The paper describes a direct way to apply a Pareto-set approach with multiobjective evolutionary algorithms (MOEAs), and successful identification of optimal ways to drive these processes is reported. Once obtained, the mathematical fermentation model was used to optimize the fermentation process with an intelligent, rule-based control.

  5. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    Efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction in costs. In this paper, a new model for ergonomic optimization is proposed. The approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic aspects of manual work under correct conditions. The model includes a schematic and systematic analysis method for the operations and identifies all ergonomic aspects to be evaluated. The approach has been applied to an automotive assembly line, where operation repeatability makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  6. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that preserve the physics being simulated. To simulate real-world processes effectively, the model's outputs must be close to the observed measurements; to achieve this, the input parameters are tuned until the objective function, the error between simulated outputs and observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and Dakota. The package makes it easy to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. Performing these analyses via a Python library also lets users combine techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science, optimizing the first three layers of the permafrost model, each with two thermal-conductivity input parameters. Parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone. A toy version of the calibration loop is sketched below.
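
    A deliberately simplified stand-in for the calibration loop: recover two thermal conductivities of a steady two-layer column from "observed" layer-bottom temperatures. The physics (steady 1D Fourier conduction) and all values are assumptions; the real workflow drives a permafrost heat-flow model through the Dakota interface:

```python
import numpy as np
from scipy.optimize import minimize

q, T_surface = 0.05, -5.0            # heat flux (W/m^2), surface temperature (C)
thickness = np.array([2.0, 8.0])     # layer thicknesses (m)

def layer_temps(k):
    """Temperature at the bottom of each layer: T += q * dz / k (Fourier's law)."""
    return T_surface + np.cumsum(q * thickness / k)

# Synthetic "observations" generated from known conductivities.
k_true = np.array([0.8, 1.6])
T_obs = layer_temps(k_true)

def objective(k):
    # Sum-of-squares misfit between simulated and observed temperatures.
    return np.sum((layer_temps(k) - T_obs) ** 2)

res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.1, 5.0)] * 2)
print(res.x)   # should recover approximately [0.8, 1.6]
```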

  7. A novel model of motor learning capable of developing an optimal movement control law online from scratch.

    PubMed

    Shimansky, Yury P; Kang, Tao; He, Jiping

    2004-02-01

    A computational model of a learning system (LS) is described that acquires the knowledge and skill necessary for optimal control of multisegmental limb dynamics (the controlled object, or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of the CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated with artificial neural nets. Besides the internal model of the CO's dynamics and the adaptive controller that computes the control law, the LS includes a new type of internal model: the minimal cost (IM(mc)) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IM(mc) interacts with the adaptive controller in a cooperative manner: the controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IM(mc), and the IM(mc) in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2-DOF limb model. The LS reached an optimal level of performance in a few tens of trials and also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimizing control system can serve as a basis for modeling motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.

  8. Optimal treatment interruptions control of TB transmission model

    NASA Astrophysics Data System (ADS)

    Nainggolan, Jonner; Suparwati, Titik; Kawuwung, Westy B.

    2018-03-01

    A tuberculosis model that incorporates treatment interruptions of infectives is established, and optimal control of individuals infected with active TB is studied within it. It is found that the control reproduction number is smaller than the reproduction number, which means that treatment controls can optimally reduce the spread of active TB. For this model, controls on the treatment of infected individuals are derived, by applying Pontryagin's Maximum Principle, to reduce the actively infected population. The results further emphasize the importance of controlling disease relapse in reducing the number of actively infected individuals and of individuals with treatment interruptions.

  9. Rethinking exchange market models as optimization algorithms

    NASA Astrophysics Data System (ADS)

    Luquini, Evandro; Omar, Nizam

    2018-02-01

    The exchange market model has mainly been used to study the inequality problem. Although the inequality problem in human society is very important, the dynamics of exchange market models up to the stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present the implications for algorithmic optimization and also the possibility of a new family of exchange market models.

  10. Info-gap robust-satisficing model of foraging behavior: do foragers optimize or satisfice?

    PubMed

    Carmel, Yohay; Ben-Haim, Yakov

    2005-11-01

    In this note we compare two mathematical models of foraging that reflect two competing theories of animal behavior: optimizing and robust satisficing. The optimal-foraging model is based on the marginal value theorem (MVT). The robust-satisficing model developed here is an application of info-gap decision theory. The info-gap robust-satisficing model relates to the same circumstances described by the MVT. We show how these two alternatives translate into specific predictions that at some points are quite disparate. We test these alternative predictions against available data collected in numerous field studies with a large number of species from diverse taxonomic groups. We show that a large majority of studies appear to support the robust-satisficing model and reject the optimal-foraging model.
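
    For concreteness, the sketch below works the optimal-foraging (MVT) side of this comparison: with a saturating within-patch gain function and a fixed travel time (both invented here), the optimal residence time maximizes the long-run gain rate, and at the optimum the marginal gain equals that rate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

T = 2.0                                          # travel time between patches
g = lambda t: 10.0 * (1.0 - np.exp(-0.5 * t))    # diminishing returns in a patch

# Maximize the long-run rate g(t) / (T + t); negate because we minimize.
rate = lambda t: -g(t) / (T + t)
res = minimize_scalar(rate, bounds=(1e-6, 50.0), method="bounded")
t_star = res.x

# MVT tangency check: marginal gain g'(t*) equals the average rate at t*.
dg = (g(t_star + 1e-6) - g(t_star - 1e-6)) / 2e-6
print("optimal residence:", t_star, " g'(t*):", dg,
      " rate:", g(t_star) / (T + t_star))
```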

  11. Optimization of energy window and evaluation of scatter compensation methods in MPS using the ideal observer with model mismatch

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Links, Jonathan M.; Frey, Eric

    2015-03-01

    In this work, we used the ideal observer (IO) and IO with model mismatch (IO-MM) applied in the projection domain and an anthropomorphic Channelized Hotelling Observer (CHO) applied to reconstructed images to optimize the acquisition energy window width and evaluate various scatter compensation methods in the context of a myocardial perfusion SPECT defect detection task. The IO has perfect knowledge of the image formation process and thus reflects performance with perfect compensation for image-degrading factors. Thus, using the IO to optimize imaging systems could lead to suboptimal parameters compared to those optimized for humans interpreting SPECT images reconstructed with imperfect or no compensation. The IO-MM allows incorporating imperfect system models into the IO optimization process. We found that with near-perfect scatter compensation, the optimal energy window for the IO and CHO were similar; in its absence the IO-MM gave a better prediction of the optimal energy window for the CHO using different scatter compensation methods. These data suggest that the IO-MM may be useful for projection-domain optimization when model mismatch is significant, and that the IO is useful when followed by reconstruction with good models of the image formation process.

  12. Integrative systems modeling and multi-objective optimization

    EPA Science Inventory

    This presentation presents a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...

  13. Solving bi-level optimization problems in engineering design using kriging models

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Liu, Xiaojie; Du, Gang

    2018-05-01

    Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational cost of BLGAs often increases rapidly with the complexity of the lower-level programs, and optimal-solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal-solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response-surface-based methods while remarkably reducing the computational workload. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel further illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design. A minimal sketch of the idea follows.
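
    A hedged sketch of the OSFAKM idea: solve the lower-level problem exactly at a few sampled leader decisions, fit a kriging (Gaussian process) model of the optimal-solution function y*(x), then optimize the upper level cheaply on the surrogate. Both levels' objectives are toy placeholders:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lower_level_optimum(x):
    """Follower minimizes (y - sin(3x))^2 + 0.1*y^2 for a given leader x."""
    res = minimize_scalar(lambda y: (y - np.sin(3 * x)) ** 2 + 0.1 * y ** 2)
    return res.x

# A few "expensive" exact lower-level solves, then a kriging fit of y*(x).
X = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
Y = np.array([lower_level_optimum(x[0]) for x in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, Y)

def upper_objective(x):
    y_star = gp.predict(np.array([[x]]))[0]   # cheap surrogate of y*(x)
    return (x - 1.0) ** 2 + y_star ** 2       # leader's cost

res = minimize_scalar(upper_objective, bounds=(0.0, 2.0), method="bounded")
print("leader optimum via surrogate:", res.x)
```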

  14. Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective

    PubMed Central

    Qian, Xiaoning; Dougherty, Edward R.

    2017-01-01

    The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268

  15. From leaf longevity to canopy seasonality: a carbon optimality phenology model for tropical evergreen forests

    NASA Astrophysics Data System (ADS)

    Xu, X.; Medvigy, D.; Wu, J.; Wright, S. J.; Kitajima, K.; Pacala, S. W.

    2016-12-01

    Tropical evergreen forests play a key role in the global carbon, water and energy cycles. Despite apparent evergreenness, this biome shows strong seasonality in leaf litter and photosynthesis. Recent studies have suggested that this seasonality is not directly related to environmental variability but is dominated by seasonal changes of leaf development and senescence. Meanwhile, current terrestrial biosphere models (TBMs) cannot capture this pattern because the leaf life cycle is highly underrepresented. One challenge in modeling this leaf life cycle is the remarkable diversity in leaf longevity, ranging from several weeks to multiple years. Ecologists have proposed models where leaf longevity is regarded as a strategy to optimize carbon gain. However, previous optimality models cannot be readily integrated into TBMs because (i) there are still large biases in predicted leaf longevity and (ii) it has never been tested whether the carbon optimality model can capture the observed seasonality in leaf demography and canopy photosynthesis. In this study, we develop a new carbon optimality model for leaf demography. The novelty of our approach is two-fold. First, we incorporate a mechanistic photosynthesis model that can better estimate leaf carbon gain. Second, we consider the interspecific variations in leaf senescence rate, which strongly influence the modelled optimal carbon gain. We test our model with a leaf trait database for Panamanian evergreen forests. Then, we apply the model at seasonal scale and compare the simulated seasonality of leaf litter and canopy photosynthesis with in-situ observations from several Amazonian forest sites. We find (i) that, compared with the original optimality model, the regression slope between observed and predicted leaf longevity increases from 0.15 to 1.04 in our new model and (ii) that our new model can capture the observed seasonal variations of leaf demography and canopy photosynthesis. Our results suggest that the phenology in tropical evergreen forests might result from plant adaptation to optimize canopy carbon gain. Finally, this proposed trait-driven prognostic phenology model could potentially be incorporated into next generation TBMs to improve simulation of carbon and water fluxes in the tropics.

  16. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by integrating ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward a better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models and their large number of parameters, applying these spatial data sets to such models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints, and tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009), with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is optimized first using the MODIS snow cover product, followed by the soil water sub-model optimized with satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized with satellite-based GPP (also based on the SVR method), and the respiration and residual carbon cycle models optimized with biomass data. In an initial assessment, we found that most of the default sub-models (e.g., snow, water cycle, and carbon cycle) showed large deviations from remote sensing observations; these biases were removed by applying the proposed framework. For example, gross primary productivity was initially underestimated in boreal and temperate forests and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and that using multiple satellite observations within this framework is an effective way to improve terrestrial biosphere models.

  17. Network models for solving the problem of multicriterial adaptive optimization of investment projects control with several acceptable technologies

    NASA Astrophysics Data System (ADS)

    Shorikov, A. F.; Butsenko, E. V.

    2017-10-01

    This paper discusses the problem of multicriterial adaptive optimization of investment project control when several technologies are acceptable. On the basis of network modeling, a new economic-mathematical model and a method are proposed for solving this problem. Network economic-mathematical modeling makes it possible to determine the optimal time and calendar schedule for implementing an investment project, and serves as an instrument for increasing the economic potential and competitiveness of the enterprise. Using a practical example, the processes of forming network models are shown, including definition of the sequence of actions of a particular investment-planning process, and network work schedules are constructed. The parameters of the network models are calculated; optimal (critical) paths are formed and the optimal time for implementing the chosen technologies of the investment project is calculated. The selection of the optimal technology from a set of possible technologies is also shown, taking into account the time and cost of the work. The proposed model and method can serve as a basis for the development, creation, and application of computer information systems that support managerial decision making. A minimal critical-path computation is sketched below.
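
    A minimal critical-path sketch for a project network: the longest path through a DAG of activities gives the minimum feasible project duration. The activity durations and network structure below are invented for illustration:

```python
# Activity durations and successor lists for a small project network.
duration = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 5}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

order = ["A", "B", "C", "D", "E"]   # a topological order of the activities
earliest_finish = {}
for v in order:
    # An activity starts when its latest-finishing predecessor is done.
    preds = [u for u in order if v in succ[u]]
    start = max((earliest_finish[u] for u in preds), default=0)
    earliest_finish[v] = start + duration[v]

# Here the critical path is A -> C -> D -> E, with total duration 13.
print("project duration:", max(earliest_finish.values()))
```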

  18. Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.

    A modeling approach is presented that optimizes separate-phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization: the S/R/O model utilizes nonlinear regression equations, developed for residual oil volume and free oil volume, that describe system response to time-varying water pumping and oil skimming. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease by a specified amount. This approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model's utility for problem analysis and remediation design. Compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O approach offers promise for enhancing the design of free-phase LNAPL recovery systems and for helping hydrogeologists, engineers, and regulators make cost-effective operation and management decisions.

  19. Fuzzy multiobjective models for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used to evaluate reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) avoid the local optimal solutions obtained from traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operating rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
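    The specifics of the Reardon-type membership functions belong to the paper, but the general fuzzy-compromise idea can be sketched: each objective receives a membership function rising from its worst acceptable level to its best, and a max-min (weakest-link) aggregation scores an operating rule by its least-satisfied objective. The bounds and objective values below are invented for illustration; a GA would maximize this score over the decision variables.

      import numpy as np

      def membership(value, worst, best):
          # Linear fuzzy membership: 0 at the worst acceptable level, 1 at the best.
          return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

      def compromise_score(energy, quality, release):
          mu = [
              membership(energy, worst=200.0, best=500.0),   # MWh produced
              membership(quality, worst=4.0, best=8.0),      # downstream DO, mg/L
              membership(release, worst=5.0, best=20.0),     # conservation flow, m3/s
          ]
          return min(mu)  # an operating rule is only as good as its weakest objective

      print(compromise_score(energy=420.0, quality=6.5, release=12.0))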

  20. Optimal control of HIV/AIDS dynamic: Education and treatment

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2014-07-01

    A mathematical model which describes the transmission dynamics of HIV/AIDS is developed. Optimal controls representing education and treatment for this model are explored. The existence of an optimal control is established analytically by the use of optimal control theory. Numerical simulations suggest that education and treatment of the infected have a positive impact on HIV/AIDS control.

  1. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    PubMed

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field data of emissions and Global Positioning System (GPS) records are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing, with nine scenarios designed for different weights of emissions and traffic efficiency. Compared with results obtained using the Highway Capacity Manual (HCM) 2010, signal timing optimized by the proposed model decreases vehicle delays and emissions more significantly. The optimization model can be applied in different cities, which provides support for eco-signal design and development.

  2. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and optionally implements efficient, "on-the-fly" (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a "single answer" that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
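    The FOSM-based chance-constraint idea reduces to a one-line adjustment: shift each model-derived constraint bound by a risk-dependent multiple of its estimated standard deviation. A minimal sketch of that adjustment, with all numbers invented:

      from scipy.stats import norm

      def chance_constrained_bound(b_nominal, sigma, risk):
          # Tighten a "model output <= b" constraint so it holds with
          # probability (1 - risk), given a FOSM standard deviation sigma.
          return b_nominal - norm.ppf(1.0 - risk) * sigma

      # Example: a drawdown limit of 2.0 m, FOSM sigma of 0.3 m, 5% risk
      # (all values illustrative).
      print(chance_constrained_bound(2.0, 0.3, 0.05))  # ~1.51 m effective limit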

  3. Resolution analysis of finite fault source inversion using one- and three-dimensional Green's functions 1. Strong motions

    USGS Publications Warehouse

    Graves, R.W.; Wald, D.J.

    2001-01-01

    We develop a methodology to perform finite fault source inversions from strong motion data using Green's functions (GFs) calculated for a three-dimensional (3-D) velocity structure. The 3-D GFs are calculated numerically by inserting body forces at each of the strong motion sites and then recording the resulting strains along the target fault surface. Using reciprocity, these GFs can be recombined to represent the ground motion at each site for any (heterogeneous) slip distribution on the fault. The reciprocal formulation significantly reduces the required number of 3-D finite difference computations to at most 3NS, where NS is the number of strong motion sites used in the inversion. Using controlled numerical resolution tests, we have examined the relative importance of accurate GFs for finite fault source inversions that rely on near-source ground motions. These experiments use both 1-D and 3-D GFs in inversions for hypothetical rupture models in order (1) to analyze the ability of the 3-D methodology to resolve trade-offs between complex source phenomena and 3-D path effects, (2) to address the sensitivity of the inversion results to uncertainties in the 3-D velocity structure, and (3) to test the adequacy of the 1-D GF method when propagation effects are known to be three-dimensional. We find that given "data" from a prescribed 3-D Earth structure, the use of well-calibrated 3-D GFs in the inversion provides very good resolution of the assumed slip distribution, thus adequately separating source and 3-D propagation effects. In contrast, using a set of inexact 3-D GFs or a set of hybrid 1-D GFs allows only partial recovery of the slip distribution. These findings suggest that in regions of complex geology the use of well-calibrated 3-D GFs has the potential for increased resolution of the rupture process relative to 1-D GFs. However, realizing this full potential requires that the 3-D velocity model and associated GFs be carefully validated against the true 3-D Earth structure before performing the inverse problem with actual data. Copyright 2001 by the American Geophysical Union.

  4. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain the estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because the estimates were constrained with Tikhonov regularization. Hydraulic-conductivity estimates simulated with AnalyzeHOLE for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and therefore can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivities: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
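    Tikhonov regularization of this kind penalizes departures of the estimated parameters from a preferred value, which both stabilizes an ill-posed inversion and shrinks within-lithology variance. A minimal numpy sketch of the idea (the matrix, data, and prior below are invented; PEST's actual implementation is far more general):

      import numpy as np

      def tikhonov_solve(A, b, x_prior, lam):
          # Minimize ||Ax - b||^2 + lam * ||x - x_prior||^2, pulling estimates
          # toward a preferred value and reducing their spread.
          n = A.shape[1]
          lhs = A.T @ A + lam * np.eye(n)
          rhs = A.T @ b + lam * x_prior
          return np.linalg.solve(lhs, rhs)

      # Illustrative ill-posed problem: two nearly collinear data relations
      # for three log-conductivity intervals (all numbers are assumptions).
      A = np.array([[1.0, 1.0, 0.0], [1.0, 1.01, 0.0]])
      b = np.array([2.0, 2.02])
      x_prior = np.full(3, 0.7)  # e.g. a lithology-wide preferred log-K
      print(tikhonov_solve(A, b, x_prior, lam=0.1))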

  5. AMERICAN-SOVIET SYMPOSIUM ON USE OF MATHEMATICAL MODELS TO OPTIMIZE WATER QUALITY MANAGEMENT HELD AT KHARKOV AND ROSTOV-ON-DON, USSR ON DECEMBER 9-16, 1975

    EPA Science Inventory

    The American-Soviet Symposium on Use of Mathematical Models to Optimize Water Quality Management examines methodological questions related to simulation and optimization modeling of processes that determine water quality of river basins. Discussants describe the general state of ...

  6. Improved Modeling of Intelligent Tutoring Systems Using Ant Colony Optimization

    ERIC Educational Resources Information Center

    Rastegarmoghadam, Mahin; Ziarati, Koorush

    2017-01-01

    Swarm intelligence approaches, such as ant colony optimization (ACO), are used in adaptive e-learning systems and provide an effective method for finding optimal learning paths based on self-organization. The aim of this paper is to develop an improved modeling of adaptive tutoring systems using ACO. In this model, the learning object is…

  7. Academic Optimism and Collective Responsibility: An Organizational Model of the Dynamics of Student Achievement

    ERIC Educational Resources Information Center

    Wu, Jason H.

    2013-01-01

    This study was designed to examine the construct of academic optimism and its relationship with collective responsibility in a sample of Taiwan elementary schools. The construct of academic optimism was tested using confirmatory factor analysis, and the whole structural model was tested with a structural equation modeling analysis. The data were…

  8. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  9. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model-based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. To reduce the computational effort, we use response surfaces in place of the combined system model in all stochastic analysis steps, so that Monte Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.
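    The reason response surfaces make the stochastic steps affordable is that a Monte Carlo run then costs only polynomial evaluations. A sketch of that step, assuming an already-fitted quadratic response surface and invented scatter parameters:

      import numpy as np

      rng = np.random.default_rng(1)

      # Assumed quadratic response surface for, e.g., actuator stroke time (ms)
      # as a function of two design deviations (all coefficients invented).
      def response_surface(d):
          return 4.0 + 1.5 * d[..., 0] - 0.8 * d[..., 1] + 0.6 * d[..., 0] * d[..., 1]

      # Stochastic design variables: nominal values with manufacturing scatter.
      n = 100_000
      d = rng.normal(loc=[0.0, 0.0], scale=[0.05, 0.08], size=(n, 2))

      t = response_surface(d)
      spec = 4.4  # required maximum stroke time (illustrative)
      print(f"mean = {t.mean():.3f} ms, failure rate = {(t > spec).mean():.4%}")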

  10. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. The model comprises human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes, so the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two optimal control variables in the model, fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the vector population can be minimized. Vaccination is considered as a control variable because it is one of the measures being developed to reduce the spread of dengue fever. We used Pontryagin's Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.

  11. Surrogate-Based Optimization of Biogeochemical Transport Models

    NASA Astrophysics Data System (ADS)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations, by using a surrogate model in place of the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening of the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete step size in time and space and on the biogeochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
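    The grid-coarsening idea can be illustrated with a toy parameter fit: generate "observations" with a finely discretized solver, then estimate the parameter on a cheap, coarsely discretized surrogate of the same equation. The decay model and numbers below are invented stand-ins for the NPZD system; the point is only that a coarse but stable discretization yields a much cheaper fit that lands near the fine-grid optimum.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def euler_model(k, dt, t_end=5.0, y0=1.0):
          # Explicit-Euler decay model; a large dt gives a cheap, coarser
          # surrogate (a stand-in for coarsening the NPZD discretization).
          y = y0
          for _ in range(int(round(t_end / dt))):
              y += dt * (-k * y)
          return y

      obs = euler_model(k=0.8, dt=0.001)  # synthetic "observation", fine grid

      def fit(dt):
          return minimize_scalar(lambda k: (euler_model(k, dt) - obs) ** 2,
                                 bounds=(0.0, 2.0), method="bounded").x

      # The coarse surrogate (50 steps) is ~100x cheaper than the fine model
      # (5000 steps) and remains stable for all k in the search bounds.
      print("coarse estimate:", fit(0.1), " fine estimate:", fit(0.001))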

  12. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift from the input domain to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately. The results of operating trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and comparable to or better than optimization results based on a parameterized data-driven artificial neural network model. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Numerical optimization of composite hip endoprostheses under different loading conditions

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Davy, D. T.; Saravanos, D. A.; Hopkins, D. A.

    1992-01-01

    The optimization of composite hip implants was investigated. Emphasis was placed on the effect of shape and material tailoring of the implant to improve the implant-bone interaction. A variety of loading conditions were investigated to better understand the relationship between loading and optimization outcome. Comparisons of the initial and optimal models with more complex 3D finite element models were performed. The results indicate that design improvements made using this method result in similar improvements in the 3D models. Although the optimization outcomes were significantly affected by the choice of loading conditions, certain trends were observed that were independent of the applied loading.

  14. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
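    The mean-semiabsolute deviation risk measure and the real-world constraints are easy to state in code. A crisp (non-fuzzy) sketch with invented scenario returns follows; the paper's possibilistic version would replace the sample quantities with possibilistic means of fuzzy returns, and the MABC algorithm would search over the weights.

      import numpy as np

      def semiabsolute_deviation(weights, returns):
          # Downside risk: mean absolute deviation of portfolio return
          # below its mean (crisp stand-in for the possibilistic version).
          port = returns @ weights
          return np.mean(np.maximum(port.mean() - port, 0.0))

      def feasible(weights, k_max=3, lo=0.05, hi=0.6):
          # Cardinality and quantity constraints on a candidate portfolio.
          active = weights > 1e-9
          return (active.sum() <= k_max
                  and np.isclose(weights.sum(), 1.0)
                  and np.all((weights[active] >= lo) & (weights[active] <= hi)))

      rng = np.random.default_rng(4)
      R = rng.normal(0.001, 0.02, size=(250, 5))      # toy scenario returns
      w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
      print(feasible(w), semiabsolute_deviation(w, R))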

  15. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operations research and a mathematical method that supports scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decision-making is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
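    The structure of such a generation-investment LP is compact enough to show directly. The sketch below uses scipy's linprog rather than GAMS, and every coefficient (costs, capacity credits, emission factors, and limits) is an invented placeholder:

      import numpy as np
      from scipy.optimize import linprog

      # Choose installed capacity x (MW) of coal, wind, and gas to minimize
      # total cost subject to a firm-capacity requirement and an emissions cap.
      cost = np.array([700.0, 1100.0, 500.0])        # cost per MW (illustrative)
      peak_credit = np.array([0.90, 0.25, 0.95])     # firm capacity per MW
      emissions = np.array([0.95, 0.0, 0.45])        # tCO2 per MW (illustrative)

      res = linprog(
          c=cost,
          A_ub=[(-peak_credit).tolist(), emissions.tolist()],
          b_ub=[-1000.0, 450.0],                     # >= 1000 MW firm; <= 450 tCO2
          bounds=[(0.0, 1500.0)] * 3,
          method="highs",
      )
      print("optimal capacities (MW):", res.x, " total cost:", res.fun)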

  16. Optimal control of information epidemics modeled as Maki Thompson rumors

    NASA Astrophysics Data System (ADS)

    Kandhway, Kundan; Kuri, Joy

    2014-12-01

    We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We allow the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the population's interest in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We also study the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy, which respects the same budget constraint and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
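    The forward-backward sweep itself is a short loop: integrate the state forward with the current control, integrate the adjoint backward, and update the control from Pontryagin's stationarity condition. The skeleton below solves a deliberately simple linear-quadratic problem rather than the rumor model, and omits the paper's isoperimetric budget modification:

      import numpy as np

      # Minimize the integral of (x^2 + u^2) subject to x' = -x + u, x(0) = 1.
      T, n = 5.0, 1000
      t = np.linspace(0.0, T, n + 1); dt = t[1] - t[0]
      u = np.zeros(n + 1)                       # initial control guess

      for _ in range(50):
          # Forward sweep: integrate the state with the current control.
          x = np.empty(n + 1); x[0] = 1.0
          for i in range(n):
              x[i + 1] = x[i] + dt * (-x[i] + u[i])
          # Backward sweep: adjoint lam' = -(2x - lam), with lam(T) = 0.
          lam = np.empty(n + 1); lam[-1] = 0.0
          for i in range(n, 0, -1):
              lam[i - 1] = lam[i] - dt * (-(2.0 * x[i] - lam[i]))
          # Stationarity dH/du = 2u + lam = 0, with damping for stability.
          u = 0.5 * u + 0.5 * (-lam / 2.0)

      print("objective:", dt * np.sum(x ** 2 + u ** 2))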

  17. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    PubMed

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study chemotherapy of tumor cells. In 1979, Goldie and Coldman proposed the first mathematical model relating the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance, and its original idea has been extended and further investigated in numerous follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain, with a simulation approach, why an alternating non-cross-resistant chemotherapy is optimal. Subsequently, in 1983, Goldie and Coldman proposed an extended stochastic model and provided a rigorous mathematical proof of their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetry assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to analyze some optimal policies for the model analytically, and Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof of Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to verify the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Dental implant customization using numerical optimization design and 3-dimensional printing fabrication of zirconia ceramic.

    PubMed

    Cheng, Yung-Chang; Lin, Deng-Huei; Jiang, Cho-Pei; Lin, Yuan-Min

    2017-05-01

    This study proposes a new methodology for dental implant customization consisting of numerical geometric optimization and 3-dimensional printing fabrication of zirconia ceramic. In the numerical modeling, exogenous factors for implant shape include the thread pitch, thread depth, maximal diameter of the implant neck, and body size. Endogenous factors are bone density, cortical bone thickness, and non-osseointegration. An integrated procedure, combining the uniform design method, Kriging interpolation and a genetic algorithm, is applied to optimize the geometry of the dental implant. The threshold of minimal micromotion for optimization evaluation was 100 μm. The optimized model is imported to the 3-dimensional slurry printer to fabricate the zirconia green body (powder weakly bonded by polymer) of the implant, and the sintered implant is obtained using a 2-stage sintering process. Twelve models are constructed according to the uniform design method, and their micromotion behavior is simulated using finite element modeling. The uniform design models yield a set of exogenous factors providing the minimal micromotion (30.61 μm), taken as the suitable model. Kriging interpolation and the genetic algorithm then modify the exogenous factors of the suitable model, yielding an optimized model with a micromotion of 27.11 μm. Experimental results show that the 3-dimensional slurry printer successfully fabricated the green body of the optimized model, but the accuracy of the sintered part still needs to be improved. In addition, the scanning electron microscopy morphology shows a stabilized t-phase microstructure, and the average compressive strength of the sintered part is 632.1 MPa. Copyright © 2016 John Wiley & Sons, Ltd.

  19. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).

  1. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater; the state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in a later year or years], head constraints, and capacity constraints was tested.

  2. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation keep adjusting their structure to meet the functional requirements of different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation and thereby support the study of microcirculatory physiology. Until now, however, this model has lacked appropriate methods for setting its parameter values, which has limited further applications. This study proposes an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values of the model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to standard particle swarm optimization, standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to better agreement between mathematical simulation and animal experiment, rendering the model more reliable for future physiological studies.
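    For orientation, the standard QPSO update (Sun et al.'s formulation, without the paper's improvements) is compact: each particle is redrawn around a local attractor between its personal best and the global best, with a jump scaled by its distance to the mean of all personal bests. A minimal sketch on a stand-in objective:

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):                      # stand-in objective; the real use case
          return np.sum(x ** 2, axis=-1)  # would be the adaptation-model misfit

      n, dim, iters, beta = 30, 5, 200, 0.75
      X = rng.uniform(-5.0, 5.0, size=(n, dim))
      pbest = X.copy()
      pcost = sphere(pbest)

      for _ in range(iters):
          gbest = pbest[np.argmin(pcost)]
          mbest = pbest.mean(axis=0)                   # mean best position
          phi = rng.random((n, dim))
          p = phi * pbest + (1.0 - phi) * gbest        # local attractor
          u = 1.0 - rng.random((n, dim))               # uniform on (0, 1]
          sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
          X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
          cost = sphere(X)
          better = cost < pcost
          pbest[better], pcost[better] = X[better], cost[better]

      print("best cost:", pcost.min())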

  3. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated: the maximization of methane percentage with a single output, the maximization of biogas production with a single output, and the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and the percentage of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are determined for each model. The application of the integrated prediction and optimization models is expected to increase biogas production and biogas quality, and to contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Optimal short-range trajectories for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, G.L.; Erzberger, H.

    1982-12-01

    An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form that is based on standard flight manual data and is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.

  5. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be especially relevant for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
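    As a flavor of the winning approach, the sketch below fits a toy two-compartment signal model with scipy's gradient-free, bound-constrained Powell method. The model, parameters, and noise level are invented stand-ins; real NODDI/CHARMED fits involve many more parameters and the GPU implementation described above.

      import numpy as np
      from scipy.optimize import minimize

      b = np.linspace(0.0, 3.0, 30)                    # b-values (illustrative)

      def signal(theta):
          f, d1, d2 = theta                            # volume fraction, diffusivities
          return f * np.exp(-b * d1) + (1.0 - f) * np.exp(-b * d2)

      rng = np.random.default_rng(2)
      data = signal([0.6, 2.0, 0.4]) + 0.01 * rng.standard_normal(b.size)

      res = minimize(lambda th: np.sum((signal(th) - data) ** 2),
                     x0=[0.5, 1.0, 0.1], method="Powell",
                     bounds=[(0.0, 1.0), (0.0, 4.0), (0.0, 4.0)])
      print("estimated parameters:", res.x)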

  6. Collimator optimization and collimator-detector response compensation in myocardial perfusion SPECT using the ideal observer with and without model mismatch and an anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Links, Jonathan M.; Frey, Eric C.

    2016-03-01

    The collimator is the primary factor that determines the spatial resolution and noise tradeoff in myocardial perfusion SPECT images. In this paper, the goal was to find the collimator that optimizes the image quality in terms of a perfusion defect detection task. Since the optimal collimator could depend on the level of approximation of the collimator-detector response (CDR) compensation modeled in reconstruction, we performed this optimization for the cases of modeling the full CDR (including geometric, septal penetration and septal scatter responses), the geometric CDR, or no model of the CDR. We evaluated the performance on the detection task using three model observers. Two observers operated on data in the projection domain: the Ideal Observer (IO) and IO with Model-Mismatch (IO-MM). The third observer was an anthropomorphic Channelized Hotelling Observer (CHO), which operated on reconstructed images. The projection-domain observers have the advantage that they are computationally less intensive. The IO has perfect knowledge of the image formation process, i.e. it has a perfect model of the CDR. The IO-MM takes into account the mismatch between the true (complete and accurate) model and an approximate model, e.g. one that might be used in reconstruction. We evaluated the utility of these projection domain observers in optimizing instrumentation parameters. We investigated a family of 8 parallel-hole collimators, spanning a wide range of resolution and sensitivity tradeoffs, using a population of simulated projection (for the IO and IO-MM) and reconstructed (for the CHO) images that included background variability. We simulated anterolateral and inferior perfusion defects with variable extents and severities. The area under the ROC curve was estimated from the IO, IO-MM, and CHO test statistics and served as the figure-of-merit. The optimal collimator for the IO had a resolution of 9-11 mm FWHM at 10 cm, which is poorer resolution than typical collimators used for MPS. When the IO-MM and CHO used a geometric or no model of the CDR, the optimal collimator shifted toward higher resolution than that obtained using the IO and the CHO with full CDR modeling. With the optimal collimator, the IO-MM and CHO using geometric modeling gave similar performance to full CDR modeling. Collimators with poorer resolution were optimal when CDR modeling was used. The agreement of rankings between the IO-MM and CHO confirmed that the IO-MM is useful for optimization tasks when model mismatch is present due to its substantially reduced computational burden compared to the CHO.

  7. An effective model for ergonomic optimization applied to a new automotive assembly line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic conditions of manual work. The model includes a schematic and systematic analysis method for the operations and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  8. AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT

    NASA Astrophysics Data System (ADS)

    Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi

    In this paper, an optimal management model is formulated for performance-based rehabilitation/maintenance contracts for airport concrete pavement, in which two types of life cycle cost risk, ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
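    The non-homogeneous Markov chain at the core of such models simply propagates a condition-state distribution through year-dependent transition matrices. A minimal sketch with an invented three-state matrix whose deterioration probability grows with time (standing in for the consolidation-conditional transitions):

      import numpy as np

      def transition(year):
          # Assumed year-dependent transitions over three condition states
          # (good, fair, poor); deterioration accelerates over time.
          p = min(0.05 + 0.01 * year, 0.3)
          return np.array([[1 - p, p, 0.0],
                           [0.0, 1 - p, p],
                           [0.0, 0.0, 1.0]])

      state = np.array([1.0, 0.0, 0.0])       # pavement starts in "good"
      for year in range(15):                  # propagate the non-homogeneous chain
          state = state @ transition(year)
      print("state distribution after 15 years:", state)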

  9. Conceptual design and multidisciplinary optimization of in-plane morphing wing structures

    NASA Astrophysics Data System (ADS)

    Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.

    2006-03-01

    In this paper, a topology optimization methodology for the synthesis of distributed actuation systems, with specific application to morphing air vehicles, is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, an in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members, while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the proposed modeling concept meets these criteria and is suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept, with design variables that control the system configuration: the state of each element in the model is a design variable to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of distributed actuators, represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations themselves. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.

  10. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. 
Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.
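    The core of the approach, minimizing total thrust through a second-order cone relaxation, can be sketched with cvxpy. The snippet below uses a constant gravity vector and invented vehicle numbers, so it omits the nonlinear asteroid gravity models and the successive re-evaluation loop described above; the norm-slack constraint is the relaxation that the lossless-convexification argument certifies as tight at the optimum.

      import numpy as np
      import cvxpy as cp

      # Discretized soft-landing problem: minimize integrated thrust acceleration.
      N, dt = 100, 2.0
      g = np.array([0.0, 0.0, -1e-3])              # m/s^2, invented asteroid gravity
      r = cp.Variable((3, N + 1)); v = cp.Variable((3, N + 1))
      u = cp.Variable((3, N)); s = cp.Variable(N)  # thrust accel and slack magnitude

      cons = [r[:, 0] == np.array([50.0, 50.0, 100.0]), v[:, 0] == 0.0,
              r[:, N] == 0.0, v[:, N] == 0.0]      # start offset; land at rest
      for k in range(N):
          cons += [v[:, k + 1] == v[:, k] + dt * (u[:, k] + g),   # Euler dynamics
                   r[:, k + 1] == r[:, k] + dt * v[:, k],
                   cp.norm(u[:, k]) <= s[k],       # second-order cone relaxation
                   s[k] <= 0.05]                   # max thrust acceleration, m/s^2

      prob = cp.Problem(cp.Minimize(dt * cp.sum(s)), cons)
      prob.solve()                                 # SOCP: global optimum guaranteed
      print("propellant proxy (sum |u| dt):", prob.value)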

  11. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    NASA Astrophysics Data System (ADS)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences in recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.

  12. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
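    The cost-independence claim rests on the adjoint identity: one extra linear solve yields the gradient with respect to every control variable at once. A generic discrete illustration follows (not the FEniCS/dolfin-adjoint machinery itself), with an invented parameterized system and a finite-difference check:

      import numpy as np

      # For A(m) u = b with objective J(u) = 0.5 ||u - d||^2, a single adjoint
      # solve gives dJ/dm_i for all i at once.
      n = 4
      rng = np.random.default_rng(3)
      b, d = rng.random(n), rng.random(n)
      m = np.ones(n)
      A = lambda m: np.diag(1.0 + m) + 0.1 * np.ones((n, n))  # toy parameterization

      u = np.linalg.solve(A(m), b)               # forward solve
      lam = np.linalg.solve(A(m).T, u - d)       # adjoint solve, dJ/du = u - d
      # Here dA/dm_i = e_i e_i^T, so dJ/dm_i = -lam^T (dA/dm_i) u = -lam_i * u_i.
      grad = -lam * u
      print("adjoint gradient:", grad)

      # Finite-difference check on one component (verifies the adjoint formula).
      eps, i = 1e-6, 2
      mp = m.copy(); mp[i] += eps
      up = np.linalg.solve(A(mp), b)
      fd = (0.5 * np.sum((up - d) ** 2) - 0.5 * np.sum((u - d) ** 2)) / eps
      print("fd gradient [i]:", fd)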

  14. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desired layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as an optimization model to reduce the time and cost spent on thin film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the model performs satisfactorily on this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them on various data sets.

  15. Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model

    NASA Astrophysics Data System (ADS)

    Zhao, Erdong; Li, Shangqi

    2017-08-01

    As the main body of power grid operation, county-level power supply enterprises undertake an important mission to guarantee the security of power grid operation and safeguard the social order of power use. The optimization of grid construction project portfolios has been a key issue for the power supply capacity and service level of grid enterprises. Based on the actual situation of power grid construction project optimization in county-level power enterprises, and building on a qualitative analysis of the projects, this paper develops a bi-level programming model based on quantitative analysis. The upper layer of the model captures the target restrictions of the optimal portfolio; the lower layer captures the enterprise's financial restrictions on the size of the project portfolio. Finally, a real example illustrates the operation and the optimization results of the model. Through qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of power grid enterprises' project selection.

  16. Interior and exterior ballistics coupled optimization with constraints of attitude control and mechanical-thermal conditions

    NASA Astrophysics Data System (ADS)

    Liang, Xin-xin; Zhang, Nai-min; Zhang, Yan

    2016-07-01

    For solid launch vehicle performance improvement, a modeling method for coupled interior and exterior ballistics optimization, with constraints on attitude control and mechanical-thermal conditions, is proposed. First, the interior and exterior ballistic models of the solid launch vehicle are established, and the attitude control model for the high-wind region and the stage-separation phase, the load calculation model of the drag reduction device, and the in-flight thermal condition calculation model are presented. Second, the optimization model is established to optimize the range; its variables are the interior and exterior ballistic design parameters selected by sensitivity analysis, and its constraints are the attitude control and mechanical-thermal conditions. Finally, the method is applied to the optimal design of a three-stage solid launch vehicle simulation with a differential evolution algorithm. Simulation results show that range capability is improved by 10.8% and that both the attitude control and mechanical-thermal constraints are satisfied.

  17. Optimally managing water resources in large river basins for an uncertain future

    USGS Publications Warehouse

    Roehl, Edwin A.; Conrads, Paul

    2014-01-01

    One of the challenges of basin management is the optimization of water use through ongoing regional economic development, droughts, and climate change. This paper describes a model of the Savannah River Basin designed to continuously optimize regulated flow to meet prioritized objectives set by resource managers and stakeholders. The model was developed from historical data by using machine learning, making it more accurate and adaptable to changing conditions than traditional models. The model is coupled to an optimization routine that computes the daily flow needed to most efficiently meet the water-resource management objectives. The model and optimization routine are packaged in a decision support system that makes it easy for managers and stakeholders to use. Simulation results show that flow can be regulated to substantially reduce salinity intrusions in the Savannah National Wildlife Refuge while conserving more water in the reservoirs. A method for using the model to assess the effectiveness of the flow-alteration features after the deepening also is demonstrated.

  18. GPR Technologies and Methodologies in Italy: A Review

    NASA Astrophysics Data System (ADS)

    Benedetto, Andrea; Frezza, Fabrizio; Manacorda, Guido; Massa, Andrea; Pajewski, Lara

    2014-05-01

    GPR techniques and technologies have been the subject of intense research activity in Italy over the last 15 years because of their potential applications, particularly in civil engineering. In more detail, several innovative approaches and models have been developed to inspect road pavements, both to measure the thickness of their layers and to diagnose or prevent damage. Moreover, new frontiers in bridge inspection, as well as in geotechnical applications such as slides and flows, have been investigated using GPR. From the methodological viewpoint, innovative techniques have been developed to solve GPR forward-scattering problems, and to locate and classify subsurface targets in real time and retrieve their properties through multi-resolution strategies and linear and non-linear methodologies. Furthermore, the application of GPR and other non-destructive testing methods to archaeological prospecting, cultural heritage diagnostics, and the localization and detection of vital signs of trapped people has been widely investigated. More recently, new theoretical and empirical paradigms regarding moisture evaluation in various porous media and soil characterization have been published as the result of long-term research activities. Pioneering studies are also under way that aim to correlate GPR measurements with the mechanical characteristics of bound and unbound construction materials. In such a framework, this abstract reviews some of the most recent advances of GPR techniques and technologies within the Italian industrial and academic communities [also including their application within international projects such as FP7 ISTIMES (http://www.istimes.eu)], and envisages some of the most promising research trends currently under development. Acknowledgment - This work was supported by COST Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar'. References: [1] M. Balsi, S. Esposito, F. Frezza, P. Nocito, L. Porrini, L. Pajewski, G. Schettini and C. Twizere, 'FDTD Simulation of GPR Measurements in a Laboratory Sandbox for Landmine Detection', Proc. IWAGPR 2009, Granada, Spain, 27-29 May 2009, pp. 45-49. [2] M. Balsi, S. Esposito, F. Frezza, P. Nocito, P. M. Barone, S. E. Lauro, E. Mattei, E. Pettinelli, G. Schettini and C. Twizere, 'GPR Measurements and FDTD Simulations for Landmine Detection', Proc. XIII International Conference on Ground Penetrating Radar, 21-25 June 2010, Lecce, pp. 865-869. [3] M. Balsi, P. M. Barone, S. Esposito, F. Frezza, S. E. Lauro, P. Nocito, E. Pettinelli, G. Schettini and C. Twizere, 'FDTD Simulations and GPR Measurements for Land Mine Detection in a Controlled Environment', Atti XVIII Riunione Nazionale di Elettromagnetismo, Benevento, 6-10 September 2010, pp. 59-65. [4] M. Salucci, D. Sartori, N. Anselmi, A. Randazzo, G. Oliveri, and A. Massa, 'Imaging buried objects within the second-order born approximation through a multiresolution regularized Inexact-Newton method', 2013 International Symposium on Electromagnetic Theory (EMTS), (Hiroshima, Japan), May 20-24 2013. [5] L. Lizzi, F. Viani, P. Rocca, G. Oliveri, M. Benedetti and A. Massa, 'Three-dimensional real-time localization of subsurface objects - From theory to experimental validation,' 2009 IEEE International Geoscience and Remote Sensing Symposium, vol. 2, pp. II-121-II-124, 12-17 July 2009. [6] S. Meschino, L. Pajewski, M. Pastorino, A. Randazzo, and G. Schettini, 'Detection of Subsurface Metallic Utilities by Means of a SAP Technique: Comparing MUSIC- and SVM-Based Approaches', J. Appl. Geophy., vol. 97, pp. 60-68, Oct. 2013. [7] M. Pastorino, and A. Randazzo, 'Buried object detection by an Inexact-Newton applied to nonlinear inverse scattering', Int. J. of Microwave Sci. Technol., vol. 2012, Article ID 637301, 7 pages, 2012. [8] F. Soldovieri and L. Crocco, "Electromagnetic Tomography", in Subsurface Sensing, A. S. Turk, K. A. Hocaoglu, A. A. Vertiy ed., Wiley Series in Microwave and Optical Engineering (Volume 1), Sept. 2011, John Wiley & Sons, NY. [9] I. Catapano, L. Crocco, Y. Krellmann, G. Triltzsch, F. Soldovieri, "A Tomographic Approach for Helicopter-Borne Ground Penetrating Radar Imaging", IEEE Geosci. Rem. Sensing Lett., vol. 9, p. 378-382, 2012.

  19. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing an evolution operator model. Since we usually deal with high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selecting an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to the optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P 046207, 2009. Kravtsov S, Kondrashov D, Ghil M, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.
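
    The minimum description length principle itself is easy to demonstrate on a stand-in problem: choosing the order of a polynomial model for a noisy series. The sketch below uses the standard two-part Gaussian code length (data misfit plus a parameter-count penalty); it is not the specific criterion of Molkov et al. (2009), and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 200)
y = 1.2 * t - 0.8 * t**3 + 0.1 * rng.standard_normal(t.size)  # "observed" series

n = t.size
best = None
for order in range(1, 9):
    coef = np.polyfit(t, y, order)
    rss = np.sum((np.polyval(coef, t) - y) ** 2)
    k = order + 1                                  # number of model parameters
    # Two-part code length: data given the model, plus the model parameters.
    dl = 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)
    print(f"order {order}: description length {dl:.1f}")
    if best is None or dl < best[0]:
        best = (dl, order)
print("selected order:", best[1])   # typically 3 for this synthetic series
```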

  20. Parameter optimization for surface flux transport models

    NASA Astrophysics Data System (ADS)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  1. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    The mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model becomes inadequate if the returns of assets are not normally distributed; therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk-averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return is reduced. The objective of this study is to compare the portfolio compositions as well as the performances of the mean-variance model and the mean-variance-skewness-kurtosis model using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis changes the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because it takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for Malaysian investors in portfolio optimization.
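
    A compact sketch of the polynomial goal programming idea named above, under illustrative assumptions: sample moments are computed from synthetic returns, each moment is first optimized alone to obtain its ideal value, and a combined objective then minimizes the total deviation from those ideals with equal preference weights. The study's actual data, estimators, and weight choices are not given in the abstract.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
R = rng.standard_t(df=5, size=(500, 4)) * 0.02 + rng.uniform(0.0002, 0.001, 4)  # synthetic returns

def moments(w):
    p = R @ w                      # portfolio return series
    return p.mean(), p.var(), skew(p), kurtosis(p)

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
bnds = [(0, 1)] * R.shape[1]
w0 = np.full(R.shape[1], 0.25)

def solve(obj):
    return minimize(obj, w0, bounds=bnds, constraints=cons, method='SLSQP').x

# Stage 1: ideal value of each moment, optimized separately.
ideal_mean = moments(solve(lambda w: -moments(w)[0]))[0]   # max mean
ideal_var  = moments(solve(lambda w:  moments(w)[1]))[1]   # min variance
ideal_skew = moments(solve(lambda w: -moments(w)[2]))[2]   # max skewness
ideal_kurt = moments(solve(lambda w:  moments(w)[3]))[3]   # min kurtosis

# Stage 2: minimize total deviation from the four ideals (unit weights).
def goal(w):
    m, v, s, k = moments(w)
    return (abs(ideal_mean - m) + abs(v - ideal_var)
            + abs(ideal_skew - s) + abs(k - ideal_kurt))

w_star = solve(goal)
print(np.round(w_star, 3), moments(w_star))
```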

  2. Modeling, Analysis, and Optimization Issues for Large Space Structures

    NASA Technical Reports Server (NTRS)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  3. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of the group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle; thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated on modeling and control problems, and the results were compared with those of other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  4. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in the optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed, and calculations for a high-performance aircraft are presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored, and some quirks in optimal flight characteristics peculiar to the model are uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of the zeroth-order approximation by a special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  5. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998

  6. The effect of dropout on the efficiency of D-optimal designs of linear mixed models.

    PubMed

    Ortega-Azurduy, S A; Tan, F E S; Berger, M P F

    2008-06-30

    Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.

  7. Piezoresistive Cantilever Performance—Part II: Optimization

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.

    2010-01-01

    Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is, in principle, extensible to doping methods beyond ion implantation. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution, as well as the sensitivity-noise tradeoff in optimal cantilever performance. We also performed a comparison between our optimization technique and existing models and demonstrated an eight-fold improvement in force resolution over simplified models. PMID:20333323

  8. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models; an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables. We adopt a constrained maximum…

  9. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through the optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  10. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in reality because extreme maximum and minimum values in the data can strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.

  11. Optimal pricing and replenishment policies for instantaneous deteriorating items with backlogging and trade credit under inflation

    NASA Astrophysics Data System (ADS)

    Sundara Rajan, R.; Uthayakumar, R.

    2017-12-01

    In this paper we develop an economic order quantity model to investigate the optimal replenishment policies for instantaneously deteriorating items under inflation and trade credit. The demand rate is a linear function of the selling price and decreases exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, order quantity, and replenishment time. An easy-to-use algorithm is developed to determine the optimal replenishment policies for the retailer. We also provide the optimal present value of profit when shortages are completely backlogged as a special case. Numerical examples are presented to illustrate the algorithm, and managerial implications are drawn from them to substantiate our model. The results show an improvement in total profit under complete backlogging compared with partial backlogging.

  12. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to modeling and optimizing a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  13. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO) optimized support vector machine (SVM) is proposed. The standard SVM is transformed into a nonlinear multi-classification SVM, PSO is applied to optimize the parameters of the SVM multi-classification model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic algorithm optimized SVM, which demonstrates that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
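
    A minimal sketch of the PSO-over-SVM idea: a small swarm searches the (C, gamma) hyperparameter plane in log space, scoring each particle by cross-validated accuracy. The DGA-ratio dataset is not available here, so a bundled scikit-learn dataset stands in for it; the swarm size, inertia, and acceleration constants are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)        # stand-in for the three-DGA-ratio features
rng = np.random.default_rng(0)

def score(p):                            # p = (log10 C, log10 gamma)
    clf = make_pipeline(StandardScaler(),
                        SVC(C=10.0 ** p[0], gamma=10.0 ** p[1]))
    return cross_val_score(clf, X, y, cv=5).mean()

n, dim, w, c1, c2 = 12, 2, 0.7, 1.5, 1.5          # swarm size and PSO constants
lo, hi = np.array([-2.0, -6.0]), np.array([4.0, 1.0])
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([score(p) for p in x])
g = pbest[pbest_f.argmax()]                        # global best particle

for _ in range(20):
    r1, r2 = rng.random((2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([score(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[pbest_f.argmax()]

print("best (log10 C, log10 gamma):", g, "cv accuracy:", pbest_f.max())
```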

  14. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing dynamic models of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to identify a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  15. A stochastic discrete optimization model for designing container terminal facilities

    NASA Astrophysics Data System (ADS)

    Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista

    2017-11-01

    Since uncertainty essentially affects the total transportation cost, it remains an important issue in container terminals, which incorporate several modes and transshipment processes. This paper presents a stochastic discrete optimization model for designing a container terminal, involving decisions on facility improvement actions. The container terminal operation model is constructed by accounting for variation in demand and facility performance. In addition, to illustrate the conflicting issue that arises in practice in terminal operation, the model also takes into account the possible increase in facility delays due to the increasing amount of equipment, especially container trucks. These variations reflect the uncertainty in container terminal operations. A Monte Carlo simulation is invoked to propagate the variations by following the observed distributions. The problem is constructed within the framework of combinatorial optimization to investigate the optimal facility improvement decisions. A new variant of glow-worm swarm optimization (GSO), rarely explored in the transportation field, is proposed to solve the optimization problem. The model's applicability is tested using the actual characteristics of a container terminal.

  16. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    PubMed

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-09-01

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) parameters were selected and optimized. Cubic models were better at describing the cooking loss, color, and TPA parameters, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (<1%), intermediate L* (>54 and <62), and greater hardness, cohesiveness, and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC, and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Optimization techniques using MODFLOW-GWM

    USGS Publications Warehouse

    Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.

    2015-01-01

    An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.

  18. Optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps

    NASA Astrophysics Data System (ADS)

    Qiu, Hong; Deng, Wenmin

    2018-02-01

    In this paper, the optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps is considered. We introduce two kinds of environmental perturbations into this model. One is called white noise, which is continuous and described by a stochastic integral with respect to standard Brownian motion. The other is jumping noise, which is modeled by a Lévy process. Under some mild assumptions, the critical values between extinction and persistence in the mean of each species are established. Sufficient and necessary criteria for the existence of an optimal harvesting policy are established, and the optimal harvesting effort and the maximum sustainable yield are also obtained. We utilize the ergodic method to discuss the optimal harvesting problem. The results show that white noise and Lévy noise significantly affect the optimal harvesting policy, while time delays are harmless to the optimal harvesting strategy in some cases. Finally, some numerical examples are introduced to show the validity of our results.

  19. A new method to optimize natural convection heat sinks

    NASA Astrophysics Data System (ADS)

    Lampio, K.; Karvinen, R.

    2017-08-01

    The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.

  20. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  1. Biokinetic model-based multi-objective optimization of Dunaliella tertiolecta cultivation using elitist non-dominated sorting genetic algorithm with inheritance.

    PubMed

    Sinha, Snehal K; Kumar, Mithilesh; Guria, Chandan; Kumar, Anup; Banerjee, Chiranjib

    2017-10-01

    Algal-model-based multi-objective optimization using an elitist non-dominated sorting genetic algorithm with inheritance was carried out for batch cultivation of Dunaliella tertiolecta using NPK fertilizer. Optimization problems involving two and three objective functions were solved. The objective functions are: maximization of algae biomass and lipid productivity, and minimization of cultivation time and cost. Time-variant light intensity and temperature, together with the NPK-fertilizer, NaCl, and NaHCO3 loadings, are the important decision variables. An algal model involving Monod/Andrews adsorption kinetics and the Droop model with internal nutrient cell quota was used for the optimization studies. Sets of non-dominated (equally good) Pareto-optimal solutions were obtained for the problems studied. It was observed that time-variant optimal light intensity and temperature trajectories, together with optimal NPK-fertilizer, NaCl, and NaHCO3 concentrations, have a significant influence in improving biomass and lipid productivity under minimum cultivation time and cost. The proposed optimization studies may be helpful for implementing control strategies in scale-up operation. Copyright © 2017 Elsevier Ltd. All rights reserved.
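
    The core of the elitist non-dominated sorting approach mentioned above is the ranking of candidate solutions into successive Pareto fronts. Below is a self-contained sketch of that ranking step on a toy two-objective minimization problem; it is the generic NSGA-II-style sort, not the authors' full algorithm with inheritance.

```python
import numpy as np

def non_dominated_sort(F):
    """Rank rows of objective matrix F (minimization) into Pareto fronts."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    n_dom = np.zeros(n, dtype=int)          # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n_dom[i] += 1
    fronts, current = [], [i for i in range(n) if n_dom[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Toy bi-objective problem: a trade-off between f1 ~ x and f2 ~ 1 - x.
rng = np.random.default_rng(0)
x = rng.random(20)
F = np.column_stack([x + 0.1 * rng.random(20), 1 - x + 0.1 * rng.random(20)])
print(non_dominated_sort(F)[0])   # indices of the first (best) Pareto front
```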

  2. Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, P. S.; Chiu, Y.

    2015-12-01

    In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, which caused seawater intrusion. In order to remediate the contaminated groundwater and find the best strategies for groundwater use, a management model that searches for optimal groundwater operational strategies is developed in this study. The objective function is to minimize the total amount of injected water, and a set of constraints is applied to ensure that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and is replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied to solve the management model. The optimal results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce computational time. Under appropriate settings of the objective function and constraints, DE can find the optimal injection rates at predefined barriers. The concentrations at the target locations decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
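
    The surrogate-plus-global-optimizer pattern described above can be sketched compactly: sample an expensive "simulator", train a small neural network on the samples, and hand the cheap surrogate to differential evolution. The analytic test function below stands in for SEAWAT, and the network size and sampling budget are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

def expensive_simulator(x):
    """Stand-in for a costly density-dependent flow/transport run."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

# 1) Sample the simulator over the decision space (e.g., injection rates).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (400, 2))
y = np.array([expensive_simulator(x) for x in X])

# 2) Train the ANN surrogate on the samples.
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, y)

# 3) Optimize the cheap surrogate with differential evolution.
res = differential_evolution(lambda x: ann.predict(x.reshape(1, -1))[0],
                             bounds=[(-2, 2), (-2, 2)], seed=0)
print("surrogate optimum:", res.x,
      "true value there:", expensive_simulator(res.x))
```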

  3. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Le; MacDonald, Erin

    This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability. (C) 2013 Elsevier Ltd. All rights reserved.

  4. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
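
    For a single input parameter, the optimal estimator is the conditional mean of the target given the input, and the irreducible error is the residual variance about it. The sketch below estimates both with the simple histogram (binning) technique on synthetic data; the multi-parameter accuracy problems reported in the paper are exactly what this naive binning runs into as the number of input parameters grows.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0, 1, 100_000)                                    # input parameter
q = np.sin(2 * np.pi * phi) + 0.3 * rng.standard_normal(phi.size)  # target quantity

# Histogram-based optimal estimator: conditional mean of q in each phi-bin.
edges = np.linspace(0, 1, 51)
idx = np.clip(np.digitize(phi, edges) - 1, 0, len(edges) - 2)
cond_mean = np.array([q[idx == b].mean() for b in range(len(edges) - 1)])

# Irreducible error: variance of q about the optimal estimator.
irreducible = np.mean((q - cond_mean[idx]) ** 2)
print("estimated irreducible error:", irreducible)   # close to 0.3**2 = 0.09

# Total error of a (deliberately poor) linear model q ~ a*phi + b;
# the functional error is the part the model form could still remove.
a, b = np.polyfit(phi, q, 1)
total = np.mean((q - (a * phi + b)) ** 2)
print("total model error:", total, "functional part:", total - irreducible)
```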

  5. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for a planar joint clearance contact force model was presented in this paper, and the optimized parameters were applied to the dynamic response simulation of a mechanism with an oversized joint clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, the relation between model parameters at different clearances was concluded. Then the dynamic equation of a two-stage reciprocating compressor with four joint clearances was developed using the Lagrange method, and a multi-body dynamic model built in the ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of experimental tests, the parameters of the joint clearance model, instead of taking the designed values, were optimized by a genetic algorithm approach. Finally, the optimized parameters were applied to simulate the dynamic response of the model with the oversized joint clearance fault according to the concluded parameter relation. The dynamic response of experimental tests verified the effectiveness of this application.

  6. Modeling joint restoration strategies for interdependent infrastructure systems

    PubMed Central

    Simonovic, Slobodan P.

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level, by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300

  7. Automated optimization of water-water interaction parameters for a coarse-grained model.

    PubMed

    Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A

    2014-02-13

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment.
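
    A minimal sketch of this style of automated parameter optimization: SciPy's Nelder-Mead simplex minimizes a weighted sum of squared relative errors between model-predicted and target properties. The property function, targets, and weights below are placeholders, not the actual coarse-grained water model or the internals of ParOpt.

```python
import numpy as np
from scipy.optimize import minimize

targets = np.array([997.0, 71.97, 78.4, 2.3e-9])  # density, surface tension,
weights = np.ones(4)                              # permittivity, diffusion (illustrative)

def predicted_properties(params):
    """Placeholder for running the coarse-grained model with these parameters
    and measuring the four observables from the simulation."""
    a, b, c = params
    return np.array([990.0 + 5 * a, 70.0 + b, 75.0 + 2 * c, 2.0e-9 * (1 + 0.1 * a)])

def target_function(params):
    rel_err = (predicted_properties(params) - targets) / targets
    return np.sum(weights * rel_err ** 2)

res = minimize(target_function, x0=[1.0, 1.0, 1.0], method='Nelder-Mead',
               options={'xatol': 1e-6, 'fatol': 1e-9})
print(res.x, res.fun)
```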

  8. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of this paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational cost, as it contains finite element models. The second is less accurate but has a low computational cost, as it uses an analytical model: the shield is replaced by a number of mutually coupled coils, and the currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield quickly and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
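
    The space mapping loop pairs the two models roughly as follows: optimize the cheap coarse model, evaluate the expensive fine model at the result, re-align the coarse model to the fine response, and repeat. The one-dimensional sketch below uses analytic stand-ins for both models (the real fine model would be a finite-element solve) and an output space mapping alignment, one of several space mapping variants; it illustrates the idea and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fine(x):      # expensive "finite element" model of the stray field
    return (x - 2.0) ** 2 + 0.1 * np.sin(3 * x)

def coarse(x):    # cheap analytical model (mutually coupled coils)
    return x ** 2

x = 0.0
for _ in range(8):
    # Three fine evaluations per step: the value plus a finite-difference slope.
    f0 = fine(x)
    h = 1e-6
    d0 = (fine(x + h) - fine(x - h)) / (2 * h)     # fine-model slope at x
    c0 = 2 * x                                     # coarse-model slope at x
    # Output space mapping: shift the coarse model so its value and slope
    # match the fine model at the current iterate, then optimize the surrogate.
    surrogate = lambda t, x0=x, f0=f0, d0=d0, c0=c0: (
        coarse(t) + (f0 - coarse(x0)) + (d0 - c0) * (t - x0))
    x = minimize_scalar(surrogate, bounds=(-5.0, 5.0), method='bounded').x

print("optimized design:", x, "fine-model value:", fine(x))
```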

  9. Optimal harvesting of a stochastic delay logistic model with Lévy jumps

    NASA Astrophysics Data System (ADS)

    Qiu, Hong; Deng, Wenmin

    2016-10-01

    The optimal harvesting problem for a stochastic time-delay logistic model with Lévy jumps is considered in this article. We first show that the model has a unique global positive solution and discuss the uniform boundedness of its pth moment with harvesting. Then we prove that the system is globally attractive and asymptotically stable in distribution under our assumptions. Furthermore, we obtain the existence of the optimal harvesting effort by the ergodic method, and we give the explicit expression of the optimal harvesting policy and the maximum yield.
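
    For orientation, the deterministic counterpart of this problem admits a closed-form optimal policy, which is the baseline that such stochastic results generalize. A brief worked version (standard textbook material, not the paper's stochastic delay derivation):

```latex
\[
\frac{dX}{dt} = X(r - bX) - EX , \qquad
X^{*} = \frac{r - E}{b} \quad \text{(positive equilibrium under effort } E\text{)} ,
\]
\[
Y(E) = E X^{*} = \frac{E\,(r - E)}{b} , \qquad
\frac{dY}{dE} = \frac{r - 2E}{b} = 0
\;\Longrightarrow\;
E^{*} = \frac{r}{2} , \quad Y(E^{*}) = \frac{r^{2}}{4b} .
\]
```

    In the stochastic delayed setting, the equilibrium analysis is replaced by ergodic time averages, and the noise terms enter the optimal effort, consistent with the abstract's remark that the jumps affect the optimal policy.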

  10. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
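
    The practical advantage of analytic derivatives over finite differencing, as claimed above, is easy to demonstrate in miniature: a forward-difference gradient carries step-size-dependent truncation and round-off errors that an analytic derivative avoids entirely. This sketch is generic and uses none of PyCycle's or OpenMDAO's APIs.

```python
import numpy as np

def f(x):                      # stand-in for a smooth cycle-analysis output
    return np.exp(np.sin(x)) * x**2

def df_analytic(x):            # exact derivative, as an analytic-derivative
    return np.exp(np.sin(x)) * (x**2 * np.cos(x) + 2 * x)  # framework would provide

x0 = 1.3
exact = df_analytic(x0)
for h in [1e-2, 1e-6, 1e-10, 1e-14]:
    fd = (f(x0 + h) - f(x0)) / h          # forward difference
    print(f"h={h:.0e}  fd error = {abs(fd - exact):.3e}")
# The error first shrinks (truncation ~ h) and then grows again (round-off ~ eps/h);
# the analytic derivative has neither problem, which keeps gradient-based
# optimizers stable.
```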

  11. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    NASA Astrophysics Data System (ADS)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet, and a car engine cooling axial fan.

  12. Towards a personalized and dynamic CRT-D. A computational cardiovascular model dedicated to therapy optimization.

    PubMed

    Di Molfetta, A; Santini, L; Forleo, G B; Minni, V; Mafhouz, K; Della Rocca, D G; Fresiello, L; Romeo, F; Ferrari, G

    2012-01-01

    In spite of the benefits of cardiac resynchronization therapy (CRT), 25-30% of patients are still non-responders. One possible reason could be non-optimal atrioventricular (AV) and interventricular (VV) interval settings. Our aim was to exploit a numerical model of the cardiovascular system for AV and VV interval optimization in CRT. A CRT-dedicated numerical model of the cardiovascular system was previously developed. Echocardiographic parameters, systemic aortic pressure, and ECG were collected in 20 consecutive patients before and after CRT. Patient data were simulated by the model, which was used to optimize the intervals and set them into the device at baseline and at follow-up. The optimal AV and VV intervals were chosen to optimize the simulated selected variable(s) on the basis of both echocardiographic and electrocardiographic parameters. Intervals were different for each patient, and in most cases they changed at follow-up. The model reproduces clinical data well, as verified with Bland-Altman analysis and the t-test (p > 0.05). Left ventricular remodeling was 38.7% and the increase in left ventricular ejection fraction was 11%, against the 15% and 6% reported in the literature, respectively. The developed numerical model could reproduce patient conditions at baseline and at follow-up, including the CRT effects. The model could be used to optimize AV and VV intervals at baseline and at follow-up, realizing a personalized and dynamic CRT. A patient-tailored CRT could improve patient outcomes in comparison to literature data.

  13. A plastic corticostriatal circuit model of adaptation in perceptual decision making

    PubMed Central

    Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2013-01-01

    The ability to optimize decisions and adapt them to changing environments is a crucial brain function that increases survivability. Although much has been learned about the neuronal activity in various brain regions that is associated with decision making, and about how nervous systems may learn to achieve optimization, the underlying neuronal mechanisms of how nervous systems optimize decision strategies with preference given to speed or accuracy, and how they adapt to changes in the environment, remain unclear. Based on extensive empirical observations, we addressed the question by extending a previously described cortico-basal ganglia circuit model of perceptual decisions with the inclusion of a dynamic dopamine (DA) system that modulates spike-timing dependent plasticity (STDP). We found that, once an optimal model setting that maximized the reward rate was selected, the same setting automatically optimized decisions across different task environments through dynamic balancing between the facilitating and depressing components of the DA dynamics. Interestingly, other model parameters were also optimal if we considered the reward rate weighted by the subject's preference for speed or accuracy. Specifically, the circuit model favored speed if we increased the phasic DA response to the reward prediction error, whereas the model favored accuracy if we reduced the tonic DA activity or the phasic DA responses to the estimated reward probability. The proposed model provides insight into the roles of different components of DA responses in decision adaptation and optimization in a changing environment. PMID:24339814

  14. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees have historically been managed for crop pollination; however, recent population declines draw attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; the InVEST model, however, provides an efficient tool for estimating bee abundance beyond the field perimeter.

  15. Response Surface Methodology Using a Fullest Balanced Model: A Re-Analysis of a Dataset in the Korean Journal for Food Science of Animal Resources.

    PubMed

    Rheem, Sungsue; Rheem, Insoo; Oh, Sejong

    2017-01-01

    Response surface methodology (RSM) is a useful set of statistical techniques for modeling and optimizing responses in food science research. In the analysis of response surface data, a second-order polynomial regression model is usually used. However, we sometimes encounter situations where the fit of the second-order model is poor. If the model fitted to the data has a poor fit, including a lack of fit, the modeling and optimization results might not be accurate. In such a case, using a fullest balanced model, which has no lack of fit, can fix the problem, enhancing the accuracy of the response surface modeling and optimization. This article presents how to develop and use such a model for better modeling and optimization of the response, through an illustrative re-analysis of a dataset in Park et al. (2014), published in the Korean Journal for Food Science of Animal Resources.

  16. Optimizing Biorefinery Design and Operations via Linear Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick

    The ability to assess and optimize the economics of biomass resource utilization for the production of fuels, chemicals, and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in ways comparable to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for theoretical biorefineries such as (1) the optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns/turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to the crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for maximizing the potential benefits of biomass utilization for the production of fuels, chemicals, and power.
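
    A feedstock-slate LP of the kind described above maps naturally onto scipy.optimize.linprog: maximize product revenue minus feedstock cost subject to plant capacity and feedstock availability. All prices, yields, and capacities below are invented for illustration and bear no relation to the NREL/INL models, apart from reusing the $4 per gallon assumption.

```python
from scipy.optimize import linprog

# Decision variables: tons/day of three feedstocks to purchase and process.
cost   = [55.0, 48.0, 62.0]     # $/ton delivered (illustrative)
yield_ = [62.0, 55.0, 70.0]     # gallons of fuel per ton (illustrative)
price  = 4.0                    # $/gallon product (the article's assumption)

# Maximize sum((price * yield_i - cost_i) * x_i)  ==  minimize the negative.
c = [-(price * y - k) for y, k in zip(yield_, cost)]

A_ub = [[1, 1, 1]]              # total throughput limited by plant capacity
b_ub = [2000.0]                 # tons/day
bounds = [(0, 1200), (0, 900), (0, 600)]   # per-feedstock availability

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal slate (tons/day):", res.x, "daily margin: $%.0f" % -res.fun)
```

    Swapping in new prices or capacity limits and re-solving is what makes this kind of model useful for the breakeven and impact analyses listed in the abstract.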

  17. The combination of simulation and response surface methodology and its application in an aggregate production plan

    NASA Astrophysics Data System (ADS)

    Chen, Zhiming; Feng, Yuncheng

    1988-08-01

    This paper describes an algorithmic structure for combining simulation and optimization techniques in both theory and practice. Response surface methodology is used to optimize the decision variables in the simulation environment. Simulation-optimization software has been developed and successfully implemented, and its application to an aggregate production planning simulation-optimization model is reported. The model's objective is to minimize production cost and to generate an optimal production plan and inventory control strategy for an aircraft factory.
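
    One iteration of such a simulation-RSM loop might look like the following sketch: probe a toy stochastic cost simulation at a factorial design around the current plan, fit a first-order surface, and step along the path of steepest descent. The simulation, factors, and step sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cost(workforce, inventory):
    """Toy stand-in for the aggregate production simulation (cost in $K)."""
    return ((workforce - 120) ** 2
            + 0.5 * (inventory - 40) ** 2
            + rng.normal(0, 5))        # simulation noise

center = np.array([100.0, 30.0])       # current plan (workforce, inventory)
design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])  # coded 2^2 factorial
scale  = np.array([5.0, 5.0])          # natural units per coded unit

# Run the simulation at the design points and fit a first-order surface.
pts = center + design * scale
y = np.array([simulate_cost(*p) for p in pts])
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(4), design]), y, rcond=None)

# Step the region of interest along the path of steepest descent.
direction = -beta[1:] / np.linalg.norm(beta[1:])
center = center + 2.0 * direction * scale
print("fitted slopes:", beta[1:].round(2), "| next center:", center.round(1))
```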

  18. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Collocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau collocation method to discretize the model and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.
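
    Assuming the "Radau allocation method" of the original abstract is Radau IIA collocation (as rendered above), the sketch below shows the discretize-then-Newton idea on a scalar nonlinear ODE over a single mesh interval. The dynamics and step size are illustrative and unrelated to the grid-inverter model.

```python
import numpy as np

# 2-stage Radau IIA (order 3): nodes c = [1/3, 1], stiffly accurate (b = last row of A).
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
f  = lambda x: -x**3                # illustrative nonlinear dynamics dx/dt = f(x)
df = lambda x: -3 * x**2            # its derivative, for the Newton Jacobian

h, x0 = 0.5, 1.0                    # step size and initial state
K = np.zeros(2)                     # stage derivatives: K_i = f(x0 + h * sum_j A_ij K_j)
for _ in range(20):                 # Newton iteration on F(K) = K - f(stages) = 0
    stages = x0 + h * A @ K
    F = K - f(stages)
    J = np.eye(2) - h * np.diag(df(stages)) @ A   # dF/dK
    step = np.linalg.solve(J, -F)
    K += step
    if np.linalg.norm(step) < 1e-12:
        break

x1 = x0 + h * A[-1] @ K             # last collocation node coincides with t0 + h
print("state after one Radau step:", x1)   # exact: 1/sqrt(1 + 2*0.5) ~ 0.7071
```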

  19. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D P; Ritts, W D; Wharton, S

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
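
    A hedged sketch of the single-site vs. cross-site contrast, using the simple light-use efficiency form GPP = epsilon * FPAR * PAR as a stand-in; the synthetic sites, noise, and parameter values are invented, and the actual CFLUX model is more detailed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic tower "observations" for three conifer sites with different true epsilon.
sites = {}
for name, true_eps in [("site_A", 1.1), ("site_B", 0.9), ("site_C", 1.0)]:
    par = rng.uniform(2, 12, 100)        # incident PAR (illustrative units)
    fpar = rng.uniform(0.3, 0.9, 100)    # satellite FPAR
    gpp = true_eps * fpar * par + rng.normal(0, 0.5, 100)
    sites[name] = (par, fpar, gpp)

def fit_epsilon(data):
    """Least-squares epsilon for GPP = eps * FPAR * PAR over pooled observations."""
    par, fpar, gpp = map(np.concatenate, zip(*data))
    apar = fpar * par
    return (apar @ gpp) / (apar @ apar)

# Cross-site optimization pools all towers into one parameter set ...
eps_cross = fit_epsilon(list(sites.values()))
# ... versus site-specific optimization, one epsilon per tower.
for name, d in sites.items():
    print(name, "| site fit:", round(fit_epsilon([d]), 3),
          "| cross-site:", round(eps_cross, 3))
```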

  20. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and with spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, which is then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, with an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed the capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
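
    A compact sketch of the surrogate-plus-GA pattern: train a small neural network on samples of a stand-in "expensive" DO model, then run a simple genetic algorithm on the surrogate with a penalty below a 5 mg/L DO limit. Every function, range, and number here is an illustrative placeholder for CE-QUAL-W2 and the study's operational constraints.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def expensive_do_model(q):
    """Toy stand-in for CE-QUAL-W2: downstream DO (mg/L) vs. release q (cfs)."""
    return 8.0 - 0.004 * q + 0.3 * np.sin(q / 150.0)

# Sample the "expensive" model and train the ANN surrogate (inputs scaled to ~[0,1]).
q_train = rng.uniform(100, 900, (400, 1))
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                         random_state=0).fit(q_train / 1000.0,
                                             expensive_do_model(q_train.ravel()))

def fitness(q):
    """Power-revenue proxy, heavily penalized when surrogate DO drops below 5 mg/L."""
    do = surrogate.predict(q.reshape(-1, 1) / 1000.0)
    return q - 1e4 * np.maximum(0.0, 5.0 - do)

# Minimal GA: truncation selection plus Gaussian mutation over the release decision.
pop = rng.uniform(100, 900, 60)
for gen in range(80):
    parents = pop[np.argsort(fitness(pop))][-20:]       # keep the best third
    children = rng.choice(parents, 40) + rng.normal(0, 20, 40)
    pop = np.clip(np.concatenate([parents, children]), 100, 900)

best = pop[np.argmax(fitness(pop))]
print("best release: %.0f cfs | surrogate DO: %.2f mg/L"
      % (best, surrogate.predict([[best / 1000.0]])[0]))
```

    The design point is the same as in the study: the GA never calls the expensive model inside its loop, so adding constraints or lengthening the planning horizon costs only surrogate evaluations.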
