A Decision Support System for Solving Multiple Criteria Optimization Problems
ERIC Educational Resources Information Center
Filatovas, Ernestas; Kurasova, Olga
2011-01-01
In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
Post Pareto optimization-A case
NASA Astrophysics Data System (ADS)
Popov, Stoyan; Baeva, Silvia; Marinova, Daniela
2017-12-01
Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.
Experiment in multiple-criteria energy policy analysis
NASA Astrophysics Data System (ADS)
Ho, J. K.
1980-07-01
An international panel of energy analysts participated in an experiment to use HOPE (holistic preference evaluation): an interactive parametric linear programming method for multiple criteria optimization. The criteria of cost, environmental effect, crude oil, and nuclear fuel were considered, according to BESOM: an energy model for the US in the year 2000.
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
ERIC Educational Resources Information Center
Yin, Peng-Yeng; Chang, Kuang-Cheng; Hwang, Gwo-Jen; Hwang, Gwo-Haur; Chan, Ying
2006-01-01
To accurately analyze the problems of students in learning, the composed test sheets must meet multiple assessment criteria, such as the ratio of relevant concepts to be evaluated, the average discrimination degree, difficulty degree and estimated testing time. Furthermore, to precisely evaluate the improvement of student's learning performance…
A multiple objective optimization approach to quality control
NASA Technical Reports Server (NTRS)
Seaman, Christopher Michael
1991-01-01
The use of product quality as the performance criteria for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many apsects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criteria in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
NASA Astrophysics Data System (ADS)
Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar
2018-04-01
Hyper redundant manipulators have a large number of degrees of freedom (DOF) than the required to perform a given task. Additional DOF of manipulators provide the flexibility to work in highly cluttered environment and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated due to large number of DOF and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on IK solution and redundancy resolution of hyper-redundant manipulator using classical optimization approach. Joint positions are computed by optimizing various criteria for a serial hyper redundant manipulators while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing the criteria like obstacle avoidance, joint limit avoidance.
Maureen C. Kennedy; E. David Ford; Thomas M. Hinckley
2009-01-01
Many hypotheses have been advanced about factors that control tree longevity. We use a simulation model with multi-criteria optimization and Pareto optimality to determine branch morphologies in the Pinaceae that minimize the effect of growth limitations due to water stress while simultaneously maximizing carbohydrate gain. Two distinct branch morphologies in the...
Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen
2014-01-01
Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants. PMID:25530927
Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen; Cabrera-Ríos, Mauricio
2014-01-01
Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants.
Lexicographic goal programming and assessment tools for a combinatorial production problem.
DOT National Transportation Integrated Search
2008-01-01
NP-complete combinatorial problems often necessitate the use of near-optimal solution techniques including : heuristics and metaheuristics. The addition of multiple optimization criteria can further complicate : comparison of these solution technique...
Geospatial optimization of siting large-scale solar projects
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet; Gerritsen, Margot; Diffendorfer, James E.; Haines, Seth S.
2014-01-01
guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
Ahmed, Sameh; Alqurshi, Abdulmalik; Mohamed, Abdel-Maaboud Ismail
2018-07-01
A new robust and reliable high-performance liquid chromatography (HPLC) method with multi-criteria decision making (MCDM) approach was developed to allow simultaneous quantification of atenolol (ATN) and nifedipine (NFD) in content uniformity testing. Felodipine (FLD) was used as an internal standard (I.S.) in this study. A novel marriage between a new interactive response optimizer and a HPLC method was suggested for multiple response optimizations of target responses. An interactive response optimizer was used as a decision and prediction tool for the optimal settings of target responses, according to specified criteria, based on Derringer's desirability. Four independent variables were considered in this study: Acetonitrile%, buffer pH and concentration along with column temperature. Eight responses were optimized: retention times of ATN, NFD, and FLD, resolutions between ATN/NFD and NFD/FLD, and plate numbers for ATN, NFD, and FLD. Multiple regression analysis was applied in order to scan the influences of the most significant variables for the regression models. The experimental design was set to give minimum retention times, maximum resolution and plate numbers. The interactive response optimizer allowed prediction of optimum conditions according to these criteria with a good composite desirability value of 0.98156. The developed method was validated according to the International Conference on Harmonization (ICH) guidelines with the aid of the experimental design. The developed MCDM-HPLC method showed superior robustness and resolution in short analysis time allowing successful simultaneous content uniformity testing of ATN and NFD in marketed capsules. The current work presents an interactive response optimizer as an efficient platform to optimize, predict responses, and validate HPLC methodology with tolerable design space for assay in quality control laboratories. Copyright © 2018 Elsevier B.V. All rights reserved.
SynGenics Optimization System (SynOptSys)
NASA Technical Reports Server (NTRS)
Ventresca, Carol; McMilan, Michelle L.; Globus, Stephanie
2013-01-01
The SynGenics Optimization System (SynOptSys) software application optimizes a product with respect to multiple, competing criteria using statistical Design of Experiments, Response-Surface Methodology, and the Desirability Optimization Methodology. The user is not required to be skilled in the underlying math; thus, SynOptSys can help designers and product developers overcome the barriers that prevent them from using powerful techniques to develop better pro ducts in a less costly manner. SynOpt-Sys is applicable to the design of any product or process with multiple criteria to meet, and at least two factors that influence achievement of those criteria. The user begins with a selected solution principle or system concept and a set of criteria that needs to be satisfied. The criteria may be expressed in terms of documented desirements or defined responses that the future system needs to achieve. Documented desirements can be imported into SynOptSys or created and documented directly within SynOptSys. Subsequent steps include identifying factors, specifying model order for each response, designing the experiment, running the experiment and gathering the data, analyzing the results, and determining the specifications for the optimized system. The user may also enter textual information as the project progresses. Data is easily edited within SynOptSys, and the software design enables full traceability within any step in the process, and facilitates reporting as needed. SynOptSys is unique in the way responses are defined and the nuances of the goodness associated with changes in response values for each of the responses of interest. The Desirability Optimization Methodology provides the basis of this novel feature. Moreover, this is a complete, guided design and optimization process tool with embedded math that can remain invisible to the user. It is not a standalone statistical program; it is a design and optimization system.
NASA Astrophysics Data System (ADS)
Nabavi, N.
2018-07-01
The author investigates the monitoring methods for fine adjustment of the previously proposed on-chip architecture for frequency multiplication and translation of harmonics by design. Digital signal processing (DSP) algorithms are utilized to create an optimized microwave photonic integrated circuit functionality toward automated frequency multiplication. The implemented DSP algorithms are formed on discrete Fourier transform and optimization-based algorithms (Greedy and gradient-based algorithms), which are analytically derived and numerically compared based on the accuracy and speed of convergence criteria.
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M
2007-08-25
Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternatives schemes, the selection of a list of relevant criteria to evaluate these alternative schemes and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method which belongs to the well-known family of multiple criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
A collaborative study on colposcopic biopsy with aims to study cervical disease on the lesion level, optimize criteria for biopsy placement, and analyze the incremental benefit of taking multiple biopsies
NASA Astrophysics Data System (ADS)
Thomas, L.; Tremblais, B.; David, L.
2014-03-01
Optimization of multiplicative algebraic reconstruction technique (MART), simultaneous MART and block iterative MART reconstruction techniques was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowledge of how each reconstruction parameter influences the quality of particle volume reconstruction and computing time is the key in Tomo-PIV. These criteria were applied to a real case, a jet in cross flow, and were validated.
Modeling, simulation and optimization approaches for design of lightweight car body structures
NASA Astrophysics Data System (ADS)
Kiani, Morteza
Simulation-based design optimization and finite element method are used in this research to investigate weight reduction of car body structures made of metallic and composite materials under different design criteria. Besides crashworthiness in full frontal, offset frontal, and side impact scenarios, vibration frequencies, static stiffness, and joint rigidity are also considered. Energy absorption at the component level is used to study the effectiveness of carbon fiber reinforced polymer (CFRP) composite material with consideration of different failure criteria. A global-local design strategy is introduced and applied to multi-objective optimization of car body structures with CFRP components. Multiple example problems involving the analysis of full-vehicle crash and body-in-white models are used to examine the effect of material substitution and the choice of design criteria on weight reduction. The results of this study show that car body structures that are optimized for crashworthiness alone may not meet the vibration criterion. Moreover, optimized car body structures with CFRP components can be lighter with superior crashworthiness than the baseline and optimized metallic structures.
NASA Astrophysics Data System (ADS)
Bykovsky, A. Yu; Sherbakov, A. A.
2016-08-01
The C-valued Allen-Givone algebra is the attractive tool for modeling of a robotic agent, but it requires the consensus method of minimization for the simplification of logic expressions. This procedure substitutes some undefined states of the function for the maximal truth value, thus extending the initially given truth table. This further creates the problem of different formal representations for the same initially given function. The multi-criteria optimization is proposed for the deliberate choice of undefined states and model formation.
Yoon, Hyejin; Leitner, Thomas
2014-12-17
Analyses of entire viral genomes or mtDNA requires comprehensive design of many primers across their genomes. In addition, simultaneous optimization of several DNA primer design criteria may improve overall experimental efficiency and downstream bioinformatic processing. To achieve these goals, we developed PrimerDesign-M. It includes several options for multiple-primer design, allowing researchers to efficiently design walking primers that cover long DNA targets, such as entire HIV-1 genomes, and that optimizes primers simultaneously informed by genetic diversity in multiple alignments and experimental design constraints given by the user. PrimerDesign-M can also design primers that include DNA barcodes and minimize primer dimerization. PrimerDesign-Mmore » finds optimal primers for highly variable DNA targets and facilitates design flexibility by suggesting alternative designs to adapt to experimental conditions.« less
Automated optimal coordination of multiple-DOF neuromuscular actions in feedforward neuroprostheses.
Lujan, J Luis; Crago, Patrick E
2009-01-01
This paper describes a new method for designing feedforward controllers for multiple-muscle, multiple-DOF, motor system neural prostheses. The design process is based on experimental measurement of the forward input/output properties of the neuromechanical system and numerical optimization of stimulation patterns to meet muscle coactivation criteria, thus resolving the muscle redundancy (i.e., overcontrol) and the coupled DOF problems inherent in neuromechanical systems. We designed feedforward controllers to control the isometric forces at the tip of the thumb in two directions during stimulation of three thumb muscles as a model system. We tested the method experimentally in ten able-bodied individuals and one patient with spinal cord injury. Good control of isometric force in both DOFs was observed, with rms errors less than 10% of the force range in seven experiments and statistically significant correlations between the actual and target forces in all ten experiments. Systematic bias and slope errors were observed in a few experiments, likely due to the neuromuscular fatigue. Overall, the tests demonstrated the ability of a general design approach to satisfy both control and coactivation criteria in multiple-muscle, multiple-axis neuromechanical systems, which is applicable to a wide range of neuromechanical systems and stimulation electrodes.
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
The economics of project analysis: Optimal investment criteria and methods of study
NASA Technical Reports Server (NTRS)
Scriven, M. C.
1979-01-01
Insight is provided toward the development of an optimal program for investment analysis of project proposals offering commercial potential and its components. This involves a critique of economic investment criteria viewed in relation to requirements of engineering economy analysis. An outline for a systems approach to project analysis is given Application of the Leontief input-output methodology to analysis of projects involving multiple processes and products is investigated. Effective application of elements of neoclassical economic theory to investment analysis of project components is demonstrated. Patterns of both static and dynamic activity levels are incorporated.
MINIVER: Miniature version of real/ideal gas aero-heating and ablation computer program
NASA Technical Reports Server (NTRS)
Hendler, D. R.
1976-01-01
Computer code is used to determine heat transfer multiplication factors, special flow field simulation techniques, different heat transfer methods, different transition criteria, crossflow simulation, and more efficient thin skin thickness optimization procedure.
Successive equimarginal approach for optimal design of a pump and treat system
NASA Astrophysics Data System (ADS)
Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.
2007-08-01
An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
Yang, Po-Jen; Lee, Yuan-Ti; Tzeng, Shu-Ling; Lee, Huei-Chao; Tsai, Chin-Feng; Chen, Chun-Chieh; Chen, Shiuan-Chih; Lee, Meng-Chih
2015-01-01
To evaluate the prescription of potentially inappropriate medications (PIM), using the Screening Tool of Older Persons' potentially inappropriate Prescriptions (STOPP) and Beers criteria, to disabled older people. One hundred and forty-one patients aged ≥65 years with Barthel scale scores ≤60 and a regular intake of medication for chronic diseases at Chung Shan Medical University Hospital from July to December 2012 were included, and their medical records were reviewed. Comprehensive patient information was extracted from the patients' medical notes. The STOPP and Beers 2012 criteria were used separately to identify PIM, and logistic regression analyses were performed to identify risk factors for PIM. The optimal cutoff for the number of medications prescribed for predicting PIM was estimated using the Youden index. Of the 141 patients, 94 (66.7%) and 94 (66.7%) had at least one PIM identified by the STOPP and Beers criteria, respectively. In multivariate analysis, PIM identified by the Beers criteria were associated with the prescription of multiple medications (p = 0.013) and the presence of psychiatric diseases (p < 0.001), whereas PIM identified by the STOPP criteria were only associated with the prescription of multiple medications (p = 0.008). The optimal cutoff for the number of medications prescribed for predicting PIM by using the STOPP or Beers criteria was 6. After adjustment for covariates, patients prescribed ≥6 medications had a significantly higher risk of PIM, identified using the STOPP or Beers criteria, compared to patients prescribed <6 medications (both p < 0.05). This study revealed a high frequency of PIM in disabled older patients with chronic diseases, particularly those prescribed ≥6 medications. © 2015 S. Karger AG, Basel.
Yang, Po-Jen; Lee, Yuan-Ti; Tzeng, Shu-Ling; Lee, Huei-Chao; Tsai, Chin-Feng; Chen, Chun-Chieh; Chen, Shiuan-Chih; Lee, Meng-Chih
2015-01-01
Objective To evaluate the prescription of potentially inappropriate medications (PIM), using the Screening Tool of Older Persons' potentially inappropriate Prescriptions (STOPP) and Beers criteria, to disabled older people. Subjects and Methods One hundred and forty-one patients aged ≥65 years with Barthel scale scores ≤60 and a regular intake of medication for chronic diseases at Chung Shan Medical University Hospital from July to December 2012 were included, and their medical records were reviewed. Comprehensive patient information was extracted from the patients' medical notes. The STOPP and Beers 2012 criteria were used separately to identify PIM, and logistic regression analyses were performed to identify risk factors for PIM. The optimal cutoff for the number of medications prescribed for predicting PIM was estimated using the Youden index. Results Of the 141 patients, 94 (66.7%) and 94 (66.7%) had at least one PIM identified by the STOPP and Beers criteria, respectively. In multivariate analysis, PIM identified by the Beers criteria were associated with the prescription of multiple medications (p = 0.013) and the presence of psychiatric diseases (p < 0.001), whereas PIM identified by the STOPP criteria were only associated with the prescription of multiple medications (p = 0.008). The optimal cutoff for the number of medications prescribed for predicting PIM by using the STOPP or Beers criteria was 6. After adjustment for covariates, patients prescribed ≥6 medications had a significantly higher risk of PIM, identified using the STOPP or Beers criteria, compared to patients prescribed <6 medications (both p < 0.05). Conclusion This study revealed a high frequency of PIM in disabled older patients with chronic diseases, particularly those prescribed ≥6 medications. PMID:26279164
Geospatial Optimization of Siting Large-Scale Solar Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet
2014-03-01
Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent withmore » each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.« less
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contributions were previously derived in collaboration with E. Aurell.
FY04 Advanced Life Support Architecture and Technology Studies: Mid-Year Presentation
NASA Technical Reports Server (NTRS)
Lange, Kevin; Anderson, Molly; Duffield, Bruce; Hanford, Tony; Jeng, Frank
2004-01-01
Long-Term Objective: Identify optimal advanced life support system designs that meet existing and projected requirements for future human spaceflight missions. a) Include failure-tolerance, reliability, and safe-haven requirements. b) Compare designs based on multiple criteria including equivalent system mass (ESM), technology readiness level (TRL), simplicity, commonality, etc. c) Develop and evaluate new, more optimal, architecture concepts and technology applications.
NASA Astrophysics Data System (ADS)
Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis
2015-07-01
This paper presents a methodology for the multi-objective (MO) shape optimization of plate structure under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) method based on Pareto-optimal solutions and considers thickness distributions parameters and antagonist objectives among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibrations analysis. Such a model gives a privileged access to the stress within the plate structure compared to primal classical FEM, and features a linear dependence to the thickness parameters. A sub-structuring reduction method is also computed in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Those methods combined enable a fast and stress-wise efficient structure analysis, and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum Von Mises stress within a plate structure under a dynamic load put forward the relevance of our method with promising results. It is able to satisfy multiple damage criteria with different thickness distributions, and use a smaller FEM.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization process is an important aspect in the natural product extractions. Herein, an alternative approach is proposed for the optimization in extraction, namely, the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines the Latin hypercube sampling, the feasible range of independent variables, the Monte Carlo simulation, and the threshold criteria of response variables. The GLUE method is tested in three different techniques including the ultrasound, the microwave, and the supercritical CO 2 assisted extractions utilizing the data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with unlimited number of independent and response variables; consider combined multiple threshold criteria, which is subjective depending on the target of the investigation for response variables; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
Merlé, Y; Mentré, F
1995-02-01
In this paper 3 criteria to design experiments for Bayesian estimation of the parameters of nonlinear models with respect to their parameters, when a prior distribution is available, are presented: the determinant of the Bayesian information matrix, the determinant of the pre-posterior covariance matrix, and the expected information provided by an experiment. A procedure to simplify the computation of these criteria is proposed in the case of continuous prior distributions and is compared with the criterion obtained from a linearization of the model about the mean of the prior distribution for the parameters. This procedure is applied to two models commonly encountered in the area of pharmacokinetics and pharmacodynamics: the one-compartment open model with bolus intravenous single-dose injection and the Emax model. They both involve two parameters. Additive as well as multiplicative gaussian measurement errors are considered with normal prior distributions. Various combinations of the variances of the prior distribution and of the measurement error are studied. Our attention is restricted to designs with limited numbers of measurements (1 or 2 measurements). This situation often occurs in practice when Bayesian estimation is performed. The optimal Bayesian designs that result vary with the variances of the parameter distribution and with the measurement error. The two-point optimal designs sometimes differ from the D-optimal designs for the mean of the prior distribution and may consist of replicating measurements. For the studied cases, the determinant of the Bayesian information matrix and its linearized form lead to the same optimal designs. In some cases, the pre-posterior covariance matrix can be far from its lower bound, namely, the inverse of the Bayesian information matrix, especially for the Emax model and a multiplicative measurement error. The expected information provided by the experiment and the determinant of the pre-posterior covariance matrix generally lead to the same designs except for the Emax model and the multiplicative measurement error. Results show that these criteria can be easily computed and that they could be incorporated in modules for designing experiments.
Design of clinical trials involving multiple hypothesis tests with a common control.
Schou, I Manjula; Marschner, Ian C
2017-07-01
Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm, however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Microseismic Monitoring Design Optimization Based on Multiple Criteria Decision Analysis
NASA Astrophysics Data System (ADS)
Kovaleva, Y.; Tamimi, N.; Ostadhassan, M.
2017-12-01
Borehole microseismic monitoring of hydraulic fracture treatments of unconventional reservoirs is a widely used method in the oil and gas industry. Sometimes, the quality of the acquired microseismic data is poor. One of the reasons for poor data quality is poor survey design. We attempt to provide a comprehensive and thorough workflow, using multiple criteria decision analysis (MCDA), to optimize planning micriseismic monitoring. So far, microseismic monitoring has been used extensively as a powerful tool for determining fracture parameters that affect the influx of formation fluids into the wellbore. The factors that affect the quality of microseismic data and their final results include average distance between microseismic events and receivers, complexity of the recorded wavefield, signal-to-noise ratio, data aperture, etc. These criteria often conflict with each other. In a typical microseismic monitoring, those factors should be considered to choose the best monitoring well(s), optimum number of required geophones, and their depth. We use MDCA to address these design challenges and develop a method that offers an optimized design out of all possible combinations to produce the best data acquisition results. We believe that this will be the first research to include the above-mentioned factors in a 3D model. Such a tool would assist companies and practicing engineers in choosing the best design parameters for future microseismic projects.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest including structural weight, load induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides for a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time has the capability of handling rank ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included demonstrating the design tool's capabilities as it applies to this design problem.
Joseph Buongiorno; Espen Andreas Halvorsen; Ole Martin Bollandsas; Terje Gobakken; Ole Hofstad
2012-01-01
This study sought optimal sustainable management regimes of uneven-aged Norway spruce-dominated stands with multiple objectives. The criteria were financial returns, CO2 sequestration and diversity of tree size and species. At prevailing timber prices, harvest and transport costs, and interest rates, uneven-aged management for timber alone was...
Sumiyoshi, Chika; Fujino, Haruo; Sumiyoshi, Tomiki; Yasuda, Yuka; Yamamori, Hidenaga; Ohi, Kazutaka; Fujimoto, Michiko; Takeda, Masatoshi; Hashimoto, Ryota
2016-11-30
The Wechsler Adult Intelligence Scale (WAIS) has been widely used to assess intellectual functioning not only in healthy adults but also people with psychiatric disorders. The purpose of the study was to develop an optimal WAIS-3 short form (SF) to evaluate intellectual status in patients with schizophrenia. One hundred and fifty patients with schizophrenia and 221 healthy controls entered the study. To select subtests for SFs, following criteria were considered: 1) predictability for the full IQ (FIQ), 2) representativeness for the IQ structure, 3) consistency of subtests across versions, 4) sensitivity to functional outcome measures, 5) conciseness in administration time. First, exploratory factor analysis (EFA) and multiple regression analysis were conducted to select subtests satisfying the first and the second criteria. Then, candidate SFs were nominated based on the third criterion and the coverage of verbal IQ and performance IQ. Finally, the optimality of candidate SFs was evaluated in terms of the fourth and fifth criteria. The results suggest that the dyad of Similarities and Symbol Search was the most optimal satisfying the above criteria. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Proposed Diagnostic Criteria for Smartphone Addiction
Lin, Yu-Hsuan; Chiang, Chih-Lin; Lin, Po-Hsien; Chang, Li-Ren; Ko, Chih-Hung; Lee, Yang-Han
2016-01-01
Background Global smartphone penetration has led to unprecedented addictive behaviors. The aims of this study are to develop diagnostic criteria of smartphone addiction and to examine the discriminative ability and the validity of the diagnostic criteria. Methods We developed twelve candidate criteria for characteristic symptoms of smartphone addiction and four criteria for functional impairment caused by excessive smartphone use. The participants consisted of 281 college students. Each participant was systematically assessed for smartphone-using behaviors by psychiatrist’s structured diagnostic interview. The sensitivity, specificity, and diagnostic accuracy of the candidate symptom criteria were analyzed with reference to the psychiatrists’ clinical global impression. The optimal model selection with its cutoff point of the diagnostic criteria differentiating the smartphone addicted subjects from non-addicted subjects was then determined by the best diagnostic accuracy. Results Six symptom criteria model with optimal cutoff point were determined based on the maximal diagnostic accuracy. The proposed smartphone addiction diagnostic criteria consisted of (1) six symptom criteria, (2) four functional impairment criteria and (3) exclusion criteria. Setting three symptom criteria as the cutoff point resulted in the highest diagnostic accuracy (84.3%), while the sensitivity and specificity were 79.4% and 87.5%, respectively. We suggested determining the functional impairment by two or more of the four domains considering the high accessibility and penetration of smartphone use. Conclusion The diagnostic criteria of smartphone addiction demonstrated the core symptoms “impaired control” paralleled with substance related and addictive disorders. The functional impairment involved multiple domains provide a strict standard for clinical assessment. PMID:27846211
Proposed Diagnostic Criteria for Smartphone Addiction.
Lin, Yu-Hsuan; Chiang, Chih-Lin; Lin, Po-Hsien; Chang, Li-Ren; Ko, Chih-Hung; Lee, Yang-Han; Lin, Sheng-Hsuan
2016-01-01
Global smartphone penetration has led to unprecedented addictive behaviors. The aims of this study are to develop diagnostic criteria of smartphone addiction and to examine the discriminative ability and the validity of the diagnostic criteria. We developed twelve candidate criteria for characteristic symptoms of smartphone addiction and four criteria for functional impairment caused by excessive smartphone use. The participants consisted of 281 college students. Each participant was systematically assessed for smartphone-using behaviors by psychiatrist's structured diagnostic interview. The sensitivity, specificity, and diagnostic accuracy of the candidate symptom criteria were analyzed with reference to the psychiatrists' clinical global impression. The optimal model selection with its cutoff point of the diagnostic criteria differentiating the smartphone addicted subjects from non-addicted subjects was then determined by the best diagnostic accuracy. Six symptom criteria model with optimal cutoff point were determined based on the maximal diagnostic accuracy. The proposed smartphone addiction diagnostic criteria consisted of (1) six symptom criteria, (2) four functional impairment criteria and (3) exclusion criteria. Setting three symptom criteria as the cutoff point resulted in the highest diagnostic accuracy (84.3%), while the sensitivity and specificity were 79.4% and 87.5%, respectively. We suggested determining the functional impairment by two or more of the four domains considering the high accessibility and penetration of smartphone use. The diagnostic criteria of smartphone addiction demonstrated the core symptoms "impaired control" paralleled with substance related and addictive disorders. The functional impairment involved multiple domains provide a strict standard for clinical assessment.
A connectionist model for diagnostic problem solving
NASA Technical Reports Server (NTRS)
Peng, Yun; Reggia, James A.
1989-01-01
A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attriubute for investigating the many alternatives that will be suggested as learning progresses.
Optimization in the Face of Contradictory Criteria - the Example of Muscle
NASA Astrophysics Data System (ADS)
Davison, M.; Shiner, J. S.
2002-09-01
Biological thought suggests that organisms tend toward optimal design through evolution. This optimization should be evident in the physiology of organs and organ systems. However, a given organ often has multiple roles to play in the optimization of the organism, and sometimes the logical optimization criteria for the different roles may be contradictory. In this paper we consider the case of skeletal muscle. One of its obvious functions is movement of the organism, for which efficiency is clearly a goal. However, muscle is also important for temperature regulation through shivering. In this latter function muscle should produce heat; i.e. it should be maximally inefficient. The thermodynamic optimizations desired for these two roles appear diametrically opposed. We show a way out of this dilemma by constructing a simple, physiologically motivated model of the contraction-relaxation cycle of muscle. This model muscle can be both an efficient mover in a ‘purposeful contraction’ regime, characterized by large movements of low frequency, and a good heat producer in a distinct ‘shivering’ regime characterized by small movements of high frequency.
Negotiating designs of multi-purpose reservoir systems in international basins
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2016-04-01
Given increasing agricultural and energy demands, coordinated management of multi-reservoir systems could help increase production without further stressing available water resources. However, regional or international disputes about water-use rights pose a challenge to efficient expansion and management of many large reservoir systems. Even when projects are likely to benefit all stakeholders, agreeing on the design, operation, financing, and benefit sharing can be challenging. This is due to the difficulty of considering multiple stakeholder interests in the design of projects and understanding the benefit trade-offs that designs imply. Incommensurate performance metrics, incomplete knowledge on system requirements, lack of objectivity in managing conflict and difficulty to communicate complex issue exacerbate the problem. This work proposes a multi-step hybrid multi-objective optimization and multi-criteria ranking approach for supporting negotiation in water resource systems. The approach uses many-objective optimization to generate alternative efficient designs and reveal the trade-offs between conflicting objectives. This enables informed elicitation of criteria weights for further multi-criteria ranking of alternatives. An ideal design would be ranked as best by all stakeholders. Resource-sharing mechanisms such as power-trade and/or cost sharing may help competing stakeholders arrive at designs acceptable to all. Many-objective optimization helps suggests efficient designs (reservoir site, its storage size and operating rule) and coordination levels considering the perspectives of multiple stakeholders simultaneously. We apply the proposed approach to a proof-of-concept study of the expansion of the Blue Nile transboundary reservoir system.
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within multiple mines or several work sites (entities) mining systems by using fuzzy linear programming (LP) was studied. LP is the most commonly used operations research methods in mining engineering. After the introductory review of properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. With the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. After the assessment, LP is an efficient mathematical modeling tool in production planning and solving many other single-criteria optimization problems of mining engineering. After the comparison of advantages and deficiencies of both deterministic and fuzzy LP models, the conclusion presents benefits of the fuzzy LP model but is also stating that seeking the optimal plan of production means to accomplish the overall analysis that will encompass the LP model approaches.
Stachura, Z; Zralek, C; Siemianowicz, S; Kiczka-Zralek, M; Zawadzki, T; Kluczewska, E; Giec-Lorenc, A
1998-01-01
A case of neurofibromatosis type II in a 19-year-old man is described with clinical and neuroimaging (MRI) findings. The diagnostic criteria of neurofibromatosis type I (NF1) and type II (NF2) and the optimal management options are still controversial. The authors suggest that this patient fulfills criteria of neurofibromatosis type II as well as partially neurofibromatosis type I. At present, without molecular analysis of DNA, this assumption can not be verified.
NASA Astrophysics Data System (ADS)
Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.
2018-05-01
A predictive simulation technique of optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, different from the well-known ones, is proposed. The impact of various factors on the cutting process with the purpose of determining optimal parameters of machining in concordance with certain effectiveness criteria is analyzed in the paper. A mathematical model of optimization, algorithms and computer programmes, visual graphical forms reflecting dependences of the effectiveness criteria – productivity, net cost, and tool life on parameters of the technological process - have been worked out. A nonlinear model for multidimensional functions, “solution of the equation with multiple unknowns”, “a coordinate descent method” and heuristic algorithms are accepted to solve the problem of optimization of cutting mode parameters. Research shows that in machining of workpieces made from heat-resistant alloy AISI N07263, the highest possible productivity will be achieved with the following parameters: cutting speed v = 22.1 m/min., feed rate s=0.26 mm/rev; tool life T = 18 min.; net cost – 2.45 per hour.
The constraints satisfaction problem approach in the design of an architectural functional layout
NASA Astrophysics Data System (ADS)
Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko
2011-09-01
A design support system with a new strategy for finding the optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice, urban or graphic design-wherever allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples-a single story residential building with two apartments each having two rooms-the results resemble realistic functional layouts. One example of a practical size problem of a layout of three apartments with a total of 20 rooms is demonstrated, where the generated solution can be used as a base for a realistic architectural blueprint. The discretization of design space is discussed, followed by application of a backtrack search algorithm used for generating a set of potentially 'good' room configurations. Next the solutions are classified by a machine learning method (FFN) as 'proper' or 'improper' according to the internal communication criteria. Examples of interactive ranking of the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal-the criteria, parameters and weights can be individually defined by a user and the search algorithm can be adjusted to a specific problem.
Integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Walsh, Joanne L.; Riley, Michael F.
1989-01-01
An integrated aerodynamic/dynamic optimization procedure is used to minimize blade weight and 4 per rev vertical hub shear for a rotor blade in forward flight. The coupling of aerodynamics and dynamics is accomplished through the inclusion of airloads which vary with the design variables during the optimization process. Both single and multiple objective functions are used in the optimization formulation. The Global Criteria Approach is used to formulate the multiple objective optimization and results are compared with those obtained by using single objective function formulations. Constraints are imposed on natural frequencies, autorotational inertia, and centrifugal stress. The program CAMRAD is used for the blade aerodynamic and dynamic analyses, and the program CONMIN is used for the optimization. Since the spanwise and the azimuthal variations of loading are responsible for most rotor vibration and noise, the vertical airload distributions on the blade, before and after optimization, are compared. The total power required by the rotor to produce the same amount of thrust for a given area is also calculated before and after optimization. Results indicate that integrated optimization can significantly reduce the blade weight, the hub shear and the amplitude of the vertical airload distributions on the blade and the total power required by the rotor.
Making objective decisions in mechanical engineering problems
NASA Astrophysics Data System (ADS)
Raicu, A.; Oanta, E.; Sabau, A.
2017-08-01
Decision making process has a great influence in the development of a given project, the goal being to select an optimal choice in a given context. Because of its great importance, the decision making was studied using various science methods, finally being conceived the game theory that is considered the background for the science of logical decision making in various fields. The paper presents some basic ideas regarding the game theory in order to offer the necessary information to understand the multiple-criteria decision making (MCDM) problems in engineering. The solution is to transform the multiple-criteria problem in a one-criterion decision problem, using the notion of utility, together with the weighting sum model or the weighting product model. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. The future directions of research consist of the use of other types of criteria, the development of computer based instruments for decision making general problems and to conceive a software module based on expert system principles to be included in the Wiki software applications for polymeric materials that are already operational.
Economics of production efficiency: Nutritional grouping of the lactating cow.
Cabrera, V E; Kalantari, A S
2016-01-01
Nutritional grouping of lactating cows under total mixed ration (TMR) feeding systems has been discussed in the literature since 1970. Most studies have concluded that using multiple, more-homogeneous TMR feeding groups is economically beneficial because of either nutrient cost savings, improved productivity, or both. Nonetheless, no consensus has been formed around this technique nor has it been widely adopted. By using optimal criteria for grouping and more precise nutrient specifications of diets, the latest studies have reported a consistently greater income over feed cost ($/cow per year) with multiple TMR groups compared with 1 TMR (3 TMR=$46 and 2 TMR=$21 to $39). Critical factors that determine the economic value of nutritional grouping are: (1) criteria for grouping, (2) nutrient specifications of diets, (3) effects on milk production, (4) health and environmental benefits, (5) number, size, and frequency of grouping, and (6) additional costs and benefits. It has been documented that grouping cows according to their simultaneous nutritional requirements (a.k.a., cluster grouping) is optimal. Cluster grouping is superior to other methods, such as grouping according to days in milk, milk production, or production and body weight combined. However, the dairy industry still uses less-than-optimal grouping criteria. Using cluster grouping will enhance the positive economic effects of multiple TMR. In addition, nutrient specifications of diets for groups do not seem optimal either. Milk lead factors, which are only based on group average milk production, are used. Diets could, however, be formulated more precisely based on overall group nutrient requirements. Providing more precise diets should also be in favor of grouping economics. Furthermore, an area that requires more attention is the potential negative effect of grouping on the milk production of moved cows because of either or both social interactions or diet concentration changes. Although the literature is inconclusive on this subject matter, the latest studies indicate that multiple TMR groups economically outperform 1 TMR, even after considering plausible potential milk losses when grouping. Moreover, additional positive effects of nutritional grouping of improved herd health and environmental stewardship should be translated into economic benefits. Finally, additional costs of management, labor, facilities, and equipment required for grouping are farm specific. The few studies that have integrated these factors in their analyses found that multiple TMR groups would still be economically superior to 1 TMR. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Multi-Objective Parallel Test-Sheet Composition Using Enhanced Particle Swarm Optimization
ERIC Educational Resources Information Center
Ho, Tsu-Feng; Yin, Peng-Yeng; Hwang, Gwo-Jen; Shyu, Shyong Jian; Yean, Ya-Nan
2009-01-01
For large-scale tests, such as certification tests or entrance examinations, the composed test sheets must meet multiple assessment criteria. Furthermore, to fairly compare the knowledge levels of the persons who receive tests at different times owing to the insufficiency of available examination halls or the occurrence of certain unexpected…
A multiple-objective optimal exploration strategy
Christakos, G.; Olea, R.A.
1988-01-01
Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. ?? 1988.
Integral criteria for large-scale multiple fingerprint solutions
NASA Astrophysics Data System (ADS)
Ushmaev, Oleg S.; Novikov, Sergey O.
2004-08-01
We propose the definition and analysis of the optimal integral similarity score criterion for large scale multmodal civil ID systems. Firstly, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics was taken from the real biometric tests. Then we carry out the analysis of simultaneous score distributions for a number of combined biometric tests and primary for ultiple fingerprint solutions. The explicit and approximate relations for optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of real multiple fingerprint test show good correspondence with the theoretical results in the wide range of the False Acceptance and the False Rejection Rates.
High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
Vaccaro, G; Pelaez, J I; Gil, J A
2016-07-01
Objective masticatory performance assessment using two-coloured specimens relies on image processing techniques; however, just a few approaches have been tested and no comparative studies are reported. The aim of this study was to present a selection procedure of the optimal image analysis method for masticatory performance assessment with a given two-coloured chewing gum. Dentate participants (n = 250; 25 ± 6·3 years) chewed red-white chewing gums for 3, 6, 9, 12, 15, 18, 21 and 25 cycles (2000 samples). Digitalised images of retrieved specimens were analysed using 122 image processing methods (IPMs) based on feature extraction algorithms (pixel values and histogram analysis). All IPMs were tested following the criteria of: normality of measurements (Kolmogorov-Smirnov), ability to detect differences among mixing states (anova corrected with post hoc Bonferroni) and moderate-to-high correlation with the number of cycles (Spearman's Rho). The optimal IPM was chosen using multiple criteria decision analysis (MCDA). Measurements provided by all IPMs proved to be normally distributed (P < 0·05), 116 proved sensible to mixing states (P < 0·05), and 35 showed moderate-to-high correlation with the number of cycles (|ρ| > 0·5; P < 0·05). The variance of the histogram of the Hue showed the highest correlation with the number of cycles (ρ = 0·792; P < 0·0001) and the highest MCDA score (optimal). The proposed procedure proved to be reliable and able to select the optimal approach among multiple IPMs. This experiment may be reproduced to identify the optimal approach for each case of locally available test foods. © 2016 John Wiley & Sons Ltd.
Mühlbacher, Axel C; Kaczynski, Anika
2016-02-01
Healthcare decision making is usually characterized by a low degree of transparency. The demand for transparent decision processes can be fulfilled only when assessment, appraisal and decisions about health technologies are performed under a systematic construct of benefit assessment. The benefit of an intervention is often multidimensional and, thus, must be represented by several decision criteria. Complex decision problems require an assessment and appraisal of various criteria; therefore, a decision process that systematically identifies the best available alternative and enables an optimal and transparent decision is needed. For that reason, decision criteria must be weighted and goal achievement must be scored for all alternatives. Methods of multi-criteria decision analysis (MCDA) are available to analyse and appraise multiple clinical endpoints and structure complex decision problems in healthcare decision making. By means of MCDA, value judgments, priorities and preferences of patients, insurees and experts can be integrated systematically and transparently into the decision-making process. This article describes the MCDA framework and identifies potential areas where MCDA can be of use (e.g. approval, guidelines and reimbursement/pricing of health technologies). A literature search was performed to identify current research in healthcare. The results showed that healthcare decision making is addressing the problem of multiple decision criteria and is focusing on the future development and use of techniques to weight and score different decision criteria. This article emphasizes the use and future benefit of MCDA.
Optimality versus stability in water resource allocation.
Read, Laura; Madani, Kaveh; Inanloo, Bahareh
2014-01-15
Water allocation is a growing concern in a developing world where limited resources like fresh water are in greater demand by more parties. Negotiations over allocations often involve multiple groups with disparate social, economic, and political status and needs, who are seeking a management solution for a wide range of demands. Optimization techniques for identifying the Pareto-optimal (social planner solution) to multi-criteria multi-participant problems are commonly implemented, although often reaching agreement for this solution is difficult. In negotiations with multiple-decision makers, parties who base decisions on individual rationality may find the social planner solution to be unfair, thus creating a need to evaluate the willingness to cooperate and practicality of a cooperative allocation solution, i.e., the solution's stability. This paper suggests seeking solutions for multi-participant resource allocation problems through an economics-based power index allocation method. This method can inform on allocation schemes that quantify a party's willingness to participate in a negotiation rather than opt for no agreement. Through comparison of the suggested method with a range of distance-based multi-criteria decision making rules, namely, least squares, MAXIMIN, MINIMAX, and compromise programming, this paper shows that optimality and stability can produce different allocation solutions. The mismatch between the socially-optimal alternative and the most stable alternative can potentially result in parties leaving the negotiation as they may be too dissatisfied with their resource share. This finding has important policy implications as it justifies why stakeholders may not accept the socially optimal solution in practice, and underlies the necessity of considering stability where it may be more appropriate to give up an unstable Pareto-optimal solution for an inferior stable one. Authors suggest assessing the stability of an allocation solution as an additional component to an analysis that seeks to distribute water in a negotiated process. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, H.; Li, W.; Wang, L.; Cheng, G.; Zhu, J.; Wang, Y.; Chen, Y.
2016-12-01
Groundwater supply accounts for two-thirds of the water supply of the Beijing municipality, and groundwater resources play a fundamental role in assuring the security and sustainability of the regional economy in Beijing. In this report, ten groundwater abstraction scenarios were designed based on the water demand and the capacity of water supply in the Beijing plain, and the impacts of these scenarios on the groundwater storage and level were illustrated with a transient 3D groundwater model constructed with MODFLOW. In addition, a set of evaluation criteria was developed taking into account of a number of factors such as the amount of groundwater exploitation, the evaporation of unconfined groundwater, river outflow, regional average groundwater depth, drawdowns in depression cones and the ratio of storage to the total recharge. Based on this set of criteria, the ten proposed groundwater abstraction scenarios were compared using a multi-criteria fuzzy pattern recognition model, which is suitable for solving large-scale, transient groundwater management problems and also proven to be a useful scientific analysis tool to identify the optimal groundwater resource utilization scenario. The evaluation results show that the groundwater resources can be rationally and optimally used when multiple measures such as control of groundwater abstraction and increase of recharge are jointly implemented.
An Integrated Method for Airfoil Optimization
NASA Astrophysics Data System (ADS)
Okrent, Joshua B.
Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal operational conditions from a broad design space with the use of minimal computational resources on both an absolute and relative scale to traditional analysis techniques. Aerodynamicists, program managers, aircraft configuration specialist, and anyone else in charge of aircraft configuration, design studies, and program level decisions might find the evaluation and optimization method proposed of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekmekcioglu, Mehmet, E-mail: meceng3584@yahoo.co; Kaya, Tolga; Kahraman, Cengiz
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper a modified fuzzy TOPSIS methodology is proposed for the selection of appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria of cost,more » reliability, feasibility, pollution and emission levels, waste and energy recovery is optimized to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights.« less
Didier, Ryne A; Hopkins, Katharine L; Coakley, Fergus V; Krishnaswami, Sanjay; Spiro, David M; Foster, Bryan R
2017-09-01
Magnetic resonance imaging (MRI) has emerged as a promising modality for evaluating pediatric appendicitis. However optimal imaging protocols, including roles of contrast agents and sedation, have not been established and diagnostic criteria have not been fully evaluated. To investigate performance characteristics of rapid MRI without contrast agents or sedation in the diagnosis of pediatric appendicitis. We included patients ages 4-18 years with suspicion of appendicitis who underwent rapid MRI between October 2013 and March 2015 without contrast agent or sedation. After two-radiologist review, we determined performance characteristics of individual diagnostic criteria and aggregate diagnostic criteria by comparing MRI results to clinical outcomes. We used receiver operating characteristic (ROC) curves to determine cut-points for appendiceal diameter and wall thickness for optimization of predictive power, and we calculated area under the curve (AUC) as a measure of test accuracy. Ninety-eight MRI examinations were performed in 97 subjects. Overall, MRI had a 94% sensitivity, 95% specificity, 91% positive predictive value and 97% negative predictive value. Optimal cut-points for appendiceal diameter and wall thickness were ≥7 mm and ≥2 mm, respectively. Independently, those cut-points produced sensitivities of 91% and 84% and specificities of 84% and 43%. Presence of intraluminal fluid (30/33) or localized periappendiceal fluid (32/33) showed a significant association with acute appendicitis (P<0.01), with sensitivities of 91% and 97% and specificities of 60% and 50%. For examinations in which the appendix was not identified by one or both reviewers (23/98), the clinical outcome was negative. Rapid MRI without contrast agents or sedation is accurate for diagnosis of pediatric appendicitis when multiple diagnostic criteria are considered in aggregate. Individual diagnostic criteria including optimized cut-points of ≥7 mm for diameter and ≥2 mm for wall thickness demonstrate high sensitivities but relatively low specificities. Nonvisualization of the appendix favors a negative diagnosis.
Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics
NASA Astrophysics Data System (ADS)
Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.
2018-01-01
Article describes the adoption of the Taguchi method in selective laser melting process of sector of combustion chamber by numerical and natural experiments for achieving minimum temperature deformation. The aim was to produce a quality part with minimum amount of numeric experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity; two factors for compensating the effect of the residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with a factor variation at three levels (L9). As quality criterias, the values of distortions for 9 zones of the combustion chamber and the maximum strength of the material of the chamber were chosen. Since the quality parameters are multidirectional, a grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, according to the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
The Army Study Program Fiscal Year 1993 Report
1992-11-16
results of the PERFORMER: CAA Ardennes campaign and, if necessary, to recommend modifications to CEM. PROJECT TITLE: Economic Analysis Of HODA Automation...DCSOPS PERFORMER: CAA PROJECT TITLE: Wartime Requirements, FY 99 PUIC: CSCAMNO15 To assist HODA in determining conventional munition requirements...STUDY WILL ATTEMPT TO DEVELOP A MULTIPLE CRITERIA OPTIMIZATION MODEL DTIC NUMBER: TO AID IN THE PROGRAMMING OF ARMY ACQUISITION FUNDS AT HODA . THE
A VIKOR Technique with Applications Based on DEMATEL and ANP
NASA Astrophysics Data System (ADS)
Ou Yang, Yu-Ping; Shieh, How-Ming; Tzeng, Gwo-Hshiung
In multiple criteria decision making (MCDM) methods, the compromise ranking method (named VIKOR) was introduced as one applicable technique to implement within MCDM. It was developed for multicriteria optimization of complex systems. However, few papers discuss conflicting (competing) criteria with dependence and feedback in the compromise solution method. Therefore, this study proposes and provides applications for a novel model using the VIKOR technique based on DEMATEL and the ANP to solve the problem of conflicting criteria with dependence and feedback. In addition, this research also uses DEMATEL to normalize the unweighted supermatrix of the ANP to suit the real world. An example is also presented to illustrate the proposed method with applications thereof. The results show the proposed method is suitable and effective in real-world applications.
EnRICH: Extraction and Ranking using Integration and Criteria Heuristics.
Zhang, Xia; Greenlee, M Heather West; Serb, Jeanne M
2013-01-15
High throughput screening technologies enable biologists to generate candidate genes at a rate that, due to time and cost constraints, cannot be studied by experimental approaches in the laboratory. Thus, it has become increasingly important to prioritize candidate genes for experiments. To accomplish this, researchers need to apply selection requirements based on their knowledge, which necessitates qualitative integration of heterogeneous data sources and filtration using multiple criteria. A similar approach can also be applied to putative candidate gene relationships. While automation can assist in this routine and imperative procedure, flexibility of data sources and criteria must not be sacrificed. A tool that can optimize the trade-off between automation and flexibility to simultaneously filter and qualitatively integrate data is needed to prioritize candidate genes and generate composite networks from heterogeneous data sources. We developed the java application, EnRICH (Extraction and Ranking using Integration and Criteria Heuristics), in order to alleviate this need. Here we present a case study in which we used EnRICH to integrate and filter multiple candidate gene lists in order to identify potential retinal disease genes. As a result of this procedure, a candidate pool of several hundred genes was narrowed down to five candidate genes, of which four are confirmed retinal disease genes and one is associated with a retinal disease state. We developed a platform-independent tool that is able to qualitatively integrate multiple heterogeneous datasets and use different selection criteria to filter each of them, provided the datasets are tables that have distinct identifiers (required) and attributes (optional). With the flexibility to specify data sources and filtering criteria, EnRICH automatically prioritizes candidate genes or gene relationships for biologists based on their specific requirements. Here, we also demonstrate that this tool can be effectively and easily used to apply highly specific user-defined criteria and can efficiently identify high quality candidate genes from relatively sparse datasets.
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α o , constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, the α o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ 2 tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ 2 . Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
Particle swarm optimization based space debris surveillance network scheduling
NASA Astrophysics Data System (ADS)
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecrafts, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecrafts. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures ofmore » the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.« less
Field Scale Optimization for Long-Term Sustainability of Best Management Practices in Watersheds
NASA Astrophysics Data System (ADS)
Samuels, A.; Babbar-Sebens, M.
2012-12-01
Agricultural and urban land use changes have led to disruption of natural hydrologic processes and impairment of streams and rivers. Multiple previous studies have evaluated Best Management Practices (BMPs) as means for restoring existing hydrologic conditions and reducing impairment of water resources. However, planning of these practices have relied on watershed scale hydrologic models for identifying locations and types of practices at scales much coarser than the actual field scale, where landowners have to plan, design and implement the practices. Field scale hydrologic modeling provides means for identifying relationships between BMP type, spatial location, and the interaction between BMPs at a finer farm/field scale that is usually more relevant to the decision maker (i.e. the landowner). This study focuses on development of a simulation-optimization approach for field-scale planning of BMPs in the School Branch stream system of Eagle Creek Watershed, Indiana, USA. The Agricultural Policy Environmental Extender (APEX) tool is used as the field scale hydrologic model, and a multi-objective optimization algorithm is used to search for optimal alternatives. Multiple climate scenarios downscaled to the watershed-scale are used to test the long term performance of these alternatives and under extreme weather conditions. The effectiveness of these BMPs under multiple weather conditions are included within the simulation-optimization approach as a criteria/goal to assist landowners in identifying sustainable design of practices. The results from these scenarios will further enable efficient BMP planning for current and future usage.
Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
Determination of laser cutting process conditions using the preference selection index method
NASA Astrophysics Data System (ADS)
Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan
2017-03-01
Determination of adequate parameter settings for improvement of multiple quality and productivity characteristics at the same time is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for application of the PSI method is that it represents an almost unexplored multi-criteria decision making (MCDM) method, and moreover, this method does not require assessment of the considered criteria relative significances. After reviewing and comparing the existing approaches for determination of laser cutting parameter settings, the application of the PSI method was explained in detail. Experiment realization was conducted by using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it can not be useful in situations where there exist a large number of alternatives which have attribute values (performances) very close to those which are preferred.
Fuzzy multicriteria disposal method and site selection for municipal solid waste.
Ekmekçioğlu, Mehmet; Kaya, Tolga; Kahraman, Cengiz
2010-01-01
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper a modified fuzzy TOPSIS methodology is proposed for the selection of appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria of cost, reliability, feasibility, pollution and emission levels, waste and energy recovery is optimized to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights. 2010 Elsevier Ltd. All rights reserved.
Integration of ecological-biological thresholds in conservation decision making.
Mavrommati, Georgia; Bithas, Kostas; Borsuk, Mark E; Howarth, Richard B
2016-12-01
In the Anthropocene, coupled human and natural systems dominate and only a few natural systems remain relatively unaffected by human influence. On the one hand, conservation criteria based on areas of minimal human impact are not relevant to much of the biosphere. On the other hand, conservation criteria based on economic factors are problematic with respect to their ability to arrive at operational indicators of well-being that can be applied in practice over multiple generations. Coupled human and natural systems are subject to economic development which, under current management structures, tends to affect natural systems and cross planetary boundaries. Hence, designing and applying conservation criteria applicable in real-world systems where human and natural systems need to interact and sustainably coexist is essential. By recognizing the criticality of satisfying basic needs as well as the great uncertainty over the needs and preferences of future generations, we sought to incorporate conservation criteria based on minimal human impact into economic evaluation. These criteria require the conservation of environmental conditions such that the opportunity for intergenerational welfare optimization is maintained. Toward this end, we propose the integration of ecological-biological thresholds into decision making and use as an example the planetary-boundaries approach. Both conservation scientists and economists must be involved in defining operational ecological-biological thresholds that can be incorporated into economic thinking and reflect the objectives of conservation, sustainability, and intergenerational welfare optimization. © 2016 Society for Conservation Biology.
Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
Multiobjective optimization of combinatorial libraries.
Agrafiotis, D K
2002-01-01
Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: only one property or measure seldom defines the quality of the design. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible, offers the user full control over the relative significance of the various objectives in the final design, and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.
Kazakis, Georgios; Kanellopoulos, Ioannis; Sotiropoulos, Stefanos; Lagaros, Nikos D
2017-10-01
Construction industry has a major impact on the environment that we spend most of our life. Therefore, it is important that the outcome of architectural intuition performs well and complies with the design requirements. Architects usually describe as "optimal design" their choice among a rather limited set of design alternatives, dictated by their experience and intuition. However, modern design of structures requires accounting for a great number of criteria derived from multiple disciplines, often of conflicting nature. Such criteria derived from structural engineering, eco-design, bioclimatic and acoustic performance. The resulting vast number of alternatives enhances the need for computer-aided architecture in order to increase the possibility of arriving at a more preferable solution. Therefore, the incorporation of smart, automatic tools in the design process, able to further guide designer's intuition becomes even more indispensable. The principal aim of this study is to present possibilities to integrate automatic computational techniques related to topology optimization in the phase of intuition of civil structures as part of computer aided architectural design. In this direction, different aspects of a new computer aided architectural era related to the interpretation of the optimized designs, difficulties resulted from the increased computational effort and 3D printing capabilities are covered here in.
Treatment optimization in MS: Canadian MS Working Group updated recommendations.
Freedman, Mark S; Selchen, Daniel; Arnold, Douglas L; Prat, Alexandre; Banwell, Brenda; Yeung, Michael; Morgenthau, David; Lapierre, Yves
2013-05-01
The Canadian Multiple Sclerosis Working Group (CMSWG) developed practical recommendations in 2004 to assist clinicians in optimizing the use of disease-modifying therapies (DMT) in patients with relapsing multiple sclerosis. The CMSWG convened to review how disease activity is assessed, propose a more current approach for assessing suboptimal response, and to suggest a scheme for switching or escalating treatment. Practical criteria for relapses, Expanded Disability Status Scale (EDSS) progression and MRI were developed to classify the clinical level of concern as Low, Medium and High. The group concluded that a change in treatment may be considered in any RRMS patient if there is a high level of concern in any one domain (relapses, progression or MRI), a medium level of concern in any two domains, or a low level of concern in all three domains. These recommendations for assessing treatment response should assist clinicians in making more rational choices in their management of relapsing MS patients.
NASA Astrophysics Data System (ADS)
Jung, Sang-Young
Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.
An optimization tool for satellite equipment layout
NASA Astrophysics Data System (ADS)
Qin, Zheng; Liang, Yan-gang; Zhou, Jian-ping
2018-01-01
Selection of the satellite equipment layout with performance constraints is a complex task which can be viewed as a constrained multi-objective optimization and a multiple criteria decision making problem. The layout design of a satellite cabin involves the process of locating the required equipment in a limited space, thereby satisfying various behavioral constraints of the interior and exterior environments. The layout optimization of satellite cabin in this paper includes the C.G. offset, the moments of inertia and the space debris impact risk of the system, of which the impact risk index is developed to quantify the risk to a satellite cabin of coming into contact with space debris. In this paper an optimization tool for the integration of CAD software as well as the optimization algorithms is presented, which is developed to automatically find solutions for a three-dimensional layout of equipment in satellite. The effectiveness of the tool is also demonstrated by applying to the layout optimization of a satellite platform.
Kinematic control of redundant robots and the motion optimizability measure.
Li, L; Gruver, W A; Zhang, Q; Yang, Z
2001-01-01
This paper treats the kinematic control of manipulators with redundant degrees of freedom. We derive an analytical solution for the inverse kinematics that provides a means for accommodating joint velocity constraints in real time. We define the motion optimizability measure and use it to develop an efficient method for the optimization of joint trajectories subject to multiple criteria. An implementation of the method for a 7-dof experimental redundant robot is present.
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of applications. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The modelmore » has complex dynamics requiring tuning the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching between vector to single track mode when vectorization causes only overhead. Lastly, this work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting, presenting here its initial results.« less
Optimal Contractor Selection in Construction Industry: The Fuzzy Way
NASA Astrophysics Data System (ADS)
Krishna Rao, M. V.; Kumar, V. S. S.; Rathish Kumar, P.
2018-02-01
A purely price-based approach to contractor selection has been identified as the root cause for many serious project delivery problems. Therefore, the capability of the contractor to execute the project should be evaluated using a multiple set of selection criteria including reputation, past performance, performance potential, financial soundness and other project specific criteria. An industry-wide questionnaire survey was conducted with the objective of identifying the important criteria for adoption in the selection process. In this work, a fuzzy set based model was developed for contractor prequalification/evaluation, by using effective criteria obtained from the percept of construction professionals, taking subjective judgments of decision makers also into consideration. A case study consisting of four alternatives (contractors in the present case) solicited from a public works department of Pondicherry in India, is used to illustrate the effectiveness of the proposed approach. The final selection of contractor is made based on the integrated score or Overall Evaluation Score of the decision alternative in prequalification as well as bid evaluation stages.
Multiple objective optimization in reliability demonstration test
Lu, Lu; Anderson-Cook, Christine Michaela; Li, Mingyang
2016-10-01
Reliability demonstration tests are usually performed in product design or validation processes to demonstrate whether a product meets specified requirements on reliability. For binomial demonstration tests, the zero-failure test has been most commonly used due to its simplicity and use of minimum sample size to achieve an acceptable consumer’s risk level. However, this test can often result in unacceptably high risk for producers as well as a low probability of passing the test even when the product has good reliability. This paper explicitly explores the interrelationship between multiple objectives that are commonly of interest when planning a demonstration test andmore » proposes structured decision-making procedures using a Pareto front approach for selecting an optimal test plan based on simultaneously balancing multiple criteria. Different strategies are suggested for scenarios with different user priorities and graphical tools are developed to help quantify the trade-offs between choices and to facilitate informed decision making. As a result, potential impacts of some subjective user inputs on the final decision are studied to offer insights and useful guidance for general applications.« less
McKenzie, Elizabeth M.; Balter, Peter A.; Stingo, Francesco C.; Jones, Jimmy; Followill, David S.; Kry, Stephen F.
2014-01-01
Purpose: The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. Methods: The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Results: Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device and while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. Conclusions: IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used as common clinical thresholds typically worked very poorly to identify unacceptable plans. PMID:25471949
McKenzie, Elizabeth M; Balter, Peter A; Stingo, Francesco C; Jones, Jimmy; Followill, David S; Kry, Stephen F
2014-12-01
The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device and while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used as common clinical thresholds typically worked very poorly to identify unacceptable plans.
NASA Astrophysics Data System (ADS)
Babbar-Sebens, M.; Minsker, B. S.
2006-12-01
In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem dependant criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization- simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective?, and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks on a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives that met the DM's preference criteria, therefore allowing the expert to select among several strong candidate designs depending on her/his LTM budget, c) two of the methodologies - Case-Based Micro Interactive Genetic Algorithm (CBMIGA) and Interactive Genetic Algorithm with Mixed Initiative Interaction (IGAMII) - were also able to assist in controlling human fatigue and adapt to the DM's learning process.
Linear System Control Using Stochastic Learning Automata
NASA Technical Reports Server (NTRS)
Ziyad, Nigel; Cox, E. Lucien; Chouikha, Mohamed F.
1998-01-01
This paper explains the use of a Stochastic Learning Automata (SLA) to control switching between three systems to produce the desired output response. The SLA learns the optimal choice of the damping ratio for each system to achieve a desired result. We show that the SLA can learn these states for the control of an unknown system with the proper choice of the error criteria. The results of using a single automaton are compared to using multiple automata.
Williams, Mitchel T; Tapos, Daniela O; Juhász, Csaba
2014-12-01
Pediatric-onset multiple sclerosis represents around 3-5% of all patients with multiple sclerosis. Both the 2005 and 2010 McDonald criteria for multiple sclerosis have been suggested for the possible use in pediatric-onset multiple sclerosis. Modifications incorporated into the 2010 criteria enabled the fulfillment of dissemination in time to be met with the initial magnetic resonance imaging. The present study was designed to compare the diagnostic sensitivity of these criteria at initial presentation, the time to fulfilling them, and secondary effects of ethnicity in pediatric-onset multiple sclerosis. Twenty-five children with clinically definite multiple sclerosis (mean age, 14.6 ± 3.1 years; 15 girls) from a single center between 2005 and 2012 were analyzed using both the 2005 and 2010 McDonald criteria based on initial clinical presentation and neuroimaging findings comparing diagnostic sensitivity, time interval to meet diagnosis, and ethnicity. Initial multiple sclerosis diagnosis rates applying the 2005 McDonald criteria were 32% compared with 92% for the 2010 criteria (P = 0.0003). The mean time after initial signs until the 2005 and 2010 McDonald criteria for multiple sclerosis were met was 5.0 vs 0.7 months, respectively (P = 0.001). Time to diagnosis using the 2010 criteria was shorter in black children than the European white (P = 0.005). The 2010 McDonald criteria are an appropriate tool for the timely diagnosis of pediatric multiple sclerosis, especially in black children, potentially allowing an earlier initiation of disease-modifying therapy. Copyright © 2014 Elsevier Inc. All rights reserved.
Lumley, R; Davenport, R; Williams, A
2015-03-01
The diagnostic criteria for multiple sclerosis have evolved over time and currently the 2010 McDonald criteria are the most widely accepted. These criteria allow the diagnosis of multiple sclerosis to be made at the clinically isolated syndrome stage provided certain criteria are met on a single magnetic resonance brain scan. Our hypothesis was that neurologists in Scotland did not use these criteria routinely. We sent a SurveyMonkey questionnaire to all Scottish neurologists (consultants and trainees) regarding the diagnosis of multiple sclerosis. Our questionnaire response rate was 65/99 (66%). Most Scottish neurologists were aware of the criteria and 31/58 (53%) felt that they were using these routinely. However, in a clinical vignette designed to test the application of these criteria, only 5/57 (9%) of neurologists appeared to use them. Scottish neurologists' use of the 2010 McDonald criteria for diagnosis of multiple sclerosis varies from practitioners' perception of their use of these criteria.
Influence of diagnostic criteria on the interpretation of adrenal vein sampling.
Lethielleux, Gaëlle; Amar, Laurence; Raynaud, Alain; Plouin, Pierre-François; Steichen, Olivier
2015-04-01
Guidelines promote the use of adrenal vein sampling (AVS) to document lateralized aldosterone hypersecretion in primary aldosteronism. However, there are large discrepancies between institutions in the criteria used to interpret its results. This study evaluates the consequences of these differences on the classification and management of patients. The results of all 537 AVS procedures performed between January 2001 and July 2010 in our institution were interpreted with 4 diagnostic criteria used in experienced institutions where AVS is performed without cosyntropin (Brisbane, Padua, Paris, and Turin) and with criteria proposed by a recent consensus statement. AVS procedures were classified as unsuccessful, lateralized, or not lateralized according to each set of criteria. Almost 5× more AVS procedures were classified as unsuccessful with the strictest criteria than with the least strict criteria (18% versus 4%, respectively). Similarly, over 2× more AVS procedures were classified as lateralized with the least stringent criteria than with the most stringent criteria (60% versus 26%, respectively). Multiple samples were available from ≥1 side for 155 AVS procedures. These procedures were classified differently by ≥2 right-left sample pairs in 12% to 20% of cases. Thus, different sets of criteria used to interpret AVS in experienced institutions translate into heterogeneous classifications and hence management decisions, for patients with primary aldosteronism. Defining the most appropriate procedures and diagnostic criteria is needed for AVS to achieve optimal performance and fully justify its status as a gold standard. © 2015 American Heart Association, Inc.
Design criteria monograph on turbopump inducers
NASA Technical Reports Server (NTRS)
1972-01-01
State of the art and design criteria for liquid rocket engine turbopump inducers are summarized for optimal fabrication. Design criteria optimize hydrodynamic parameters to obtain highest suction specific speed without violating structural and mechanical constraints.
Liu, Shaoli; Xia, Zeyang; Liu, Jianhua; Xu, Jing; Ren, He; Lu, Tong; Yang, Xiangdong
2016-01-01
The “robotic-assisted liver tumor coagulation therapy” (RALTCT) system is a promising candidate for large liver tumor treatment in terms of accuracy and speed. A prerequisite for effective therapy is accurate surgical planning. However, it is difficult for the surgeon to perform surgical planning manually due to the difficulties associated with robot-assisted large liver tumor therapy. These main difficulties include the following aspects: (1) multiple needles are needed to destroy the entire tumor, (2) the insertion trajectories of the needles should avoid the ribs, blood vessels, and other tissues and organs in the abdominal cavity, (3) the placement of multiple needles should avoid interference with each other, (4) an inserted needle will cause some deformation of liver, which will result in changes in subsequently inserted needles’ operating environment, and (5) the multiple needle-insertion trajectories should be consistent with the needle-driven robot’s movement characteristics. Thus, an effective multiple-needle surgical planning procedure is needed. To overcome these problems, we present an automatic multiple-needle surgical planning of optimal insertion trajectories to the targets, based on a mathematical description of all relevant structure surfaces. The method determines the analytical expression of boundaries of every needle “collision-free reachable workspace” (CFRW), which are the feasible insertion zones based on several constraints. Then, the optimal needle insertion trajectory within the optimization criteria will be chosen in the needle CFRW automatically. Also, the results can be visualized with our navigation system. In the simulation experiment, three needle-insertion trajectories were obtained successfully. In the in vitro experiment, the robot successfully achieved insertion of multiple needles. The proposed automatic multiple-needle surgical planning can improve the efficiency and safety of robot-assisted large liver tumor therapy, significantly reduce the surgeon’s workload, and is especially helpful for an inexperienced surgeon. The methodology should be easy to adapt in other body parts. PMID:26982341
Dealing with Multiple Solutions in Structural Vector Autoregressive Models.
Beltz, Adriene M; Molenaar, Peter C M
2016-01-01
Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.
NASA Astrophysics Data System (ADS)
Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.
2013-10-01
The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
Ranking Schools' Academic Performance Using a Fuzzy VIKOR
NASA Astrophysics Data System (ADS)
Musani, Suhaina; Aziz Jemain, Abdul
2015-06-01
Determination rank is structuring alternatives in order of priority. It is based on the criteria determined for each alternative involved. Evaluation criteria are performed and then a composite index composed of each alternative for the purpose of arranging in order of preference alternatives. This practice is known as multiple criteria decision making (MCDM). There are several common approaches to MCDM, one of the practice is known as VIKOR (Multi-criteria Optimization and Compromise Solution). The objective of this study is to develop a rational method for school ranking based on linguistic information of a criterion. The school represents an alternative, while the results for a number of subjects as the criterion. The results of the examination for a course, is given according to the student percentage of each grade. Five grades of excellence, honours, average, pass and fail is used to indicate a level of achievement in linguistics. Linguistic variables are transformed to fuzzy numbers to form a composite index of school performance. Results showed that fuzzy set theory can solve the limitations of using MCDM when there is uncertainty problems exist in the data.
Kornek, Barbara; Schmitl, Beate; Vass, Karl; Zehetmayer, Sonja; Pritsch, Martin; Penzien, Johann; Karenfort, Michael; Blaschek, Astrid; Seidl, Rainer; Prayer, Daniela; Rostasy, Kevin
2012-12-01
Magnetic resonance imaging diagnostic criteria for paediatric multiple sclerosis have been established on the basis of brain imaging findings alone. The 2010 McDonald criteria for the diagnosis of multiple sclerosis, however, include spinal cord imaging for detection of lesion dissemination in space. The new criteria have been recommended in paediatric multiple sclerosis. (1) To evaluate the 2010 McDonald multiple sclerosis criteria in children with a clinically isolated syndrome and to compare them with recently proposed magnetic resonance criteria for children; (2) to assess whether the inclusion of spinal cord imaging provided additional value to the 2010 McDonald criteria. We performed a retrospective analysis of brain and spinal cord magnetic resonance imaging scans from 52 children with a clinically isolated syndrome. Sensitivity, specificity and accuracy of the magnetic resonance criteria were assessed. The 2010 McDonald dissemination in space criteria were more sensitive (85% versus 74%) but less specific (80% versus 100%) compared to the 2005 McDonald criteria. The Callen criteria were more accurate (89%) compared to the 2010 McDonald (85%), the 2005 McDonald criteria for dissemination in space (81%), the KIDMUS criteria (46%) and the Canadian Pediatric Demyelinating Disease Network criteria (76%). The 2010 McDonald criteria for dissemination in time were more accurate (93%) than the dissemination in space criteria (85%). Inclusion of the spinal cord did not increase the accuracy of the McDonald criteria.
Genetic algorithms - What fitness scaling is optimal?
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac
1993-01-01
A problem of choosing the best scaling function as a mathematical optimization problem is formulated and solved under different optimality criteria. A list of functions which are optimal under different criteria is presented which includes both the best functions empirically proved and new functions that may be worth trying.
Falcon: automated optimization method for arbitrary assessment criteria
Yang, Tser-Yuan; Moses, Edward I.; Hartmann-Siantar, Christine
2001-01-01
FALCON is a method for automatic multivariable optimization for arbitrary assessment criteria that can be applied to numerous fields where outcome simulation is combined with optimization and assessment criteria. A specific implementation of FALCON is for automatic radiation therapy treatment planning. In this application, FALCON implements dose calculations into the planning process and optimizes available beam delivery modifier parameters to determine the treatment plan that best meets clinical decision-making criteria. FALCON is described in the context of the optimization of external-beam radiation therapy and intensity modulated radiation therapy (IMRT), but the concepts could also be applied to internal (brachytherapy) radiotherapy. The radiation beams could consist of photons or any charged or uncharged particles. The concept of optimizing source distributions can be applied to complex radiography (e.g. flash x-ray or proton) to improve the imaging capabilities of facilities proposed for science-based stockpile stewardship.
System optimization on coded aperture spectrometer
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei
2017-10-01
For aim to find a simple multiple configuration solution and achieve higher refractive efficiency, and based on to reduce the situation disturbed by FOV change, especially in a two-dimensional spatial expansion. Coded aperture system is designed by these special structure, which includes an objective a coded component a prism reflex system components, a compensatory plate and an imaging lens Correlative algorithms and perfect imaging methods are available to ensure this system can be corrected and optimized adequately. Simulation results show that the system can meet the application requirements in MTF, REA, RMS and other related criteria. Compared with the conventional design, the system has reduced in volume and weight significantly. Therefore, the determining factors are the prototype selection and the system configuration.
Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.
Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana
2017-11-01
RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the optimization of discrete structures with aeroelastic constraints
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.; Ashley, H.
1978-01-01
The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.
Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.
Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing
2007-01-01
This paper describes an innovative systematic approach, namely hierarchy grey relational analysis for optimal selection of wastewater treatment alternatives, based on the application of analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied for complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
Flight Control Development for the ARH-70 Armed Reconnaissance Helicopter Program
NASA Technical Reports Server (NTRS)
Christensen, Kevin T.; Campbell, Kip G.; Griffith, Carl D.; Ivler, Christina M.; Tischler, Mark B.; Harding, Jeffrey W.
2008-01-01
In July 2005, Bell Helicopter won the U.S. Army's Armed Reconnaissance Helicopter competition to produce a replacement for the OH-58 Kiowa Warrior capable of performing the armed reconnaissance mission. To meet the U.S. Army requirement that the ARH-70A have Level 1 handling qualities for the scout rotorcraft mission task elements defined by ADS-33E-PRF, Bell equipped the aircraft with their generic automatic flight control system (AFCS). Under the constraints of the tight ARH-70A schedule, the development team used modem parameter identification and control law optimization techniques to optimize the AFCS gains to simultaneously meet multiple handling qualities design criteria. This paper will show how linear modeling, control law optimization, and simulation have been used to produce a Level 1 scout rotorcraft for the U.S. Army, while minimizing the amount of flight testing required for AFCS development and handling qualities evaluation of the ARH-70A.
Ke, Chih-Kun; Lin, Zheng-Hua
2015-09-01
The progress of information and communication technologies (ICT) has promoted the development of healthcare which has enabled the exchange of resources and services between organizations. Organizations want to integrate mobile devices into their hospital information systems (HIS) due to the convenience to employees who are then able to perform specific healthcare processes from any location. The collection and merage of healthcare data from discrete mobile devices are worth exploring possible ways for further use, especially in remote districts without public data network (PDN) to connect the HIS. In this study, we propose an optimal mobile service which automatically synchronizes the telecare file resources among discrete mobile devices. The proposed service enforces some technical methods. The role-based access control model defines the telecare file resources accessing mechanism; the symmetric data encryption method protects telecare file resources transmitted over a mobile peer-to-peer network. The multi-criteria decision analysis method, ELECTRE (Elimination Et Choice Translating Reality), evaluates multiple criteria of the candidates' mobile devices to determine a ranking order. This optimizes the synchronization of telecare file resources among discrete mobile devices. A prototype system is implemented to examine the proposed mobile service. The results of the experiment show that the proposed mobile service can automatically and effectively synchronize telecare file resources among discrete mobile devices. The contribution of this experiment is to provide an optimal mobile service that enhances the security of telecare file resource synchronization and strengthens an organization's mobility.
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
How to assess the value of medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value.
Three-Tesla MRI does not improve the diagnosis of multiple sclerosis: A multicenter study.
Hagens, Marloes H J; Burggraaff, Jessica; Kilsdonk, Iris D; de Vos, Marlieke L; Cawley, Niamh; Sbardella, Emilia; Andelova, Michaela; Amann, Michael; Lieb, Johanna M; Pantano, Patrizia; Lissenberg-Witte, Birgit I; Killestein, Joep; Oreja-Guevara, Celia; Ciccarelli, Olga; Gasperini, Claudio; Lukas, Carsten; Wattjes, Mike P; Barkhof, Frederik
2018-06-20
In the work-up of patients presenting with a clinically isolated syndrome (CIS), 3T MRI might offer a higher lesion detection than 1.5T, but it remains unclear whether this affects the fulfilment of the diagnostic criteria for multiple sclerosis (MS). We recruited 66 patients with CIS within 6 months from symptom onset and 26 healthy controls in 6 MS centers. All participants underwent 1.5T and 3T brain and spinal cord MRI at baseline according to local optimized protocols and the MAGNIMS guidelines. Patients who had not converted to MS during follow-up received repeat brain MRI at 3-6 months and 12-15 months. The number of lesions per anatomical region was scored by 3 raters in consensus. Criteria for dissemination in space (DIS) and dissemination in time (DIT) were determined according to the 2017 revisions of the McDonald criteria. Three-Tesla MRI detected 15% more T2 brain lesions compared to 1.5T ( p < 0.001), which was driven by an increase in baseline detection of periventricular (12%, p = 0.015), (juxta)cortical (21%, p = 0.005), and deep white matter lesions (21%, p < 0.001). The detection rate of spinal cord lesions and gadolinium-enhancing lesions did not differ between field strengths. Three-Tesla MRI did not lead to a higher number of patients fulfilling the criteria for DIS or DIT, or subsequent diagnosis of MS, at any of the 3 time points. Scanning at 3T does not influence the diagnosis of MS according to McDonald diagnostic criteria. Copyright © 2018 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.
Robotic Vision, Tray-Picking System Design Using Multiple, Optical Matched Filters
NASA Astrophysics Data System (ADS)
Leib, Kenneth G.; Mendelsohn, Jay C.; Grieve, Philip G.
1986-10-01
The optical correlator is applied to a robotic vision, tray-picking problem. Complex matched filters (MFs) are designed to provide sufficient optical memory for accepting any orientation of the desired part, and a multiple holographic lens (MHL) is used to increase the memory for continuous coverage. It is shown that with appropriate thresholding a small part can be selected using optical matched filters. A number of criteria are presented for optimizing the vision system. Two of the part-filled trays that Mendelsohn used are considered in this paper which is the analog (optical) expansion of his paper. Our view in this paper is that of the optical correlator as a cueing device for subsequent, finer vision techniques.
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters; chamber pressure, area ratio, and oxidizer/fuel ratio, are optimized in the seven design cases and results are plotted to show impacts to engine mass and overall vehicle mass.
Coupling between a multi-physics workflow engine and an optimization framework
NASA Astrophysics Data System (ADS)
Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.
2016-03-01
A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to provide the possibility to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content in each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among a number of algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single and multiple criteria optimization. Additionally to their robustness, GAs can handle non-valid data which may appear during the optimization. Consequently GAs work on most general cases. A parallelized framework has been developed to reduce the time spent for optimizations and evaluation of large samples. A test has shown a good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor) which is a system code developed in form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization along various figures of merit and constraints.
NASA Astrophysics Data System (ADS)
Grau, J. B.; Anton, J. M.; Tarquis, A. M.; Colombo, F.; de Los Rios, L.; Cisneros, J. M.
2010-04-01
Multi-criteria Decision Analysis (MCDA) is concerned with identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision. These decisions are difficult because the complexity of the system or because of determining the optimal situation or behavior. This work will illustrate how MCDA is applied in practice to a complex problem to resolve such us soil erosion and degradation. Desertification is a global problem and recently it has been studied in several forums as ONU that literally says: "Desertification has a very high incidence in the environmental and food security, socioeconomic stability and world sustained development". Desertification is the soil quality loss and one of FAO's most important preoccupations as hunger in the world is increasing. Multiple factors are involved of diverse nature related to: natural phenomena (water and wind erosion), human activities linked to soil and water management, and others not related to the former. In the whole world this problem exists, but its effects and solutions are different. It is necessary to take into account economical, environmental, cultural and sociological criteria. A multi-criteria model to select among different alternatives to prepare an integral plan to ameliorate or/and solve this problem in each area has been elaborated taking in account eight criteria and six alternatives. Six sub zones have been established following previous studies and in each one the initial matrix and weights have been defined to apply on different criteria. Three Multicriteria Decision Methods have been used for the different sub zones: ELECTRE, PROMETHEE and AHP. The results show a high level of consistency among the three different multicriteria methods despite the complexity of the system studied. The methods are described for La Estrella sub zone, indicating election of weights, Initial Matrixes, the MATHCAD8 algorithms used for PROMETHEE, and the Graph of Expert Choice showing the results of AHP. A brief schema of the actions recommended for each of the six different sub zones is reported in Conclusions, with "We can combine Autochthonous and High Value Forest" for La Estrella.
NASA Astrophysics Data System (ADS)
Grau, J. B.; Antón, J. M.; Tarquis, A. M.; Colombo, F.; de Los Ríos, L.; Cisneros, J. M.
2010-11-01
Multi-criteria Decision Analysis (MCDA) is concerned with identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision. These decisions are difficult because the complexity of the system or because of determining the optimal situation or behaviour. This work will illustrate how MCDA is applied in practice to a complex problem to resolve such us soil erosion and degradation. Desertification is a global problem and recently it has been studied in several forums as ONU that literally says: "Desertification has a very high incidence in the environmental and food security, socioeconomic stability and world sustained development". Desertification is the soil quality loss and one of FAO's most important preoccupations as hunger in the world is increasing. Multiple factors are involved of diverse nature related to: natural phenomena (water and wind erosion), human activities linked to soil and water management, and others not related to the former. In the whole world this problem exists, but its effects and solutions are different. It is necessary to take into account economical, environmental, cultural and sociological criteria. A multi-criteria model to select among different alternatives to prepare an integral plan to ameliorate or/and solve this problem in each area has been elaborated taking in account eight criteria and five alternatives. Six sub zones have been established following previous studies and in each one the initial matrix and weights have been defined to apply on different criteria. Three multicriteria decision methods have been used for the different sub zones: ELECTRE, PROMETHEE and AHP. The results show a high level of consistency among the three different multicriteria methods despite the complexity of the system studied. The methods are fully described for La Estrella sub zone, indicating election of weights, Initial Matrixes, algorithms used for PROMETHEE, and the Graph of Expert Choice showing the AHP results. A brief schema of the actions recommended for each of the six different sub zones is discussed.
Valuing hydrological alteration in multi-objective water resources management
NASA Astrophysics Data System (ADS)
Bizzi, Simone; Pianosi, Francesca; Soncini-Sessa, Rodolfo
2012-11-01
SummaryThe management of water through the impoundment of rivers by dams and reservoirs is necessary to support key human activities such as hydropower production, agriculture and flood risk mitigation. Advances in multi-objective optimization techniques and ever growing computing power make it possible to design reservoir operating policies that represent Pareto-optimal tradeoffs between multiple interests. On the one hand, such optimization methods can enhance performances of commonly targeted objectives (such as hydropower production or water supply), on the other hand they risk strongly penalizing all the interests not directly (i.e. mathematically) included in the optimization algorithm. The alteration of the downstream hydrological regime is a well established cause of ecological degradation and its evaluation and rehabilitation is commonly required by recent legislation (as the Water Framework Directive in Europe). However, it is rarely embedded in reservoir optimization routines and, even when explicitly considered, the criteria adopted for its evaluation are doubted and not commonly trusted, undermining the possibility of real implementation of environmentally friendly policies. The main challenges in defining and assessing hydrological alterations are: how to define a reference state (referencing); how to define criteria upon which to build mathematical indicators of alteration (measuring); and finally how to aggregate the indicators in a single evaluation index (valuing) that can serve as objective function in the optimization problem. This paper aims to address these issues by: (i) discussing the benefits and constrains of different approaches to referencing, measuring and valuing hydrological alteration; (ii) testing two alternative indices of hydrological alteration, one based on the established framework of Indicators of Hydrological Alteration (Richter et al., 1996), and one satisfying the mathematical properties required by widely used optimization methods based on dynamic programming; (iii) demonstrating and discussing these indices by application River Ticino, in Italy; (iv) providing a framework to effectively include hydrological alteration within reservoir operation optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.
Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanarmore » Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax)< 110% of prescription, and spinal cord Dmax<45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing significant variations in OAR doses including mean dose reductions >5 Gy. Clinical implementation will facilitate patient-specific decision making based on achievable dosimetry as opposed to accept/reject models based on population derived objectives.« less
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
NASA Astrophysics Data System (ADS)
Farahmand, Parisa; Kovacevic, Radovan
2014-12-01
In laser cladding, the performance of the deposited layers subjected to severe working conditions (e.g., wear and high temperature conditions) depends on the mechanical properties, the metallurgical bond to the substrate, and the percentage of dilution. The clad geometry and mechanical characteristics of the deposited layer are influenced greatly by the type of laser used as a heat source and process parameters used. Nowadays, the quality of fabricated coating by laser cladding and the efficiency of this process has improved thanks to the development of high-power diode lasers, with power up to 10 kW. In this study, the laser cladding by a high power direct diode laser (HPDDL) as a new heat source in laser cladding was investigated in detail. The high alloy tool steel material (AISI H13) as feedstock was deposited on mild steel (ASTM A36) by a HPDDL up to 8kW laser and with new design lateral feeding nozzle. The influences of the main process parameters (laser power, powder flow rate, and scanning speed) on the clad-bead geometry (specifically layer height and depth of the heat affected zone), and clad microhardness were studied. Multiple regression analysis was used to develop the analytical models for desired output properties according to input process parameters. The Analysis of Variance was applied to check the accuracy of the developed models. The response surface methodology (RSM) and desirability function were used for multi-criteria optimization of the cladding process. In order to investigate the effect of process parameters on the molten pool evolution, in-situ monitoring was utilized. Finally, the validation results for optimized process conditions show the predicted results were in a good agreement with measured values. The multi-criteria optimization makes it possible to acquire an efficient process for a combination of clad geometrical and mechanical characteristics control.
Aghajani Mir, M; Taherei Ghazvinei, P; Sulaiman, N M N; Basri, N E A; Saheri, S; Mahmood, N Z; Jahan, A; Begum, R A; Aghamohammadi, N
2016-01-15
Selecting a suitable Multi Criteria Decision Making (MCDM) method is a crucial stage to establish a Solid Waste Management (SWM) system. Main objective of the current study is to demonstrate and evaluate a proposed method using Multiple Criteria Decision Making methods (MCDM). An improved version of Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) applied to obtain the best municipal solid waste management method by comparing and ranking the scenarios. Applying this method in order to rank treatment methods is introduced as one contribution of the study. Besides, Viekriterijumsko Kompromisno Rangiranje (VIKOR) compromise solution method applied for sensitivity analyses. The proposed method can assist urban decision makers in prioritizing and selecting an optimized Municipal Solid Waste (MSW) treatment system. Besides, a logical and systematic scientific method was proposed to guide an appropriate decision-making. A modified TOPSIS methodology as a superior to existing methods for first time was applied for MSW problems. Applying this method in order to rank treatment methods is introduced as one contribution of the study. Next, 11 scenarios of MSW treatment methods are defined and compared environmentally and economically based on the waste management conditions. Results show that integrating a sanitary landfill (18.1%), RDF (3.1%), composting (2%), anaerobic digestion (40.4%), and recycling (36.4%) was an optimized model of integrated waste management. An applied decision-making structure provides the opportunity for optimum decision-making. Therefore, the mix of recycling and anaerobic digestion and a sanitary landfill with Electricity Production (EP) are the preferred options for MSW management. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, i.e. SAW (simple additive weighting method) and TOPSIS (technique for order preference by similarity to ideal solution), respectively, were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. Contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies and they were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. Then, the proposed strategies were ranked in form of tables and diagrams obtained based on both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
Sacubitril/Valsartan in Clinical Practice: A Report of 2 Cases.
Cosentino, Eugenio
Following the results of the PARADIGM-HF trial, the European Society of Cardiology (ESC) guidelines recommend sacubitril/valsartan to replace ACE inhibitors in ambulatory patients with heart failure with reduced ejection fraction (HFrEF) who remain symptomatic despite optimal therapy and who fit trial criteria. However, the optimal use of sacubitril/valsartan in clinical practice needs further investigation. We report here the cases of 2 patients with HFrEH successfully treated with sacubitril/valsartan in our daily practice. Both subjects presented multiple comorbidities and received an implantable cardioverter defibrillator in primary prevention. In both patients, therapy with sacubitril/valsartan led to prompt (30 days) amelioration of heart function, with a corresponding decrease in NHYA class and without any relevant safety issue. © 2017 S. Karger AG, Basel.
2012-01-01
Background The NCBI Conserved Domain Database (CDD) consists of a collection of multiple sequence alignments of protein domains that are at various stages of being manually curated into evolutionary hierarchies based on conserved and divergent sequence and structural features. These domain models are annotated to provide insights into the relationships between sequence, structure and function via web-based BLAST searches. Results Here we automate the generation of conserved domain (CD) hierarchies using a combination of heuristic and Markov chain Monte Carlo (MCMC) sampling procedures and starting from a (typically very large) multiple sequence alignment. This procedure relies on statistical criteria to define each hierarchy based on the conserved and divergent sequence patterns associated with protein functional-specialization. At the same time this facilitates the sequence and structural annotation of residues that are functionally important. These statistical criteria also provide a means to objectively assess the quality of CD hierarchies, a non-trivial task considering that the protein subgroups are often very distantly related—a situation in which standard phylogenetic methods can be unreliable. Our aim here is to automatically generate (typically sub-optimal) hierarchies that, based on statistical criteria and visual comparisons, are comparable to manually curated hierarchies; this serves as the first step toward the ultimate goal of obtaining optimal hierarchical classifications. A plot of runtimes for the most time-intensive (non-parallelizable) part of the algorithm indicates a nearly linear time complexity so that, even for the extremely large Rossmann fold protein class, results were obtained in about a day. Conclusions This approach automates the rapid creation of protein domain hierarchies and thus will eliminate one of the most time consuming aspects of conserved domain database curation. At the same time, it also facilitates protein domain annotation by identifying those pattern residues that most distinguish each protein domain subgroup from other related subgroups. PMID:22726767
User-Centric Multi-Criteria Information Retrieval
NASA Technical Reports Server (NTRS)
Wolfe, Shawn R.; Zhang, Yi
2009-01-01
Information retrieval models usually represent content only, and not other considerations, such as authority, cost, and recency. How could multiple criteria be utilized in information retrieval, and how would it affect the results? In our experiments, using multiple user-centric criteria always produced better results than a single criteria.
NASA Astrophysics Data System (ADS)
Subagadis, Y. H.; Schütze, N.; Grundmann, J.
2014-09-01
The conventional methods used to solve multi-criteria multi-stakeholder problems are less strongly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating objectives of different decision-makers avoiding water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
Analysis of a Turbine Blade Failure in a Military Turbojet Engine
NASA Astrophysics Data System (ADS)
Sahoo, Benudhar; Satpathy, R. K.; Panigrahi, S. K.
2016-06-01
This paper deals with failure analysis of a low-pressure turbine blade of a straight flow turbojet engine. The blade is made of a wrought precipitation hardened Nickel base superalloy with oxidation-resistant diffusion aluminizing coating. The failure mode is found to be fatigue with multiple cracks inside the blade having crack origin at metal carbides. In addition to the damage in the coating, carbide banding has been observed in few blades. Carbide banding may be defined as inclusions in the form of highly elongated along deformation direction. The size, shape and banding of carbides and their location critically affect the failure of blades. Carbon content needs to be optimized to reduce interdendritic segregation and thereby provide improved fatigue and stress rupture life. Hence, optimization of size, shape and distribution of carbides in the billet and forging parameters during manufacturing of blade play a vital role to eliminate/reduce extent of banding. Reference micrographs as acceptance criteria are essential for evaluation of raw material and blade. There is a need to define the acceptance criteria for carbide bandings and introduce more sensitive ultrasonic check during billet and on finished blade inspection.
Selecting at-risk populations for sexually transmitted disease/HIV intervention studies.
Wu, Zunyou; Rotheram-Borus, Mary Jane; Detels, Roger; Li, Li; Guan, Jihui; Liang, Guojun; Yap, Lorraine
2007-12-01
This paper describes one option to select populations for randomized, controlled trials (RCT). We used a popular opinion leader intervention in Fuzhou, China, to: (1) identify population selection criteria; (2) systematically examine the suitability of potential target populations and settings; (3) briefly evaluate risk and stability in the population; and (4) evaluate regional and organizational support among administrators and government officials. After comparing migrant villagers, truck drivers, factory workers, construction workers, and market employees in five regions of China, market employees in Fuzhou were identified as the optimal target population. Markets were the optimal sites for several reasons: (1) the population demonstrated a sufficient base rate of sexually transmitted diseases; (2) the population was stable over time; (3) a sufficient number of sites of manageable sizes were available; (4) stable networks existed; (5) local gatekeepers/stakeholders supported the intervention; (6) there was organizational capacity in the local health department to mount the intervention; (7) the demographic profile was similar across potential sites; and (8) the sites were sufficiently distanced to minimize contamination. Evaluating intervention efficacy in an RCT requires a time-consuming and rigorous process that systematically and routinely documents selection criteria, evaluates multiple populations, sites, and organizations for their appropriateness.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
Optimal design of compact and connected nature reserves for multiple species.
Wang, Yicheng; Önal, Hayri
2016-04-01
When designing a conservation reserve system for multiple species, spatial attributes of the reserves must be taken into account at species level. The existing optimal reserve design literature considers either one spatial attribute or when multiple attributes are considered the analysis is restricted only to one species. We built a linear integer programing model that incorporates compactness and connectivity of the landscape reserved for multiple species. The model identifies multiple reserves that each serve a subset of target species with a specified coverage probability threshold to ensure the species' long-term survival in the reserve, and each target species is covered (protected) with another probability threshold at the reserve system level. We modeled compactness by minimizing the total distance between selected sites and central sites, and we modeled connectivity of a selected site to its designated central site by selecting at least one of its adjacent sites that has a nearer distance to the central site. We considered structural distance and functional distances that incorporated site quality between sites. We tested the model using randomly generated data on 2 species, one ground species that required structural connectivity and the other an avian species that required functional connectivity. We applied the model to 10 bird species listed as endangered by the state of Illinois (U.S.A.). Spatial coherence and selection cost of the reserves differed substantially depending on the weights assigned to these 2 criteria. The model can be used to design a reserve system for multiple species, especially species whose habitats are far apart in which case multiple disjunct but compact and connected reserves are advantageous. The model can be modified to increase or decrease the distance between reserves to reduce or promote population connectivity. © 2015 Society for Conservation Biology.
Investigation of effective decision criteria for multiobjective optimization in IMRT.
Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H
2011-06-01
To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha,beta) formula derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate Case, and one complex head and neck Case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D(max) - D(min)), decision criteria were found to be most effective for keeping targets uniform. Using target gEUD decision criteria resulted in much lower OAR doses but much higher target dose variation. EUD(alpha,beta) based decision criteria focused on a region of plan space that was a compromise between target and OAR objectives. None of these target decision criteria dominated plans using other criteria, but only focused on approaching a different area of the Pareto front. The choice of decision criteria implemented in the MOEA had a significant impact on the region explored and the rate of convergence toward the Pareto front. When more decision criteria, anticorrelated decision criteria, or decision criteria with insufficient information were implemented, inferior populations are resulted. When more informative decision criteria were used, such as gEUD, EUD(alpha,beta), target dose range, and mean dose, MOEA optimizations focused on approaching different regions of the Pareto front, but did not dominate each other. Using simple OAR decision criteria and target EUD(alpha,beta) decision criteria demonstrated the potential to generate IMRT plans that significantly reduce dose to OARs while achieving the same or better tumor control when clinical requirements on target dose variance can be met or relaxed.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
In view of the contradiction of the time-domain least squares (LS) channel estimation performance and the practical realization complexity, a reduced complexity channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) based on pilot is obtained. This approach can transform the complexity of MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem and therefore there is no need for large matrix pseudo-inverse, which greatly reduces the complexity of algorithms. Simulation results show that the bit error rate (BER) performance of the obtained method with time orthogonal training sequences and linear minimum mean square error (LMMSE) criteria is better than that of time-domain LS estimator and nearly optimal performance.
Optimal control theory for non-scalar-valued performance criteria. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gerring, H. P.
1971-01-01
The theory of optimal control for nonscalar-valued performance criteria is discussed. In the space, where the performance criterion attains its value, the relations better than, worse than, not better than, and not worse than are defined by a partial order relation. The notion of optimality splits up into superiority and non-inferiority, because worse than is not the complement of better than, in general. A superior solution is better than every other solution. A noninferior solution is not worse than any other solution. Noninferior solutions have been investigated particularly for vector-valued performance criteria. Superior solutions for non-scalar-valued performance criteria attaining their values in abstract partially ordered spaces are emphasized. The main result is the infimum principle which constitutes necessary conditions for a control to be a superior solution to an optimal control problem.
Identification of multi-criteria for supplier selection in IT project outsourcing
NASA Astrophysics Data System (ADS)
Fusiripong, Prashaya; Baharom, Fauziah; Yusof, Yuhanis
2017-10-01
In the increasing global business competitiveness, most organizations have attempted to determine the suitable external parties to support their core and non-core competency, particularly, in IT project outsourcing. The IT supplier selection is required to apply multi-criteria which comprised tangible criteria and intangible criteria in consider optimal IT supplier. Most researches attempted to identify optimal criteria for selecting IT supplier, however, the criteria cannot be the considered common criteria support the variety of IT outsourcing. Therefore, the study aimed to identify a common set of criteria being used in the various types of IT outsourcing. The common criteria are constructed by multi-criteria and success criteria, which were collected by literature review with comprehensive and comparative approach. Consequently, the researchers are able to identify a common set of criteria adopted in the variety of selection problem IT outsourcing supplier.
ACT Payload Shroud Structural Concept Analysis and Optimization
NASA Technical Reports Server (NTRS)
Zalewski, Bart B.; Bednarcyk, Brett A.
2010-01-01
Aerospace structural applications demand a weight efficient design to perform in a cost effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA s Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure are evaluated: a hat stiffened/corrugated panel and a fiber reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs versus multiple strength and stability-based failure criteria across multiple load cases. HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. Iteration of analysis/optimization is performed to ensure a converged design. Sizing results for the hat stiffened panel concept and the fiber reinforced foam sandwich concept are presented.
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this papermore » shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.« less
Optimal design of clinical trials with biologics using dose-time-response models.
Lange, Markus R; Schmidli, Heinz
2014-12-30
Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time-interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad
2008-04-01
To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology transition goals.
Shiino, Kai; Iwao, Yasunori; Miyagishima, Atsuo; Itai, Shigeru
2010-08-16
The purpose of the present study was to design and evaluate a novel wax matrix system containing various ratios of aminoalkyl methacrylate copolymer E (AMCE) and ethylcellulose (EC) as functional polymers in order to achieve the optimal acetaminophen (APAP) release rate for taste masking. A two factor, three level (3(2)) full factorial study design was used to optimize the ratios of AMCE and EC, and the release of APAP from the wax matrix was evaluated using a stationary disk in accordance with the paddle method. The disk was prepared by congealing glyceryl monostearate (GM), a wax with a low melting point, with various ratios of polymers and APAP. The criteria for release rate of APAP from the disk at pH 4.0 and pH 6.5 were calculated to be more than 0.5017 microg/(mlxmin) and less than 0.1414 microg/(mlxmin), respectively, under the assumption that the particle size of spherical matrix should be 100 microm. In multiple regression analysis, the release of APAP at pH 4.0 was found to increase markedly as the concentration of AMCE increased, whereas the release of APAP at pH 6.5 decreased as the EC concentration increased, even when a high level of AMCE was incorporated. Using principle component analysis, it was found that the viscosity of the matrix affects the pH-dependent release of APAP at pH 4.0 and pH 6.5. Furthermore, using multiple regression analysis, the optimum ratio of APAP:AMCE:EC:GM was found to be 30:7:10:53, and the release pattern of APAP from the optimum wax formulation nearly complied with the desired criteria. Therefore, the present study demonstrated that the incorporation of AMCE and EC into a wax matrix system enabled the appropriate release of APAP as a means of taste masking. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
Optimization of Aerospace Structure Subject to Damage Tolerance Criteria
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.
1999-01-01
The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers. It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages. A common method for topology optimization is that of compliance minimization which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherrnan-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this. SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
Rothermundt, Christian; Bailey, Alexandra; Cerbone, Linda; Eisen, Tim; Escudier, Bernard; Gillessen, Silke; Grünwald, Viktor; Larkin, James; McDermott, David; Oldenburg, Jan; Porta, Camillo; Rini, Brian; Schmidinger, Manuela; Sternberg, Cora; Putora, Paul M
2015-09-01
With the advent of targeted therapies, many treatment options in the first-line setting of metastatic clear cell renal cell carcinoma (mccRCC) have emerged. Guidelines and randomized trial reports usually do not elucidate the decision criteria for the different treatment options. In order to extract the decision criteria for the optimal therapy for patients, we performed an analysis of treatment algorithms from experts in the field. Treatment algorithms for the treatment of mccRCC from experts of 11 institutions were obtained, and decision trees were deduced. Treatment options were identified and a list of unified decision criteria determined. The final decision trees were analyzed with a methodology based on diagnostic nodes, which allows for an automated cross-comparison of decision trees. The most common treatment recommendations were determined, and areas of discordance were identified. The analysis revealed heterogeneity in most clinical scenarios. The recommendations selected for first-line treatment of mccRCC included sunitinib, pazopanib, temsirolimus, interferon-α combined with bevacizumab, high-dose interleukin-2, sorafenib, axitinib, everolimus, and best supportive care. The criteria relevant for treatment decisions were performance status, Memorial Sloan Kettering Cancer Center risk group, only or mainly lung metastases, cardiac insufficiency, hepatic insufficiency, age, and "zugzwang" (composite of multiple, related criteria). In the present study, we used diagnostic nodes to compare treatment algorithms in the first-line treatment of mccRCC. The results illustrate the heterogeneity of the decision criteria and treatment strategies for mccRCC and how available data are interpreted and implemented differently among experts. The data provided in the present report should not be considered to serve as treatment recommendations for the management of treatment-naïve patients with multiple metastases from metastatic clear cell renal cell carcinoma outside a clinical trial; however, the data highlight the different treatment options and the criteria used to select them. The diversity in decision making and how results from phase III trials can be interpreted and implemented differently in daily practice are demonstrated. ©AlphaMed Press.
Optimizing structure of complex technical system by heterogeneous vector criterion in interval form
NASA Astrophysics Data System (ADS)
Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.
2018-05-01
The article examines the methods of development and multi-criteria choice of the preferred structural variant of the complex technical system at the early stages of its life cycle in the absence of sufficient knowledge of parameters and variables for optimizing this structure. The suggested methods takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the complex use of methods of interval analysis, fuzzy sets theory, and the decision-making theory. As a result, the method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in the interval form. The method of building preferential relations in the interval form on the basis of the vector of heterogeneous quality criteria suggest the use of membership functions instead of the coefficients considering the criteria value. The former show the degree of proximity of the realization of the designed system to the efficient or Pareto optimal variants. The study analyzes the example of choosing the optimal variant for the complex system using heterogeneous quality criteria.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
NASA Astrophysics Data System (ADS)
Vasu, M.; Shivananda, Nayaka H.
2018-04-01
EN47 steel samples are machined on a self-centered lathe using Chemical Vapor Deposition of coated TiCN/Al2O3/TiN and uncoated tungsten carbide tool inserts, with nose radius 0.8mm. Results are compared with each other and optimized using statistical tool. Input (cutting) parameters that are considered in this work are feed rate (f), cutting speed (Vc), and depth of cut (ap), the optimization criteria are based on the Taguchi (L9) orthogonal array. ANOVA method is adopted to evaluate the statistical significance and also percentage contribution for each model. Multiple response characteristics namely cutting force (Fz), tool tip temperature (T) and surface roughness (Ra) are evaluated. The results discovered that coated tool insert (TiCN/Al2O3/TiN) exhibits 1.27 and 1.29 times better than the uncoated tool insert for tool tip temperature and surface roughness respectively. A slight increase in cutting force was observed for coated tools.
NASA Astrophysics Data System (ADS)
Harney, Robert C.
1997-03-01
A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
1979-11-01
plasma focus operations have been experimentally analyzed in terms of (A) The fine structure of the axial-current channel during maximum of compression. (B) Correlation coefficient, for neutron yield n (by D2 discharges) and the multiplicity of the electron beam pulses; (C) Different values of the electrode voltage. The current distribution near the axial plasma column during the explosive decay of the column has been monitored and correlated with the electron beam production. Plasma focus discharges by our mode of operation generate high-intensity
NASA Astrophysics Data System (ADS)
Georgiou, Andreas; Skarlatos, Dimitrios
2016-07-01
Among the renewable power sources, solar power is rapidly becoming popular because it is inexhaustible, clean, and dependable. It has also become more efficient since the power conversion efficiency of photovoltaic solar cells has increased. Following these trends, solar power will become more affordable in years to come and considerable investments are to be expected. Despite the size of solar plants, the sitting procedure is a crucial factor for their efficiency and financial viability. Many aspects influence such a decision: legal, environmental, technical, and financial to name a few. This paper describes a general integrated framework to evaluate land suitability for the optimal placement of photovoltaic solar power plants, which is based on a combination of a geographic information system (GIS), remote sensing techniques, and multi-criteria decision-making methods. An application of the proposed framework for the Limassol district in Cyprus is further illustrated. The combination of a GIS and multi-criteria methods produces an excellent analysis tool that creates an extensive database of spatial and non-spatial data, which will be used to simplify problems as well as solve and promote the use of multiple criteria. A set of environmental, economic, social, and technical constrains, based on recent Cypriot legislation, European's Union policies, and expert advice, identifies the potential sites for solar park installation. The pairwise comparison method in the context of the analytic hierarchy process (AHP) is applied to estimate the criteria weights in order to establish their relative importance in site evaluation. In addition, four different methods to combine information layers and check their sensitivity were used. The first considered all the criteria as being equally important and assigned them equal weight, whereas the others grouped the criteria and graded them according to their objective perceived importance. The overall suitability of the study region for sitting solar parks is appraised through the summation rule. Strict application of the framework depicts 3.0 % of the study region scoring a best-suitability index for solar resource exploitation, hence minimizing the risk in a potential investment. However, using different weighting schemes for criteria, suitable areas may reach up to 83 % of the study region. The suggested methodological framework applied can be easily utilized by potential investors and renewable energy developers, through a front end web-based application with proper GUI for personalized weighting schemes.
Fox, Elizabeth J; Twigger, Shirley; Allen, Keith R
2009-01-01
Liquid chromatography linked to tandem mass spectrometry (LC/MS/MS) is being increasingly used for drug confirmation. At present, no official criteria exist for drug identification using this technique although the European Union (EU) criteria for compound identification have been adopted. These criteria are evaluated with respect to opiate confirmation by LC/MS/MS and problems highlighted. Urine samples screened positive for opiates by immunoassay were subjected to confirmation by LC/MS/MS using multiple reaction monitoring (MRM) and two separate buffer systems of pH 6.8 and 8.0, respectively. The EU criteria for compound identification were applied for confirmation of morphine, 6-monoacetylmorphine (6MAM), codeine and dihydrocodeine (DHC). Using the pH 6.8 buffer, confirmation could be achieved for 84%, 94%, 96% and 95%, respectively, for samples demonstrating MRM chromatographic peaks at retention times for morphine, 6MAM, codeine and DHC. Failure to meet the EU criteria was mainly attributed to low signal-to-noise (S:N) ratios or excessively high drug concentrations. Isobaric interferences and poor chromatography were also contributing factors. The identification of morphine was considerably improved with chromatography at pH 8.0 owing to resolution of interferences. Oxycodone metabolites were a potential problem for the identification of DHC. Isobaric interferences can pose a problem with drug identification using LC/MS/MS. Optimizing chromatographic conditions is important to overcome these interferences. Consideration needs to be given to investigating drug metabolites as well as parent drugs in method development.
Gamage, Sujani Madhurika Kodagoda; Wijeweera, Indunil; Wijesinghe, Priyangi; Adikari, Sanjaya Bandara; Fink, Katharina; Sominanda, Herath Mudiyanselage Ajith
2018-05-31
The magnetic resonance imaging in multiple sclerosis (MAGNIMS) group recently proposed guidelines to replace the existing dissemination-in-space criteria in McDonald 2010 magnetic resonance imaging (MRI) criteria for diagnosing multiple sclerosis. There has been insufficient research regarding their applicability in Asians. Objective of this study was to determine the sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of McDonald 2010 and MAGNIMS 2016 MRI criteria with the aim of verifying their applicability in Sri Lankan patients. Patients with clinically isolated syndrome diagnosed by consultant neurologists were recruited from five major neurology centers. Baseline and follow-up MRI scans were performed within 3 months from the initial presentation and at one year after baseline MRI, respectively. McDonald 2010 and MAGNIMS 2016 MRI criteria were applied to all MRI scans. Patients were followed-up for 2 years to assess the conversion to clinically definite multiple sclerosis (CDMS). The sensitivity, specificity, accuracy, PPV, and NPV for predicting the conversion to CDMS were calculated. Forty-two of 66 patients converted to CDMS. Thirty-seven fulfilled the McDonald 2010 MRI criteria, and 33 converted to CDMS. MAGNIMS 2016 MRI criteria were fulfilled by 29, with 28 converting to CDMS. The sensitivity, specificity, accuracy, PPV, and NPV were 78%, 83%, 64%, 89%, and 69%, respectively, for the McDonald 2010 criteria, and 67%, 96%, 77%, 96%, and 62% for the MAGNIMS 2016 MRI criteria. MAGNIMS 2016 MRI criteria were superior to McDonald 2010 MRI criteria in specificity, accuracy, and PPV, but inferior in sensitivity and NPV. Copyright © 2018 Korean Neurological Association.
Rosebraugh, Matthew R; Widness, John A; Nalbant, Demet; Cress, Gretchen; Veng-Pedersen, Peter
2014-02-01
Preterm very-low-birth-weight (VLBW) infants weighing <1.5 kg at birth develop anemia, often requiring multiple red blood cell transfusions (RBCTx). Because laboratory blood loss is a primary cause of anemia leading to RBCTx in VLBW infants, our purpose was to simulate the extent to which RBCTx can be reduced or eliminated by reducing laboratory blood loss in combination with pharmacodynamically optimized erythropoietin (Epo) treatment. Twenty-six VLBW ventilated infants receiving RBCTx were studied during the first month of life. RBCTx simulations were based on previously published RBCTx criteria and data-driven Epo pharmacodynamic optimization of literature-derived RBC life span and blood volume data corrected for phlebotomy loss. Simulated pharmacodynamic optimization of Epo administration and reduction in phlebotomy by ≥ 55% predicted a complete elimination of RBCTx in 1.0-1.5 kg infants. In infants <1.0 kg with 100% reduction in simulated phlebotomy and optimized Epo administration, a 45% reduction in RBCTx was predicted. The mean blood volume drawn from all infants was 63 ml/kg: 33% required for analysis and 67% discarded. When reduced laboratory blood loss and optimized Epo treatment are combined, marked reductions in RBCTx in ventilated VLBW infants were predicted, particularly among those with birth weights >1.0 kg.
Affordable Window Insulation with R-10/inch Rating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenifer Marchesi Redouane Begag; Je Kyun Lee; Danny Ou
2004-10-15
During the performance of contract DE-FC26-00-NT40998, entitled ''Affordable Window Insulation with R-10/inch Value'', research was conducted at Aspen Aerogels, Inc. to develop new transparent aerogel materials suitable for window insulation applications. The project requirements were to develop a formulation or multiple formulations that have high transparency (85-90%) in the visible region, are hydrophobic (will not opacify with exposure to water vapor or liquid), and have at least 2% resiliency (interpreted as recoverable 2% strain and better than 5% strain to failure in compression). Results from an unrelated project showed that silica aerogels covalently bonded to organic polymers exhibit excellent mechanicalmore » properties. At the outset of this project, we believed that such a route is the best to improve mechanical properties. We have applied Design of Experiment (DOE) techniques to optimize formulations including both silica aerogels and organically modified silica aerogels (''Ormosils''). We used these DOE results to optimize formulations around the local/global optimization points. This report documents that we succeeded in developing a number of formulations that meet all of the stated criteria. We successfully developed formulations utilizing a two-step approach where the first step involves acid catalyzed hydrolysis and the second step involves base catalyzed condensation to make the gels. The gels were dried using supercritical CO{sub 2} and we were able to make 1 foot x 1 foot x 0.5 inch panels that met the criteria established.« less
Ruiz-Peña, Juan Luís; Duque, Pablo; Izquierdo, Guillermo
2008-01-01
Background A software based tool has been developed (Optem) to allow automatize the recommendations of the Canadian Multiple Sclerosis Working Group for optimizing MS treatment in order to avoid subjective interpretation. Methods Treatment Optimization Recommendations (TORs) were applied to our database of patients treated with IFN β1a IM. Patient data were assessed during year 1 for disease activity, and patients were assigned to 2 groups according to TOR: "change treatment" (CH) and "no change treatment" (NCH). These assessments were then compared to observed clinical outcomes for disease activity over the following years. Results We have data on 55 patients. The "change treatment" status was assigned to 22 patients, and "no change treatment" to 33 patients. The estimated sensitivity and specificity according to last visit status were 73.9% and 84.4%. During the following years, the Relapse Rate was always higher in the "change treatment" group than in the "no change treatment" group (5 y; CH: 0.7, NCH: 0.07; p < 0.001, 12 m – last visit; CH: 0.536, NCH: 0.34). We obtained the same results with the EDSS (4 y; CH: 3.53, NCH: 2.55, annual progression rate in 12 m – last visit; CH: 0.29, NCH: 0.13). Conclusion Applying TOR at the first year of therapy allowed accurate prediction of continued disease activity in relapses and disability progression. PMID:18325088
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi; Schoemig, Ewald
1993-01-01
In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions and hence multiple performance criteria and H-infinity constraints and incorporates additional constraints such as fixed-order and/or fixed structure controllers. The defined penalty function is applicable to any constraint that is expressible in form of a real symmetric matrix-inequity.
Integrating genome assemblies with MAIA
Nijkamp, Jurgen; Winterbach, Wynand; van den Broek, Marcel; Daran, Jean-Marc; Reinders, Marcel; de Ridder, Dick
2010-01-01
Motivation: De novo assembly of a eukaryotic genome with next-generation sequencing data is still a challenging task. Over the past few years several assemblers have been developed, often suitable for one specific type of sequencing data. The number of known genomes is expanding rapidly, therefore it becomes possible to use multiple reference genomes for assembly projects. We introduce an assembly integrator that makes use of all available data, i.e. multiple de novo assemblies and mappings against multiple related genomes, by optimizing a weighted combination of criteria. Results: The developed algorithm was applied on the de novo sequencing of the Saccharomyces cerevisiae CEN.PK 113-7D strain. Using Solexa and 454 read data, two de novo and three comparative assemblies were constructed and subsequently integrated, yielding 29 contigs, covering more than 12 Mbp; a drastic improvement compared with the single assemblies. Availability: MAIA is available as a Matlab package and can be downloaded from http://bioinformatics.tudelft.nl Contact: j.f.nijkamp@tudelft.nl PMID:20823304
Sparse and optimal acquisition design for diffusion MRI and beyond
Koay, Cheng Guan; Özarslan, Evren; Johnson, Kevin M.; Meyerand, M. Elizabeth
2012-01-01
Purpose: Diffusion magnetic resonance imaging (MRI) in combination with functional MRI promises a whole new vista for scientists to investigate noninvasively the structural and functional connectivity of the human brain—the human connectome, which had heretofore been out of reach. As with other imaging modalities, diffusion MRI data are inherently noisy and its acquisition time-consuming. Further, a faithful representation of the human connectome that can serve as a predictive model requires a robust and accurate data-analytic pipeline. The focus of this paper is on one of the key segments of this pipeline—in particular, the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. Methods: The authors propose a novel optimality criterion for sparse multiple-shell acquisition and quasimultiple-shell designs in diffusion MRI and a novel and effective semistochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. The goal of the optimality criteria is threefold: first, to maximize uniformity of the diffusion measurements in each shell, which is equivalent to maximal incoherence in angular measurements; second, to maximize coverage of the diffusion measurements around each radial line to achieve maximal incoherence in radial measurements for multiple-shell acquisition; and finally, to ensure maximum uniformity of diffusion measurement directions in the limiting case when all the shells are coincidental as in the case of a single-shell acquisition. The approach taken in evaluating the stability of various acquisition designs is based on the condition number and the A-optimal measure of the design matrix. Results: Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general—e.g., in the order of 10232 for a set of 144 diffusion gradient directions, the proposed search strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to that of 3D radial MRI and of diffusion spectrum imaging (DSI). Conclusions: A novel optimality criterion for sparse multiple-shell acquisition and quasimultiple-shell designs in diffusion MRI and an effective search strategy for finding the best configuration have been developed. The results are very promising, interesting, and practical for diffusion MRI acquisitions. PMID:22559620
Alternative diagnostic criteria for idiopathic hypersomnia: A 32-hour protocol.
Evangelista, Elisa; Lopez, Régis; Barateau, Lucie; Chenini, Sofiene; Bosco, Adriana; Jaussent, Isabelle; Dauvilliers, Yves
2018-02-01
To assess the diagnostic value of extended sleep duration on a controlled 32-hour bed rest protocol in idiopathic hypersomnia (IH). One hundred sixteen patients with high suspicion of IH (37 clear-cut IH according to multiple sleep latency test criteria and 79 probable IH), 32 with hypersomnolence associated with a comorbid disorder (non-IH), and 21 controls underwent polysomnography, modified sleep latency tests, and a 32-hour bed rest protocol. Receiver operating characteristic curves were used to find optimal total sleep time (TST) cutoff values on various periods that discriminate patients from controls. TST was longer in patients with clear-cut IH than other groups (probable IH, non-IH, and controls) and in patients with probable IH than non-IH and controls. The TST cutoff best discriminating clear-cut IH and controls was 19 hours for the 32-hour recording (sensitivity = 91.9%, specificity = 85.7%) and 12 hours (100%, 85.7%) for the first 24 hours. The 19-hour cutoff displayed a specificity and sensitivity of 91.9% and 81.2% between IH and non-IH patients. Patients with IH above the 19-hour cutoff were overweight, had more sleep inertia, and had higher TST on all periods compared to patients below 19 hours, whereas no differences were found for the 12-hour cutoff. An inverse correlation was found between the mean sleep latency and TST during 32-hour recording in IH patients. In standardized and controlled stringent conditions, the optimal cutoff best discriminating patients from controls was 19 hours over 32 hours, allowing a clear-cut phenotypical characterization of major interest for research purposes. Sleepier patients on the multiple sleep latency test were also the more severe in terms of extended sleep. Ann Neurol 2018;83:235-247. © 2018 American Neurological Association.
An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization
Zhang, Cheng; Wu, Xingli; Wu, Di; Luo, Li; Herrera-Viedma, Enrique
2018-01-01
The tension brought about by sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, emergency degree, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and the classical ranking method cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organísation, rangement et Synthèse dedonnées relarionnelles, in French) was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions towards the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts’ preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly-related criteria. Afterwards, we improved the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further highlighted by a case study concerning the patients’ prioritization. PMID:29673212
Soltani, Atousa; Hewage, Kasun; Reza, Bahareh; Sadiq, Rehan
2015-01-01
Municipal Solid Waste Management (MSWM) is a complicated process that involves multiple environmental and socio-economic criteria. Decision-makers look for decision support frameworks that can guide in defining alternatives, relevant criteria and their weights, and finding a suitable solution. In addition, decision-making in MSWM problems such as finding proper waste treatment locations or strategies often requires multiple stakeholders such as government, municipalities, industries, experts, and/or general public to get involved. Multi-criteria Decision Analysis (MCDA) is the most popular framework employed in previous studies on MSWM; MCDA methods help multiple stakeholders evaluate the often conflicting criteria, communicate their different preferences, and rank or prioritize MSWM strategies to finally agree on some elements of these strategies and make an applicable decision. This paper reviews and brings together research on the application of MCDA for solving MSWM problems with more focus on the studies that have considered multiple stakeholders and offers solutions for such problems. Results of this study show that AHP is the most common approach in consideration of multiple stakeholders and experts and governments/municipalities are the most common participants in these studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pareto-Optimal Estimates of California Precipitation Change
NASA Astrophysics Data System (ADS)
Langenbrunner, Baird; Neelin, J. David
2017-12-01
In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
Adaptive magnetorheological seat suspension for shock mitigation
NASA Astrophysics Data System (ADS)
Singh, Harinder J.; Wereley, Norman M.
2013-04-01
An adaptive magnetorheological seat suspension (AMSS) was analyzed for optimal protection of occupants from shock loads caused by the impact of a helicopter with the ground. The AMSS system consists of an adaptive linear stroke magnetorheological shock absorber (MRSA) integrated into the seat structure of a helicopter. The MRSA provides a large controllability yield force to accommodate a wide spectrum for shock mitigation. A multiple degrees-of-freedom nonlinear biodynamic model for a 50th percentile male occupant was integrated with the dynamics of MRSA and the governing equations of motion were investigated theoretically. The load-stroke profile of MRSA was optimized with the goal of minimizing the potential for injuries. The MRSA yield force and the shock absorber stroke limitations were the most crucial parameters for improved biodynamic response mitigation. An assessment of injuries based on established injury criteria for different body parts was carried out.
Multiple Criteria Evaluation of Quality and Optimisation of e-Learning System Components
ERIC Educational Resources Information Center
Kurilovas, Eugenijus; Dagiene, Valentina
2010-01-01
The main research object of the paper is investigation and proposal of the comprehensive Learning Object Repositories (LORs) quality evaluation tool suitable for their multiple criteria decision analysis, evaluation and optimisation. Both LORs "internal quality" and "quality in use" evaluation (decision making) criteria are analysed in the paper.…
Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H
2015-02-06
Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
Sharifi, N; Ozgoli, S; Ramezani, A
2017-06-01
Mixed immunotherapy and chemotherapy of tumours is one of the most efficient ways to improve cancer treatment strategies. However, it is important to 'design' an effective treatment programme which can optimize the ways of combining immunotherapy and chemotherapy to diminish their imminent side effects. Control engineering techniques could be used for this. The method of multiple model predictive controller (MMPC) is applied to the modified Stepanova model to induce the best combination of drugs scheduling under a better health criteria profile. The proposed MMPC is a feedback scheme that can perform global optimization for both tumour volume and immune competent cell density by performing multiple constraints. Although current studies usually assume that immunotherapy has no side effect, this paper presents a new method of mixed drug administration by employing MMPC, which implements several constraints for chemotherapy and immunotherapy by considering both drug toxicity and autoimmune. With designed controller we need maximum 57% and 28% of full dosage of drugs for chemotherapy and immunotherapy in some instances, respectively. Therefore, through the proposed controller less dosage of drugs are needed, which contribute to suitable results with a perceptible reduction in medicine side effects. It is observed that in the presence of MMPC, the amount of required drugs is minimized, while the tumour volume is reduced. The efficiency of the presented method has been illustrated through simulations, as the system from an initial condition in the malignant region of the state space (macroscopic tumour volume) transfers into the benign region (microscopic tumour volume) in which the immune system can control tumour growth. Copyright © 2017 Elsevier B.V. All rights reserved.
Hu, Xiaofeng; Hu, Rui; Zhang, Zhaowei; Li, Peiwu; Zhang, Qi; Wang, Min
2016-09-01
A sensitive and specific immunoaffinity column to clean up and isolate multiple mycotoxins was developed along with a rapid one-step sample preparation procedure for ultra-performance liquid chromatography-tandem mass spectrometry analysis. Monoclonal antibodies against aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, zearalenone, ochratoxin A, sterigmatocystin, and T-2 toxin were coupled to microbeads for mycotoxin purification. We optimized a homogenization and extraction procedure as well as column loading and elution conditions to maximize recoveries from complex feed matrices. This method allowed rapid, simple, and simultaneous determination of mycotoxins in feeds with a single chromatographic run. Detection limits for these toxins ranged from 0.006 to 0.12 ng mL(-1), and quantitation limits ranged from 0.06 to 0.75 ng mL(-1). Concentration curves were linear from 0.12 to 40 μg kg(-1) with correlation coefficients of R (2) > 0.99. Intra-assay and inter-assay comparisons indicated excellent repeatability and reproducibility of the multiple immunoaffinity columns. As a proof of principle, 80 feed samples were tested and several contained multiple mycotoxins. This method is sensitive, rapid, and durable enough for multiple mycotoxin determinations that fulfill European Union and Chinese testing criteria.
Pohlheim, Hartmut
2006-01-01
Multidimensional scaling as a technique for the presentation of high-dimensional data with standard visualization techniques is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soltani, Atousa; Hewage, Kasun; Reza, Bahareh
2015-01-15
Highlights: • We review Municipal Solid Waste Management studies with focus on multiple stakeholders. • We focus on studies with multi-criteria decision analysis methods and discover their trends. • Most studies do not offer solutions for situations where stakeholders compete for more benefits or have unequal voting powers. • Governments and experts are the most participated stakeholders and AHP is the most dominant method. - Abstract: Municipal Solid Waste Management (MSWM) is a complicated process that involves multiple environmental and socio-economic criteria. Decision-makers look for decision support frameworks that can guide in defining alternatives, relevant criteria and their weights, andmore » finding a suitable solution. In addition, decision-making in MSWM problems such as finding proper waste treatment locations or strategies often requires multiple stakeholders such as government, municipalities, industries, experts, and/or general public to get involved. Multi-criteria Decision Analysis (MCDA) is the most popular framework employed in previous studies on MSWM; MCDA methods help multiple stakeholders evaluate the often conflicting criteria, communicate their different preferences, and rank or prioritize MSWM strategies to finally agree on some elements of these strategies and make an applicable decision. This paper reviews and brings together research on the application of MCDA for solving MSWM problems with more focus on the studies that have considered multiple stakeholders and offers solutions for such problems. Results of this study show that AHP is the most common approach in consideration of multiple stakeholders and experts and governments/municipalities are the most common participants in these studies.« less
Composite skid landing gear design investigation
NASA Astrophysics Data System (ADS)
Shrotri, Kshitij
A Composite Skid Landing Gear Design investigation has been conducted. Limit Drop Test as per Federal Aviation Regulations (FAR) Part 27.725 and Crash test as per MIL STD 1290A (AV) were simulated using ABAQUS to evaluate performance of multiple composite fiber-matrix systems. Load factor developed during multiple landing scenarios and energy dissipated during crash were computed. Strength and stiffness based constraints were imposed. Tsai-Wu and LaRC04 physics based failure criteria were used for limit loads. Hashin's damage initiation criteria with Davila-Camanho's energy based damage evolution damage evolution law were used for crash. Initial results indicate that all single-composite skid landing gear may no be feasible due to strength concerns in the cross member bends. Hybridization of multiple composites with elasto-plastic aluminum 7075 showed proof of strength under limit loads. Laminate tailoring for load factor optimization under limit loads was done by parameterization of a single variable fiber orientation angle for multiple laminate families. Tsai-Wu failure criterion was used to impose strength contraints. A quasi-isotropic N = 4 (pi/4) 48 ply IM7/8552 laminate was shown to be the optimal solution with a load failure will be initiated as matrix cracking under compression and fiber kinking under in-plane shear and longitudinal compression. All failures under limit loads being reported in the metal-composite hybrid joint region, the joint was simulated by adhesive bonding and filament winding, separately. Simply adhesive bonding the metal and composite regions does not meet strength requirements. A filament wound metal-composite joint shows proof of strength. Filament wound bolted metal-composite joint shows proof of strength. Filament wound composite bolted to metal cross member radii is the final joining methodology. Finally, crash analysis was conducted as per requirements from MIL STD 1290A (AV). Crash at 42 ft/sec with 1 design gross weight (DGW) lift was simulated using ABAQUS. Plastic and friction energy dissipation in the reference aluminum skid landing gear was compared with plastic, friction and damage energy dissipation in the hybrid composite design. Damage in composites was modeled as progressive damage with Hashin's damage initiation criteria and an energy based damage evolution law. The latter meets requirements of aircraft kinetic energy dissipation up to 20 ft/sec (67.6 kJ) as per MIL STD 1290A (AV). Weight saving possibility of up to 49% over conventional metal skid landing gear is reported. The final design recommended includes Ke49/PEEK skids, 48 ply IM7/8552 (or IM7/PEEK) cross member tapered beams and Al 7075 cross member bend radii, the latter bolted to the filament wound composite-metal tapered beam. Concerns in composite skid landing gear designs, testing requirements and future opportunities are addressed.
SUMIYOSHI, Toshiaki; ENDO, Natsumi; TANAKA, Tomomi; KAMOMAE, Hideo
2017-01-01
Relaxation of the intravaginal part of the uterus is obvious around 6 to 18 h before ovulation, and this is considered the optimal time for artificial insemination (AI), as demonstrated in recent studies. Estrous signs have been suggested as useful criteria for determining the optimal time for AI. Therefore, this study evaluated the usefulness of estrous signs, particularly the relaxation of the intravaginal part of the uterus, as criteria for determining the optimal time for AI. A Total of 100 lactating Holstein-Friesian cows kept in tie-stall barns were investigated. AI was carried out based on the criterion for the optimal time for AI (optimal group), and earlier (early group) and later (late group) than the optimal time for AI, determined on the basis of estrous signs. After AI, ovulation was assessed by rectal palpation and ultrasonographic observation at 6-h intervals. For 87.5% (35/40) of cows in the optimal group, AI was carried out 24-6 h before ovulation, which was previously accepted as the optimal time for AI. AI was carried out earlier (early group) and later (late group) than optimal time for AI in 62.1% (18/29) and 71.0% (22/31) of cows, respectively. The conception rate for the optimal group was 60.0%, and this conception rate was higher than that for the early group (44.8%) and late group (32.2%), without significance. Further, the conception rate of the optimal group was significantly higher than the sum of the conception rates of the early and late groups (38.3%; 23/60) (P < 0.05). These results indicate that the criteria postulated, relaxation of the intravaginal part of the uterus and other estrous signs are useful in determining the optimal time for AI. Furthermore, these estrous signs enable the estimations of stages in the periovulatory period. PMID:29081451
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
Field-based optimal-design of an electric motor: a new sensitivity formulation
NASA Astrophysics Data System (ADS)
Barba, Paolo Di; Mognaschi, Maria Evelina; Lowther, David Alister; Wiak, Sławomir
2017-12-01
In this paper, a new approach to robust optimal design is proposed. The idea is to consider the sensitivity by means of two auxiliary criteria A and D, related to the magnitude and isotropy of the sensitivity, respectively. The optimal design of a switched-reluctance motor is considered as a case study: since the case study exhibits two design criteria, the relevant Pareto front is approximated by means of evolutionary computing.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates and under limited variability in load location it was possible to design an optimal for expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi design computational analyses to decide upon the most optimal plan for a clinical case. PMID:28609471
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach.
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates and under limited variability in load location it was possible to design an optimal for expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi design computational analyses to decide upon the most optimal plan for a clinical case.
Analysis of grinding of superalloys and ceramics for off-line process optimization
NASA Astrophysics Data System (ADS)
Sathyanarayanan, G.
The present study has compared the performances of resinoid, vitrified, and electroplated CBN wheels in creep feed grinding of M42 and D2 tool steels. Responses such as a specific energy, normal and tangential forces, and surface roughness were used as measures of performance. It was found that creep feed grinding with resinoid, vitrified, and electroplated CBN wheels has its own advantages, but no single wheel could provide good finish, lower specific energy, and high material removal rates simultaneously. To optimize the CBN grinding with different bonded wheels, a Multiple Criteria Decision Making (MCDM) methodology was used. Creep feed grinding of superalloys, Ti-6Al-4V and Inconel 718, has been modeled by utilizing neural networks to optimize the grinding process. A parallel effort was directed at creep feed grinding of alumina ceramics with diamond wheels to investigate the influence of process variables on responses based on experimental results and statistical analysis. The conflicting influence of variables was observed. This led to the formulation of ceramic grinding process as a multi-objective nonlinear mixed integer problem.
A stochastic model for optimizing composite predictors based on gene expression profiles.
Ramanathan, Murali
2003-07-01
This project was done to develop a mathematical model for optimizing composite predictors based on gene expression profiles from DNA arrays and proteomics. The problem was amenable to a formulation and solution analogous to the portfolio optimization problem in mathematical finance: it requires the optimization of a quadratic function subject to linear constraints. The performance of the approach was compared to that of neighborhood analysis using a data set containing cDNA array-derived gene expression profiles from 14 multiple sclerosis patients receiving intramuscular inteferon-beta1a. The Markowitz portfolio model predicts that the covariance between genes can be exploited to construct an efficient composite. The model predicts that a composite is not needed for maximizing the mean value of a treatment effect: only a single gene is needed, but the usefulness of the effect measure may be compromised by high variability. The model optimized the composite to yield the highest mean for a given level of variability or the least variability for a given mean level. The choices that meet this optimization criteria lie on a curve of composite mean vs. composite variability plot referred to as the "efficient frontier." When a composite is constructed using the model, it outperforms the composite constructed using the neighborhood analysis method. The Markowitz portfolio model may find potential applications in constructing composite biomarkers and in the pharmacogenomic modeling of treatment effects derived from gene expression endpoints.
NASA Astrophysics Data System (ADS)
Matrosov, E.; Padula, S.; Huskova, I.; Harou, J. J.
2012-12-01
Population growth and the threat of drier or changed climates are likely to increase water scarcity world-wide. A combination of demand management (water conservation) and new supply infrastructure is often needed to meet future projected demands. In this case system planners must decide what to implement, when and at what capacity. Choices can range from infrastructure to policies or a mix of the two, culminating in a complex planning problem. Decision making under uncertainty frameworks can be used to help planners with this planning problem. This presentation introduces, applies and compares four decision making under uncertainty frameworks. The application is to the Thames basin water resource system which includes the city of London. The approaches covered here include least-economic cost capacity expansion optimization (EO), Robust Decision Making (RDM), Info-Gap Decision Theory (Info-gap) and many-objective evolutionary optimization (MOEO). EO searches for the least-economic cost program, i.e. the timing, sizing, and choice of supply-demand management actions/upgrades which meet projected water demands. Instead of striving for optimality, the RDM and Info-gap approaches help build plans that are robust to 'deep' uncertainty in future conditions. The MOEO framework considers multiple performance criteria and uses water systems simulators as a function evaluator for the evolutionary algorithm. Visualizations show Pareto approximate tradeoffs between multiple objectives. In this presentation we detail the application of each framework to the Thames basin (including London) water resource planning problem. Supply and demand options are proposed by the major water companies in the basin. We apply the EO method using a 29 year time horizon and an annual time step considering capital, operating (fixed and variable), social and environmental costs. The method considers all plausible combinations of supply and conservation schemes and capacities proposed by water companies and generates the least-economic cost annual plan. The RDM application uses stochastic simulation under a weekly time-step and regret analysis to choose a candidate strategy. We then use a statistical cluster algorithm to identify future states of the world under which the strategy is vulnerable. The method explicitly considers the effects of uncertainty in supply, demands and energy price on multiple performance criteria. The Info-gap approach produces robustness and opportuneness plots that show the performance of different plans under the most dire and favorable sets of future conditions. The same simulator, supply and demand options and uncertainties are considered as in the RDM application. The MOEO application considers many more combinations of supply and demand options while still employing a simulator that enables a more realistic representation of the physical system and operating rules. A computer cluster is employed to ease the computational burden. Visualization software allows decision makers to interactively view tradeoffs in many dimensions. Benefits and limitations of each framework are discussed and recommendations for future planning in the basin are provided.
Zhu, Tian; Cao, Shuyi; Su, Pin-Chih; Patel, Ram; Shah, Darshan; Chokshi, Heta B; Szukala, Richard; Johnson, Michael E; Hevener, Kirk E
2013-09-12
A critical analysis of virtual screening results published between 2007 and 2011 was performed. The activity of reported hit compounds from over 400 studies was compared to their hit identification criteria. Hit rates and ligand efficiencies were calculated to assist in these analyses, and the results were compared with factors such as the size of the virtual library and the number of compounds tested. A series of promiscuity, druglike, and ADMET filters were applied to the reported hits to assess the quality of compounds reported, and a careful analysis of a subset of the studies that presented hit optimization was performed. These data allowed us to make several practical recommendations with respect to selection of compounds for experimental testing, definition of hit identification criteria, and general virtual screening hit criteria to allow for realistic hit optimization. A key recommendation is the use of size-targeted ligand efficiency values as hit identification criteria.
[Revision of McDonald's new diagnostic criteria for multiple sclerosis].
Wiendl, H; Kieseier, B C; Gold, R; Hohlfeld, R; Bendszus, M; Hartung, H-P
2006-10-01
In 2001, an international panel suggested new diagnostic criteria for multiple sclerosis (MS). These criteria integrate clinical, imaging (MRI), and paraclinical results in order to facilitate diagnosis. Since then, these so-called McDonald criteria have been broadly accepted and widely propagated. In the meantime a number of publications have dealt with the sensitivity and specificity for MS diagnosis and with implementing these new criteria in clinical practice. Based on these empirical values and newer data on MS, an international expert group recently proposed a revision of the criteria. Substantial changes affect (1) MRI criteria for the dissemination of lesions over time, (2) the role of spinal cord lesions in the MRI and (3) diagnosis of primary progressive MS. In this article we present recent experiences with the McDonald and revised criteria.
Voting systems for environmental decisions.
Burgman, Mark A; Regan, Helen M; Maguire, Lynn A; Colyvan, Mark; Justus, James; Martin, Tara G; Rothley, Kris
2014-04-01
Voting systems aggregate preferences efficiently and are often used for deciding conservation priorities. Desirable characteristics of voting systems include transitivity, completeness, and Pareto optimality, among others. Voting systems that are common and potentially useful for environmental decision making include simple majority, approval, and preferential voting. Unfortunately, no voting system can guarantee an outcome, while also satisfying a range of very reasonable performance criteria. Furthermore, voting methods may be manipulated by decision makers and strategic voters if they have knowledge of the voting patterns and alliances of others in the voting populations. The difficult properties of voting systems arise in routine decision making when there are multiple criteria and management alternatives. Because each method has flaws, we do not endorse one method. Instead, we urge organizers to be transparent about the properties of proposed voting systems and to offer participants the opportunity to approve the voting system as part of the ground rules for operation of a group. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.
Fuzzy bilevel programming with multiple non-cooperative followers: model, algorithm and application
NASA Astrophysics Data System (ADS)
Ke, Hua; Huang, Hu; Ralescu, Dan A.; Wang, Lei
2016-04-01
In centralized decision problems, it is not complicated for decision-makers to make modelling technique selections under uncertainty. When a decentralized decision problem is considered, however, choosing appropriate models is no longer easy due to the difficulty in estimating the other decision-makers' inconclusive decision criteria. These decision criteria may vary with different decision-makers because of their special risk tolerances and management requirements. Considering the general differences among the decision-makers in decentralized systems, we propose a general framework of fuzzy bilevel programming including hybrid models (integrated with different modelling methods in different levels). Specially, we discuss two of these models which may have wide applications in many fields. Furthermore, we apply the proposed two models to formulate a pricing decision problem in a decentralized supply chain with fuzzy coefficients. In order to solve these models, a hybrid intelligent algorithm integrating fuzzy simulation, neural network and particle swarm optimization based on penalty function approach is designed. Some suggestions on the applications of these models are also presented.
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk
2008-11-01
Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
Contrast Media Administration in Coronary Computed Tomography Angiography - A Systematic Review.
Mihl, Casper; Maas, Monique; Turek, Jakub; Seehofnerova, Anna; Leijenaar, Ralph T H; Kok, Madeleine; Lobbes, Marc B I; Wildberger, Joachim E; Das, Marco
2017-04-01
Background Various different injection parameters influence enhancement of the coronary arteries. There is no consensus in the literature regarding the optimal contrast media (CM) injection protocol. The aim of this study is to provide an update on the effect of different CM injection parameters on the coronary attenuation in coronary computed tomographic angiography (CCTA). Method Studies published between January 2001 and May 2014 identified by Pubmed, Embase and MEDLINE were evaluated. Using predefined inclusion criteria and a data extraction form, the content of each eligible study was assessed. Initially, 2551 potential studies were identified. After applying our criteria, 36 studies were found to be eligible. Studies were systematically assessed for quality based on the validated Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-II checklist. Results Extracted data proved to be heterogeneous and often incomplete. The injection protocol and outcome of the included publications were very diverse and results are difficult to compare. Based on the extracted data, it remains unclear which of the injection parameters is the most important determinant for adequate attenuation. It is likely that one parameter which combines multiple parameters (e. g. IDR) will be the most suitable determinant of coronary attenuation in CCTA protocols. Conclusion Research should be directed towards determining the influence of different injection parameters and defining individualized optimal IDRs tailored to patient-related factors (ideally in large randomized trials). Key points · This systematic review provides insight into decisive factors on coronary attenuation.. · Different and contradicting outcomes are reported on coronary attenuation in CCTA.. · One parameter combining multiple parameters (IDR) is likely decisive in coronary attenuation.. · Research should aim at defining individualized optimal IDRs tailored to individual factors.. · Future directions should be tailored towards the influence of different injection parameters.. Citation Format · Mihl C, Maas M, Turek J et al. Contrast Media Administration in Coronary Computed Tomography Angiography - A Systematic Review. Fortschr Röntgenstr 2017; 189: 312 - 325. © Georg Thieme Verlag KG Stuttgart · New York.
Radiological Determination of Postoperative Cervical Fusion: A Systematic Review.
Rhee, John M; Chapman, Jens R; Norvell, Daniel C; Smith, Justin; Sherry, Ned A; Riew, K Daniel
2015-07-01
Systematic review. To determine best criteria for radiological determination of postoperative subaxial cervical fusion to be applied to current clinical practice and ongoing future research assessing fusion to standardize assessment and improve comparability. Despite availability of multiple imaging modalities and criteria, there remains no method of determining cervical fusion with absolute certainty, nor clear consensus on specific criteria to be applied. A systematic search in MEDLINE/Cochrane Collaboration Library (through March 2014). Included studies assessed C2 to C7 via anterior or posterior approach, at 12 weeks or more postoperative, with any graft or implant. Overall body of evidence with respect to 6 posited key questions was determined using Grading of Recommendations Assessment, Development and Evaluation and Agency for Healthcare Research and Quality precepts. Of plain radiographical modalities, there is moderate evidence that the interspinous process motion method (<1 mm) is more accurate than the Cobb angle method for assessing anterior cervical fusion. Of the advanced imaging modalities, there is moderate evidence that computed tomography (CT) is more accurate and reliable than magnetic resonance imaging in assessing anterior cervical fusion. There is insufficient evidence regarding the optimal modality and criteria for assessing posterior cervical fusions and insufficient evidence to support a single time point after surgery as being optimal for determining fusion, although some evidence suggest that reliability of radiography and CT improves with increasing time postoperatively. We recommend using less than 1-mm motion as the initial modality for determining anterior cervical arthrodesis for both clinical and research applications. If further imaging is needed because of indeterminate radiographical evaluation, we recommend CT, which has relatively high accuracy and reliability, but due to greater radiation exposure and cost, it is not routinely suggested. We recommend that plain radiographs also be the initial method of determining posterior cervical fusion but suggest a lower threshold for obtaining CT scans because dynamic radiographs may not be as useful if spinous processes have been removed by laminectomy. 1.
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
Bender, Kimberly; Brown, Samantha M; Thompson, Sanna J; Ferguson, Kristin M; Langenderfer, Lisa
2015-05-01
Exposure to multiple forms of maltreatment during childhood is associated with serious mental health consequences among youth in the general population, but limited empirical attention has focused on homeless youth-a population with markedly high rates of childhood maltreatment followed by elevated rates of street victimization. This study investigated the rates of multiple childhood abuses (physical, sexual, and emotional abuse) and multiple street victimizations (robbery, physical assault, and sexual assault) and examined their relative relationships to mental health outcomes (meeting Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision, criteria for post-traumatic stress disorder [PTSD], depression, and substance use disorder) among a large (N = 601) multisite sample of homeless youth. Approximately 79% of youth retrospectively reported multiple childhood abuses (two or more types) and 28% reported multiple street victimizations (two or more types). Each additional type of street victimization nearly doubled youths' odds for meeting criteria for substance use disorder. Furthermore, each additional type of childhood abuse experienced more than doubled youths' odds for meeting criteria for PTSD. Both multiple abuses and multiple street victimizations were associated with an approximate twofold increase in meeting depression criteria. Findings suggest the need for screening, assessment, and trauma-informed services for homeless youth who consider multiple types of abuse and victimization experiences. © The Author(s) 2014.
Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R
2012-08-15
The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by required amounts of a priori knowledge, complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design for eco-toxicologist to more accurately and efficiently examine complex multiple stressor interactions. AID incorporates the power of the general linear model and A-optimal criteria with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best fit model is identified and the direction and magnitude of stressors, individually and all combinations (including complex interactions) are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness) to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found after the initial data input of experimental data, although literature values (e.g. EC-values) may also be used, and after only two iterations of AID, our dose response model was stable. The model ln(Cd)*ln(Zn) was determined the best predictor of Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater- (e.g. As*Cd) and less-than additive interactions (e.g. Cd*Zn). Interestingly, our study found only binary interactions significant, not higher order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Tolson, Bryan; Shawn Matott, L.
2015-04-01
GLUE is one of the most commonly used informal methodologies for uncertainty estimation in hydrological modelling. Despite the ease-of-use of GLUE, it involves a number of subjective decisions such as the strategy for identifying the behavioural solutions. This study evaluates the impact of behavioural solution identification strategies in GLUE on the quality of model output uncertainty. Moreover, two new strategies are developed to objectively identify behavioural solutions. The first strategy considers Pareto-based ranking of parameter sets, while the second one is based on ranking the parameter sets based on an aggregated criterion. The proposed strategies, as well as the traditional strategies in the literature, are evaluated with respect to reliability (coverage of observations by the envelope of model outcomes) and sharpness (width of the envelope of model outcomes) in different numerical experiments. These experiments include multi-criteria calibration and uncertainty estimation of three rainfall-runoff models with different number of parameters. To demonstrate the importance of behavioural solution identification strategy more appropriately, GLUE is also compared with two other informal multi-criteria calibration and uncertainty estimation methods (Pareto optimization and DDS-AU). The results show that the model output uncertainty varies with the behavioural solution identification strategy, and furthermore, a robust GLUE implementation would require considering multiple behavioural solution identification strategies and choosing the one that generates the desired balance between sharpness and reliability. The proposed objective strategies prove to be the best options in most of the case studies investigated in this research. Implementing such an approach for a high-dimensional calibration problem enables GLUE to generate robust results in comparison with Pareto optimization and DDS-AU.
Decomposition-Based Decision Making for Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Borer, Nicholas K.; Mavris, DImitri N.
2005-01-01
Most practical engineering systems design problems have multiple and conflicting objectives. Furthermore, the satisfactory attainment level for each objective ( requirement ) is likely uncertain early in the design process. Systems with long design cycle times will exhibit more of this uncertainty throughout the design process. This is further complicated if the system is expected to perform for a relatively long period of time, as now it will need to grow as new requirements are identified and new technologies are introduced. These points identify a need for a systems design technique that enables decision making amongst multiple objectives in the presence of uncertainty. Traditional design techniques deal with a single objective or a small number of objectives that are often aggregates of the overarching goals sought through the generation of a new system. Other requirements, although uncertain, are viewed as static constraints to this single or multiple objective optimization problem. With either of these formulations, enabling tradeoffs between the requirements, objectives, or combinations thereof is a slow, serial process that becomes increasingly complex as more criteria are added. This research proposal outlines a technique that attempts to address these and other idiosyncrasies associated with modern aerospace systems design. The proposed formulation first recasts systems design into a multiple criteria decision making problem. The now multiple objectives are decomposed to discover the critical characteristics of the objective space. Tradeoffs between the objectives are considered amongst these critical characteristics by comparison to a probabilistic ideal tradeoff solution. The proposed formulation represents a radical departure from traditional methods. A pitfall of this technique is in the validation of the solution: in a multi-objective sense, how can a decision maker justify a choice between non-dominated alternatives? A series of examples help the reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.
Immunotherapy for metastatic urothelial carcinoma: status quo and the future.
Necchi, Andrea; Rink, Michael; Giannatempo, Patrizia; Raggi, Daniele; Xylinas, Evanguelos
2018-01-01
The treatment paradigm of urothelial carcinoma has been revolutionized by the advent of multiple anti-programmed-cell death-1/ligand-1 (PD-1/PD-L1) antibodies. Significant improvements have been obtained in the locally advanced or metastatic stage, which was lacking of therapeutic standards. This review reports key findings from completed and ongoing clinical trials that highlight the potential of PD-1/PD-L1 blockade in this disease. Anti-PD-1/PD-L1 monoclonal antibodies have shown efficacy and safety in patients with urothelial carcinoma, regardless of their prognostic features. Efficacy was similar across different compounds, with objective responses that approximate 20%, with some differences favoring PD-L1-expressing patients. Typically, responding patients have good chances of achieving durable response, but biomarkers predictive of therapeutic effect are lacking. To date, evidences from randomized studies are limited to the second-line, postplatinum therapy. Despite the activity of PD-1/PD-L1 inhibitors is well established in metastatic urothelial carcinoma, multiple gray zones still exist regarding their optimal use in clinical practice. These uncertainties are related to patient and treatment-related criteria, to the optimal duration of treatment, including combination or sequence with standard chemotherapy. Special issues are represented by pseudoprogression or hyperprogression. Generally, enhanced predictive tools are needed and a myriad of further investigations are underway.
Diagnostic criteria for multiple sclerosis: 2010 Revisions to the McDonald criteria
Polman, Chris H; Reingold, Stephen C; Banwell, Brenda; Clanet, Michel; Cohen, Jeffrey A; Filippi, Massimo; Fujihara, Kazuo; Havrdova, Eva; Hutchinson, Michael; Kappos, Ludwig; Lublin, Fred D; Montalban, Xavier; O'Connor, Paul; Sandberg-Wollheim, Magnhild; Thompson, Alan J; Waubant, Emmanuelle; Weinshenker, Brian; Wolinsky, Jerry S
2011-01-01
New evidence and consensus has led to further revision of the McDonald Criteria for diagnosis of multiple sclerosis. The use of imaging for demonstration of dissemination of central nervous system lesions in space and time has been simplified, and in some circumstances dissemination in space and time can be established by a single scan. These revisions simplify the Criteria, preserve their diagnostic sensitivity and specificity, address their applicability across populations, and may allow earlier diagnosis and more uniform and widespread use. Ann Neurol 2011 PMID:21387374
Role of optimization criterion in static asymmetric analysis of lumbar spine load.
Daniel, Matej
2011-10-01
A common method for load estimation in biomechanics is the inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing the optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation pattern and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with weight in one stretched arm was taken as a representative position. Musculoskeletal model of lumbar spine model was created from CT images of Visible Human Project. Several criteria were tested based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria provide the same level of lumbar spine loading (difference is below 25%), except the criterion of minimum lumbar shear force which predicts unrealistically high spinal load and should not be considered further. Estimated spinal load and predicted muscle force activation pattern are in accordance with the intradiscal pressure measurements and EMG measurements. The L4/L5 spine loads 1312 N, 1674 N, and 1993 N were predicted for mass of weight in hand 2, 5, and 8 kg, respectively using criterion of mininum muscle stress cubed. As the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies and computationally simpler criterion can be used.
Soft-information flipping approach in multi-head multi-track BPMR systems
NASA Astrophysics Data System (ADS)
Warisarn, C.; Busyatras, W.; Myint, L. M. M.
2018-05-01
Inter-track interference is one of the most severe impairments in bit-patterned media recording system. This impairment can be effectively handled by a modulation code and a multi-head array jointly processing multiple tracks; however, such a modulation constraint has never been utilized to improve the soft-information. Therefore, this paper proposes the utilization of modulation codes with an encoded constraint defined by the criteria for soft-information flipping during a three-track data detection process. Moreover, we also investigate the optimal offset position of readheads to provide the most improvement in system performance. The simulation results indicate that the proposed systems with and without position jitter are significantly superior to uncoded systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jane L.; Limburg, David; Graneto, Matthew J.
2012-05-29
In this Letter, we provide the structure-activity relationships, optimization of design, testing criteria, and human half-life data for a series of selective COX-2 inhibitors. During the course of our structure-based drug design efforts, we discovered two distinct binding modes within the COX-2 active site for differently substituted members of this class. The challenge of a undesirably long human half-life for the first clinical candidate 1t{sub 1/2} = 360 h was addressed by multiple strategies, leading to the discovery of 29b-(S) (SC-75416) with t{sub 1/2} = 34 h.
Multicriteria Analysis of Assembling Buildings from Steel Frame Structures
NASA Astrophysics Data System (ADS)
Miniotaite, Ruta
2017-10-01
Steel frame structures are often used in the construction of public and industrial buildings. They are used for: all types of slope roofs; walls of newly-built public and industrial buildings; load bearing structures; roofs of renovated buildings. The process of assembling buildings from steel frame structures should be analysed as an integrated process influenced by such factors as construction materials and machinery used, the qualification level of construction workers, complexity of work, available finance. It is necessary to find a rational technological design solution for assembling buildings from steel frame structures by conducting a multiple criteria analysis. The analysis provides a possibility to evaluate the engineering considerations and find unequivocal solutions. The rational alternative of a complex process of assembling buildings from steel frame structures was found through multiple criteria analysis and multiple criteria evaluation. In multiple criteria evaluation of technological solutions for assembling buildings from steel frame structures by pairwise comparison method the criteria by significance are distributed as follows: durability is the most important criterion in the evaluation of alternatives; the price (EUR/unit of measurement) of a part of assembly process; construction workers’ qualification level (category); mechanization level of a part of assembling process (%), and complexity of assembling work (in points) are less important criteria.
Multi-Criteria Optimization of Regulation in Metabolic Networks
Higuera, Clara; Villaverde, Alejandro F.; Banga, Julio R.; Ross, John; Morán, Federico
2012-01-01
Determining the regulation of metabolic networks at genome scale is a hard task. It has been hypothesized that biochemical pathways and metabolic networks might have undergone an evolutionary process of optimization with respect to several criteria over time. In this contribution, a multi-criteria approach has been used to optimize parameters for the allosteric regulation of enzymes in a model of a metabolic substrate-cycle. This has been carried out by calculating the Pareto set of optimal solutions according to two objectives: the proper direction of flux in a metabolic cycle and the energetic cost of applying the set of parameters. Different Pareto fronts have been calculated for eight different “environments” (specific time courses of end product concentrations). For each resulting front the so-called knee point is identified, which can be considered a preferred trade-off solution. Interestingly, the optimal control parameters corresponding to each of these points also lead to optimal behaviour in all the other environments. By calculating the average of the different parameter sets for the knee solutions more frequently found, a final and optimal consensus set of parameters can be obtained, which is an indication on the existence of a universal regulation mechanism for this system.The implications from such a universal regulatory switch are discussed in the framework of large metabolic networks. PMID:22848435
Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao
2014-01-01
The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating the remote sensing productions. However, there is no available prior information about a target variable. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for the variogram modeling to represent the variability, another for improving the spatial prediction to evaluate remote sensing productions. The reasonability of the optimized EHWSN is validated from representativeness, the variogram modeling and the spatial accuracy through using 15 types of simulation fields generated with the unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated by samples have less than 3% mean error relative to true variograms. Then, fields at multiple scales are predicted. As the scale increases, estimated fields have higher similarities to simulation fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when we do not know the characteristics of an optimized variables. PMID:25317762
To what extent can ecosystem services motivate protecting biodiversity?
Dee, Laura E; De Lara, Michel; Costello, Christopher; Gaines, Steven D
2017-08-01
Society increasingly focuses on managing nature for the services it provides people rather than for the existence of particular species. How much biodiversity protection would result from this modified focus? Although biodiversity contributes to ecosystem services, the details of which species are critical, and whether they will go functionally extinct in the future, are fraught with uncertainty. Explicitly considering this uncertainty, we develop an analytical framework to determine how much biodiversity protection would arise solely from optimising net value from an ecosystem service. Using stochastic dynamic programming, we find that protecting a threshold number of species is optimal, and uncertainty surrounding how biodiversity produces services makes it optimal to protect more species than are presumed critical. We define conditions under which the economically optimal protection strategy is to protect all species, no species, and cases in between. We show how the optimal number of species to protect depends upon different relationships between species and services, including considering multiple services. Our analysis provides simple criteria to evaluate when managing for particular ecosystem services could warrant protecting all species, given uncertainty. Evaluating this criterion with empirical estimates from different ecosystems suggests that optimising some services will be more likely to protect most species than others. © 2017 John Wiley & Sons Ltd/CNRS.
Techniques for designing rotorcraft control systems
NASA Technical Reports Server (NTRS)
Levine, William S.; Barlow, Jewel
1993-01-01
This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.
Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao
2014-10-14
The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating the remote sensing productions. However, there is no available prior information about a target variable. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for the variogram modeling to represent the variability, another for improving the spatial prediction to evaluate remote sensing productions. The reasonability of the optimized EHWSN is validated from representativeness, the variogram modeling and the spatial accuracy through using 15 types of simulation fields generated with the unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated by samples have less than 3% mean error relative to true variograms. Then, fields at multiple scales are predicted. As the scale increases, estimated fields have higher similarities to simulation fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when we do not know the characteristics of an optimized variables.
MO-D-BRC-04: Multiple-Criteria Optimization Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donaghue, J.
Treatment planning is a central part of radiation therapy, including delineation in tumor volumes and critical organs, setting treatment goals of prescription doses to the tumor targets and tolerance doses to the critical organs, and finally generation of treatment plans to meet the treatment goals. National groups like RTOG have led the effort to standardize treatment goals of the prescription doses to the tumor targets and tolerance doses to the critical organs based on accumulated knowledge from decades of abundant clinical trial experience. The challenge for each clinical department is how to achieve or surpass these set goals within themore » time constraints of clinical practice. Using fifteen testing cases from different treatment sites such as head and neck, prostate with and without pelvic lymph nodes, SBRT spine, we will present clinically utility of advanced planning tools, including knowledge based, automatic based, and multiple criteria based tools that are clinically implemented. The objectives of this session are: Understand differences among these three advanced planning tools Provide clinical assessments on the utility of the advanced planning tools Discuss clinical challenges of treatment planning with large variations in tumor volumes and their relationships with adjacent critical organs. Ping Xia received research grant from Philips. Jackie Wu received research grant from Varian; P. Xia, Research support by Philips and Varian; Q. Wu, NIH, Varian Medical.« less
Zhu, Tian; Cao, Shuyi; Su, Pin-Chih; Patel, Ram; Shah, Darshan; Chokshi, Heta B.; Szukala, Richard; Johnson, Michael E.; Hevener, Kirk E.
2013-01-01
A critical analysis of virtual screening results published between 2007 and 2011 was performed. The activity of reported hit compounds from over 400 studies was compared to their hit identification criteria. Hit rates and ligand efficiencies were calculated to assist in these analyses and the results were compared with factors such as the size of the virtual library and the number of compounds tested. A series of promiscuity, drug-like, and ADMET filters were applied to the reported hits to assess the quality of compounds reported and a careful analysis of a subset of the studies which presented hit optimization was performed. This data allowed us to make several practical recommendations with respect to selection of compounds for experimental testing, defining hit identification criteria, and general virtual screening hit criteria to allow for realistic hit optimization. A key recommendation is the use of size-targeted ligand efficiency values as hit identification criteria. PMID:23688234
Optimizing the construction of devices to control inaccesible surfaces - case study
NASA Astrophysics Data System (ADS)
Niţu, E. L.; Costea, A.; Iordache, M. D.; Rizea, A. D.; Babă, Al
2017-10-01
The modern concept for the evolution of manufacturing systems requires multi-criteria optimization of technological processes and equipments, prioritizing associated criteria according to their importance. Technological preparation of the manufacturing can be developed, depending on the volume of production, to the limit of favourable economical effects related to the recovery of the costs for the design and execution of the technological equipment. Devices, as subsystems of the technological system, in the general context of modernization and diversification of machines, tools, semi-finished products and drives, are made in a multitude of constructive variants, which in many cases do not allow their identification, study and improvement. This paper presents a case study in which the multi-criteria analysis of some structures, based on a general optimization method, of novelty character, is used in order to determine the optimal construction variant of a control device. The rational construction of the control device confirms that the optimization method and the proposed calculation methods are correct and determine a different system configuration, new features and functions, and a specific method of working to control inaccessible surfaces.
WU, LI-TZY; WOODY, GEORGE E.; YANG, CHONGMING; PAN, JENG-JONG; REEVE, BRYCE B.; BLAZER, DAN G.
2012-01-01
While item response theory (IRT) research shows a latent severity trait underlying response patterns of substance abuse and dependence symptoms, little is known about IRT-based severity estimates in relation to clinically relevant measures. In response to increased prevalences of marijuana-related treatment admissions, an elevated level of marijuana potency, and the debate on medical marijuana use, we applied dimensional approaches to understand IRT-based severity estimates for marijuana use disorders (MUDs) and their correlates while simultaneously considering gender- and race/ethnicity-related differential item functioning (DIF). Using adult data from the 2008 National Survey on Drug Use and Health (N=37,897), DSM-IV criteria for MUDs among past-year marijuana users were examined by IRT, logistic regression, and multiple indicators–multiple causes (MIMIC) approaches. Among 6,917 marijuana users, 15% met criteria for a MUD; another 24% exhibited subthreshold dependence. Abuse criteria were highly correlated with dependence criteria (correlation=0.90), indicating unidimensionality; item information curves revealed redundancy in multiple criteria. MIMIC analyses showed that MUD criteria were positively associated with weekly marijuana use, early marijuana use, other substance use disorders, substance abuse treatment, and serious psychological distress. African Americans and Hispanics showed higher levels of MUDs than whites, even after adjusting for race/ethnicity-related DIF. The redundancy in multiple criteria suggests an opportunity to improve efficiency in measuring symptom-level manifestations by removing low-informative criteria. Elevated rates of MUDs among African Americans and Hispanics require research to elucidate risk factors and improve assessments of MUDs for different racial/ethnic groups. PMID:22351489
NASA Astrophysics Data System (ADS)
Hendrickson, Thomas P.; Kavvada, Olga; Shah, Nihar; Sathre, Roger; Scown, Corinne D.
2015-01-01
Plug-in electric vehicle (PEV) use in the United States (US) has doubled in recent years and is projected to continue increasing rapidly. This is especially true in California, which makes up nearly one-third of the current US PEV market. Planning and constructing the necessary infrastructure to support this projected increase requires insight into the optimal strategies for PEV battery recycling. Utilizing life-cycle perspectives in evaluating these supply chain networks is essential in fully understanding the environmental consequences of this infrastructure expansion. This study combined life-cycle assessment and geographic information systems (GIS) to analyze the energy, greenhouse gas (GHG), water use, and criteria air pollutant implications of end-of-life infrastructure networks for lithium-ion batteries (LIBs) in California. Multiple end-of-life scenarios were assessed, including hydrometallurgical and pyrometallurgical recycling processes. Using economic and environmental criteria, GIS modeling revealed optimal locations for battery dismantling and recycling facilities for in-state and out-of-state recycling scenarios. Results show that economic return on investment is likely to diminish if more than two in-state dismantling facilities are constructed. Using rail as well as truck transportation can substantially reduce transportation-related GHG emissions (23-45%) for both in-state and out-of-state recycling scenarios. The results revealed that material recovery from pyrometallurgy can offset environmental burdens associated with LIB production, namely a 6-56% reduction in primary energy demand and 23% reduction in GHG emissions, when compared to virgin production. Incorporating human health damages from air emissions into the model indicated that Los Angeles and Kern Counties are most at risk in the infrastructure scale-up for in-state recycling due to their population density and proximity to the optimal location.
Graph drawing using tabu search coupled with path relinking.
Dib, Fadi K; Rodgers, Peter
2018-01-01
Graph drawing, or the automatic layout of graphs, is a challenging problem. There are several search based methods for graph drawing which are based on optimizing an objective function which is formed from a weighted sum of multiple criteria. In this paper, we propose a new neighbourhood search method which uses a tabu search coupled with path relinking to optimize such objective functions for general graph layouts with undirected straight lines. To our knowledge, before our work, neither of these methods have been previously used in general multi-criteria graph drawing. Tabu search uses a memory list to speed up searching by avoiding previously tested solutions, while the path relinking method generates new solutions by exploring paths that connect high quality solutions. We use path relinking periodically within the tabu search procedure to speed up the identification of good solutions. We have evaluated our new method against the commonly used neighbourhood search optimization techniques: hill climbing and simulated annealing. Our evaluation examines the quality of the graph layout (objective function's value) and the speed of layout in terms of the number of evaluated solutions required to draw a graph. We also examine the relative scalability of each method. Our experimental results were applied to both random graphs and a real-world dataset. We show that our method outperforms both hill climbing and simulated annealing by producing a better layout in a lower number of evaluated solutions. In addition, we demonstrate that our method has greater scalability as it can layout larger graphs than the state-of-the-art neighbourhood search methods. Finally, we show that similar results can be produced in a real world setting by testing our method against a standard public graph dataset.
Graph drawing using tabu search coupled with path relinking
Rodgers, Peter
2018-01-01
Graph drawing, or the automatic layout of graphs, is a challenging problem. There are several search based methods for graph drawing which are based on optimizing an objective function which is formed from a weighted sum of multiple criteria. In this paper, we propose a new neighbourhood search method which uses a tabu search coupled with path relinking to optimize such objective functions for general graph layouts with undirected straight lines. To our knowledge, before our work, neither of these methods have been previously used in general multi-criteria graph drawing. Tabu search uses a memory list to speed up searching by avoiding previously tested solutions, while the path relinking method generates new solutions by exploring paths that connect high quality solutions. We use path relinking periodically within the tabu search procedure to speed up the identification of good solutions. We have evaluated our new method against the commonly used neighbourhood search optimization techniques: hill climbing and simulated annealing. Our evaluation examines the quality of the graph layout (objective function’s value) and the speed of layout in terms of the number of evaluated solutions required to draw a graph. We also examine the relative scalability of each method. Our experimental results were applied to both random graphs and a real-world dataset. We show that our method outperforms both hill climbing and simulated annealing by producing a better layout in a lower number of evaluated solutions. In addition, we demonstrate that our method has greater scalability as it can layout larger graphs than the state-of-the-art neighbourhood search methods. Finally, we show that similar results can be produced in a real world setting by testing our method against a standard public graph dataset. PMID:29746576
Genetic algorithms for multicriteria shape optimization of induction furnace
NASA Astrophysics Data System (ADS)
Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo
2012-09-01
In this contribution we deal with a multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace in such a way, that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find set of partially optimal designs, so called Pareto front. We compare two different approaches to the optimization, one using nonlinear conjugate gradient method and second using variation of genetic algorithm. As can be seen from the numerical results, genetic algorithm seems to be the right choice for this problem. Solution of direct problem (coupled problem consisting of magnetic and heat field) is done using our own code Agros2D. It uses finite elements of higher order leading to fast and accurate solution of relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare parametric model of the furnace and simply incorporate various types of optimization algorithms.
A generalized concept of power helped to choose optimal endpoints in clinical trials.
Borm, George F; van der Wilt, Gert J; Kremer, Jan A M; Zielhuis, Gerhard A
2007-04-01
A clinical trial may have multiple objectives. Sometimes the results for several parameters may need to be significant or meet certain other criteria. In such cases, it is important to evaluate the probability that all these objectives will be met, rather than the probability that each will be met. The purpose of this article is to introduce a definition of power that is tailored to handle this situation and that is helpful for the design of such trials. We introduce a generalized concept of power. It can handle complex situations, for example, in which there is a logical combination of partial objectives. These may be formulated not only in terms of statistical tests and of confidence intervals, but also in nonstatistical terms, such as "selecting the optimal by dose." The power of a trial was calculated for various objectives and combinations of objectives. The generalized concept of power may lead to power calculations that closely match the objectives of the trial and contribute to choosing more efficient endpoints and designs.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
NASA Astrophysics Data System (ADS)
Strunz, Richard; Herrmann, Jeffrey W.
2011-12-01
The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agency alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as example to provide the quantitative answer to the claim that a reduced thrust scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures such as good static-dynamic performance specifications and the smooth process of control. A model of nonlinear thermodynamic laws between numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses such as small overshoot, fast settling time, and less rise time and steady state error. Besides, it can be applied to tuning the system with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results implicate that it is a quite effective and promising tuning method using multi-objective optimization algorithms in the complex greenhouse production. PMID:22163927
Quantum cryptography: Security criteria reexamined
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaszlikowski, Dagomir; Liang, Y.C.; Englert, Berthold-Georg
2004-09-01
We find that the generally accepted security criteria are flawed for a whole class of protocols for quantum cryptography. This is so because a standard assumption of the security analysis, namely that the so-called square-root measurement is optimal for eavesdropping purposes, is not true in general. There are rather large parameter regimes in which the optimal measurement extracts substantially more information than the square-root measurement.
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks.
Dilkina, Bistra; Houtman, Rachel; Gomes, Carla P; Montgomery, Claire A; McKelvey, Kevin S; Kendall, Katherine; Graves, Tabitha A; Bernstein, Richard; Schwartz, Michael K
2017-02-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a network of protected areas. We show that designing corridors for single species based on purely ecological criteria leads to extremely expensive linkages that are suboptimal for multispecies connectivity objectives. Similarly, acquiring the least-expensive linkages leads to ecologically poor solutions. We developed algorithms for optimizing corridors for multispecies use given a specific budget. We applied our approach in western Montana to demonstrate how the solutions may be used to evaluate trade-offs in connectivity for 2 species with different habitat requirements, different core areas, and different conservation values under different budgets. We evaluated corridors that were optimal for each species individually and for both species jointly. Incorporating a budget constraint and jointly optimizing for both species resulted in corridors that were close to the individual species movement-potential optima but with substantial cost savings. Our approach produced corridors that were within 14% and 11% of the best possible corridor connectivity for grizzly bears (Ursus arctos) and wolverines (Gulo gulo), respectively, and saved 75% of the cost. Similarly, joint optimization under a combined budget resulted in improved connectivity for both species relative to splitting the budget in 2 to optimize for each species individually. Our results demonstrate economies of scale and complementarities conservation planners can achieve by optimizing corridor designs for financial costs and for multiple species connectivity jointly. We believe that our approach will facilitate corridor conservation by reducing acquisition costs and by allowing derived corridors to more closely reflect conservation priorities. © 2016 Society for Conservation Biology.
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks
Dilkina, Bistra; Houtman, Rachel; Gomes, Carla P.; Montgomery, Claire A.; McKelvey, Kevin; Kendall, Katherine; Graves, Tabitha A.; Bernstein, Richard; Schwartz, Michael K.
2017-01-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a network of protected areas. We show that designing corridors for single species based on purely ecological criteria leads to extremely expensive linkages that are suboptimal for multispecies connectivity objectives. Similarly, acquiring the least-expensive linkages leads to ecologically poor solutions. We developed algorithms for optimizing corridors for multispecies use given a specific budget. We applied our approach in western Montana to demonstrate how the solutions may be used to evaluate trade-offs in connectivity for 2 species with different habitat requirements, different core areas, and different conservation values under different budgets. We evaluated corridors that were optimal for each species individually and for both species jointly. Incorporating a budget constraint and jointly optimizing for both species resulted in corridors that were close to the individual species movement-potential optima but with substantial cost savings. Our approach produced corridors that were within 14% and 11% of the best possible corridor connectivity for grizzly bears (Ursus arctos) and wolverines (Gulo gulo), respectively, and saved 75% of the cost. Similarly, joint optimization under a combined budget resulted in improved connectivity for both species relative to splitting the budget in 2 to optimize for each species individually. Our results demonstrate economies of scale and complementarities conservation planners can achieve by optimizing corridor designs for financial costs and for multiple species connectivity jointly. We believe that our approach will facilitate corridor conservation by reducing acquisition costs and by allowing derived corridors to more closely reflect conservation priorities.
An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs
Mosaliganti, Kishore R.; Gelas, Arnaud; Megason, Sean G.
2013-01-01
In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradient and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo. PMID:24501592
An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs.
Mosaliganti, Kishore R; Gelas, Arnaud; Megason, Sean G
2013-01-01
In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradient and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo.
A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan
2009-01-01
Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criteria for a good path is determined by overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.
Solving multi-objective optimization problems in conservation with the reference point method
Dujardin, Yann; Chadès, Iadine
2018-01-01
Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
Hydrogen Energy Storage and Power-to-Gas: Establishing Criteria for Successful Business Cases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichman, Joshua; Melaina, Marc
As the electric sector evolves and increasing amounts of variable generation are installed on the system, there are greater needs for system flexibility, sufficient capacity and greater concern for overgeneration. As a result there is growing interest in exploring the role of energy storage and demand response technologies to support grid needs. Hydrogen is a versatile feedstock that can be used in a variety of applications including chemical and industrial processes, as well as a transportation fuel and heating fuel. Traditionally, hydrogen technologies focus on providing services to a single sector; however, participating in multiple sectors has the potential tomore » provide benefits to each sector and increase the revenue for hydrogen technologies. The goal of this work is to explore promising system configurations for hydrogen systems and the conditions that will make for successful business cases in a renewable, low-carbon future. Current electricity market data, electric and gas infrastructure data and credit and incentive information are used to perform a techno-economic analysis to identify promising criteria and locations for successful hydrogen energy storage and power-to-gas projects. Infrastructure data will be assessed using geographic information system applications. An operation optimization model is used to co-optimizes participation in energy and ancillary service markets as well as the sale of hydrogen. From previous work we recognize the great opportunity that energy storage and power-to-gas but there is a lack of information about the economic favorability of such systems. This work explores criteria for selecting locations and compares the system cost and potential revenue to establish competitiveness for a variety of equipment configurations. Hydrogen technologies offer unique system flexibility that can enable interactions between multiple energy sectors including electric, transport, heating fuel and industrial. Previous research established that hydrogen technologies, and in particular electrolyzers, can respond fast enough and for sufficient duration to participate in electricity markets. This work recognizes that participation in electricity markets and integration with the gas system can enhance the revenue streams available for hydrogen storage systems and quantifies the economic competitiveness and of these systems. A few of the key results include 1) the most valuable revenue stream for hydrogen systems is to sell the produced hydrogen, 2) participation in both energy and ancillary service markets yields the greatest revenue and 3) electrolyzers acting as demand response devices are particularly favorable.« less
NASA Astrophysics Data System (ADS)
MacDonald, Garrick Richard
To limit biodiversity loss caused by human activity, conservation planning must protect biodiversity while considering socio-economic cost criteria. This research aimed to determine the effects of socio-economic criteria and spatial configurations on the development of conservation area networks (CANs) for three species with different distribution patterns, while simultaneously attempting to address the uncertainty and sensitivity of the CANs produced by ConsNet. The socio-economic factors and spatial criteria included the cost of land, population density, agricultural output value, area, average cluster area, number of clusters, shape, and perimeter. Three sensitive mammal species with different distribution patterns were selected: the Bobcat, the Ringtail, and a custom-created mammal distribution. Forty problems, and the corresponding number of CANs, were formulated and computed by running each predicted-presence species model with and without the four different socio-economic threshold groups at two different resolutions. Thirty-two percent less area was conserved when multiple socio-economic constraints and spatial configurations were considered than in CANs that did not consider them. Without socio-economic costs, ConsNet's ALL_CELLS heuristic solution was the highest-ranking CAN. After considering multiple socio-economic costs, the top-ranking CAN was no longer the ALL_CELLS heuristic solution, but a spatially different meta-heuristic solution. The effects of multiple constraints and objectives on the design of CANs for species with different distribution patterns did not vary significantly across the criteria. The CANs produced by ConsNet appeared to demonstrate some uncertainty surrounding particular criteria, but did not demonstrate substantial uncertainty across all criteria used to rank the CANs. Similarly, the range of socio-economic criteria thresholds did not have a substantial impact. ConsNet was well suited to the research project; however, it did exhibit a few limitations. Both the advantages and disadvantages of ConsNet should be considered before using it for future conservation planning projects. The research project is an example of a large-data scenario undertaken with a multiple criteria decision analysis (MCDA) approach.
A study of natural circulation in the evaporator of a horizontal-tube heat recovery steam generator
NASA Astrophysics Data System (ADS)
Roslyakov, P. V.; Pleshanov, K. A.; Sterkhov, K. V.
2014-07-01
Results obtained from investigations of stable natural circulation in an intricate circulation circuit with a horizontal layout of the evaporating-surface tubes having a negative useful head are presented. The possibility of shifting from multiple forced circulation, organized by means of a circulation pump, to natural circulation in a vertical heat recovery steam generator is estimated. Criteria characterizing the performance reliability and efficiency of a horizontal evaporator with negative useful head are proposed. The influence of various design solutions on circulation robustness is considered. With due regard for the optimal parameters, the most efficient and least costly methods are proposed for achieving more stable circulation in a vertical heat recovery steam generator when a shift is made from multiple forced to natural circulation. A procedure for calculating the circulation parameters and an algorithm for checking evaporator performance reliability are developed, and recommendations are given for the design of the heat recovery steam generator, the nonheated parts of the natural circulation circuit, and the evaporating surface.
Embedding Human Expert Cognition Into Autonomous UAS Trajectory Planning.
Narayan, Pritesh; Meyer, Patrick; Campbell, Duncan
2013-04-01
This paper presents a new approach for the inclusion of human expert cognition into autonomous trajectory planning for unmanned aerial systems (UASs) operating in low-altitude environments. During typical UAS operations, multiple objectives may exist; therefore, the use of multicriteria decision aid techniques can potentially allow for convergence to trajectory solutions which better reflect overall mission requirements. In that context, additive multiattribute value theory has been applied to optimize trajectories with respect to multiple objectives. A graphical user interface was developed to allow for knowledge capture from a human decision maker (HDM) through simulated decision scenarios. The expert decision data gathered are converted into value functions and corresponding criteria weightings using utility additive theory. The inclusion of preferences elicited from HDM data within an automated decision system allows for the generation of trajectories which more closely represent the candidate HDM decision preferences. This approach has been demonstrated in this paper through simulation using a fixed-wing UAS operating in low-altitude environments.
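A minimal sketch of additive multiattribute value scoring of candidate trajectories in the spirit of the abstract; the criteria names, value-function ranges, and weights below are illustrative placeholders, not the UTA value functions elicited in the paper.

```python
import numpy as np

def value(attr, worst, best):
    """Linear single-attribute value function mapping [worst, best] -> [0, 1]."""
    return np.clip((attr - worst) / (best - worst), 0.0, 1.0)

def score(traj, weights):
    """Additive multiattribute value: V(a) = sum_i w_i * v_i(a)."""
    v = np.array([
        value(traj["path_length_km"], worst=20.0, best=5.0),    # shorter is better
        value(traj["min_clearance_m"], worst=10.0, best=100.0), # higher is better
        value(traj["energy_kwh"], worst=8.0, best=2.0),         # less is better
    ])
    return float(weights @ v)

weights = np.array([0.5, 0.3, 0.2])   # stand-in for weights elicited from the HDM
candidates = [{"path_length_km": 9.0, "min_clearance_m": 60.0, "energy_kwh": 3.5},
              {"path_length_km": 7.0, "min_clearance_m": 25.0, "energy_kwh": 3.0}]
best = max(candidates, key=lambda t: score(t, weights))
print(best)
```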
Deep brain stimulation in uncommon tremor disorders: indications, targets, and programming.
Artusi, Carlo Alberto; Farooqi, Ashar; Romagnolo, Alberto; Marsili, Luca; Balestrino, Roberta; Sokol, Leonard L; Wang, Lily L; Zibetti, Maurizio; Duker, Andrew P; Mandybur, George T; Lopiano, Leonardo; Merola, Aristide
2018-03-06
In uncommon tremor disorders, clinical efficacy and optimal anatomical targets for deep brain stimulation (DBS) remain inadequately studied and insufficiently quantified. We performed a systematic review of PubMed.gov and ClinicalTrials.gov. Relevant articles were identified using the following keywords: "tremor", "Holmes tremor", "orthostatic tremor", "multiple sclerosis", "multiple sclerosis tremor", "neuropathy", "neuropathic tremor", "fragile X-associated tremor/ataxia syndrome", and "fragile X." We identified a total of 263 cases treated with DBS for uncommon tremor disorders. Of these, 44 had Holmes tremor (HT), 18 orthostatic tremor (OT), 177 multiple sclerosis (MS)-associated tremor, 14 neuropathy-associated tremor, and 10 fragile X-associated tremor/ataxia syndrome (FXTAS). DBS resulted in favorable, albeit partial, clinical improvements in HT cases receiving Vim-DBS alone or in combination with additional targets. A sustained improvement was reported in OT cases treated with bilateral Vim-DBS, while the two cases treated with unilateral Vim-DBS demonstrated only a transient effect. MS-associated tremor responded to dual-target Vim-/VO-DBS, but the inability to account for the progression of MS-associated disability impeded the assessment of its long-term clinical efficacy. Neuropathy-associated tremor substantially improved with Vim-DBS. In FXTAS patients, while Vim-DBS was effective in improving tremor, equivocal results were observed in those with ataxia. DBS of select targets may represent an effective therapeutic strategy for uncommon tremor disorders, although the level of evidence remains preliminary, based on single cases or limited case series. An international registry is, therefore, warranted to clarify selection criteria, long-term results, and optimal surgical targets.
NASA Astrophysics Data System (ADS)
Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.
2008-03-01
We investigated the role and weight of the parameters involved in intensity-modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a proper value of the weight factor, while still guaranteeing planning target volume coverage, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax with fixed a = 1. Most importantly, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we compared plans obtained with the gEUD-based optimization against plans based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose, V40 and V50 values to the rectal wall decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Beyond OAR sparing, we underline that the OAR optimization time was halved with the implementation of the gEUD-based cost function. Using NTCP models, we highlighted differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
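For reference, the gEUD on which this optimization is built can be computed in a few lines. The one-sided quadratic penalty below is an assumed stand-in for the planning system's actual cost term, and the voxel doses are synthetic.

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose over voxel doses (Niemierko form):
    gEUD = (mean(d_i^a))^(1/a). a = 1 gives the mean dose; large a -> max dose."""
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

def geud_penalty(doses, geud_max, weight, a=1.0):
    """One-sided quadratic penalty of the kind used in gEUD-based cost functions;
    the quadratic form is an illustrative assumption."""
    return weight * max(0.0, gEUD(doses, a) - geud_max) ** 2

rect_doses = np.random.default_rng(0).uniform(10, 60, size=5000)  # Gy, synthetic
print(gEUD(rect_doses, a=1.0), geud_penalty(rect_doses, geud_max=30.0, weight=5.0))
```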
Criteria for quantitative and qualitative data integration: mixed-methods research methodology.
Lee, Seonah; Smith, Carrol A M
2012-05-01
Many studies have emphasized the need for and importance of a mixed-methods approach to the evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues for; thus, we require criteria that provide the right direction for integrating quantitative and qualitative data. The first author used a set of criteria organized from a literature search on integration of multiple data sets in mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data from a previous mixed-methods study, integration of quantitative and qualitative data was achieved in a systematic manner, and it helped us obtain a better-organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions is so time-demanding that, in practice, the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
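The two criteria can be illustrated on an ordinary regression model (a simple stand-in for the kriging setting): the D-criterion is the determinant of the information matrix, and the G-criterion is the maximum scaled prediction variance over a candidate grid. The quadratic model and design points below are illustrative.

```python
import numpy as np

def d_criterion(X):
    """D-criterion: determinant of the information matrix X'X (larger is better)."""
    return np.linalg.det(X.T @ X)

def g_criterion(X, X_grid):
    """G-criterion: maximum scaled prediction variance x' (X'X)^{-1} x over the
    candidate region (smaller is better)."""
    Minv = np.linalg.inv(X.T @ X)
    return float(np.max(np.einsum("ij,jk,ik->i", X_grid, Minv, X_grid)))

# Quadratic regression on [-1, 1]: design rows are (1, s, s^2).
model = lambda s: np.column_stack([np.ones_like(s), s, s ** 2])
grid = model(np.linspace(-1, 1, 201))
design_a = model(np.array([-1.0, 0.0, 1.0]))   # classic D-optimal support points
design_b = model(np.array([-0.7, 0.0, 0.7]))   # an interior alternative
print(d_criterion(design_a), g_criterion(design_a, grid))
print(d_criterion(design_b), g_criterion(design_b, grid))
```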
Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We construct an optimal design to simultaneously estimate three features of common interest in a dose-finding trial, with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for the other objectives. There are optimal designs for dual objectives in the literature, but we were unable to find optimal designs for three or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a problem with three or more objectives. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology, but our approach is applicable for finding multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification of the nominal parameter values and to variation in the optimality criterion. We also provide computer code for generating tailor-made multiple-objective optimal designs. PMID:26565557
NASA Astrophysics Data System (ADS)
Boughari, Yamina
New methodologies have been developed to optimize the integration, testing and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft's model was obtained from the data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of aircraft systems, optimizations of two different flight control designs were performed: 1) the Linear Quadratic Regulation and Proportional Integral controllers were optimized using the Differential Evolution algorithm with the level 1 handling qualities as the objective function; the results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the H-infinity control method was applied to the stability and control augmentation systems. To minimize the time required for flight control design and its validation, the controller design was optimized using the Differential Evolution (DE) and Genetic Algorithm (GA) methods; the DE algorithm proved to be more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for flight controller assessment. MATLAB software was used to validate the different optimization algorithms' results. Research platforms for the aircraft's linear and nonlinear models were developed and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria. Nonlinear simulations of the maneuver criteria were also investigated during this research to assess the Cessna Citation X flight controller clearance and, therefore, its anticipated certification.
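A minimal sketch of gain tuning by differential evolution in the spirit described above, using SciPy; the second-order closed-loop model and the level-1-style targets are illustrative assumptions, not the thesis's Citation X model or its handling-quality criteria.

```python
import numpy as np
from scipy.optimize import differential_evolution

def cost(gains):
    kp, kd = gains
    # Illustrative second-order closed loop: natural frequency and damping as
    # functions of the two gains (a toy plant, not the Citation X model).
    w0, zeta0 = 2.0, 0.3
    wn = np.sqrt(w0 ** 2 + kp)
    zeta = (2 * zeta0 * w0 + kd) / (2 * wn)
    # Penalize distance from assumed level-1-style targets (zeta ~ 0.7, wn ~ 3 rad/s).
    return (zeta - 0.7) ** 2 + 0.1 * (wn - 3.0) ** 2

res = differential_evolution(cost, bounds=[(0.0, 20.0), (0.0, 10.0)], seed=1)
print(res.x.round(3), res.fun)
```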
NASA Astrophysics Data System (ADS)
Gorsevski, Pece V.; Jankowski, Piotr
2010-08-01
The Kalman recursive algorithm has been very widely used for integrating navigation sensor data to achieve optimal system performance. This paper explores the use of the Kalman filter to extend the aggregation of spatial multi-criteria evaluation (MCE) and to find optimal solutions with respect to the decision strategy space in which a possible decision rule falls. The approach was tested in a case study in the Clearwater National Forest in central Idaho, using existing landslide datasets from roaded and roadless areas and terrain attributes. In this approach, fuzzy membership functions were used to standardize terrain attributes and develop criteria, while the aggregation of the criteria was achieved by use of a Kalman filter. The approach presented here offers advantages over classical MCE theory because the final solution includes both the aggregated solution and the areas of uncertainty expressed in terms of standard deviation. A comparison of this methodology with similar approaches suggested that it is promising for predicting landslide susceptibility and for further application as a spatial decision support system.
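One way to read the aggregation step is as a scalar Kalman update applied cell by cell, with each standardized criterion acting as a noisy measurement of a latent suitability score; the sketch below is that interpretation, not the paper's exact filter, and the layers and measurement variances are synthetic.

```python
import numpy as np

def kalman_aggregate(criteria, meas_vars, prior_mean=0.5, prior_var=1.0):
    """Sequentially fuse standardized criteria layers (values in [0, 1]) as noisy
    measurements of one latent suitability score; returns mean and std maps."""
    mean = np.full_like(criteria[0], prior_mean, dtype=float)
    var = np.full_like(criteria[0], prior_var, dtype=float)
    for z, r in zip(criteria, meas_vars):      # one update per criterion layer
        k = var / (var + r)                    # Kalman gain
        mean = mean + k * (z - mean)           # posterior mean
        var = (1.0 - k) * var                  # posterior variance
    return mean, np.sqrt(var)

rng = np.random.default_rng(0)
layers = [rng.uniform(0, 1, (4, 4)) for _ in range(3)]  # fuzzy-standardized criteria
mean, std = kalman_aggregate(layers, meas_vars=[0.05, 0.1, 0.2])
print(mean.round(2), std.round(2), sep="\n")            # aggregate + uncertainty
```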
Deist, T M; Gorissen, B L
2016-02-07
High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
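A toy version of the approach: the dose is a single matrix multiplication D @ t, which is what makes each objective evaluation cheap, and simulated annealing perturbs one dwell time per iteration. The matrices, prescription levels, constraint, and cooling schedule below are synthetic assumptions, not the paper's clinical model.

```python
import numpy as np

rng = np.random.default_rng(42)
D_tumor = rng.uniform(0.0, 2.0, (500, 30))  # synthetic dose-rate matrix (voxels x dwells)
D_oar = rng.uniform(0.0, 1.0, (200, 30))    # same for an organ at risk

def objective(t):
    """Tumor coverage (fraction of voxels at or above prescription) minus a
    penalty when the OAR dose-volume constraint is violated."""
    coverage = np.mean(D_tumor @ t >= 15.0)
    violation = max(0.0, np.mean(D_oar @ t >= 10.0) - 0.10)  # <= 10% of OAR hot
    return coverage - 50.0 * violation

cur = np.full(30, 1.0)                      # dwell times (s): decision variables
f_cur = objective(cur)
best, f_best, temp = cur.copy(), f_cur, 1.0
for _ in range(20000):                      # simulated annealing main loop
    cand = cur.copy()
    j = rng.integers(cand.size)             # neighbor: perturb one dwell time
    cand[j] = max(0.0, cand[j] + rng.normal(0.0, 0.5))
    f = objective(cand)
    if f > f_cur or rng.random() < np.exp((f - f_cur) / temp):
        cur, f_cur = cand, f                # Metropolis acceptance (maximizing)
        if f_cur > f_best:
            best, f_best = cur.copy(), f_cur
    temp *= 0.9995                          # geometric cooling schedule
print(f_best)
```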
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... SOCIAL SECURITY ADMINISTRATION 20 CFR Part 404 [Docket No. SSA-2009-0039] RIN 0960-AH04 Revised Medical Criteria for Evaluating Congenital Disorders That Affect Multiple Body Systems AGENCY: Social... in adults and children under titles II and XVI of the Social Security Act (Act). The revisions...
Myth 6: Cosmetic Use of Multiple Selection Criteria
ERIC Educational Resources Information Center
Friedman-Nimz, Reva
2009-01-01
Twenty-five years ago, armed with the courage of her convictions and a respectable collection of empirical evidence, the author articulated what she considered to be a compelling argument against the cosmetic use of multiple selection criteria as a guiding principle for identifying children and youth with high potential. To assess the current…
NASA Technical Reports Server (NTRS)
Bair, E. K.
1986-01-01
The System Trades Study and Design Methodology Plan is used to conduct trade studies to define the combination of Space Shuttle Main Engine features that will optimize candidate engine configurations. This is accomplished by using vehicle sensitivities and engine parametric data to establish engine chamber pressure and area ratio design points for candidate engine configurations. Engineering analyses are to be conducted to refine and optimize the candidate configurations at their design points. The optimized engine data and characteristics are then evaluated and compared against other candidates being considered. The Evaluation Criteria Plan is then used to compare and rank the optimized engine configurations on the basis of cost.
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic systems (OES) involves selecting technical solutions that, under the given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an OES for any purpose, and its most important capability, is threshold detection: the required functional quality of the device or system is achieved on the basis of this property. The design criteria and optimization methods must therefore be subordinated to the goal of better detectability. This generally reduces to the problem of optimally selecting the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system for detectability is the choice of circuits and components that provide the most effective selection of a target.
Improvements to direct quantitative analysis of multiple microRNAs facilitating faster analysis.
Ghasemi, Farhad; Wegman, David W; Kanoatov, Mirzo; Yang, Burton B; Liu, Stanley K; Yousef, George M; Krylov, Sergey N
2013-11-05
Studies suggest that patterns of deregulation in sets of microRNA (miRNA) can be used as cancer diagnostic and prognostic biomarkers. Establishing a "miRNA fingerprint"-based diagnostic technique requires a suitable miRNA quantitation method. The appropriate method must be direct, sensitive, capable of simultaneous analysis of multiple miRNAs, rapid, and robust. Direct quantitative analysis of multiple microRNAs (DQAMmiR) is a recently introduced capillary electrophoresis-based hybridization assay that satisfies most of these criteria. Previous implementations of the method suffered, however, from slow analysis time and required lengthy and stringent purification of hybridization probes. Here, we introduce a set of critical improvements to DQAMmiR that address these technical limitations. First, we have devised an efficient purification procedure that achieves the required purity of the hybridization probe in a fast and simple fashion. Second, we have optimized the concentrations of the DNA probe to decrease the hybridization time to 10 min. Lastly, we have demonstrated that the increased probe concentrations and decreased incubation time removed the need for masking DNA, further simplifying the method and increasing its robustness. The presented improvements bring DQAMmiR closer to use in a clinical setting.
Sova, Cassandra; Feuling, Mary Beth; Baumler, Megan; Gleason, Linda; Tam, Jonathan S; Zafra, Heidi; Goday, Praveen S
2013-12-01
Food allergies affect up to 8% of American children. The current recommended treatment for food allergies is strict elimination of the allergens from the diet. Dietary elimination of nutrient-dense foods may result in inadequate nutrient intake and impaired growth. The purpose of this review was to critically analyze available research on the effect of an elimination diet on nutrient intake and growth in children with multiple food allergies. A systematic review of the literature was conducted and a workgroup was established to critically analyze each relevant article. The findings were summarized and a conclusion was generated. Six studies were analyzed. One study found that children with food allergies are more likely to be malnourished than children without food allergies. Three studies found that children with multiple food allergies were shorter than children with 1 food allergy. Four studies assessed nutrient intake of children with multiple food allergies, but the inclusion and comparison criteria were different in each of the studies and the findings were conflicting. One study found that children with food allergies who did not receive nutrition counseling were more likely to have inadequate intake of calcium and vitamin D. Children with multiple food allergies have a higher risk of impaired growth and may have a higher risk of inadequate nutrient intake than children without food allergies. Until more research is available, we recommend monitoring of nutrition and growth of children with multiple food allergies to prevent possible nutrient deficiencies and to optimize growth.
Kyriacou, Andreas; Li Kam Wa, Matthew E; Pabari, Punam A; Unsworth, Beth; Baruah, Resham; Willson, Keith; Peters, Nicholas S; Kanagaratnam, Prapa; Hughes, Alun D; Mayet, Jamil; Whinnett, Zachary I; Francis, Darrel P
2013-08-10
In atrial fibrillation (AF), VV optimization of biventricular pacemakers can be examined in isolation. We used this approach to evaluate internal validity of three VV optimization methods by three criteria. Twenty patients (16 men, age 75 ± 7) in AF were optimized, at two paced heart rates, by LVOT VTI (flow), non-invasive arterial pressure, and ECG (minimizing QRS duration). Each optimization method was evaluated for: singularity (unique peak of function), reproducibility of optimum, and biological plausibility of the distribution of optima. The reproducibility (standard deviation of the difference, SDD) of the optimal VV delay was 10 ms for pressure, versus 8 ms (p=ns) for QRS and 34 ms (p<0.01) for flow. Singularity of optimum was 85% for pressure, 63% for ECG and 45% for flow (χ² = 10.9, p<0.005). The distribution of pressure optima was biologically plausible, with 80% LV pre-excited (p=0.007). The distributions of ECG (55% LV pre-excitation) and flow (45% LV pre-excitation) optima were no different to random (p=ns). The pressure-derived optimal VV delay is unaffected by the paced rate: SDD between slow and fast heart rate is 9 ms, no different from the reproducibility SDD at both heart rates. Using non-invasive arterial pressure, VV delay optimization by parabolic fitting is achievable with good precision, satisfying all 3 criteria of internal validity. VV optimum is unaffected by heart rate. Neither QRS minimization nor LVOT VTI satisfy all validity criteria, and therefore seem weaker candidate modalities for VV optimization. AF, unlinking interventricular from atrioventricular delay, uniquely exposes resynchronization concepts to experimental scrutiny. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
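The parabolic-fitting step can be reproduced in a few lines: fit a quadratic to the pressure response across the tested VV delays and take the vertex. The data below are synthetic, with a true optimum placed at -20 ms for illustration.

```python
import numpy as np

vv = np.arange(-80, 81, 20)                  # tested VV delays (ms)
rng = np.random.default_rng(3)
sbp = -0.001 * (vv + 20) ** 2 + rng.normal(0, 0.3, vv.size)  # relative SBP change

a, b, c = np.polyfit(vv, sbp, 2)             # fit sbp ~ a*vv^2 + b*vv + c
vv_opt = -b / (2 * a)                        # vertex of the fitted parabola
print(round(vv_opt, 1))                      # near -20 ms (LV pre-excitation)
```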
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used method in motor imagery-based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of the eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. Besides, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
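For context, the conventional CSP baseline that the paper improves on can be written as a generalized eigendecomposition; this sketch shows the standard filters and log-variance features, not the proposed STFSCSP method, and the trial data are random placeholders.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """Conventional CSP: solve C1 w = lambda (C1 + C2) w; the eigenvector pairs
    at both eigenvalue extremes form the spatial filters.
    X1, X2: per-class trial arrays of shape (trials, channels, samples)."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]  # normalized covariances
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    evals, evecs = eigh(C1, C1 + C2)          # generalized eigendecomposition
    order = np.argsort(evals)                 # ascending eigenvalues
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return evecs[:, picks].T                  # (2*n_pairs, channels)

def log_var_features(W, X):
    """Standard CSP features: log of normalized variance of filtered signals."""
    Z = np.einsum("fc,tcs->tfs", W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
X1 = rng.normal(size=(30, 8, 256)); X2 = 1.5 * rng.normal(size=(30, 8, 256))
W = csp_filters(X1, X2)
print(log_var_features(W, X1).shape)          # (30, 4)
```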
Very High-Risk Localized Prostate Cancer: Outcomes Following Definitive Radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narang, Amol K.; Gergis, Carol; Robertson, Scott P.
Purpose: Existing definitions of high-risk prostate cancer consist of men who experience significant heterogeneity in outcomes. As such, criteria that identify a subpopulation of National Comprehensive Cancer Network (NCCN) high-risk prostate cancer patients who are at very high risk (VHR) for poor survival outcomes following prostatectomy were recently developed at our institution and include the presence of any of the following disease characteristics: multiple NCCN high-risk factors, primary Gleason pattern 5 disease and/or ≥5 biopsy cores with Gleason sums of 8 to 10. Whether these criteria also apply to men undergoing definitive radiation is unclear, as is the optimal treatment regimen in these patients. Methods and Materials: All men consecutively treated with definitive radiation by a single provider from 1993 to 2006 and who fulfilled criteria for NCCN high-risk disease were identified (n=288), including 99 patients (34%) with VHR disease. Multivariate-adjusted competing risk regression models were constructed to assess associations between the VHR definition and biochemical failure (BF), distant metastasis (DM), and prostate cancer–specific mortality (PCSM). Multivariate-adjusted Cox regression analysis assessed the association of the VHR definition with overall mortality (OM). Cumulative incidences of failure endpoints were compared between VHR men and other NCCN high-risk men. Results: Men with VHR disease compared to other NCCN high-risk men experienced a higher 10-year incidence of BF (54.0% vs 35.4%, respectively, P<.001), DM (34.9% vs 13.4%, respectively, P<.001), PCSM (18.5% vs 5.9%, respectively, P<.001), and OM (36.4% vs 27.0%, respectively, P=.04). VHR men with a detectable prostate-specific antigen (PSA) concentration at the end of radiation (EOR) remained at high risk of 10-year PCSM compared to VHR men with an undetectable EOR PSA (31.0% vs 13.7%, respectively, P=.05). Conclusions: NCCN high-risk prostate cancer patients who meet VHR criteria experience distinctly worse outcomes following definitive radiation and long-term androgen deprivation therapy, particularly if an EOR PSA is detectable. Optimal use of local therapies for VHR patients should be explored further, as should novel agents.
Integrating Multiple Criteria Evaluation and GIS in Ecotourism: a Review
NASA Astrophysics Data System (ADS)
Mohd, Z. H.; Ujang, U.
2016-09-01
The concept of 'Eco-tourism' has been increasingly heard in recent decades. Ecotourism is a form of environmentally responsible travel intended to appreciate natural experiences and cultures. Ecotourism should have a low impact on the environment and must contribute to the well-being of local residents. This article reviews the use of Multiple Criteria Evaluation (MCE) and Geographic Information Systems (GIS) in ecotourism. Multiple criteria evaluation is mostly used for land suitability analysis or to fulfill specific objectives based on the various attributes of the selected area. To support the process of environmental decision making, GIS is used to display and analyse the data through the Analytic Hierarchy Process (AHP). Integration of MCE and GIS tools is important for objectively determining the relative weights of the criteria used. With the MCE method, the conflict between recreation and conservation can be resolved, minimizing environmental and human impact. Most studies show that GIS-based AHP is a strong and effective multi-criteria evaluation approach for tourism planning, which can aid the development of the ecotourism industry.
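A small sketch of the AHP weighting step mentioned above: the criterion weights are the principal eigenvector of a pairwise comparison matrix. The criteria and judgments below are illustrative, not from any particular study.

```python
import numpy as np

def ahp_weights(P):
    """AHP priority weights from a pairwise comparison matrix P (P[i, j] says how
    strongly criterion i is preferred over j; P[j, i] = 1 / P[i, j]): the
    principal right eigenvector, normalized to sum to 1."""
    evals, evecs = np.linalg.eig(P)
    w = np.abs(evecs[:, np.argmax(evals.real)].real)
    w /= w.sum()
    ci = (np.max(evals.real) - len(P)) / (len(P) - 1)   # consistency index
    return w, ci

# Illustrative 3-criteria comparison: slope vs land cover vs distance to roads.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, ci = ahp_weights(P)
print(w.round(3), round(ci, 3))   # weights and a check that judgments are consistent
```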
Valuing hydrological alteration in Multi-Objective reservoir management
NASA Astrophysics Data System (ADS)
Bizzi, S.; Pianosi, F.; Soncini-Sessa, R.
2012-04-01
Water management through dams and reservoirs is necessary worldwide to support key human activities, ranging from hydropower production to water allocation for agricultural production and flood risk mitigation. Advances in multi-objective (MO) optimization techniques and ever-growing computing power make it possible to design reservoir operating policies that represent Pareto-optimal tradeoffs between the multiple interests analysed. While these advances are likely to enhance the performance of commonly targeted objectives (such as hydropower production or water supply), they risk strongly penalizing interests that are not directly (i.e., mathematically) optimized within the MO algorithm. Although alteration of the hydrological regime is a well-established cause of ecological degradation, and its evaluation and rehabilitation are commonly required by recent legislation (such as the Water Framework Directive in Europe), it is rarely embedded as an objective in MO planning of optimal releases from reservoirs. Moreover, even when it is explicitly considered, the criteria adopted for its evaluation are questioned and not widely trusted, undermining the possibility of real implementation of environmentally friendly policies. The main challenges in defining and assessing hydrological alteration are: how to define a reference state (referencing); how to define criteria upon which to build mathematical indicators of alteration (measuring); and how to aggregate the indicators into a single evaluation index that can be embedded in an MO optimization problem (valuing). This paper addresses these issues by: i) discussing benefits and constraints of different approaches to referencing, measuring and valuing hydrological alteration; ii) testing two alternative indices of hydrological alteration in the context of MO problems, one based on the established framework of Indicators of Hydrologic Alteration (IHA, Richter et al., 1996), and a novel index satisfying the mathematical properties required by widely used optimization methods based on dynamic programming; iii) discussing the ranking provided by the proposed indices for a case study in Italy, where different operating policies were designed using an MO algorithm, taking into account hydropower production, irrigation supply and flood mitigation, and imposing different types of minimum environmental flow; and iv) providing a framework to effectively include hydrological alteration within an MO problem of reservoir management. Richter, B.D., Baumgartner, J.V., Powell, J., Braun, D.P., 1996, A Method for Assessing Hydrologic Alteration within Ecosystems, Conservation Biology, 10(4), 1163-1174.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
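As a concrete instance of FIM-based design criteria, the sketch below builds the Fisher information for the Verhulst-Pearl logistic model via finite-difference sensitivities and evaluates the D- and E-criteria on a candidate sampling schedule; the parameter values are illustrative, and the paper's SE-criterion and Prohorov-metric framework are not reproduced.

```python
import numpy as np

def sensitivities(times, theta, h=1e-6):
    """Finite-difference sensitivities of the logistic solution
    x(t) = K x0 e^{rt} / (K + x0 (e^{rt} - 1)) w.r.t. theta = (K, r, x0)."""
    def x(t, th):
        K, r, x0 = th
        return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))
    S = np.empty((len(times), 3))
    for j in range(3):
        up, dn = np.array(theta, float), np.array(theta, float)
        up[j] += h; dn[j] -= h
        S[:, j] = (x(times, up) - x(times, dn)) / (2 * h)
    return S

def fim(times, theta, sigma=1.0):
    """Fisher information matrix for iid Gaussian observation error."""
    S = sensitivities(times, theta)
    return S.T @ S / sigma ** 2

theta = (17.5, 0.7, 0.1)                 # illustrative (K, r, x0)
uniform = np.linspace(0.5, 20, 10)       # a candidate sampling schedule
F = fim(uniform, theta)
print(np.linalg.det(F),                  # D-criterion (maximize)
      np.linalg.eigvalsh(F)[0])          # E-criterion: smallest eigenvalue (maximize)
```

Comparing these values across candidate schedules, rather than computing them once, is what turns the criteria into a design search.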
The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization
NASA Technical Reports Server (NTRS)
1980-01-01
Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
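A minimal sketch of the fixed-end-conditions, two-impulse transfer mentioned above, using the standard in-plane Clohessy-Wiltshire state-transition blocks; this computes the impulses themselves, not the primer-vector optimality check, and the orbit and boundary conditions are illustrative.

```python
import numpy as np

def cw_two_impulse(r0, v0, rf, vf, n, T):
    """In-plane CW two-impulse transfer. State is [x, y] (radial, along-track)
    with mean motion n and transfer time T; returns the two impulses taking
    (r0, v0) to (rf, vf)."""
    c, s = np.cos(n * T), np.sin(n * T)
    Prr = np.array([[4 - 3 * c, 0.0],
                    [6 * (s - n * T), 1.0]])
    Prv = np.array([[s / n, 2 * (1 - c) / n],
                    [2 * (c - 1) / n, (4 * s - 3 * n * T) / n]])
    Pvr = np.array([[3 * n * s, 0.0],
                    [6 * n * (c - 1), 0.0]])
    Pvv = np.array([[c, 2 * s],
                    [-2 * s, 4 * c - 3]])
    v0_req = np.linalg.solve(Prv, rf - Prr @ r0)  # velocity needed after impulse 1
    dv1 = v0_req - v0
    v_arr = Pvr @ r0 + Pvv @ v0_req               # arrival velocity before impulse 2
    dv2 = vf - v_arr
    return dv1, dv2

# Example: close a 1 km radial offset in a quarter period of a ~90-min orbit.
n = 2 * np.pi / 5400.0
dv1, dv2 = cw_two_impulse(np.array([1000.0, 0.0]), np.zeros(2),
                          np.zeros(2), np.zeros(2), n, T=1350.0)
print(dv1, dv2, np.linalg.norm(dv1) + np.linalg.norm(dv2))
```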
NASA Astrophysics Data System (ADS)
Brown, G.
2017-12-01
Sediment diversions have been proposed as a crucial component of the restoration of Coastal Louisiana. They are generally characterized as a means of creating land by mimicking natural crevasse-splay sub-delta processes. However, the criteria that are often promoted to optimize the performance of these diversions (i.e. large, sand-rich diversions into existing, degraded wetlands) are at odds with the natural processes that govern the development of crevasse-splay sub-deltas (typically sand-lean or sand-neutral diversions into open water). This is due in large part to the fact that these optimization criteria have been developed in the absence of consideration for the natural constraints associated with fundamental hydraulics: specifically, the conservation of mechanical energy. Although the implementation of the aforementioned optimization criteria have the potential to greatly increase the land-building capacity of a given diversion, the concomitant widespread inundation of the existing wetlands (an unavoidable consequence of diverting into a shallow, vegetated embayment), and the resultant stresses on existing wetland vegetation, have the potential to dramatically accelerate the loss of these existing wetlands. Hence, there are inherent uncertainties in the forecasted performance of sediment diversions that are designed according to the criteria mentioned above. This talk details the reasons for these uncertainties, using analytic and numerical model results, together with evidence from field observations and experiments. The likelihood that, in the foreseeable future, these uncertainties can be reduced, or even rationally bounded, is discussed.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center, based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value for the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, which combine global optimization and implicit filtering, address difficulties associated with the non-smooth response and are shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
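The hybrid strategy can be sketched with SciPy: an early-stopped global search hands its pseudo-optimum to a derivative-free local method. SciPy has no implicit filtering routine, so Nelder-Mead stands in as the derivative-free local stage, and the piecewise objective below is a toy stand-in for the non-smooth likelihood, not the paper's response model.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def neg_log_like(p):
    """Toy piecewise-smooth surrogate: a quadratic bowl plus floor-function jumps
    that create local minima, mimicking a non-smooth negative log-likelihood."""
    x, y = p
    base = (x - 120.0) ** 2 / 900.0 + (y - 80.0) ** 2 / 400.0
    return base + 5.0 * (np.floor(x / 30.0) % 2)

bounds = [(0.0, 250.0), (0.0, 180.0)]                 # the simulated urban block (m)
coarse = dual_annealing(neg_log_like, bounds, maxiter=50, seed=1)  # early-stopped global
fine = minimize(neg_log_like, coarse.x, method="Nelder-Mead")      # derivative-free polish
print(coarse.x.round(1), fine.x.round(1), fine.fun)
```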
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position, as well as the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI 316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with simulation results. The SVR results are then compared with artificial neural networks and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR compared with the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization, rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
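A minimal SVR sketch in the spirit of the paper: the feature set mirrors the abstract (laser power, cutting speed, air pressure, focal position), but the data, cost relation, and kernel settings are synthetic assumptions, not the paper's measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Columns: power (kW), speed (m/min), pressure (bar), focal position (mm).
X = rng.uniform([1.0, 1.0, 6.0, -3.0], [4.0, 6.0, 14.0, 1.0], size=(120, 4))
cost = 2.0 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
print(cross_val_score(model, X, cost, cv=5, scoring="r2").mean())
```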
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-01-01
BACKGROUND This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:25506115
Evaluation of early efficacy endpoints for proof-of-concept trials.
Chen, Cong; Sun, Linda; Li, Chih-Lin
2013-03-11
A Phase II proof-of-concept (POC) trial usually uses an early efficacy endpoint other than a clinical endpoint as the primary endpoint. Because of the advancement in bioscience and technology, which has yielded a number of new surrogate biomarkers, drug developers often have more candidate endpoints to choose from than they can handle. As a result, selection of endpoint and its effect size as well as choice of type I/II error rates are often at the center of heated debates in design of POC trials. While optimization of the trade-off between benefit and cost is the implicit objective in such a decision-making process, it is seldom explicitly accounted for in practice. In this research note, motivated by real examples from the oncology field, we provide practical measures for evaluation of early efficacy endpoints (E4) for POC trials. We further provide optimal design strategies for POC trials that include optimal Go-No Go decision criteria for initiation of Phase III and optimal resource allocation strategies for conducting multiple POC trials in a portfolio under fixed resources. Although oncology is used for illustration purpose, the same idea developed in this research note also applies to similar situations in other therapeutic areas or in early-stage drug development in that a Go-No Go decision has to rely on limited data from an early efficacy endpoint and cost-effectiveness is the main concern.
Filippi, Massimo; Preziosa, Paolo; Meani, Alessandro; Ciccarelli, Olga; Mesaros, Sarlota; Rovira, Alex; Frederiksen, Jette; Enzinger, Christian; Barkhof, Frederik; Gasperini, Claudio; Brownlee, Wallace; Drulovic, Jelena; Montalban, Xavier; Cramer, Stig P; Pichler, Alexander; Hagens, Marloes; Ruggieri, Serena; Martinelli, Vittorio; Miszkiel, Katherine; Tintorè, Mar; Comi, Giancarlo; Dekker, Iris; Uitdehaag, Bernard; Dujmovic-Basuroski, Irena; Rocca, Maria A
2018-02-01
In 2016, the Magnetic Resonance Imaging in Multiple Sclerosis (MAGNIMS) network proposed modifications to the MRI criteria to define dissemination in space (DIS) and time (DIT) for the diagnosis of multiple sclerosis in patients with clinically isolated syndrome (CIS). Changes to the DIS definition included removal of the distinction between symptomatic and asymptomatic lesions, increasing the number of lesions needed to define periventricular involvement to three, combining cortical and juxtacortical lesions, and inclusion of optic nerve evaluation. For DIT, removal of the distinction between symptomatic and asymptomatic lesions was suggested. We compared the performance of the 2010 McDonald and 2016 MAGNIMS criteria for multiple sclerosis diagnosis in a large multicentre cohort of patients with CIS to provide evidence to guide revisions of multiple sclerosis diagnostic criteria. Brain and spinal cord MRI and optic nerve assessments from patients with typical CIS suggestive of multiple sclerosis done less than 3 months from clinical onset in eight European multiple sclerosis centres were included in this retrospective study. Eligible patients were 16-60 years, and had a first CIS suggestive of CNS demyelination and typical of relapsing-remitting multiple sclerosis, a complete neurological examination, a baseline brain and spinal cord MRI scan obtained less than 3 months from clinical onset, and a follow-up brain scan obtained less than 12 months from CIS onset. We recorded occurrence of a second clinical attack (clinically definite multiple sclerosis) at months 36 and 60. We evaluated MRI criteria performance for DIS, DIT, and DIS plus DIT with a time-dependent receiver operating characteristic curve analysis. Between June 16, 1995, and Jan 27, 2017, 571 patients with CIS were screened, of whom 368 met all study inclusion criteria. At the last evaluation (median 50·0 months [IQR 27·0-78·4]), 189 (51%) of 368 patients developed clinically definite multiple sclerosis. At 36 months, the two DIS criteria showed high sensitivity (2010 McDonald 0·91 [95% CI 0·85-0·94] and 2016 MAGNIMS 0·93 [0·88-0·96]), similar specificity (0·33 [0·25-0·42] and 0·32 [0·24-0·41]), and similar area under the curve values (AUC; 0·62 [0·57-0·67] and 0·63 [0·58-0·67]). Performance was not affected by inclusion of symptomatic lesions (sensitivity 0·92 [0·87-0·96], specificity 0·31 [0·23-0·40], AUC 0·62 [0·57-0·66]) or cortical lesions (sensitivity 0·92 [0·87-0·95], specificity 0·32 [0·24-0·41], AUC 0·62 [0·57-0·67]). Requirement of three periventricular lesions resulted in slightly lower sensitivity (0·85 [0·78-0·90], slightly higher specificity (0·40 [0·32-0·50], and similar AUC (0·63 [0·57-0·68]). Inclusion of optic nerve evaluation resulted in similar sensitivity (0·92 [0·87-0·96]), and slightly lower specificity (0·26 [0·18-0·34]) and AUC (0·59 [0·55-0·64]). AUC values were also similar for DIT (2010 McDonald 0·61 [0·55-0·67] and 2016 MAGNIMS 0·61 [0·55-0·66]) and DIS plus DIT (0·62 [0·56-0·67] and 0·64 [0·58-0·69]). The 2016 MAGNIMS criteria showed similar accuracy to the 2010 McDonald criteria in predicting the development of clinically definite multiple sclerosis. 
Inclusion of symptomatic lesions is expected to simplify the clinical use of MRI criteria without reducing accuracy, and our findings suggest that needing three lesions to define periventricular involvement might slightly increase specificity, suggesting that these two factors could be considered during further revisions of multiple sclerosis diagnostic criteria. UK MS Society, National Institute for Health Research University College London Hospitals Biomedical Research Centre, Dutch MS Research Foundation. Copyright © 2018 Elsevier Ltd. All rights reserved.
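For readers wanting the mechanics behind the headline numbers, the sensitivity and specificity of a binary MRI criterion against conversion reduce to a two-by-two table. The labels below are simulated to roughly match the reported rates; the study's time-dependent ROC analysis is not reproduced here.

```python
import numpy as np

def sens_spec(criterion_pos, converted):
    """Sensitivity and specificity from boolean criterion/outcome arrays."""
    tp = np.sum(criterion_pos & converted)
    fn = np.sum(~criterion_pos & converted)
    tn = np.sum(~criterion_pos & ~converted)
    fp = np.sum(criterion_pos & ~converted)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(5)
converted = rng.random(368) < 0.51                  # ~51% convert, as in the cohort
criterion = converted & (rng.random(368) < 0.92)    # a sensitive...
criterion |= ~converted & (rng.random(368) < 0.68)  # ...but unspecific criterion
se, sp = sens_spec(criterion, converted)
print(round(se, 2), round(sp, 2))                   # near the reported 0.92 / 0.32
```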
Menéndez-Valladares, P; García-Sánchez, MI; Cuadri Benítez, P; Lucas, M; Adorna Martínez, M; Carranco Galán, V; García De Veas Silva, JL; Bermudo Guitarte, C
2015-01-01
Background Multiple sclerosis (MS) initiates with a first attack or clinically isolated syndrome (CIS). The importance of an early treatment in MS leads to the search, as soon as possible, for novel biomarkers which can predict conversion from CIS to MS. Objective The purpose of this study was to assess the predictive value of the kappa index (κ index), using kappa free light light chains (κFLCs) in cerebrospinal fluid (CSF), for the conversion of CIS patients to MS, and compare its accuracy with other parameters used in clinical practice. Methods FLC levels were analysed in CSF from 176 patients: 70 as control group, 77 CIS, and 29 relapsing–remitting MS. FLC levels were quantified by nephelometry. Results κ Index sensitivity and specificity (93.1%; 95.7%) was higher than those from the immunoglobulin G (IgG) index (75.9%; 94.3%), and lower than those from oligoclonal IgG bands (OCGBs) (96.5%; 98.6%). The optimal cut-off for κ index was 10.62. Most of the CIS patients with κ index >10.62 presented OCGBs, IgG index >0.56 and fulfilled magnetic resonance imaging (MRI) criteria. Conclusion CIS patients above κ index cut-off of 10.62 present 7.34-fold risk of conversion to MS than CIS below this value. The κ index correlated with positive OCGBs, IgG index above 0.56 and MRI criteria. PMID:28607709
Diagnosis of multiple sclerosis from EEG signals using nonlinear methods.
Torabi, Ali; Daliri, Mohammad Reza; Sabzposhan, Seyyed Hojjat
2017-12-01
EEG signals have essential and important information about the brain and neural diseases. The main purpose of this study is classifying two groups of healthy volunteers and Multiple Sclerosis (MS) patients using nonlinear features of EEG signals while performing cognitive tasks. EEG signals were recorded when users were doing two different attentional tasks. One of the tasks was based on detecting a desired change in color luminance and the other task was based on detecting a desired change in direction of motion. EEG signals were analyzed in two ways: EEG signals analysis without rhythms decomposition and EEG sub-bands analysis. After recording and preprocessing, time delay embedding method was used for state space reconstruction; embedding parameters were determined for original signals and their sub-bands. Afterwards nonlinear methods were used in feature extraction phase. To reduce the feature dimension, scalar feature selections were done by using T-test and Bhattacharyya criteria. Then, the data were classified using linear support vector machines (SVM) and k-nearest neighbor (KNN) method. The best combination of the criteria and classifiers was determined for each task by comparing performances. For both tasks, the best results were achieved by using T-test criterion and SVM classifier. For the direction-based and the color-luminance-based tasks, maximum classification performances were 93.08 and 79.79% respectively which were reached by using optimal set of features. Our results show that the nonlinear dynamic features of EEG signals seem to be useful and effective in MS diseases diagnosis.
The importance of multiple performance criteria for understanding trust in risk managers.
Johnson, Branden B; White, Mathew P
2010-07-01
Effective risk management requires balancing several, sometimes competing, goals, such as protecting public health and ensuring cost control. Research examining public trust of risk managers has largely focused on trust that is unspecified or for a single goal. Yet it can be reasonable to have a high level of trust in one aspect of a target's performance but not another. Two studies involving redevelopment of contaminated land (Study 1) and drinking water standards (Study 2) present preliminary evidence on the value of distinguishing between performance criteria for understanding of trust. Study 1 assessed perceptions of several trust targets (councilors, developers, scientists, residents) on their competence (capacity to achieve goals) and willingness to take action under uncertainty for four criteria. Study 2 assessed competence, willingness, and trust for five criteria regarding a single government agency. In both studies overall trust in each target was significantly better explained by considering perceptions of their performance on multiple criteria than on the single criterion of public health. In Study 1, the influence of criteria also varied plausibly across trust targets (e.g., willingness to act under uncertainty increased trust in developers on cost control and councilors on local economic improvement, but decreased it for both targets on environmental protection). Study 2 showed that explained variance in trust increased with both dimension- and trust-based measures of criteria. Further conceptual and methodological development of the notion of multiple trust criteria could benefit our understanding of stated trust judgments.
Purchasing a Used Car Using Multiple Criteria Decision Making
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.
2007-01-01
When studying mathematics, students often ask the age-old question, "When will I ever use this in my future?" The activities described in this article demonstrate for students a process that brings the power of mathematical reasoning to bear on a difficult decision involving multiple criteria that is sure to resonate with the interests of many of…
Power Consumption Optimization in Tooth Gears Processing
NASA Astrophysics Data System (ADS)
Kanatnikov, N.; Harlamov, G.; Kanatnikova, P.; Pashmentova, A.
2018-01-01
The paper reviews the issue of optimization of technological process of tooth gears production of the power consumption criteria. The authors dwell on the indices used for cutting process estimation by the consumed energy criteria and their applicability in the analysis of the toothed wheel production process. The inventors proposed a method for optimization of power consumptions based on the spatial modeling of cutting pattern. The article is aimed at solving the problem of effective source management in order to achieve economical and ecological effect during the mechanical processing of toothed gears. The research was supported by Russian Science Foundation (project No. 17-79-10316).
Automated IMRT planning with regional optimization using planning scripts
Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.
2013-01-01
Intensity‐modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time‐consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases which generally have multiple target volumes and organs at risk, it is often required to have additional IMRT optimization structures such as dose limiting ring structures, adjust beam geometry, select inverse planning objectives and associated weights, and additional IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally manually adjusted with a repeated trial and error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented in an iterative way to reduce various hot or cold spots during the optimization process that begins with defining and automatic segmentation of hot and cold spots, introducing new objectives and their relative weights into inverse planning, and turn this into an iterative process with termination criteria. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393
Development of a Composite Tailoring Procedure for Airplane Wings
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
2000-01-01
The quest for finding optimum solutions to engineering problems has existed for a long time. In modern times, the development of optimization as a branch of applied mathematics is regarded to have originated in the works of Newton, Bernoulli and Euler. Venkayya has presented a historical perspective on optimization in [1]. The term 'optimization' is defined by Ashley [2] as a procedure "...which attempts to choose the variables in a design process so as formally to achieve the best value of some performance index while not violating any of the associated conditions or constraints". Ashley presented an extensive review of practical applications of optimization in the aeronautical field till about 1980 [2]. It was noted that there existed an enormous amount of published literature in the field of optimization, but its practical applications in industry were very limited. Over the past 15 years, though, optimization has been widely applied to address practical problems in aerospace design [3-5]. The design of high performance aerospace systems is a complex task. It involves the integration of several disciplines such as aerodynamics, structural analysis, dynamics, and aeroelasticity. The problem involves multiple objectives and constraints pertaining to the design criteria associated with each of these disciplines. Many important trade-offs exist between the parameters involved which are used to define the different disciplines. Therefore, the development of multidisciplinary design optimization (MDO) techniques, in which different disciplines and design parameters are coupled into a closed loop numerical procedure, seems appropriate to address such a complex problem. The importance of MDO in successful design of aerospace systems has been long recognized. Recent developments in this field have been surveyed by Sobieszczanski-Sobieski and Haftka [6].
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with large number of variables.
The R.E.D. tools: advances in RESP and ESP charge derivation and force field library building.
Dupradeau, François-Yves; Pigache, Adrien; Zaffran, Thomas; Savineau, Corentin; Lelong, Rodolphe; Grivel, Nicolas; Lelong, Dimitri; Rosanski, Wilfried; Cieplak, Piotr
2010-07-28
Deriving atomic charges and building a force field library for a new molecule are key steps when developing a force field required for conducting structural and energy-based analysis using molecular mechanics. Derivation of popular RESP charges for a set of residues is a complex and error prone procedure because it depends on numerous input parameters. To overcome these problems, the R.E.D. Tools (RESP and ESP charge Derive, ) have been developed to perform charge derivation in an automatic and straightforward way. The R.E.D. program handles chemical elements up to bromine in the periodic table. It interfaces different quantum mechanical programs employed for geometry optimization and computing molecular electrostatic potential(s), and performs charge fitting using the RESP program. By defining tight optimization criteria and by controlling the molecular orientation of each optimized geometry, charge values are reproduced at any computer platform with an accuracy of 0.0001 e. The charges can be fitted using multiple conformations, making them suitable for molecular dynamics simulations. R.E.D. allows also for defining charge constraints during multiple molecule charge fitting, which are used to derive charges for molecular fragments. Finally, R.E.D. incorporates charges into a force field library, readily usable in molecular dynamics computer packages. For complex cases, such as a set of homologous molecules belonging to a common family, an entire force field topology database is generated. Currently, the atomic charges and force field libraries have been developed for more than fifty model systems and stored in the RESP ESP charge DDataBase. Selected results related to non-polarizable charge models are presented and discussed.
MIMO radar waveform design with peak and sum power constraints
NASA Astrophysics Data System (ADS)
Arulraj, Merline; Jeyaraman, Thiruvengadam S.
2013-12-01
Optimal power allocation for multiple-input multiple-output radar waveform design subject to combined peak and sum power constraints using two different criteria is addressed in this paper. The first one is by maximizing the mutual information between the random target impulse response and the reflected waveforms, and the second one is by minimizing the mean square error in estimating the target impulse response. It is assumed that the radar transmitter has knowledge of the target's second-order statistics. Conventionally, the power is allocated to transmit antennas based on the sum power constraint at the transmitter. However, the wide power variations across the transmit antenna pose a severe constraint on the dynamic range and peak power of the power amplifier at each antenna. In practice, each antenna has the same absolute peak power limitation. So it is desirable to consider the peak power constraint on the transmit antennas. A generalized constraint that jointly meets both the peak power constraint and the average sum power constraint to bound the dynamic range of the power amplifier at each transmit antenna is proposed recently. The optimal power allocation using the concept of waterfilling, based on the sum power constraint, is the special case of p = 1. The optimal solution for maximizing the mutual information and minimizing the mean square error is obtained through the Karush-Kuhn-Tucker (KKT) approach, and the numerical solutions are found through a nested Newton-type algorithm. The simulation results show that the detection performance of the system with both sum and peak power constraints gives better detection performance than considering only the sum power constraint at low signal-to-noise ratio.
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
Design of optical-electronic devices and systems involves the selection of such technical patterns that under given initial requirements and conditions are optimal according to certain criteria. The original characteristic of the OES for any purpose, defining its most important feature ability is a threshold detection. Based on this property, will be achieved the required functional quality of the device or system. Therefore, the original criteria and optimization methods have to subordinate to the idea of a better detectability. Generally reduces to the problem of optimal selection of the expected (predetermined) signals in the predetermined observation conditions. Thus the main purpose of optimization of the system when calculating its detectability is the choice of circuits and components that provide the most effective selection of a target.
Kok, H P; de Greef, M; Bel, A; Crezee, J
2009-08-01
In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.
Optimal time-domain technique for pulse width modulation in power electronics
NASA Astrophysics Data System (ADS)
Mayergoyz, I.; Tyagi, S.
2018-05-01
Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.
At the time the 1996 Air Quality Criteria for Particulate Matter Criteria Document was prepared there were several epidemiologic studies using multiple years of TSP and PM10 data for the exposure estimate but only one epidemiologic study using multiple years of PM2.5 data. That ...
ERIC Educational Resources Information Center
Hwang, Gwo-Jen; Chu, Hui-Chun; Yin, Peng-Yeng; Lin, Ji-Yu
2008-01-01
The national certification tests and entrance examinations are the most important tests for proving the ability or knowledge level of a person. To accurately evaluate the professional skills or knowledge level, the composed test sheets must meet multiple assessment criteria such as the ratio of relevant concepts to be evaluated and the estimated…
Oreja-Guevara, Celia; Montalban, Xavier; de Andrés, Clara; Casanova-Estruch, Bonaventura; Muñoz-García, Delicias; García, Inmaculada; Fernández, Óscar
2013-10-16
Multiple sclerosis is a chronic neurological inflammatory demyelinating disease. Specialists involved in the symptomatic treatment of this disease tend to apply heterogeneous diagnostic and treatment criteria. To establish homogeneous criteria for treating spasticity based on available scientific knowledge, facilitating decision-making in regular clinical practice. A group of multiple sclerosis specialists from the Spanish Neurological Society demyelinating diseases working group met to review aspects related to spasticity in this disease and draw up the consensus. After an exhaustive bibliographic search and following a metaplan technique, a number of preliminary recommendations were established to incorporate into the document. Finally, each argument was classified depending on the degree of recommendation according to the SIGN (Scottish Intercollegiate Guidelines Network) system. The resulting text was submitted for review by the demyelinating disease group. An experts' consensus was reached regarding spasticity triggering factors, related symptoms, diagnostic criteria, assessment methods, quality of life and therapeutic management (drug and non-drug) criteria. The recommendations included in this consensus can be a useful tool for improving the quality of life of multiple sclerosis patients, as they enable improved diagnosis and treatment of spasticity.
NASA Astrophysics Data System (ADS)
Likhachev, Dmitriy V.
2017-06-01
Johs and Hale developed the Kramers-Kronig consistent B-spline formulation for the dielectric function modeling in spectroscopic ellipsometry data analysis. In this article we use popular Akaike, corrected Akaike and Bayesian Information Criteria (AIC, AICc and BIC, respectively) to determine an optimal number of knots for B-spline model. These criteria allow finding a compromise between under- and overfitting of experimental data since they penalize for increasing number of knots and select representation which achieves the best fit with minimal number of knots. Proposed approach provides objective and practical guidance, as opposite to empirically driven or "gut feeling" decisions, for selecting the right number of knots for B-spline models in spectroscopic ellipsometry. AIC, AICc and BIC selection criteria work remarkably well as we demonstrated in several real-data applications. This approach formalizes selection of the optimal knot number and may be useful in practical perspective of spectroscopic ellipsometry data analysis.
Applying a multiobjective metaheuristic inspired by honey bees to phylogenetic inference.
Santander-Jiménez, Sergio; Vega-Rodríguez, Miguel A
2013-10-01
The development of increasingly popular multiobjective metaheuristics has allowed bioinformaticians to deal with optimization problems in computational biology where multiple objective functions must be taken into account. One of the most relevant research topics that can benefit from these techniques is phylogenetic inference. Throughout the years, different researchers have proposed their own view about the reconstruction of ancestral evolutionary relationships among species. As a result, biologists often report different phylogenetic trees from a same dataset when considering distinct optimality principles. In this work, we detail a multiobjective swarm intelligence approach based on the novel Artificial Bee Colony algorithm for inferring phylogenies. The aim of this paper is to propose a complementary view of phylogenetics according to the maximum parsimony and maximum likelihood criteria, in order to generate a set of phylogenetic trees that represent a compromise between these principles. Experimental results on a variety of nucleotide data sets and statistical studies highlight the relevance of the proposal with regard to other multiobjective algorithms and state-of-the-art biological methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Yang, Jing; Xu, Mai; Zhao, Wei; Xu, Baoguo
2010-01-01
For monitoring burst events in a kind of reactive wireless sensor networks (WSNs), a multipath routing protocol (MRP) based on dynamic clustering and ant colony optimization (ACO) is proposed. Such an approach can maximize the network lifetime and reduce the energy consumption. An important attribute of WSNs is their limited power supply, and therefore some metrics (such as energy consumption of communication among nodes, residual energy, path length) were considered as very important criteria while designing routing in the MRP. Firstly, a cluster head (CH) is selected among nodes located in the event area according to some parameters, such as residual energy. Secondly, an improved ACO algorithm is applied in the search for multiple paths between the CH and sink node. Finally, the CH dynamically chooses a route to transmit data with a probability that depends on many path metrics, such as energy consumption. The simulation results show that MRP can prolong the network lifetime, as well as balance of energy consumption among nodes and reduce the average energy consumption effectively.
The absolute threshold of cone vision
Koeing, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Nyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
Breast-Feeding Friendly, but Not Formula Averse.
Lewis, Juanita
2017-11-01
Breast-feeding is the optimal source of newborn nutrition in term infants and is associated with multiple short- and long-term health benefits. Establishment of breast-feeding may be difficult in a small subset of mothers, which can lead to adverse consequences in the newborn. Some of the consequences of suboptimal nutritional provision to the newborn, such as severe hyperbilirubinemia and breast-feeding-associated hypernatremic dehydration, can have devastating and long-lasting sequelae. Timely identification of mothers and newborns at risk for developing these complications is necessary to avoid significant morbidity and mortality. In these cases, the judicious use of formula supplementation may be considered. However, more studies are necessary to develop comprehensive formula supplementation criteria and guidelines for pediatric medical providers. [Pediatr Ann. 2017;46(11):e402-e408.]. Copyright 2017, SLACK Incorporated.
Teamwork methods for accountable care: relational coordination and TeamSTEPPS®.
Gittell, Jody Hoffer; Beswick, Joanne; Goldmann, Don; Wallack, Stanley S
2015-01-01
To deliver greater value in the accountable care context, the Institute of Medicine argues for a culture of teamwork at multiple levels--across professional and organizational siloes and with patients and their families and communities. The logic of performance improvement is that data are needed to target interventions and to assess their impact. We argue that efforts to build teamwork will benefit from teamwork measures that provide diagnostic information regarding the current state and teamwork interventions that can respond to the opportunities identified in the current state. We identify teamwork measures and teamwork interventions that are validated and that can work across multiple levels of teamwork. We propose specific ways to combine them for optimal effectiveness. We review measures of teamwork documented by Valentine, Nembhard, and Edmondson and select those that they identified as satisfying the four criteria for psychometric validation and as being unbounded and therefore able to measure teamwork across multiple levels. We then consider teamwork interventions that are widely used in the U.S. health care context, are well validated based on their association with outcomes, and are capable of working at multiple levels of teamwork. We select the top candidate in each category and propose ways to combine them for optimal effectiveness. We find relational coordination is a validated multilevel teamwork measure and TeamSTEPPS® is a validated multilevel teamwork intervention and propose specific ways for the relational coordination measure to enhance the TeamSTEPPS intervention. Health care systems and change agents seeking to respond to the challenges of accountable care can use TeamSTEPPS as a validated multilevel teamwork intervention methodology, enhanced by relational coordination as a validated multilevel teamwork measure with diagnostic capacity to pinpoint opportunities for improving teamwork along specific dimensions (e.g., shared knowledge, timely communication) and in specific role relationships (e.g., nurse/medical assistant, emergency unit/medical unit, primary care/specialty care).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Hossain, S; Hildebrand, K
Purpose: To show improvements in dose conformity and normal brain tissue sparing using an optimal planning technique (OPT) against clinically acceptable planning technique (CAP) in the treatment of multiple brain metastases. Methods: A standardized international benchmark case with12 intracranial tumors was planned using two different VMAT optimization methods. Plans were split into four groups with 3, 6, 9, and 12 targets each planned with 3, 5, and 7 arcs using Eclipse TPS. The beam geometries were 1 full coplanar and half non-coplanar arcs. A prescription dose of 20Gy was used for all targets. The following optimization criteria was used (OPTmore » vs. CAP): (No upper limit vs.108% upper limit for target volume), (priority 140–150 vs. 75–85 for normal-brain-tissue), and (selection of automatic sparing Normal-Tissue-Objective (NTO) vs. Manual NTO). Both had priority 50 to critical structures such as brainstem and optic-chiasm, and both had an NTO priority 150. Normal-brain-tissue doses along with Paddick Conformity Index (PCI) were evaluated. Results: In all cases PCI was higher for OPT plans. The average PCI (OPT,CAP) for all targets was (0.81,0.64), (0.81,0.63), (0.79,0.57), and (0.72,0.55) for 3, 6, 9, and 12 target plans respectively. The percent decrease in normal brain tissue volume (OPT/CAP*100) achieved by OPT plans was (reported as follows: V4, V8, V12, V16, V20) (184, 343, 350, 294, 371%), (192, 417, 380, 299, 360%), and (235, 390, 299, 281, 502%) for the 3, 5, 7 arc 12 target plans, respectively. The maximum brainstem dose decreased for the OPT plan by 4.93, 4.89, and 5.30 Gy for 3, 5, 7 arc 12 target plans, respectively. Conclusion: Substantial increases in PCI, critical structure sparing, and decreases in normal brain tissue dose were achieved by eliminating upper limits from optimization, using automatic sparing of normal tissue function with high priority, and a high priority to normal brain tissue.« less
NASA Astrophysics Data System (ADS)
Doyle, E. J.; Kim, K. W.; Peebles, W. A.; Rhodes, T. L.
1997-01-01
Reflectometry is an attractive and versatile diagnostic technique that can address a wide range of measurement needs on fusion devices. However, progress in the area of profile measurement has been hampered by the lack of a well-understood basis for the optimum design and implementation of such systems. Such a design basis is provided by the realization that reflectometer systems utilized for density profile measurements are in fact specialized forms of radar systems. In this article five criteria are introduced by which reflectometer systems can be systematically designed for optimal performance: range resolution, spatial sampling, turbulence immunity, bandwidth optimization, and the need for adaptive data processing. Many of these criteria are familiar from radar systems analysis, and are applicable to reflectometry after allowance is made for differences stemming from the nature of the plasma target. These criteria are utilized to critically evaluate current reflectometer density profile techniques and indicate improvements that can impact current and next step devices, such as ITER.
Linear energy transfer incorporated intensity modulated proton therapy optimization
NASA Astrophysics Data System (ADS)
Cao, Wenhua; Khabazian, Azin; Yepes, Pablo P.; Lim, Gino; Poenisch, Falk; Grosshans, David R.; Mohan, Radhe
2018-01-01
The purpose of this study was to investigate the feasibility of incorporating linear energy transfer (LET) into the optimization of intensity modulated proton therapy (IMPT) plans. Because increased LET correlates with increased biological effectiveness of protons, high LETs in target volumes and low LETs in critical structures and normal tissues are preferred in an IMPT plan. However, if not explicitly incorporated into the optimization criteria, different IMPT plans may yield similar physical dose distributions but greatly different LET, specifically dose-averaged LET, distributions. Conventionally, the IMPT optimization criteria (or cost function) only includes dose-based objectives in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we added LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. Due to the fractional programming nature of the resulting model, we used a variable reformulation approach so that the optimization process is computationally equivalent to conventional IMPT optimization. In this study, five brain tumor patients who had been treated with proton therapy at our institution were selected. Two plans were created for each patient based on the proposed LET-incorporated optimization (LETOpt) and the conventional dose-based optimization (DoseOpt). The optimized plans were compared in terms of both dose (assuming a constant RBE of 1.1 as adopted in clinical practice) and LET. Both optimization approaches were able to generate comparable dose distributions. The LET-incorporated optimization achieved not only pronounced reduction of LET values in critical organs, such as brainstem and optic chiasm, but also increased LET in target volumes, compared to the conventional dose-based optimization. However, on occasion, there was a need to tradeoff the acceptability of dose and LET distributions. Our conclusion is that the inclusion of LET-dependent criteria in the IMPT optimization could lead to similar dose distributions as the conventional optimization but superior LET distributions in target volumes and normal tissues. This may have substantial advantages in improving tumor control and reducing normal tissue toxicities.
Set of Criteria for Efficiency of the Process Forming the Answers to Multiple-Choice Test Items
ERIC Educational Resources Information Center
Rybanov, Alexander Aleksandrovich
2013-01-01
Is offered the set of criteria for assessing efficiency of the process forming the answers to multiple-choice test items. To increase accuracy of computer-assisted testing results, it is suggested to assess dynamics of the process of forming the final answer using the following factors: loss of time factor and correct choice factor. The model…
Assessment of Communications-related Admissions Criteria in a Three-year Pharmacy Program
Tejada, Frederick R.; Lang, Lynn A.; Purnell, Miriam; Acedera, Lisa; Ngonga, Ferdinand
2015-01-01
Objective. To determine if there is a correlation between TOEFL and other admissions criteria that assess communications skills (ie, PCAT variables: verbal, reading, essay, and composite), interview, and observational scores and to evaluate TOEFL and these admissions criteria as predictors of academic performance. Methods. Statistical analyses included two sample t tests, multiple regression and Pearson’s correlations for parametric variables, and Mann-Whitney U for nonparametric variables, which were conducted on the retrospective data of 162 students, 57 of whom were foreign-born. Results. The multiple regression model of the other admissions criteria on TOEFL was significant. There was no significant correlation between TOEFL scores and academic performance. However, significant correlations were found between the other admissions criteria and academic performance. Conclusion. Since TOEFL is not a significant predictor of either communication skills or academic success of foreign-born PharmD students in the program, it may be eliminated as an admissions criterion. PMID:26430273
Assessment of Communications-related Admissions Criteria in a Three-year Pharmacy Program.
Parmar, Jayesh R; Tejada, Frederick R; Lang, Lynn A; Purnell, Miriam; Acedera, Lisa; Ngonga, Ferdinand
2015-08-25
To determine if there is a correlation between TOEFL and other admissions criteria that assess communications skills (ie, PCAT variables: verbal, reading, essay, and composite), interview, and observational scores and to evaluate TOEFL and these admissions criteria as predictors of academic performance. Statistical analyses included two sample t tests, multiple regression and Pearson's correlations for parametric variables, and Mann-Whitney U for nonparametric variables, which were conducted on the retrospective data of 162 students, 57 of whom were foreign-born. The multiple regression model of the other admissions criteria on TOEFL was significant. There was no significant correlation between TOEFL scores and academic performance. However, significant correlations were found between the other admissions criteria and academic performance. Since TOEFL is not a significant predictor of either communication skills or academic success of foreign-born PharmD students in the program, it may be eliminated as an admissions criterion.
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain deltaG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for deltaG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria.
Structural Design of a Horizontal-Axis Tidal Current Turbine Composite Blade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bir, G. S.; Lawson, M. J.; Li, Y.
2011-10-01
This paper describes the structural design of a tidal composite blade. The structural design is preceded by two steps: hydrodynamic design and determination of extreme loads. The hydrodynamic design provides the chord and twist distributions along the blade length that result in optimal performance of the tidal turbine over its lifetime. The extreme loads, i.e. the extreme flap and edgewise loads that the blade would likely encounter over its lifetime, are associated with extreme tidal flow conditions and are obtained using a computational fluid dynamics (CFD) software. Given the blade external shape and the extreme loads, we use a laminate-theory-basedmore » structural design to determine the optimal layout of composite laminas such that the ultimate-strength and buckling-resistance criteria are satisfied at all points in the blade. The structural design approach allows for arbitrary specification of the chord, twist, and airfoil geometry along the blade and an arbitrary number of shear webs. In addition, certain fabrication criteria are imposed, for example, each composite laminate must be an integral multiple of its constituent ply thickness. In the present effort, the structural design uses only static extreme loads; dynamic-loads-based fatigue design will be addressed in the future. Following the blade design, we compute the distributed structural properties, i.e. flap stiffness, edgewise stiffness, torsion stiffness, mass, moments of inertia, elastic-axis offset, and center-of-mass offset along the blade. Such properties are required by hydro-elastic codes to model the tidal current turbine and to perform modal, stability, loads, and response analyses.« less
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Improving the secrecy rate by turning foes to allies: An auction scheme
NASA Astrophysics Data System (ADS)
Ma, Ya-Yan; Wang, Bao-Yun
2015-09-01
Security against eavesdroppers is a critical issue in cognitive radio networks (CRNs). In this paper, a scenario consisting of one primary pair and multiple secondary pairs is considered. The secondary transmitters (STs) work in half-duplex mode and they are potential eavesdroppers on the primary transmission unless they are allowed to simultaneously transmit with the primary transmitter (PT). A modified second-price sealed-bid auction scheme is employed to model the interaction between the PT and STs. With the proposed auction scheme, the hostile relationship between the PT and STs is transformed into a cooperative relationship. An iterative algorithm based on the max-min criteria is proposed to find the optimal bidding power of the STs for an access chance in the presence of multiple eavesdroppers. Numerical results show that the proposed auction scheme not only improves the PT’s security but also increases the access opportunities of the STs. Project supported by the National Natural Science Foundation of China (Grant Nos. 61271232 and 61372126) and the University Postgraduate Research and Innovation Project in Jiangsu Province, China (Grant No. CXZZ12-0472).
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on feedback formulation and has become one of the most preferred multiple suppression methods used. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may result in conventional adaptive multiple subtraction methods being barely able to effectively suppress multiples in actual production. This paper introduces a combined adaptive multiple attenuation method based on the optimized event tracing technique and extended Wiener filtering. The method firstly uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record to an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can then be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is an ideal method for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage of the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
Stability of Solutions to Classes of Traveling Salesman Problems.
Niendorf, Moritz; Kabamba, Pierre T; Girard, Anouck R
2016-04-01
By performing stability analysis on an optimal tour for problems belonging to classes of the traveling salesman problem (TSP), this paper derives margins of optimality for a solution with respect to disturbances in the problem data. Specifically, we consider the asymmetric sequence-dependent TSP, where the sequence dependence is driven by the dynamics of a stack. This is a generalization of the symmetric non sequence-dependent version of the TSP. Furthermore, we also consider the symmetric sequence-dependent variant and the asymmetric non sequence-dependent variant. Amongst others these problems have applications in logistics and unmanned aircraft mission planning. Changing external conditions such as traffic or weather may alter task costs, which can render an initially optimal itinerary suboptimal. Instead of optimizing the itinerary every time task costs change, stability criteria allow for fast evaluation of whether itineraries remain optimal. This paper develops a method to compute stability regions for the best tour in a set of tours for the symmetric TSP and extends the results to the asymmetric problem as well as their sequence-dependent counterparts. As the TSP is NP-hard, heuristic methods are frequently used to solve it. The presented approach is also applicable to analyze stability regions for a tour obtained through application of the k -opt heuristic with respect to the k -neighborhood. A dimensionless criticality metric for edges is proposed, such that a high criticality of an edge indicates that the optimal tour is more susceptible to cost changes in that edge. Multiple examples demonstrate the application of the developed stability computation method as well as the edge criticality measure that facilitates an intuitive assessment of instances of the TSP.
Dolan, James G
2010-01-01
Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients along with health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers.Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine "hard data" with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings.The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP).
Dolan, James G.
2010-01-01
Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients along with health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine “hard data” with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP) PMID:21394218
Determination of habitat requirements for Apache Trout
Petre, Sally J.; Bonar, Scott A.
2017-01-01
The Apache Trout Oncorhynchus apache, a salmonid endemic to east-central Arizona, is currently listed as threatened under the U.S. Endangered Species Act. Establishing and maintaining recovery streams for Apache Trout and other endemic species requires determination of their specific habitat requirements. We built upon previous studies of Apache Trout habitat by defining both stream-specific and generalized optimal and suitable ranges of habitat criteria in three streams located in the White Mountains of Arizona. Habitat criteria were measured at the time thought to be most limiting to juvenile and adult life stages, the summer base flow period. Based on the combined results from three streams, we found that Apache Trout use relatively deep (optimal range = 0.15–0.32 m; suitable range = 0.032–0.470 m) pools with slow stream velocities (suitable range = 0.00–0.22 m/s), gravel or smaller substrate (suitable range = 0.13–2.0 [Wentworth scale]), overhead cover (suitable range = 26–88%), and instream cover (large woody debris and undercut banks were occupied at higher rates than other instream cover types). Fish were captured at cool to moderate temperatures (suitable range = 10.4–21.1°C) in streams with relatively low maximum seasonal temperatures (optimal range = 20.1–22.9°C; suitable range = 17.1–25.9°C). Multiple logistic regression generally confirmed the importance of these variables for predicting the presence of Apache Trout. All measured variables except mean velocity were significant predictors in our model. Understanding habitat needs is necessary in managing for persistence, recolonization, and recruitment of Apache Trout. Management strategies such as fencing areas to restrict ungulate use and grazing and planting native riparian vegetation might favor Apache Trout persistence and recolonization by providing overhead cover and large woody debris to form pools and instream cover, shading streams and lowering temperatures.
Leavesley, G.H.; Markstrom, S.L.; Viger, R.J.
2004-01-01
The interdisciplinary nature and increasing complexity of water- and environmental-resource problems require the use of modeling approaches that can incorporate knowledge from a broad range of scientific disciplines. The large number of distributed hydrological and ecosystem models currently available are composed of a variety of different conceptualizations of the associated processes they simulate. Assessment of the capabilities of these distributed models requires evaluation of the conceptualizations of the individual processes, and the identification of which conceptualizations are most appropriate for various combinations of criteria, such as problem objectives, data constraints, and spatial and temporal scales of application. With this knowledge, "optimal" models for specific sets of criteria can be created and applied. The U.S. Geological Survey (USGS) Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide these model development and application capabilities. MMS supports the integration of models and tools at a variety of levels of modular design. These include individual process models, tightly coupled models, loosely coupled models, and fully-integrated decision support systems. A variety of visualization and statistical tools are also provided. MMS has been coupled with the Bureau of Reclamation (BOR) object-oriented reservoir and river-system modeling framework, RiverWare, under a joint USGS-BOR program called the Watershed and River System Management Program. MMS and RiverWare are linked using a shared relational database. The resulting database-centered decision support system provides tools for evaluating and applying optimal resource-allocation and management strategies to complex, operational decisions on multipurpose reservoir systems and watersheds. Management issues being addressed include efficiency of water-resources management, environmental concerns such as meeting flow needs for endangered species, and optimizing operations within the constraints of multiple objectives such as power generation, irrigation, and water conservation. This decision support system approach is being developed, tested, and implemented in the Gunni-son, Yakima, San Juan, Rio Grande, and Truckee River basins of the western United States. Copyright ASCE 2004.
Finite element assisted prediction of ductile fracture in sheet bulging
NASA Astrophysics Data System (ADS)
Donald, Bryan J. Mac; Lorza, Ruben Lostado; Yoshihara, Shoichiro
2017-10-01
With growing demand for energy efficiency, there is much focus on reducing oil consumption rates and utilising alternative fuels. A contributor to the solution in this area is to produce lighter vehicles that are more fuel efficient and/or allow for the use of alternative fuel sources (e.g. electric powered automobiles). Near-net-shape manufacturing processes such as hydroforming have great potential to reduce structural weight while still maintaining structural strength and performance. Finite element analysis techniques have proved invaluable in optimizing such hydroforming processes, however, the majority of such studies have used simple predictors of failure which are usually yield criteria such as von Mises stress. There is clearly potential to obtain more optimal solutions using more advanced predictors of failure. This paper compared the Von Mises stress failure criteria and the Oyane's ductile fracture criteria in the sheet hydroforming of magnesium alloys. It was found that the results obtained from the models which used Oyane's ductile fracture criteria were more realistic than those obtained from those that used Von Mises stress as a failure criteria.
NASA Astrophysics Data System (ADS)
Pierce, S. A.; Ciarleglio, M.; Dulay, M.; Lowry, T. S.; Sharp, J. M.; Barnes, J. W.; Eaton, D. J.; Tidwell, V. C.
2006-12-01
Work in the literature for groundwater allocation emphasizes finding a truly optimal solution, often with the drawback of limiting the reported results to either maximizing net benefit in regional scale models or minimizing pumping costs for localized cases. From a policy perspective, limited insight can be gained from these studies because the results are restricted to a single, efficient solution and they neglect non-market values that may influence a management decision. Conversely, economically derived objective functions tend to exhibit a plateau upon nearing the optimal value. This plateau effect, or non-uniqueness, is actually a positive feature in the behavior of groundwater systems because it demonstrates that multiple management strategies, serving numerous community preferences, may be considered while still achieving similar quantitative results. An optimization problem takes the same set of initial conditions and looks for the most efficient solution while a decision problem looks at a situation and asks for a solution that meets certain user-defined criteria. In other words, the election of an alternative course of action using a decision support system will not always result in selection of the most `optimized' alternative. To broaden the analytical toolset available for science and policy interaction, we have developed a groundwater decision support system (GWDSS) that generates a suite of management alternatives by pairing a combinatorial search algorithm with a numerical groundwater model for consideration by decision makers and stakeholders. Subject to constraints as defined by community concerns, the tabu optimization engine systematically creates hypothetical management scenarios running hundreds, and even thousands, of simulations, and then saving the best performing realizations. Results of the search are then evaluated against stakeholder preference sets using ranking methods to aid in identifying a subset of alternatives for final consideration. Here we present the development of the GWDSS and its use in the decision making process for the Barton Springs segment of the Edwards Aquifer located in Austin Texas. Using hydrogeologic metrics, together with economic estimates and impervious cover valuations, representative rankings are determined. Post search multi-objective analysis reveals that some highly ranked alternatives meet the preference sets of more than one stakeholder and achieve similar quantitative aquifer performance. These results are important to both modelers and policy makers alike.
Raster-based outranking method: a new approach for municipal solid waste landfill (MSW) siting.
Hamzeh, Mohamad; Abbaspour, Rahim Ali; Davalou, Romina
2015-08-01
MSW landfill siting is a complicated process because it requires integration of several factors. In this paper, geographic information system (GIS) and multiple criteria decision analysis (MCDA) were combined to handle the municipal solid waste (MSW) landfill siting. For this purpose, first, 16 input data layers were prepared in GIS environment. Then, the exclusionary lands were eliminated and potentially suitable areas for the MSW disposal were identified. These potentially suitable areas, in an innovative approach, were further examined by deploying Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) II and analytic network process (ANP), which are two of the most recent MCDA methods, in order to determine land suitability for landfilling. PROMETHEE II was used to determine a complete ranking of the alternatives, while ANP was employed to quantify the subjective judgments of evaluators as criteria weights. The resulting land suitability was reported on a grading scale of 1-5 from 1 to 5, which is the least to the most suitable area, respectively. Finally, three optimal sites were selected by taking into consideration the local conditions of 15 sites, which were candidates for MSW landfilling. Research findings show that the raster-based method yields effective results.
On Optimizing an Archibald Rubber-Band Heat Engine.
ERIC Educational Resources Information Center
Mullen, J. G.; And Others
1978-01-01
Discusses the criteria and procedure for optimizing the performance of Archibald rubber-band heat engines by using the appropriate choice of dimensions, minimizing frictional torque, maximizing torque and balancing the rubber band system. (GA)
NASA Astrophysics Data System (ADS)
Wang, Weiping; Yuan, Manman; Luo, Xiong; Liu, Linlin; Zhang, Yao
2018-01-01
Proportional delay is a class of unbounded time-varying delay. A class of bidirectional associative memory (BAM) memristive neural networks with multiple proportional delays is concerned in this paper. First, we propose the model of BAM memristive neural networks with multiple proportional delays and stochastic perturbations. Furthermore, by choosing suitable nonlinear variable transformations, the BAM memristive neural networks with multiple proportional delays can be transformed into the BAM memristive neural networks with constant delays. Based on the drive-response system concept, differential inclusions theory and Lyapunov stability theory, some anti-synchronization criteria are obtained. Finally, the effectiveness of proposed criteria are demonstrated through numerical examples.
NASA Astrophysics Data System (ADS)
Zhang, Yun; Okubo, R.; Hirano, M.; Hirano, T.
2009-04-01
We report on a measurement of entanglement in the time domain. The entanglement is examined by the Duan-Simon criteria. We obtained Duan-Simon criteria of Δ2(x1-x2)+Δ2(p1+p2) = 1.4<2 without optimal gain and Δ2(x1-x2)+Δ2(p1+p2) = 1.2<2 with optimal gain, respectively. This opens the way to realize both efficient and truly causal EPR paradox.
Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards
Ackermann, John F.; Landy, Michael S.
2014-01-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wcopt). Subjects’ criteria were not close to optimal relative to wcopt. The slope of SU (c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards.
Ackermann, John F; Landy, Michael S
2015-02-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc opt ). Subjects' criteria were not close to optimal relative to wc opt . The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.
ERIC Educational Resources Information Center
Fastre, Greet; Gijselaers, Wim H.; Segers, Mien
2008-01-01
The authors report relations between entrance criteria and study success in a program for a master of science in business. Based on the admission criteria broadly used in European business schools and the findings of prior research, the present authors measured eight criteria for study success in the master's degree program. The authors applied…
ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amarasinghe, Saman
This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for differentmore » convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.« less
Mohammadi, Ali; Valinejadi, Ali; Sakipour, Sara; Hemmat, Morteza; Zarei, Javad; Askari Majdabadi, Hesamedin
2017-08-27
Rural health houses constitute a major provider of some primary health services in the villages of Iran. Given the challenges of providing health services in rural areas, health houses should be established based on the criteria of health network systems (HNSs). The value of these criteria and their precedence over others have not yet been thoroughly investigated. The present study was conducted to propose a model for improving the distribution of rural health houses in HNSs. The present applied study was conducted in Khuzestan province in the southwest of Iran in 2014-2016. First, the descriptive and spatial data required were collected and entered into ArcGIS after modifications, and the Geodatabase was then created. Based on the criteria of the HNS and according to experts' opinions, the main criteria and the sub-criteria for an optimal site selection were determined. To determine the criteria's coefficient of importance (ie, their weight), the main criteria and the sub-criteria were compared in pairs according to experts' opinions. The results of the pairwise comparisons were entered into Expert Choice and the weight of the main criteria and the sub-criteria were determined using the analytic hierarchy process (AHP). The application layers were then formed in geographic information system (GIS). A model was ultimately proposed in the GIS for the optimal distribution of rural health houses by overlaying the weighting layers and the other layers related to villages and rural health houses. Based on the experts' opinions, six criteria were determined as the main criteria for an optimal site selection for rural health houses, including welfare infrastructures, population, dispersion, accessibility, corresponding routes, distance to the rural health center and the absence of natural barriers to accessibility. Of the main criteria proposed, the highest weight was given to "population" (0.506). The priorities suggested in the proposed model for establishing rural health houses are presented within five zoning levels -from excellent to very poor. The results of the study showed that the proposed model can help provide a better picture of the distribution of rural health houses. The GIS is recommended to be used as a means of making the HNS more efficient. © 2018 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff' elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues on the external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Optimization of fuels from waste composition with application of genetic algorithm.
Małgorzata, Wzorek
2014-05-01
The objective of this article is to elaborate a method to optimize the composition of the fuels from sewage sludge (PBS fuel - fuel based on sewage sludge and coal slime, PBM fuel - fuel based on sewage sludge and meat and bone meal, PBT fuel - fuel based on sewage sludge and sawdust). As a tool for an optimization procedure, the use of a genetic algorithm is proposed. The optimization task involves the maximization of mass fraction of sewage sludge in a fuel developed on the basis of quality-based criteria for the use as an alternative fuel used by the cement industry. The selection criteria of fuels composition concerned such parameters as: calorific value, content of chlorine, sulphur and heavy metals. Mathematical descriptions of fuel compositions and general forms of the genetic algorithm, as well as the obtained optimization results are presented. The results of this study indicate that the proposed genetic algorithm offers an optimization tool, which could be useful in the determination of the composition of fuels that are produced from waste.
A multiple criteria analysis for household solid waste management in the urban community of Dakar.
Kapepula, Ka-Mbayu; Colson, Gerard; Sabri, Karim; Thonart, Philippe
2007-01-01
Household solid waste management is a severe problem in big cities of developing countries. Mismanaged solid waste dumpsites produce bad sanitary, ecological and economic consequences for the whole population, especially for the poorest urban inhabitants. Dealing with this problem, this paper utilizes field data collected in the urban community of Dakar, in view of ranking nine areas of the city with respect to multiple criteria of nuisance. Nine criteria are built and organized in three families that represent three classical viewpoints: the production of wastes, their collection and their treatment. Thanks to the method PROMETHEE and the software ARGOS, we do a pair-wise comparison of the nine areas, which allows their multiple criteria rankings according to each viewpoint and then globally. Finding the worst and best areas in terms of nuisance for a better waste management in the city is our final purpose, fitting as well as possible the needs of the urban community. Based on field knowledge and on the literature, we suggest applying general and area-specific remedies to the household solid waste problems.
Quantum optimal control with automatic differentiation using graphics processors
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Chakram, Srivatsan; Naik, Ravi; Groszkowski, Peter; Koch, Jens; Schuster, David
We implement quantum optimal control based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them into the optimization process with ease. We will describe efficient techniques to optimally control weakly anharmonic systems that are commonly encountered in circuit QED, including coupled superconducting transmon qubits and multi-cavity circuit QED systems. These systems allow for a rich variety of control schemes that quantum optimal control is well suited to explore.
Optimization of the two-sample rank Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
Akimov, P. S.; Barashkov, V. M.
1984-10-01
The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and the analysis of an optimal rank detector based on a utilization of the Neyman-Pearson criteria. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ... xn in the experimental communications channel. The computation of the value of the rank of an observation is calculated on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of the detection of an arriving signal, details regarding the utilization of the Neyman-Pearson criteria, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
Pulsed Inductive Plasma Acceleration: Performance Optimization Criteria
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.
2014-01-01
Optimization criteria for pulsed inductive plasma acceleration are developed using an acceleration model consisting of a set of coupled circuit equations describing the time-varying current in the thruster and a one-dimensional momentum equation. The model is nondimensionalized, resulting in the identification of several scaling parameters that are varied to optimize the performance of the thruster. The analysis reveals the benefits of underdamped current waveforms and leads to a performance optimization criterion that requires the matching of the natural period of the discharge and the acceleration timescale imposed by the inertia of the working gas. In addition, the performance increases when a greater fraction of the propellant is initially located nearer to the inductive acceleration coil. While the dimensionless model uses a constant temperature formulation in calculating performance, the scaling parameters that yield the optimum performance are shown to be relatively invariant if a self-consistent description of energy in the plasma is instead used.
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Wang, B. P.; Yoo, Y.; Clark, B.
1973-01-01
A description and applications of a computer capability for determining the ultimate optimal behavior of a dynamically loaded structural-mechanical system are presented. This capability provides characteristics of the theoretically best, or limiting, design concept according to response criteria dictated by design requirements. Equations of motion of the system in first or second order form include incompletely specified elements whose characteristics are determined in the optimization of one or more performance indices subject to the response criteria in the form of constraints. The system is subject to deterministic transient inputs, and the computer capability is designed to operate with a large linear programming on-the-shelf software package which performs the desired optimization. The report contains user-oriented program documentation in engineering, problem-oriented form. Applications cover a wide variety of dynamics problems including those associated with such diverse configurations as a missile-silo system, impacting freight cars, and an aircraft ride control system.
[Diagnosis and the technology for optimizing the medical support of a troop unit].
Korshever, N G; Polkovov, S V; Lavrinenko, O V; Krupnov, P A; Anastasov, K N
2000-05-01
The work is devoted to investigation of the system of military unit medical support with the use of principles and states of organizational diagnosis; development of the method allowing to assess its functional activity; and determination of optimization trends. Basing on the conducted organizational diagnosis and expert inquiry the informative criteria were determined which characterize the stages of functioning of the military unit medical support system. To evaluate the success of military unit medical support the complex multi-criteria pattern was developed and algorithm of this process optimization was substantiated. Using the results obtained, particularly realization of principles and states of decision taking theory in machine program it is possible to solve more complex problem of comparison between any number of military units: to dispose them according to priority decrease; to select the programmed number of the best and worst; to determine the trends of activity optimization in corresponding medical service personnel.
Living donor liver transplantation for hepatocellular carcinoma achieves better outcomes.
Lin, Chih-Che; Chen, Chao-Long
2016-10-01
Liver transplantation (LT) for hepatocellular carcinoma (HCC) at Kaohsiung Chang Gung Memorial Hospital mainly relies on live donor LT (LDLT). Owing to taking the risk of LD, we are obligated to adopt strict selection criteria for HCC patients and optimize the pre-transplant conditions to ensure a high disease-free survival similar to those without HCC, even better than deceased donor LT (DDLT). Better outcomes are attributed to excellent surgical results and optimal patient selection. The hospital mortality of primary and salvage LDLT are lower than 2% in our center. Although Taiwan Health Insurance Policy extended the Milan to University of California, San Francisco (UCSF) criteria in 2006, selection criteria will not be consolidated to take into account only by the morphologic size/number of tumors but also by their biology. The criteria are divided into modifiable image morphology, alpha fetoprotein (AFP), and positron emission tomography (PET) scan with standard uptake value (SUV) and unmodifiable unfavorable pathology such as HCC combined with cholangiocarcinoma (CC), sarcomatoid type, and poor differentiation. Downstaging therapy is necessary for HCC patients beyond criteria to fit all modifiable standards. The upper limit of downstaging treatment seems to be extended by more effective drug eluting transarterial chemoembolization in cases without absolute contraindications. In contrast, the pitfall of unmodifiable tumor pathology should be excluded by the findings of pretransplant core biopsy/resection if possible. More recently, achieving complete tumor necrosis in explanted liver could almost predict no recurrence after transplant. Necrotizing therapy is advised if possible before transplant even the tumor status within criteria to minimize the possibility of tumor recurrence. LDLT with low surgical mortality in experienced centers provides the opportunities of optimizing the pre-transplant tumor conditions and timing of transplant to achieve better outcomes.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
Immunoglobulin A multiple myeloma with cutaneous involvement in a dog.
Mayer, Monique N; Kerr, Moira E; Grier, Candace K; Macdonald, Valerie S
2008-07-01
An 8-year-old rottweiler, diagnosed with multiple myeloma and multiple sites of cutaneous involvement, was treated with chemotherapy and radiation therapy. The diagnostic criteria for canine multiple myeloma, limitations of diagnostic testing for light chain proteinuria in dogs, and the role of radiation therapy in multiple myeloma patients is discussed.
Immunoglobulin A multiple myeloma with cutaneous involvement in a dog
Mayer, Monique N.; Kerr, Moira E.; Grier, Candace K.; MacDonald, Valerie S.
2008-01-01
An 8-year-old rottweiler, diagnosed with multiple myeloma and multiple sites of cutaneous involvement, was treated with chemotherapy and radiation therapy. The diagnostic criteria for canine multiple myeloma, limitations of diagnostic testing for light chain proteinuria in dogs, and the role of radiation therapy in multiple myeloma patients is discussed. PMID:18827847
Identification of the Criteria for Decision Making of Cut-Away Peatland Reuse
NASA Astrophysics Data System (ADS)
Padur, Kadi; Ilomets, Mati; Põder, Tõnis
2017-03-01
The total area of abandoned milled peatlands which need to be rehabilitated for sustainable land-use is nearly 10,000 ha in Estonia. According to the agreement between Estonia and the European Union, Estonia has to create suitable conditions for restoration of 2000 ha of abandoned cut-away peatlands by 2023. The decisions on rehabilitation of abandoned milled peatlands have so far relied on a limited knowledgebase with unestablished methodologies, thus the decision making process needs a significant improvement. This study aims to improve the methodology by identifying the criteria for optimal decision making to ensure sustainable land use planning after peat extraction. Therefore relevant environmental, social and economic restrictive and weighted comparison criteria, which assess reuse alternatives suitability for achieving the goal, is developed in cooperation with stakeholders. Restrictive criteria are arranged into a decision tree to help to determine the implementable reuse alternatives in various situations. Weighted comparison criteria are developed in cooperation with stakeholders to rank the reuse alternatives. The comparison criteria are organised hierarchically into a value tree. In the situation, where the selection of a suitable rehabilitation alternative for a specific milled peatland is going to be made, the weighted comparison criteria values need to be identified and the presented approach supports the optimal and transparent decision making. In addition to Estonian context the general results of the study could also be applied to a cut-away peatlands in other regions with need-based site-dependent modifications of criteria values and weights.
Identification of the Criteria for Decision Making of Cut-Away Peatland Reuse.
Padur, Kadi; Ilomets, Mati; Põder, Tõnis
2017-03-01
The total area of abandoned milled peatlands which need to be rehabilitated for sustainable land-use is nearly 10,000 ha in Estonia. According to the agreement between Estonia and the European Union, Estonia has to create suitable conditions for restoration of 2000 ha of abandoned cut-away peatlands by 2023. The decisions on rehabilitation of abandoned milled peatlands have so far relied on a limited knowledgebase with unestablished methodologies, thus the decision making process needs a significant improvement. This study aims to improve the methodology by identifying the criteria for optimal decision making to ensure sustainable land use planning after peat extraction. Therefore relevant environmental, social and economic restrictive and weighted comparison criteria, which assess reuse alternatives suitability for achieving the goal, is developed in cooperation with stakeholders. Restrictive criteria are arranged into a decision tree to help to determine the implementable reuse alternatives in various situations. Weighted comparison criteria are developed in cooperation with stakeholders to rank the reuse alternatives. The comparison criteria are organised hierarchically into a value tree. In the situation, where the selection of a suitable rehabilitation alternative for a specific milled peatland is going to be made, the weighted comparison criteria values need to be identified and the presented approach supports the optimal and transparent decision making. In addition to Estonian context the general results of the study could also be applied to a cut-away peatlands in other regions with need-based site-dependent modifications of criteria values and weights.
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
Ouyang, Xiaoguang; Guo, Fen
2018-04-01
Municipal wastewater discharge is widespread and one of the sources of coastal eutrophication, and is especially uncontrolled in developing and undeveloped coastal regions. Mangrove forests are natural filters of pollutants in wastewater. There are three paradigms of mangroves for municipal wastewater treatment and the selection of the optimal one is a multi-criteria decision-making problem. Combining intuitionistic fuzzy theory, the Fuzzy Delphi Method and the fuzzy analytical hierarchical process (AHP), this study develops an intuitionistic fuzzy AHP (IFAHP) method. For the Fuzzy Delphi Method, the judgments of experts and representatives on criterion weights are made by linguistic variables and quantified by intuitionistic fuzzy theory, which is also used to weight the importance of experts and representatives. This process generates the entropy weights of criteria, which are combined with indices values and weights to rank the alternatives by the fuzzy AHP method. The IFAHP method was used to select the optimal paradigm of mangroves for treating municipal wastewater. The entropy weights were entrained by the valid evaluation of 64 experts and representatives via online survey. Natural mangroves were found to be the optimal paradigm for municipal wastewater treatment. By assigning different weights to the criteria, sensitivity analysis shows that natural mangroves remain to be the optimal paradigm under most scenarios. This study stresses the importance of mangroves for wastewater treatment. Decision-makers need to contemplate mangrove reforestation projects, especially where mangroves are highly deforested but wastewater discharge is uncontrolled. The IFAHP method is expected to be applied in other multi-criteria decision-making cases. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimized pulses for the control of uncertain qubits
Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...
2012-05-18
The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses hasmore » calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate this criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, postfacto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.« less
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
Criteria for Authorship in Bioethics
Resnik, David B.; Master, Zubin
2011-01-01
Multiple authorship is becoming increasingly common in bioethics research. There are well-established criteria for authorship in empirical bioethics research but not for conceptual research. It is important to develop criteria for authorship in conceptual publications to prevent undeserved authorship and uphold standards of fairness and accountability. This article explores the issue of multiple authorship in bioethics and develops criteria for determining who should be an author on a conceptual publication in bioethics. Authorship in conceptual research should be based on contributing substantially to: (1) identifying a topic, problem, or issue to study; (2) reviewing and interpreting the relevant literature; (3) formulating, analyzing, and evaluating arguments that support one or more theses; (4) responding to objections and counterarguments; and (5) drafting the manuscript and approving the final version. Authors of conceptual publications should participate substantially in at least two of areas (1)–(5). PMID:21943265
Criteria for authorship in bioethics.
Resnik, David B; Master, Zubin
2011-10-01
Multiple authorship is becoming increasingly common in bioethics research. There are well-established criteria for authorship in empirical bioethics research but not for conceptual research. It is important to develop criteria for authorship in conceptual publications to prevent undeserved authorship and uphold standards of fairness and accountability. This article explores the issue of multiple authorship in bioethics and develops criteria for determining who should be an author on a conceptual publication in bioethics. Authorship in conceptual research should be based on contributing substantially to: (1) identifying a topic, problem, or issue to study; (2) reviewing and interpreting the relevant literature; (3) formulating, analyzing, and evaluating arguments that support one or more theses; (4) responding to objections and counterarguments; and (5) drafting the manuscript. Authors of conceptual publications should participate substantially in at least two of areas (1)-(5) and also approve the final version. [corrected].
Pharmacokinetic de-risking tools for selection of monoclonal antibody lead candidates
Dostalek, Miroslav; Prueksaritanont, Thomayant; Kelley, Robert F.
2017-01-01
ABSTRACT Pharmacokinetic studies play an important role in all stages of drug discovery and development. Recent advancements in the tools for discovery and optimization of therapeutic proteins have created an abundance of candidates that may fulfill target product profile criteria. Implementing a set of in silico, small scale in vitro and in vivo tools can help to identify a clinical lead molecule with promising properties at the early stages of drug discovery, thus reducing the labor and cost in advancing multiple candidates toward clinical development. In this review, we describe tools that should be considered during drug discovery, and discuss approaches that could be included in the pharmacokinetic screening part of the lead candidate generation process to de-risk unexpected pharmacokinetic behaviors of Fc-based therapeutic proteins, with an emphasis on monoclonal antibodies. PMID:28463063
NASA Astrophysics Data System (ADS)
Wales, David J.
2018-04-01
Recent advances in the potential energy landscapes approach are highlighted, including both theoretical and computational contributions. Treating the high dimensionality of molecular and condensed matter systems of contemporary interest is important for understanding how emergent properties are encoded in the landscape and for calculating these properties while faithfully representing barriers between different morphologies. The pathways characterized in full dimensionality, which are used to construct kinetic transition networks, may prove useful in guiding such calculations. The energy landscape perspective has also produced new procedures for structure prediction and analysis of thermodynamic properties. Basin-hopping global optimization, with alternative acceptance criteria and generalizations to multiple metric spaces, has been used to treat systems ranging from biomolecules to nanoalloy clusters and condensed matter. This review also illustrates how all this methodology, developed in the context of chemical physics, can be transferred to landscapes defined by cost functions associated with machine learning.
Deng, Zhimin; Tian, Tianhai
2014-07-29
The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
Li, Wei; Zhang, Min; Wang, Mingyu; Han, Zhantao; Liu, Jiankai; Chen, Zhezhou; Liu, Bo; Yan, Yan; Liu, Zhu
2018-06-01
Brownfield sites pollution and remediation is an urgent environmental issue worldwide. The screening and assessment of remedial alternatives is especially complex owing to its multiple criteria that involves technique, economy, and policy. To help the decision-makers selecting the remedial alternatives efficiently, the criteria framework conducted by the U.S. EPA is improved and a comprehensive method that integrates multiple criteria decision analysis (MCDA) with numerical simulation is conducted in this paper. The criteria framework is modified and classified into three categories: qualitative, semi-quantitative, and quantitative criteria, MCDA method, AHP-PROMETHEE (analytical hierarchy process-preference ranking organization method for enrichment evaluation) is used to determine the priority ranking of the remedial alternatives and the solute transport simulation is conducted to assess the remedial efficiency. A case study was present to demonstrate the screening method in a brownfield site in Cangzhou, northern China. The results show that the systematic method provides a reliable way to quantify the priority of the remedial alternatives.
Optimization of structures on the basis of fracture mechanics and reliability criteria
NASA Technical Reports Server (NTRS)
Heer, E.; Yang, J. N.
1973-01-01
Systematic summary of factors which are involved in optimization of given structural configuration is part of report resulting from study of analysis of objective function. Predicted reliability of performance of finished structure is sharply dependent upon results of coupon tests. Optimization analysis developed by study also involves expected cost of proof testing.
29 CFR 1926.1432 - Multiple-crane/derrick lifts-supplemental requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 8 2011-07-01 2011-07-01 false Multiple-crane/derrick lifts-supplemental requirements... Cranes and Derricks in Construction § 1926.1432 Multiple-crane/derrick lifts—supplemental requirements... implementation. (1) The multiple-crane/derrick lift must be directed by a person who meets the criteria for both...
Multi Dimensional Honey Bee Foraging Algorithm Based on Optimal Energy Consumption
NASA Astrophysics Data System (ADS)
Saritha, R.; Vinod Chandra, S. S.
2017-10-01
In this paper a new nature inspired algorithm is proposed based on natural foraging behavior of multi-dimensional honey bee colonies. This method handles issues that arise when food is shared from multiple sources by multiple swarms at multiple destinations. The self organizing nature of natural honey bee swarms in multiple colonies is based on the principle of energy consumption. Swarms of multiple colonies select a food source to optimally fulfill the requirements of its colonies. This is based on the energy requirement for transporting food between a source and destination. Minimum use of energy leads to maximizing profit in each colony. The mathematical model proposed here is based on this principle. This has been successfully evaluated by applying it on multi-objective transportation problem for optimizing cost and time. The algorithm optimizes the needs at each destination in linear time.
Response assessment challenges in clinical trials of gliomas.
Wen, Patrick Y; Norden, Andrew D; Drappatz, Jan; Quant, Eudocia
2010-01-01
Accurate, reproducible criteria for determining tumor response and progression after therapy are critical for optimal patient care and effective evaluation of novel therapeutic agents. Currently, the most widely used criteria for determining treatment response in gliomas is based on two-dimensional tumor measurements using neuroimaging studies (Macdonald criteria). In recent years, the limitation of these criteria, which only address the contrast-enhancing component of the tumor, have become increasingly apparent. This review discusses challenges that have emerged in assessing response in patients with gliomas and approaches being introduced to address them.
Neural networks: What non-linearity to choose
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Quintana, Chris
1991-01-01
Neural networks are now one of the most successful learning formalisms. Neurons transform inputs (x(sub 1),...,x(sub n)) into an output f(w(sub 1)x(sub 1) + ... + w(sub n)x(sub n)), where f is a non-linear function and w, are adjustable weights. What f to choose? Usually the logistic function is chosen, but sometimes the use of different functions improves the practical efficiency of the network. The problem of choosing f as a mathematical optimization problem is formulated and solved under different optimality criteria. As a result, a list of functions f that are optimal under these criteria are determined. This list includes both the functions that were empirically proved to be the best for some problems, and some new functions that may be worth trying.
A multiobjective optimization framework for multicontaminant industrial water network design.
Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge
2011-07-01
The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented and then an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network according to a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.
JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.
Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter
2016-01-01
Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem are hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
Airborne data measurement system errors reduction through state estimation and control optimization
NASA Astrophysics Data System (ADS)
Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.
2018-02-01
The paper discusses the problem of airborne data measurement system errors reduction through state estimation and control optimization. The approaches are proposed based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of criteria is explained, the mathematical description and the sequence of steps for each criterion application is shown. The formula is given for airborne data measurement system state vector posterior estimation based for systems with structure variations.
NASA Astrophysics Data System (ADS)
Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.
2008-10-01
Multi-period differences of technical and financial performances are analysed by comparing five North African railways over the period (1990-2004). A first approach is based on the Malmquist DEA TFP index for measuring the total factors productivity change, decomposed into technical efficiency change and technological changes. A multiple criteria analysis is also performed using the PROMETHEE II method and the software ARGOS. These methods provide complementary detailed information, especially by discriminating the technological and management progresses by Malmquist and the two dimensions of performance by Promethee: that are the service to the community and the enterprises performances, often in conflict.
Control of mechanical systems by the mixed "time and expenditure" criterion
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
The optimal controlled motion of a mechanical system, that is determined by the linear system ODE with constant coefficients and piecewise constant control components, is considered. The number of control switching points and the heights of control steps are considered as preset. The optimized functional is combination of classical time criteria and "Expenditure criteria", that is equal to the total area of all steps of all control components. In the absence of control, the solution of the system is equal to the sum of components (frequency components) corresponding to different eigenvalues of the matrix of the ODE system. Admissible controls are those that turn to zero (at a non predetermined time moment) the previously chosen frequency components of the solution. An algorithm for the finding of control switching points, based on the necessary minimum conditions for mixed criteria, is proposed.
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm 2 fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
Parsh, Jessica; Seth, Milan; Briguori, Carlo; Grossman, Paul; Solomon, Richard; Gurm, Hitinder S
2016-05-01
It is unknown which definition of contrast-induced acute kidney injury (CI-AKI) in the setting of percutaneous coronary interventions is best associated with inpatient mortality and whether this association is stable across patients with various preprocedural serum creatinine (SCr) values. We applied logistic regression models to multiple CI-AKI definitions used by the Kidney Disease Improving Global Outcomes guidelines and previously published studies to examine the impact of preprocedural SCr on a candidate definition's correlation with the adverse outcome of inpatient mortality. We used likelihood ratio tests to examine candidate definitions and identify those where association with inpatient mortality remained constant regardless of preprocedural SCr. These definitions were assessed for specificity, sensitivity, and positive and negative predictive values to identify an optimal definition. Our study cohort included 119,554 patients who underwent percutaneous coronary intervention in Michigan between 2010 and 2014. Most commonly used definitions were not associated with inpatient mortality in a constant fashion across various preprocedural SCr values. Of the 266 candidate definitions examined, 16 definition's association with inpatient mortality was not significantly altered by preprocedural SCr. Contrast-induced acute kidney injury defined as an absolute increase of SCr ≥0.3 mg/dL and a relative SCr increase ≥50% was selected as the optimal candidate using Perkins and Shisterman decision theoretic optimality criteria and was highly predictive of and specific for inpatient mortality. We identified the optimal definition for CI-AKI to be an absolute increase in SCr ≥0.3 mg/dL and a relative SCr increase ≥50%. Further work is needed to validate this definition in independent studies and to establish its utility for clinical trials and quality improvement efforts. Copyright © 2016 Elsevier Inc. All rights reserved.
Peixoto, Sara; Abreu, Pedro
2016-11-01
Clinically isolated syndrome may be the first manifestation of multiple sclerosis, a chronic demyelinating disease of the central nervous system, and it is defined by a single clinical episode suggestive of demyelination. However, patients with this syndrome, even with long term follow up, may not develop new symptoms or demyelinating lesions that fulfils multiple sclerosis diagnostic criteria. We reviewed, in clinically isolated syndrome, what are the best magnetic resonance imaging findings that may predict its conversion to multiple sclerosis. A search was made in the PubMed database for papers published between January 2010 and June 2015 using the following terms: 'clinically isolated syndrome', 'cis', 'multiple sclerosis', 'magnetic resonance imaging', 'magnetic resonance' and 'mri'. In this review, the following conventional magnetic resonance imaging abnormalities found in literature were included: lesion load, lesion location, Barkhof's criteria and brain atrophy related features. The non conventional magnetic resonance imaging techniques studied were double inversion recovery, magnetization transfer imaging, spectroscopy and diffusion tensor imaging. The number and location of demyelinating lesions have a clear role in predicting clinically isolated syndrome conversion to multiple sclerosis. On the other hand, more data are needed to confirm the ability to predict this disease development of non conventional techniques and remaining neuroimaging abnormalities. In forthcoming years, in addition to the established predictive value of the above mentioned neuroimaging abnormalities, different clinically isolated syndrome neuroradiological findings may be considered in multiple sclerosis diagnostic criteria and/or change its treatment recommendations.
Acoustic design criteria in a general system for structural optimization
NASA Technical Reports Server (NTRS)
Brama, Torsten
1990-01-01
Passenger comfort is of great importance in most transport vehicles. For instance, in the new generation of regional turboprop aircraft, a low noise level is vital to be competitive on the market. The possibilities to predict noise levels analytically has improved rapidly in recent years. This will make it possible to take acoustic design criteria into account in early project stages. The development of the ASKA FE-system to include also acoustic analysis has been carried out at Saab Aircraft Division and the Aeronautical Research Institute of Sweden in a joint project. New finite elements have been developed to model the free fluid, porous damping materials, and the interaction between the fluid and structural degrees of freedom. The FE approach to the acoustic analysis is best suited for lower frequencies up to a few hundred Hz. For accurate analysis of interior cabin noise, large 3-D FE-models are built, but 2-D models are also considered to be useful for parametric studies and optimization. The interest is here focused on the introduction of an acoustic design criteria in the general structural optimization system OPTSYS available at the Saab Aircraft Division. The first implementation addresses a somewhat limited class of problems. The problems solved are formulated: Minimize the structural weight by modifying the dimensions of the structure while keeping the noise level in the cavity and other structural design criteria within specified limits.
NASA Technical Reports Server (NTRS)
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan
2015-01-01
To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
Fuzzy decision-making framework for treatment selection based on the combined QUALIFLEX-TODIM method
NASA Astrophysics Data System (ADS)
Ji, Pu; Zhang, Hong-yu; Wang, Jian-qiang
2017-10-01
Treatment selection is a multi-criteria decision-making problem of significant concern in the medical field. In this study, a fuzzy decision-making framework is established for treatment selection. The framework mitigates information loss by introducing single-valued trapezoidal neutrosophic numbers to denote evaluation information. Treatment selection has multiple criteria that remarkably exceed the alternatives. In consideration of this characteristic, the framework utilises the idea of the qualitative flexible multiple criteria method. Furthermore, it considers the risk-averse behaviour of a decision maker by employing a concordance index based on TODIM (an acronym in Portuguese of interactive and multi-criteria decision-making) method. A sensitivity analysis is performed to illustrate the robustness of the framework. Finally, a comparative analysis is conducted to compare the framework with several extant methods. Results indicate the advantages of the framework and its better performance compared with the extant methods.
NASA Technical Reports Server (NTRS)
Park, Junhong; Palumbo, Daniel L.
2004-01-01
The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest and maximum tuning is limited to invariant points when based on den Hartog's invariant points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties that include effects from boundary conditions and position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can applied to frequency ranges in which multiple modes have effects.
Clinical Perspective of 3D Total Body Photography for Early Detection and Screening of Melanoma.
Rayner, Jenna E; Laino, Antonia M; Nufer, Kaitlin L; Adams, Laura; Raphael, Anthony P; Menzies, Scott W; Soyer, H Peter
2018-01-01
Melanoma incidence continues to increase across many populations globally and there is significant mortality associated with advanced disease. However, if detected early, patients have a very promising prognosis. The methods that have been utilized for early detection include clinician and patient skin examinations, dermoscopy (static and sequential imaging), and total body photography via 2D imaging. Total body photography has recently witnessed an evolution from 2D imaging with the ability to now create a 3D representation of the patient linked with dermoscopy images of individual lesions. 3D total body photography is a particularly beneficial screening tool for patients at high risk due to their personal or family history or those with multiple dysplastic naevi-the latter can make monitoring especially difficult without the assistance of technology. In this perspective, we discuss clinical examples utilizing 3D total body photography, associated advantages and limitations, and future directions of the technology. The optimal system for melanoma screening should improve diagnostic accuracy, be time and cost efficient, and accessible to patients across all demographic and socioeconomic groups. 3D total body photography has the potential to address these criteria and, most importantly, optimize crucial early detection.
Motion-related resource allocation in dynamic wireless visual sensor network environments.
Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E
2014-01-01
This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
Content and Usability Evaluation of Patient Oriented Drug-Drug Interaction Websites.
Adam, Terrence J; Vang, Joseph
Drug-Drug Interactions (DDI) are an important source of preventable adverse drug events and a common reason for hospitalization among patients on multiple drug therapy regimens. DDI information systems are important patient safety tools with the capacity to identify and warn health professionals of clinically significant DDI risk. While substantial research has been completed on DDI information systems in professional settings such as community, hospital, and independent pharmacies; there has been limited research on DDI systems offered through online websites directly for use by ambulatory patients. The focus of this project is to test patient oriented website capacity to correctly identify drug interactions among well established and clinically significant medication combinations and convey clinical risk data to patients. The patient education capability was assessed by evaluating website Information Capacity, Patient Usability and Readability. The study results indicate that the majority of websites identified which met the inclusion and exclusion criteria operated similarly, but vary in risk severity assessment and are not optimally patient oriented to effectively deliver risk information. The limited quality of information and complex medical term content complicate DDI risk data conveyance and the sites may not provide optimal information delivery to allow medication consumers to understand and manage their medication regimens.
A Framework for Evaluation and Optimization of Relevance and Novelty-Based Retrieval
ERIC Educational Resources Information Center
Lad, Abhimanyu
2011-01-01
There has been growing interest in building and optimizing retrieval systems with respect to relevance and novelty of information, which together more realistically reflect the usefulness of a system as perceived by the user. How to combine these criteria into a single metric that can be used to measure as well as optimize retrieval systems is an…
Design enhancement tools in MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Wallerstein, D. V.
1984-01-01
Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is to be taken as MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several plances: in the optimizer itself, in a user written code, or as part of the MSC/EOS (Engineering Operating System) MSC/EOS currently under development. Since several excellent optimization codes exist and since they require such very specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).
Ngounou, Guy Merlin; Kom, Martin
2014-12-01
In this paper we present an instrumentation amplifier with discrete elements and optimized noise for the amplification of very low signals. In amplifying signals of very weak amplitude, the noise can completely absorb these signals if the used amplifier does not present the optimal guarantee to minimize the noise. Based on related research and re-viewing of recent patents Journal of Medical Systems, 30:205-209, 2006, we suggest an approach of noise reduction in amplification much more thoroughly than re-viewing of recent patents and we deduce from it the general criteria necessary and essential to achieve this optimization. The comparison of these criteria with the provisions adopted in practice leads to the inadequacy of conventional amplifiers for effective noise reduction. The amplifier we propose is an instrumentation amplifier with active negative feedback and optimized noise for the amplification of signals with very low amplitude. The application of this method in the case of electro cardio graphic signals (ECG) provides simulation results fully in line with forecasts.
Malakooti, Behnam; Yang, Ziyong
2004-02-01
In many real-world problems, the range of consequences of different alternatives are considerably different. In addition, sometimes, selection of a group of alternatives (instead of only one best alternative) is necessary. Traditional decision making approaches treat the set of alternatives with the same method of analysis and selection. In this paper, we propose clustering alternatives into different groups so that different methods of analysis, selection, and implementation for each group can be applied. As an example, consider the selection of a group of functions (or tasks) to be processed by a group of processors. The set of tasks can be grouped according to their similar criteria, and hence, each cluster of tasks to be processed by a processor. The selection of the best alternative for each clustered group can be performed using existing methods; however, the process of selecting groups is different than the process of selecting alternatives within a group. We develop theories and procedures for clustering discrete multiple criteria alternatives. We also demonstrate how the set of alternatives is clustered into mutually exclusive groups based on 1) similar features among alternatives; 2) ideal (or most representative) alternatives given by the decision maker; and 3) other preferential information of the decision maker. The clustering of multiple criteria alternatives also has the following advantages. 1) It decreases the set of alternatives to be considered by the decision maker (for example, different decision makers are assigned to different groups of alternatives). 2) It decreases the number of criteria. 3) It may provide a different approach for analyzing multiple decision makers problems. Each decision maker may cluster alternatives differently, and hence, clustering of alternatives may provide a basis for negotiation. The developed approach is applicable for solving a class of telecommunication networks problems where a set of objects (such as routers, processors, or intelligent autonomous vehicles) are to be clustered into similar groups. Objects are clustered based on several criteria and the decision maker's preferences.
Aziz, Atiqullah; Gierth, Michael; Rink, Michael; Schmid, Marianne; Chun, Felix K; Dahlem, Roland; Roghmann, Florian; Palisaar, Rein-Jüri; Noldus, Joachim; Ellinger, Jörg; Müller, Stefan C; Pycha, Armin; Martini, Thomas; Bolenz, Christian; Moritz, Rudolf; Herrmann, Edwin; Keck, Bastian; Wullich, Bernd; Mayr, Roman; Fritsche, Hans-Martin; Burger, Maximilian; Bastian, Patrick J; Seitz, Christian; Brookman-May, Sabine; Xylinas, Evanguelos; Shariat, Shahrokh F; Fisch, Margit; May, Matthias
2015-12-01
Radical cystectomy (RC) for urothelial carcinoma of the bladder (UCB) is associated with heterogeneous functional and oncological outcomes. The aim of this study was to generate trifecta and pentafecta criteria to optimize outcome reporting after RC. We interviewed 50 experts to consider a virtual group of patients (age ≤ 75 years, ASA score ≤ 3) undergoing RC for a cT2 UCB and a final histology of ≤pT3pN0M0. A ranking was generated for the three and five criteria with the highest sum score. The criteria were applied to the Prospective Multicenter Radical Cystectomy Series 2011. Multivariable binary logistic regression analyses were used to evaluate the impact of clinical and histopathological parameters on meeting the top selected criteria. The criteria with the highest sum score were negative soft tissue surgical margin, lymph node (LN) dissection of at least 16 LNs, no complications according to Clavien-Dindo grade 3-5 within 90 days after RC, treatment-free time between TUR-BT with detection of muscle-invasive UCB and RC <3 months and the absence of local UCB-recurrence in the pelvis ≤12 months. The first three criteria formed trifecta, and all five criteria pentafecta. A total of 334 patients qualified for final analysis, whereas 35.3 and 29 % met trifecta and pentafecta criteria, respectively. Multivariable analyses showed that the relative probability of meeting trifecta and pentafecta decreases with higher age (3.2 %, p = 0.043 and 3.3 %, p = 0.042) per year, respectively. Trifecta and pentafecta incorporate essential criteria in terms of outcome reporting and might be considered for the improvement of standardized quality assessment after RC for UCB.
NASA Astrophysics Data System (ADS)
Ghaderi, F.; Pahlavani, P.
2015-12-01
A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria in a way this route can be a combination of public and private transportation modes. In this paper, the simulate annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. In this regard, firstly, the effective criteria that are significant for users in their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers that aids the users to consider their uncertainty in pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm were used for determining an optimal route from an origin to a destination. One of the most important problems in a meta-heuristic algorithm is trapping in local minima. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus were considered for moving between nodes. Also, the fare, the time, the user's bother, and the length of the path were considered as effective criteria for solving the problem. The proposed model was implemented in an area in centre of Tehran in a GUI MATLAB programming language. The results showed a high efficiency and speed of the proposed algorithm that support our analyses.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
Comparison of performance criteria for evaluating stake test data
Stan T. Lebow; Patricia K. Lebow; Grant T. Kirker
2017-01-01
Stake tests are a critical part of evaluating durability of wood in ground-contact, but there is a lack of criteria for interpreting stake test results. This paper discusses criteria that might be used to determine if short term ratings indicate satisfactory longterm performance. Ratings of 19 by 19 mm stakes from multiple plots in the Harrison Experimental Forest,...
Kroenke, Kurt; Krebs, Erin; Wu, Jingwei; Bair, Matthew J; Damush, Teresa; Chumbler, Neale; York, Tish; Weitlauf, Sharon; McCalley, Stephanie; Evans, Erica; Barnd, Jeffrey; Yu, Zhangsheng
2013-03-01
Pain is the most common physical symptom in primary care, accounting for an enormous burden in terms of patient suffering, quality of life, work and social disability, and health care and societal costs. Although collaborative care interventions are well-established for conditions such as depression, fewer systems-based interventions have been tested for chronic pain. This paper describes the study design and baseline characteristics of the enrolled sample for the Stepped Care to Optimize Pain care Effectiveness (SCOPE) study, a randomized clinical effectiveness trial conducted in five primary care clinics. SCOPE has enrolled 250 primary care veterans with persistent (3 months or longer) musculoskeletal pain of moderate severity and randomized them to either the stepped care intervention or usual care control group. Using a telemedicine collaborative care approach, the intervention couples automated symptom monitoring with a telephone-based, nurse care manager/physician pain specialist team to treat pain. The goal is to optimize analgesic management using a stepped care approach to drug selection, symptom monitoring, dose adjustment, and switching or adding medications. All subjects undergo comprehensive outcome assessments at baseline, 1, 3, 6 and 12 months by interviewers blinded to treatment group. The primary outcome is pain severity/disability, and secondary outcomes include pain beliefs and behaviors, psychological functioning, health-related quality of life and treatment satisfaction. Innovations of SCOPE include optimized analgesic management (including a stepped care approach, opioid risk stratification, and criteria-based medication adjustment), automated monitoring, and centralized care management that can cover multiple primary care practices. Published by Elsevier Inc.
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) MULTIPLE-USE MANAGEMENT CLASSIFICATIONS Criteria for...) Further the objectives of Federal natural resource legislation directed, among other things towards: (1...). (4) Realization of the beneficial utilization of the public lands through occupancy leases, such as...
Code of Federal Regulations, 2011 CFR
2011-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) MULTIPLE-USE MANAGEMENT CLASSIFICATIONS Criteria for...) Further the objectives of Federal natural resource legislation directed, among other things towards: (1...). (4) Realization of the beneficial utilization of the public lands through occupancy leases, such as...
Code of Federal Regulations, 2013 CFR
2013-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) MULTIPLE-USE MANAGEMENT CLASSIFICATIONS Criteria for...) Further the objectives of Federal natural resource legislation directed, among other things towards: (1...). (4) Realization of the beneficial utilization of the public lands through occupancy leases, such as...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) MULTIPLE-USE MANAGEMENT CLASSIFICATIONS Criteria for...) Further the objectives of Federal natural resource legislation directed, among other things towards: (1...). (4) Realization of the beneficial utilization of the public lands through occupancy leases, such as...
Optimizing Reasonableness, Critical Thinking, and Cyberspace
ERIC Educational Resources Information Center
Ikuenobe, Polycarp
2003-01-01
In this paper, the author argues that the quantity, superabundance of information, easy availability, and quick access to information in cyberspace may engender critical thinking and the optimization of reasonableness. This point is different from, but presupposes, the commonplace view that critical thinking abilities, criteria, processes, and…
Tignanelli, Christopher J; Vander Kolk, Wayne E; Mikhail, Judy N; Delano, Matthew J; Hemmila, Mark R
2017-11-21
The appropriate triage of acutely injured patients within a trauma system is associated with improved rates of mortality and optimal resource utilization. The American College of Surgeons Committee on Trauma (ACS-COT) put forward six minimum criteria (ACS-6) for full trauma team activation (TTA). We hypothesized that ACS-COT verified trauma center compliance with these criteria is associated with low under-triage rates and improved overall mortality. Data from a state-wide collaborative quality initiative was utilized. We used data collected from 2014 through 2016 at 29 ACS verified level 1 and 2 trauma centers. Inclusion criteria were: adult patients (≥16 years) and ISS ≥5. Quantitative data existed to analyze four of the ACS-6 criteria (ED SBP≤90 mmHg, respiratory compromise/intubation, central GSW, and GCS<9). Patients were considered to be under-triaged if they had major trauma (ISS>15) and did not receive a full TTA. 51,792 patients were included in the study. Compliance with ACS-6 minimum criteria for full TTA varied from 51% to 82%. Presence of any ACS-6 criteria was associated with a high intervention rate and significant risk of mortality (OR 16.7, 95% CI 15.2-18.3, p<0.001). Of the 1004 deaths that were not a full activation, 433 (43%) were classified as under-triaged, and 301 (30%) had at least one ACS-6 criteria present. Under-triaged patients with any ACS-6 criteria were more likely to die than those who were not under-triaged (30% vs 21%, p=0.001). GCS<9 and need for emergent intubation were the ACS-6 criteria most frequently associated with under-triage mortality. Compliance with ACS-COT minimum criteria for full TTA remains sub-optimal and undertriage is associated with increased mortality. This data suggests that the most efficient quality improvement measure around triage should be ensuring compliance with the ACS-6 criteria. This study suggests that practice pattern modification to more strictly adhere to the minimum ACS-COT criteria for full TTA will save lives. Diagnostic Tests or Criteria, Level III.
NASA Astrophysics Data System (ADS)
Harou, J. J.; Hurford, A.; Geressu, R. T.
2015-12-01
Many of the world's multi-reservoir water resource systems are being considered for further development of hydropower and irrigation aiming to meet economic, political and ecological goals. Complex river basins serve many needs so how should the different proposed groupings of reservoirs and their operations be evaluated? How should uncertainty about future supply and demand conditions be factored in? What reservoir designs can meet multiple goals and perform robustly in a context of global change? We propose an optimized multi-criteria screening approach to identify best performing designs, i.e., the selection, size and operating rules of new reservoirs within multi-reservoir systems in a context of deeply uncertain change. Reservoir release operating rules and storage sizes are optimized concurrently for each separate infrastructure design under consideration across many scenarios representing plausible future conditions. Outputs reveal system trade-offs using multi-dimensional scatter plots where each point represents an approximately Pareto-optimal design. The method is applied to proposed Blue Nile River reservoirs in Ethiopia, where trade-offs between capital costs, total and firm energy output, aggregate storage and downstream irrigation and energy provision for the best performing designs are evaluated. The impact of filling period for large reservoirs is considered in a context of hydrological uncertainty. The approach is also applied to the Koshi basin in Nepal where combinations of hydropower storage and run-of-river dams are being considered for investment. We show searching for investment portfolios that meet multiple objectives provides stakeholders with a rich view on the trade-offs inherent in the nexus and how different investment bundles perform differently under plausible futures. Both case-studies show how the proposed approach helps explore and understand the implications of investing in new dams in a global change context.
NASA Astrophysics Data System (ADS)
Lavrinenko, S. V.; Polikarpov, P. I.
2017-11-01
The nuclear industry is one of the most important and high-tech spheres of human activity in Russia. The main cause of accidents in the nuclear industry is the human factor. In this connection, the need to constantly analyze the system of training of specialists and its optimization in order to improve safety at nuclear industry enterprises. To do this, you must analyze the international experience in the field of training in the field of nuclear energy leading countries. Based on the analysis criteria have been formulated to optimize the educational process of training specialists for the nuclear power industry and test their effectiveness. The most effective and promising is the introduction of modern information technologies of training of students, such as real-time simulators, electronic educational resources, etc.
Assessment of Trading Partners for China's Rare Earth Exports Using a Decision Analytic Approach
He, Chunyan; Lei, Yalin; Ge, Jianping
2014-01-01
Chinese rare earth export policies currently result in accelerating its depletion. Thus adopting an optimal export trade selection strategy is crucial to determining and ultimately identifying the ideal trading partners. This paper introduces a multi-attribute decision-making methodology which is then used to select the optimal trading partner. In the method, an evaluation criteria system is established to assess the seven top trading partners based on three dimensions: political relationships, economic benefits and industrial security. Specifically, a simple additive weighing model derived from an additive utility function is utilized to calculate, rank and select alternatives. Results show that Japan would be the optimal trading partner for Chinese rare earths. The criteria evaluation method of trading partners for China's rare earth exports provides the Chinese government with a tool to enhance rare earth industrial policies. PMID:25051534
Assessment of trading partners for China's rare earth exports using a decision analytic approach.
He, Chunyan; Lei, Yalin; Ge, Jianping
2014-01-01
Chinese rare earth export policies currently result in accelerating its depletion. Thus adopting an optimal export trade selection strategy is crucial to determining and ultimately identifying the ideal trading partners. This paper introduces a multi-attribute decision-making methodology which is then used to select the optimal trading partner. In the method, an evaluation criteria system is established to assess the seven top trading partners based on three dimensions: political relationships, economic benefits and industrial security. Specifically, a simple additive weighing model derived from an additive utility function is utilized to calculate, rank and select alternatives. Results show that Japan would be the optimal trading partner for Chinese rare earths. The criteria evaluation method of trading partners for China's rare earth exports provides the Chinese government with a tool to enhance rare earth industrial policies.
Jiao, S; Tiezzi, F; Huang, Y; Gray, K A; Maltecca, C
2016-02-01
Obtaining accurate individual feed intake records is the key first step in achieving genetic progress toward more efficient nutrient utilization in pigs. Feed intake records collected by electronic feeding systems contain errors (erroneous and abnormal values exceeding certain cutoff criteria), which are due to feeder malfunction or animal-feeder interaction. In this study, we examined the use of a novel data-editing strategy involving multiple imputation to minimize the impact of errors and missing values on the quality of feed intake data collected by an electronic feeding system. Accuracy of feed intake data adjustment obtained from the conventional linear mixed model (LMM) approach was compared with 2 alternative implementations of multiple imputation by chained equation, denoted as MI (multiple imputation) and MICE (multiple imputation by chained equation). The 3 methods were compared under 3 scenarios, where 5, 10, and 20% feed intake error rates were simulated. Each of the scenarios was replicated 5 times. Accuracy of the alternative error adjustment was measured as the correlation between the true daily feed intake (DFI; daily feed intake in the testing period) or true ADFI (the mean DFI across testing period) and the adjusted DFI or adjusted ADFI. In the editing process, error cutoff criteria are used to define if a feed intake visit contains errors. To investigate the possibility that the error cutoff criteria may affect any of the 3 methods, the simulation was repeated with 2 alternative error cutoff values. Multiple imputation methods outperformed the LMM approach in all scenarios with mean accuracies of 96.7, 93.5, and 90.2% obtained with MI and 96.8, 94.4, and 90.1% obtained with MICE compared with 91.0, 82.6, and 68.7% using LMM for DFI. Similar results were obtained for ADFI. Furthermore, multiple imputation methods consistently performed better than LMM regardless of the cutoff criteria applied to define errors. In conclusion, multiple imputation is proposed as a more accurate and flexible method for error adjustments in feed intake data collected by electronic feeders.
Intelligent fault recognition strategy based on adaptive optimized multiple centers
NASA Astrophysics Data System (ADS)
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For the recognition principle based optimized single center, one important issue is that the data with nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes the data sets with nonlinear separatrix by the multiple centers. Meanwhile, the priority levels are introduced into the multi-objective optimization, including recognition accuracy, the quantity of optimized centers, and distance relationship. According to the characteristics of various data, the priority levels are adjusted to ensure the quantity of optimized centers adaptively and to keep the original accuracy. The proposed method is compared with other methods, including support vector machine (SVM), neural network, and Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on different distribution characteristics of data.
Umehara, Yutaka; Umehara, Minoru; Tokura, Tomohisa; Yachi, Takafumi; Takahashi, Kenichi; Morita, Takayuki; Hakamada, Kenichi
2015-10-01
A 26-year-old woman presented to our department with a diagnosis of multiple nonfunctioning pancreatic neuroendocrine tumors. She had a family history of pheochromocytoma and a medical history of bilateral adrenalectomy for pheochromocytoma at the age of 25 years. During follow-up treatment for adrenal insufficiency after the surgery, highly enhanced tumors in the pancreas were detected on contrast-enhanced CT. Other examinations found that the patient did not satisfy the clinical criteria for von Hippel-Lindau (VHL) disease. Considering her age and risk of developing multiple heterotopic and heterochronous tumors, we performed a duodenum-preserving resection of the head of the pancreas and spleen-preserving resection of the tail of the pancreas with informed consent. The histopathological findings revealed that all of the tumors were NET G1. She underwent genetic testing postoperatively and was diagnosed with VHL disease. This diagnosis meant that we were able to create an optimal treatment plan for the patient. If a tumor predisposition syndrome is suspected, VHL disease should be borne in mind and genetic testing after genetic counseling should be duly considered.
Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro
2013-05-21
We developed a new software program, MRMPROBS, for widely targeted metabolomics by using the large-scale multiple reaction monitoring (MRM) mode. The strategy became increasingly popular for the simultaneous analysis of up to several hundred metabolites at high sensitivity, selectivity, and quantitative capability. However, the traditional method of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but is often subjective and makeshift work. Our program overcomes these problems by detecting and identifying metabolites automatically, by separating isomeric metabolites, and by removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. Our software program also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. For a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data available to any instrument or any experimental condition.
Fuzzy Linguistic Knowledge Based Behavior Extraction for Building Energy Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic
2013-08-01
Significant portion of world energy production is consumed by building Heating, Ventilation and Air Conditioning (HVAC) units. Thus along with occupant comfort, energy efficiency is also an important factor in HVAC control. Modern buildings use advanced Multiple Input Multiple Output (MIMO) control schemes to realize these goals. However, since the performance of HVAC units is dependent on many criteria including uncertainties in weather, number of occupants, and thermal state, the performance of current state of the art systems are sub-optimal. Furthermore, because of the large number of sensors in buildings, and the high frequency of data collection, large amount ofmore » information is available. Therefore, important behavior of buildings that compromise energy efficiency or occupant comfort is difficult to identify. This paper presents an easy to use and understandable framework for identifying such behavior. The presented framework uses human understandable knowledge-base to extract important behavior of buildings and present it to users via a graphical user interface. The presented framework was tested on a building in the Pacific Northwest and was shown to be able to identify important behavior that relates to energy efficiency and occupant comfort.« less
ACR appropriateness Criteria® second and third trimester bleeding.
Podrasky, Ann E; Javitt, Marcia C; Glanc, Phyllis; Dubinsky, Theodore; Harisinghani, Mukesh G; Harris, Robert D; Khati, Nadia J; Mitchell, Donald G; Pandharipande, Pari V; Pannu, Harpreet K; Shipp, Thomas D; Siegel, Cary Lynn; Simpson, Lynn; Wall, Darci J; Wong-You-Cheong, Jade J; Zelop, Carolyn M
2013-12-01
Vaginal bleeding occurring in the second or third trimesters of pregnancy can variably affect perinatal outcome, depending on whether it is minor (i.e. a single, mild episode) or major (heavy bleeding or multiple episodes.) Ultrasound is used to evaluate these patients. Sonographic findings may range from marginal subchorionic hematoma to placental abruption. Abnormal placentations such as placenta previa, placenta accreta and vasa previa require accurate diagnosis for clinical management. In cases of placenta accreta, magnetic resonance imaging is useful as an adjunct to ultrasound and is often appropriate for evaluation of the extent of placental invasiveness and potential involvement of adjacent structures. MRI is useful for preplanning for cases of complex delivery, which may necessitate a multi-disciplinary approach for optimal care.The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every two years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances where evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.
Formica, R N; Aeder, M; Boyle, G; Kucheryavaya, A; Stewart, D; Hirose, R; Mulligan, D
2016-03-01
The introduction of the Mayo End-Stage Liver Disease score into the Organ Procurement and Transplantation Network (OPTN) deceased donor liver allocation policy in 2002 has led to a significant increase in the number of simultaneous liver-kidney transplants in the United States. Despite multiple attempts, clinical science has not been able to reliably predict which liver candidates with renal insufficiency will recover renal function or need a concurrent kidney transplant. The problem facing the transplant community is that currently there are almost no medical criteria for candidacy for simultaneous liver-kidney allocation in the United States, and this lack of standardized rules and medical eligibility criteria for kidney allocation with a liver is counter to OPTN's Final Rule. Moreover, almost 50% of simultaneous liver-kidney organs come from a donor with a kidney donor profile index of ≤0.35. The kidneys from these donors could otherwise be allocated to pediatric recipients, young adults or prior organ donors. This paper presents the new OPTN and United Network of Organ Sharing simultaneous liver-kidney allocation policy, provides the supporting evidence and explains the rationale on which the policy was based. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
Optimal glass-ceramic structures: Components of giant mirror telescopes
NASA Technical Reports Server (NTRS)
Eschenauer, Hans A.
1990-01-01
Detailed investigations are carried out on optimal glass-ceramic mirror structures of terrestrial space technology (optical telescopes). In order to find an optimum design, a nonlinear multi-criteria optimization problem is formulated. 'Minimum deformation' at 'minimum weight' are selected as contradictory objectives, and a set of further constraints (quilting effect, optical faults etc.) is defined and included. A special result of the investigations is described.
Marsh, Kevin; IJzerman, Maarten; Thokala, Praveen; Baltussen, Rob; Boysen, Meindert; Kaló, Zoltán; Lönngren, Thomas; Mussen, Filip; Peacock, Stuart; Watkins, John; Devlin, Nancy
2016-01-01
Health care decisions are complex and involve confronting trade-offs between multiple, often conflicting objectives. Using structured, explicit approaches to decisions involving multiple criteria can improve the quality of decision making. A set of techniques, known under the collective heading, multiple criteria decision analysis (MCDA), are useful for this purpose. In 2014, ISPOR established an Emerging Good Practices Task Force. The task force's first report defined MCDA, provided examples of its use in health care, described the key steps, and provided an overview of the principal methods of MCDA. This second task force report provides emerging good-practice guidance on the implementation of MCDA to support health care decisions. The report includes: a checklist to support the design, implementation and review of an MCDA; guidance to support the implementation of the checklist; the order in which the steps should be implemented; illustrates how to incorporate budget constraints into an MCDA; provides an overview of the skills and resources, including available software, required to implement MCDA; and future research directions. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D
2010-01-01
Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
NASA Astrophysics Data System (ADS)
Koneva, M. S.; Rudenko, O. V.; Usatikov, S. V.; Bugaets, N. A.; Tereshchenko, I. V.
2018-05-01
To reduce the duration of the process and to ensure the microbiological purity of the germinated material, an improved method of germination has been developed based on the complex use of physical factors: electrochemically activated water (ECHA-water), electromagnetic field of extremely low frequencies (EMF ELF) with round-the-clock artificial illumination by LED lamps. The increase in the efficiency of the "numerical" technology for solving computational problems of parametric optimization of the technological process of hydroponic germination of wheat grains is considered. In this situation, the quality criteria are contradictory and part of them is given by implicit functions of many variables. A solution algorithm is offered without the construction of a Pareto set in which a relatively small number of elements of a set of alternatives is used to obtain a linear convolution of the criteria with given weights, normalized to their "ideal" values from the solution of the problems of single-criterion private optimizations. The use of the proposed mathematical models describing the processes of hydroponic germination of wheat grains made it possible to intensify the germination process and to shorten the time of obtaining wheat sprouts "Altayskaya 105" for 27 hours.
Incorporating Active Runway Crossings in Airport Departure Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2010-01-01
A mixed integer linear program is presented for deterministically scheduling departure and ar rival aircraft at airport runways. This method addresses different schemes of managing the departure queuing area by treating it as first-in-first-out queues or as a simple par king area where any available aircraft can take-off ir respective of its relative sequence with others. In addition, this method explicitly considers separation criteria between successive aircraft and also incorporates an optional prioritization scheme using time windows. Multiple objectives pertaining to throughput and system delay are used independently. Results indicate improvement over a basic first-come-first-serve rule in both system delay and throughput. Minimizing system delay results in small deviations from optimal throughput, whereas minimizing throughput results in large deviations in system delay. Enhancements for computational efficiency are also presented in the form of reformulating certain constraints and defining additional inequalities for better bounds.
Approach to proliferation risk assessment based on multiple objective analysis framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrianov, A.; Kuptsov, I.; Studgorodok 1, Obninsk, Kaluga region, 249030
2013-07-01
The approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows the taking into account of the specifics features of the national nuclear infrastructure, and possible proliferation strategies (motivations, intentions, and capabilities). 3 examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (high enriched uranium)production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materialsmore » circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.« less
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employ the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
Optimal Utilization of Donor Grafts With Extended Criteria
Cameron, Andrew M.; Ghobrial, R Mark; Yersiz, Hasan; Farmer, Douglas G.; Lipshutz, Gerald S.; Gordon, Sherilyn A.; Zimmerman, Michael; Hong, Johnny; Collins, Thomas E.; Gornbein, Jeffery; Amersi, Farin; Weaver, Michael; Cao, Carlos; Chen, Tony; Hiatt, Jonathan R.; Busuttil, Ronald W.
2006-01-01
Objective: Severely limited organ resources mandate maximum utilization of donor allografts for orthotopic liver transplantation (OLT). This work aimed to identify factors that impact survival outcomes for extended criteria donors (ECD) and developed an ECD scoring system to facilitate graft-recipient matching and optimize utilization of ECDs. Methods: Retrospective analysis of over 1000 primary adult OLTs at UCLA. Extended criteria (EC) considered included donor age (>55 years), donor hospital stay (>5 days), cold ischemia time (>10 hours), and warm ischemia time (>40 minutes). One point was assigned for each extended criterion. Cox proportional hazard regression model was used for multivariate analysis. Results: Of 1153 allografts considered in the study, 568 organs exhibited no extended criteria (0 score), while 429, 135 and 21 donor allografts exhibited an EC score of 1, 2 and 3, respectively. Overall 1-year patient survival rates were 88%, 82%, 77% and 48% for recipients with EC scores of 0, 1, 2 and 3 respectively (P < 0.001). Adjusting for recipient age and urgency at the time of transplantation, multivariate analysis identified an ascending mortality risk ratio of 1.4 and 1.8 compared to a score of 0 for an EC score of 1, and 2 (P < 0.01) respectively. In contrast, an EC score of 3 was associated with a mortality risk ratio of 4.5 (P < 0.001). Further, advanced recipient age linearly increased the death hazard ratio, while an urgent recipient status increased the risk ratio of death by 50%. Conclusions: Extended criteria donors can be scored using readily available parameters. Optimizing perioperative variables and matching ECD allografts to appropriately selected recipients are crucial to maintain acceptable outcomes and represent a preferable alternative to both high waiting list mortality and to a potentially futile transplant that utilizes an ECD for a critically ill recipient. PMID:16772778
Aung, Kyaw L; Donald, Emma; Ellison, Gillian; Bujac, Sarah; Fletcher, Lynn; Cantarini, Mireille; Brady, Ged; Orr, Maria; Clack, Glen; Ranson, Malcolm; Dive, Caroline; Hughes, Andrew
2014-05-01
BRAF mutation testing from circulating free DNA (cfDNA) using the amplification refractory mutation testing system (ARMS) holds potential as a surrogate for tumor mutation testing. Robust assay validation is needed to establish the optimal clinical matrix for measurement and cfDNA-specific mutation calling criteria. Plasma- and serum-derived cfDNA samples from 221 advanced melanoma patients were analyzed for BRAF c.1799T>A (p.V600E) mutation using ARMS in two stages in a blinded fashion. cfDNA-specific mutation calling criteria were defined in stage 1 and validated in stage 2. cfDNA concentrations in serum and plasma, and the sensitivities and specificities of BRAF mutation detection in these two clinical matrices were compared. Sensitivity of BRAF c.1799T>A (p.V600E) mutation detection in cfDNA was increased by using mutation calling criteria optimized for cfDNA (these criteria were adjusted from those used for archival tumor biopsies) without compromising specificity. Sensitivity of BRAF mutation detection in serum was 44% (95% CI, 35% to 53%) and in plasma 52% (95% CI, 43% to 61%). Specificity was 96% (95% CI, 90% to 99%) in both matrices. Serum contains significantly higher total cfDNA than plasma, whereas the proportion of tumor-derived mutant DNA was significantly higher in plasma. Using mutation calling criteria optimized for cfDNA improves sensitivity of BRAF c.1799T>A (p.V600E) mutation detection. The proportion of tumor-derived cfDNA in plasma was significantly higher than in serum. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Song, Jae Yeol; Chung, Eun-Sung
2017-04-01
This study developed a multi-criteria decision analysis framework to prioritize sites and types of low impact development (LID) practices. This framework was systemized as a web-based system coupled with the Storm Water Management Model (SWMM) from the Environmental Protection Agency (EPA). Using the technique for order of preference by similarity to ideal solution (TOPSIS), which is a type of multi-criteria decision-making (MCDM) method, multiple types and sites of designated LID practices are prioritized. This system is named the Water Management Prioritization Module (WMPM) and is an improved version of the Water Management Analysis Module (WMAM) that automatically generates and simulates multiple scenarios of LID design and planning parameters for a single LID type. WMPM can simultaneously determine the priority of multiple LID types and sites. In this study, an infiltration trench and permeable pavement were considered for multiple sub-catchments in South Korea to demonstrate the WMPM procedures. The TOPSIS method was manually incorporated to select the vulnerable target sub-catchments and to prioritize the LID planning scenarios for multiple types and sites considering socio-economic, hydrologic and physical-geometric factors. In this application, the Delphi method and entropy theory were used to determine the subjective and objective weights, respectively. Comparing the ranks derived by this system, two sub-catchments, S16 and S4, out of 18 were considered to be the most suitable places for installing an infiltration trench and porous pavement to reduce the peak and total flow, respectively, considering both socio-economic factors and hydrological effectiveness. WMPM can help policy-makers to objectively develop urban water plans for sustainable development. Keywords: Low Impact Development, Multi-Criteria Decision Analysis, SWMM, TOPSIS, Water Management Prioritization Module (WMPM)
2013-01-01
Background There have been few investigations evaluating the burden of malaria disease at district level in the Republic of Congo since the introduction of artemisinin-based combination therapies (ACTs). The main objective of this study was to document laboratory-confirmed cases of malaria using microscopy and/or rapid diagnostic tests (RDTs) in children and pregnant women attending selected health facilities in Brazzaville and Pointe Noire, the two main cities of the country. Secondly, P. falciparum genetic diversity and multiplicity of infection during the malaria transmission season of October 2011 to February 2012 in these areas were described. Methods Three and one health facilities were selected in Brazzaville and Pointe-Noire as sentinel sites for malaria surveillance. Children under 15 years of age and pregnant women were enrolled if study criteria were met and lab technicians used RDT and/or microscopy to diagnose malaria. In order to determine the multiplicity of infection, parasite DNA was extracted from RDT cassette and msp2 P.falciparum genotyped. Results Malaria prevalence among more than 3,000 children and 700 pregnant women ranged from 8 to 29%, and 8 to 24% respectively depending on health center locality. While health workers did not optimize use of RDTs, microscopy remained a reference diagnostic tool. Quality control of malaria diagnosis at the reference laboratory showed acceptable health centre performances. P. falciparum genetic diversity determination using msp2 gene marker ranged from 9 to 20 alleles and remains stable while multiplicity of infection (mean of 1.7clone/infected individual) and parasite densities in clinical isolates were lower than previously reported. Conclusions These findings are consistent with a reduction of malaria transmission in the two areas. This study raises the issue of targeted training for health workers and sustained availability of RDTs in order to improve quality of care through optimal use of RDTs. PMID:23409963
General Methodology for Designing Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Condon, Gerald; Ocampo, Cesar; Mathur, Ravishankar; Morcos, Fady; Senent, Juan; Williams, Jacob; Davis, Elizabeth C.
2012-01-01
A methodology for designing spacecraft trajectories in any gravitational environment within the solar system has been developed. The methodology facilitates modeling and optimization for problems ranging from that of a single spacecraft orbiting a single celestial body to that of a mission involving multiple spacecraft and multiple propulsion systems operating in gravitational fields of multiple celestial bodies. The methodology consolidates almost all spacecraft trajectory design and optimization problems into a single conceptual framework requiring solution of either a system of nonlinear equations or a parameter-optimization problem with equality and/or inequality constraints.
Clinical features of schwannomatosis: a retrospective analysis of 87 patients.
Merker, Vanessa L; Esparza, Sonia; Smith, Miriam J; Stemmer-Rachamimov, Anat; Plotkin, Scott R
2012-01-01
Schwannomatosis is a recently recognized form of neurofibromatosis characterized by multiple noncutaneous schwannomas, a histologically benign nerve sheath tumor. As more cases are identified, the reported phenotype continues to expand and evolve. We describe the spectrum of clinical findings in a cohort of patients meeting established criteria for schwannomatosis. We retrospectively reviewed the clinical records of patients seen at our institution from 1995-2011 who fulfilled either research or clinical criteria for schwannomatosis. Clinical, radiographic, and pathologic data were extracted with attention to age at onset, location of tumors, ophthalmologic evaluation, family history, and other stigmata of neurofibromatosis 1 (NF1) or NF2. Eighty-seven patients met the criteria for the study. The most common presentation was pain unassociated with a mass (46%). Seventy-seven of 87 (89%) patients had peripheral schwannomas, 49 of 66 (74%) had spinal schwannomas, seven of 77 (9%) had nonvestibular intracranial schwannomas, and four of 77 (5%) had intracranial meningiomas. Three patients were initially diagnosed with a malignant peripheral nerve sheath tumor; however, following pathologic review, the diagnoses were revised in all three cases. Chronic pain was the most common symptom (68%) and usually persisted despite aggressive surgical and medical management. Other common diagnoses included headaches, depression, and anxiety. Peripheral and spinal schwannomas are common in schwannomatosis patients. Severe pain is difficult to treat in these patients and often associated with anxiety and depression. These findings support a proactive surveillance plan to identify tumors by magnetic resonance imaging scan in order to optimize surgical treatment and to treat associated pain, anxiety, and depression.
Clinical Features of Schwannomatosis: A Retrospective Analysis of 87 Patients
Merker, Vanessa L.; Esparza, Sonia; Smith, Miriam J.; Stemmer-Rachamimov, Anat
2012-01-01
Background. Schwannomatosis is a recently recognized form of neurofibromatosis characterized by multiple noncutaneous schwannomas, a histologically benign nerve sheath tumor. As more cases are identified, the reported phenotype continues to expand and evolve. We describe the spectrum of clinical findings in a cohort of patients meeting established criteria for schwannomatosis. Methods. We retrospectively reviewed the clinical records of patients seen at our institution from 1995–2011 who fulfilled either research or clinical criteria for schwannomatosis. Clinical, radiographic, and pathologic data were extracted with attention to age at onset, location of tumors, ophthalmologic evaluation, family history, and other stigmata of neurofibromatosis 1 (NF1) or NF2. Results. Eighty-seven patients met the criteria for the study. The most common presentation was pain unassociated with a mass (46%). Seventy-seven of 87 (89%) patients had peripheral schwannomas, 49 of 66 (74%) had spinal schwannomas, seven of 77 (9%) had nonvestibular intracranial schwannomas, and four of 77 (5%) had intracranial meningiomas. Three patients were initially diagnosed with a malignant peripheral nerve sheath tumor; however, following pathologic review, the diagnoses were revised in all three cases. Chronic pain was the most common symptom (68%) and usually persisted despite aggressive surgical and medical management. Other common diagnoses included headaches, depression, and anxiety. Conclusions. Peripheral and spinal schwannomas are common in schwannomatosis patients. Severe pain is difficult to treat in these patients and often associated with anxiety and depression. These findings support a proactive surveillance plan to identify tumors by magnetic resonance imaging scan in order to optimize surgical treatment and to treat associated pain, anxiety, and depression. PMID:22927469
Rao, S S C; Mudipalli, R S; Stessman, M; Zimmerman, B
2004-10-01
Although 30-50% of constipated patients exhibit dyssynergia, an optimal method of diagnosis is unclear. Recently, consensus criteria have been proposed but their utility is unknown. To examine the diagnostic yield of colorectal tests, reproducibility of manometry and utility of Rome II criteria. A total of 100 patients with difficult defecation were prospectively evaluated with anorectal manometry, balloon expulsion, colonic transit and defecography. Fifty-three patients had repeat manometry. During attempted defecation, 30 showed normal and 70 one of three abnormal manometric patterns. Forty-six patients fulfilled Rome criteria and showed paradoxical anal contraction (type I) or impaired anal relaxation (type III) with adequate propulsion. However, 24 (34%) showed impaired propulsion (type II). Forty-five (64%) had slow transit, 42 (60%) impaired balloon expulsion and 26 (37%) abnormal defecography. Defecography provided no additional discriminant utility. Evidence of dyssynergia was reproducible in 51 of 53 patients. Symptoms alone could not differentiate dyssynergic subtypes or patients. Dyssynergic patients exhibited three patterns that were reproducible: paradoxical contraction, impaired propulsion and impaired relaxation. Although useful, Rome II criteria may be insufficient to identify or subclassify dyssynergic defecation. Symptoms together with abnormal manometry, abnormal balloon expulsion or colonic marker retention are necessary to optimally identify patients with difficult defecation.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting in a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. Once done that, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulate the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows depending on present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the optimal DSS created using the FRB operating policies are able to increase the amount of water allocated to the users in 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase the water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
A green chemistry-based classification model for the synthesis of silver nanoparticles
The assessment of implementation of green chemistry principles in the synthesis of nanomaterials is a complex decision-making problem that necessitates integration of several evaluation criteria. Multiple Criteria Decision Aiding (MCDA) provides support for such a challenge. One ...
Integrating Fuel Treatments into Comprehensive Ecosystem Management
Kevin Hyde; Greg Jones; Robin Silverstein; Keith Stockmann; Dan Loeffler
2006-01-01
To plan fuel treatments in the context of comprehensive ecosystem management, forest managers must meet multiple-use and environmental objectives, address administrative and budget constraints, and reconcile performance measures from multiple policy directives. We demonstrate a multiple criteria approach to measuring success of fuel treatments used in the Butte North...
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.
Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin
2018-05-15
In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed in Industrial Internet of Things (IIoT), which allowed multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α -fair utility under transmit power constraint and minimum rate constraint. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we firstly convert the objective function in the optimization problem into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to exploit the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.
Degelman, Michelle L; Herman, Katya M
2017-10-01
Despite being one of the most common neurological disorders globally, the cause(s) of multiple sclerosis (MS) remain unknown. Cigarette smoking has been studied with regards to both the development and progression of MS. The Bradford Hill criteria for causation can contribute to a more comprehensive evaluation of a potentially causal risk factor-disease outcome relationship. The objective of this systematic review and meta-analysis was to assess the relationship between smoking and both MS risk and MS progression, subsequently applying Hill's criteria to further evaluate the likelihood of causal associations. The Medline, EMBASE, CINAHL, PsycInfo, and Cochrane Library databases were searched for relevant studies up until July 28, 2015. A random-effects meta-analysis was conducted for three outcomes: MS risk, conversion from clinically isolated syndrome (CIS) to clinically definite multiple sclerosis (CDMS), and progression from relapsing-remitting multiple sclerosis (RRMS) to secondary-progressive multiple sclerosis (SPMS). Dose-response relationships and risk factor interactions, and discussions of mechanisms and analogous associations were noted. Hill's criteria were applied to assess causality of the relationships between smoking and each outcome. The effect of second-hand smoke exposure was also briefly reviewed. Smoking had a statistically significant association with both MS risk (conservative: OR/RR 1.54, 95% CI [1.46-1.63]) and SPMS risk (HR 1.80, 95% CI [1.04-3.10]), but the association with progression from CIS to CDMS was non-significant (HR 1.13, 95% CI [0.73-1.76]). Using Hill's criteria, there was strong evidence of a causal role of smoking in MS risk, but only moderate evidence of a causal association between smoking and MS progression. Heterogeneity in study designs and target populations, inconsistent results, and an overall scarcity of studies point to the need for more research on second-hand smoke exposure in relation to MS prior to conducting a detailed meta-analysis. This first review to supplement systematic review and meta-analytic methods with Hill's criteria to analyze the smoking-MS association provides evidence supporting the causal involvement of smoking in the development and progression of MS. Smoking prevention and cessation programs and policies should consider MS as an additional health risk when aiming to reduce smoking prevalence in the population. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
Peñalvo, Jose L.; Khatibzadeh, Shahab; Singh, Gitanjali M.; Rao, Mayuree; Fahimi, Saman; Powles, John; Mozaffarian, Dariush
2017-01-01
Background Dietary habits are major contributors to coronary heart disease, stroke, and diabetes. However, comprehensive evaluation of etiologic effects of dietary factors on cardiometabolic outcomes, their quantitative effects, and corresponding optimal intakes are not well-established. Objective To systematically review the evidence for effects of dietary factors on cardiometabolic diseases, including comprehensively assess evidence for causality; estimate magnitudes of etiologic effects; evaluate heterogeneity and potential for bias in these etiologic effects; and determine optimal population intake levels. Methods We utilized Bradford-Hill criteria to assess probable or convincing evidence for causal effects of multiple diet-cardiometabolic disease relationships. Etiologic effects were quantified from published or de novo meta-analyses of prospective studies or randomized clinical trials, incorporating standardized units, dose-response estimates, and heterogeneity by age and other characteristics. Potential for bias was assessed in validity analyses. Optimal intakes were determined by levels associated with lowest disease risk. Results We identified 10 foods and 7 nutrients with evidence for causal cardiometabolic effects, including protective effects of fruits, vegetables, beans/legumes, nuts/seeds, whole grains, fish, yogurt, fiber, seafood omega-3s, polyunsaturated fats, and potassium; and harms of unprocessed red meats, processed meats, sugar-sweetened beverages, glycemic load, trans-fats, and sodium. Proportional etiologic effects declined with age, but did not generally vary by sex. Established optimal population intakes were generally consistent with observed national intakes and major dietary guidelines. In validity analyses, the identified effects of individual dietary components were similar to quantified effects of dietary patterns on cardiovascular risk factors and hard endpoints. Conclusions These novel findings provide a comprehensive summary of causal evidence, quantitative etiologic effects, heterogeneity, and optimal intakes of major dietary factors for cardiometabolic diseases, informing disease impact estimation and policy planning and priorities. PMID:28448503
Testing the Multiple in the Multiple Read-Out Model of Visual Word Recognition
ERIC Educational Resources Information Center
De Moor, Wendy; Verguts, Tom; Brysbaert, Marc
2005-01-01
This study provided a test of the multiple criteria concept used for lexical decision, as implemented in J. Grainger and A. M. Jacobs's (1996) multiple read-out model. This account predicts more inhibition (or less facilitation) from a masked neighbor when accuracy is stressed more but more facilitation (or less inhibition) when the speed of…
Ntranos, Achilles; Lublin, Fred
2016-10-01
Multiple sclerosis (MS) is one of the most diverse human diseases. Since its first description by Charcot in the nineteenth century, the diagnostic criteria, clinical course classification, and treatment goals for MS have been constantly revised and updated to improve diagnostic accuracy, physician communication, and clinical trial design. These changes have improved the clinical outcomes and quality of life for patients with the disease. Recent technological and research breakthroughs will almost certainly further change how we diagnose, classify, and treat MS in the future. In this review, we summarize the key events in the history of MS, explain the reasoning behind the current criteria for MS diagnosis, classification, and treatment, and provide suggestions for further improvements that will keep enhancing the clinical practice of MS.
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.
1982-01-01
An examination of limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criteria for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
A conceptual framework for economic optimization of an animal health surveillance portfolio.
Guo, X; Claassen, G D H; Oude Lansink, A G J M; Saatkamp, H W
2016-04-01
Decision making on hazard surveillance in livestock product chains is a multi-hazard, multi-stakeholder, and multi-criteria process that includes a variety of decision alternatives. The multi-hazard aspect means that the allocation of the scarce resource for surveillance should be optimized from the point of view of a surveillance portfolio (SP) rather than a single hazard. In this paper, we present a novel conceptual approach for economic optimization of a SP to address the resource allocation problem for a surveillance organization from a theoretical perspective. This approach uses multi-criteria techniques to evaluate the performances of different settings of a SP, taking cost-benefit aspects of surveillance and stakeholders' preferences into account. The credibility of the approach has also been checked for conceptual validity, data needs and operational validity; the application potentials of the approach are also discussed.
Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process
NASA Astrophysics Data System (ADS)
Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.
2018-01-01
The article describes a method for simulating the flight of the aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. Engine thrust level, in turn, influences the operation of aircraft, thus making accurate simulation of the aircraft behavior during flight necessary for obtaining the correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, working process of gas turbine engines (propulsion system), ambient and flight conditions and flight profile features. This model provides accurate results of flight simulation and the resulting aircraft efficiency criteria, required for optimization of working process and control function of a gas turbine engine.
Kusaka, Mamoru; Kubota, Yusuke; Sasaki, Hitomi; Fukami, Naohiko; Fujita, Tamio; Hirose, Yuichi; Takahashi, Hiroshi; Kenmochi, Takashi; Shiroki, Ryoichi; Hoshinaga, Kiyotaka
2016-04-01
Kidneys procured from the deceased hold great potential for expanding the donor pool. The aims of the present study were to investigate the post-transplant outcomes of renal allografts recovered from donors after cardiac death, to identify risk factors affecting the renal prognosis and to compare the long-term survival from donors after cardiac death according to the number of risk factors shown by expanded criteria donors. A total of 443 grafts recovered using an in situ regional cooling technique from 1983 to 2011 were assessed. To assess the combined predictive value of the significant expanded criteria donor risk criteria, the patients were divided into three groups: those with no expanded criteria donor risk factors (no risk), one expanded criteria donor risk factor (single-risk) and two or more expanded criteria donor risk factors (multiple-risk). Among the donor factors, age ≥50 years, hypertension, maximum serum creatinine level ≥1.5 mg/dL and a warm ischemia time ≥30 min were identified as independent predictors of long-term graft failure on multivariate analysis. Regarding the expanded criteria donors criteria for marginal donors, cerebrovascular disease, hypertension and maximum serum creatinine level ≥1.5 mg/dL were identified as significant predictors on univariate analysis. The single- and multiple-risk groups showed 2.01- and 2.40-fold higher risks of graft loss, respectively. Renal grafts recovered from donors after cardiac death donors have a good renal function with an excellent long-term graft survival. However, an increased number of expanded criteria donors risk factors increase the risk of graft loss. © 2016 The Japanese Urological Association.
Air Pollution Monitoring Site Selection by Multiple Criteria Decision Analysis
Criteria air pollutants (particulate matter, sulfur dioxide, oxides of nitrogen, volatile organic compounds, and carbon monoxide) as well as toxic air pollutants are a global concern. A particular scenario that is receiving increased attention in the research is the exposure to t...
DEVELOPMENT OF AN IMPROVED ARBORLOO TO PROMOTE SANITATION IN RURAL ENVIRONMENTS
Multiple designs will be generated. They will be evaluated against appropriate criteria as identified by the design teams in consultation with advisors and likely users. Such criteria will, at a minimum include the ability to sequester waste material, structural robustness,...
MRI CRITERIA FOR THE DIAGNOSIS OF MULTIPLE SCLEROSIS: MAGNIMS CONSENSUS GUIDELINES
Filippi, M.; Rocca, M.A.; Ciccarelli, O.; De Stefano, N.; Evangelou, N.; Kappos, L.; Rovira, A.; Sastre-Garriga, J.; Tintorè, M.; Frederiksen, J.L.; Gasperini, C.; Palace, J.; Reich, D.S.; Banwell, B.; Montalban, X.; Barkhof, F.
2016-01-01
Summary In patients presenting with a clinically isolated syndrome (CIS), magnetic resonance imaging (MRI) can support and substitute clinical information for multiple sclerosis (MS) diagnosis demonstrating disease dissemination in space (DIS) and time (DIT) and helping to rule out other conditions that can mimic MS. From their inclusion in the diagnostic work-up for MS in 2001, several modifications of MRI diagnostic criteria have been proposed, in the attempt to simplify lesion-count models for demonstrating DIS, change the timing of MRI scanning for demonstrating DIT, and increase the value of spinal cord imaging. Since the last update of these criteria, new data regarding the application of MRI for demonstrating DIS and DIT have become available and improvement in MRI technology has occurred. State-of-the-art MRI findings in these patients were discussed in a MAGNIMS workshop, the goal of which was to provide an evidence-based and expert-opinion consensus on diagnostic MRI criteria modifications. PMID:26822746
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions, and, ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks ) deals with methodology for solving multiple-objective Optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
NASA Astrophysics Data System (ADS)
Seo, Junyeong; Sung, Youngchul
2018-06-01
In this paper, an efficient transmit beam design and user scheduling method is proposed for multi-user (MU) multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) downlink, based on Pareto-optimality. The proposed beam design and user scheduling method groups simultaneously-served users into multiple clusters with practical two users in each cluster, and then applies spatical zeroforcing (ZF) across clusters to control inter-cluster interference (ICI) and Pareto-optimal beam design with successive interference cancellation (SIC) to two users in each cluster to remove interference to strong users and leverage signal-to-interference-plus-noise ratios (SINRs) of interference-experiencing weak users. The proposed method has flexibility to control the rates of strong and weak users and numerical results show that the proposed method yields good performance.
Duan, Litian; Wang, Zizhong John; Duan, Fu
2016-11-16
In the multiple-reader environment (MRE) of radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags via operating at different time slots or frequency channels to decrease the signal interferences. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers operating from the viewpoint of resource allocations. GD-MRSOA-AIS is composed of two parts, where a geometric distribution function combined with the fairness consideration is first introduced to generate the feasible scheduling schemes for reader operation. After that, artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimize these feasible ones as the optimal scheduling scheme to ensure that readers are fairly operating with larger effective interrogation range and lower interferences. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS could efficiently schedules the multiple readers operating with a fairer resource allocation scheme, performing in larger effective interrogation range.
Duan, Litian; Wang, Zizhong John; Duan, Fu
2016-01-01
In the multiple-reader environment (MRE) of radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags via operating at different time slots or frequency channels to decrease the signal interferences. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers operating from the viewpoint of resource allocations. GD-MRSOA-AIS is composed of two parts, where a geometric distribution function combined with the fairness consideration is first introduced to generate the feasible scheduling schemes for reader operation. After that, artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimize these feasible ones as the optimal scheduling scheme to ensure that readers are fairly operating with larger effective interrogation range and lower interferences. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS could efficiently schedules the multiple readers operating with a fairer resource allocation scheme, performing in larger effective interrogation range. PMID:27854342
NASA Astrophysics Data System (ADS)
Frosini, Mikael; Bernard, Denis
2017-09-01
We revisit the precision of the measurement of track parameters (position, angle) with optimal methods in the presence of detector resolution, multiple scattering and zero magnetic field. We then obtain an optimal estimator of the track momentum by a Bayesian analysis of the filtering innovations of a series of Kalman filters applied to the track. This work could pave the way to the development of autonomous high-performance gas time-projection chambers (TPC) or silicon wafer γ-ray space telescopes and be a powerful guide in the optimization of the design of the multi-kilo-ton liquid argon TPCs that are under development for neutrino studies.
Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp
This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold-type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and tomore » evaluate the efficiency of the algorithm.« less
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, the topology optimization methodology for the synthesis of distributed actuation system with specific applications to the morphing air vehicle is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, the inplane morphing wing model is presented. The analysis model is developed to meet several important criteria: It must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that addressed modeling concept meets the criteria and may be suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept with design variables that control the system configuration. In other words, states of each element in the model are design variables and they are to be determined through optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of distributed actuators, which is represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of formulation itself. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
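The following is a minimal 2D sketch of consensus labeling with local similarity ranking, in the spirit of the framework described above (it is not the released MUSE code, and the boundary-modulation term is omitted): each "atlas" contributes a warped intensity image and a label map, and at every voxel the top-k atlases by local patch similarity to the target vote on the label. Patch size, k, and the synthetic data are assumptions.

```python
# Minimal consensus-labeling sketch: local-similarity-ranked majority vote.
import numpy as np

def fuse_labels(target, atlas_imgs, atlas_labels, k=3, patch=2):
    """target: (H,W); atlas_imgs, atlas_labels: (n,H,W). Returns fused (H,W)."""
    n, H, W = atlas_imgs.shape
    fused = np.zeros((H, W), dtype=atlas_labels.dtype)
    tp = np.pad(target, patch, mode="edge")
    ap = np.pad(atlas_imgs, ((0, 0), (patch, patch), (patch, patch)), mode="edge")
    for i in range(H):
        for j in range(W):
            tpatch = tp[i:i + 2*patch + 1, j:j + 2*patch + 1]
            # local similarity = negative SSD between target and atlas patches
            ssd = ((ap[:, i:i + 2*patch + 1, j:j + 2*patch + 1] - tpatch) ** 2
                   ).sum(axis=(1, 2))
            top = np.argsort(ssd)[:k]                 # locally optimal atlases
            votes = atlas_labels[top, i, j]
            fused[i, j] = np.bincount(votes).argmax()  # majority vote
    return fused

# Tiny synthetic demo: four noisy, slightly shifted atlases of a square "organ".
rng = np.random.default_rng(2)
truth = np.zeros((16, 16), dtype=int); truth[4:12, 4:12] = 1
target = truth + 0.2 * rng.normal(size=truth.shape)
imgs = np.stack([truth + 0.2 * rng.normal(size=truth.shape) for _ in range(4)])
labs = np.stack([np.roll(truth, rng.integers(-1, 2), axis=0) for _ in range(4)])
print("voxel agreement with truth:",
      (fuse_labels(target, imgs, labs) == truth).mean())
```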
A novel method to construct an air quality index based on air pollution profiles.
Thach, Thuan-Quoc; Tsang, Hilda; Cao, Peihua; Ho, Lai-Ming
2018-01-01
Air quality indices based on the maximum of sub-indices of pollutants are easy to produce and help quantify the degree of air pollution. However, they discount the additive effects of multiple pollutants and are only sensitive to changes in the highest sub-index. We propose a simple and concise method to construct an air quality index that takes into account the additive effects of multiple pollutants, and we evaluate the extent to which this index predicts health effects. We obtained concentrations of four criteria pollutants: particulate matter with aerodynamic diameter ≤ 10 μm (PM10), sulphur dioxide (SO2), nitrogen dioxide (NO2) and ozone (O3), together with daily admissions to Hong Kong hospitals for cardiovascular and respiratory diseases for all ages and those 65 years or older for the years 2001-2012. We derived sub-indices of the four criteria pollutants, calculated by normalizing pollutant concentrations to their respective short-term WHO Air Quality Guidelines (WHO AQG). We aggregated the sub-indices using the root-mean-power function with an optimal power to form an overall air quality index. The optimal power was determined by minimizing the sum of over- and under-estimated days. We then assessed associations between the pollution bands of the index and cardiovascular and respiratory admissions using a time-stratified case-crossover design adjusted for ambient temperature, relative humidity and influenza epidemics. Further, we conducted case-crossover analyses using the Hong Kong air quality data with the respective standards and classifications of pollution bands of the China Air Quality Index (AQI), the United Kingdom Daily AQI (DAQI), and the United States Environmental Protection Agency (USEPA) AQI. The mean concentrations of PM10 and SO2 based on the maximum 3-h mean exceeded the WHO AQG by 37% and 50%, respectively. We identified the combined condition of observed high-pollution days as either at least one pollutant > 1.5×WHO AQG or at least two pollutants > 1.0×WHO AQG to characterize the typical pollution profiles over the study period, which resulted in the optimal power = 3.0. The distribution of days in different pollution bands of the index was: 5.8% for "Low" (0-50), 37.6% for "Moderate" (51-100), 31.1% for "High" (101-150), 14.7% for "Very High" (151-200), and 10.8% for "Serious" (201+). For cardiovascular and respiratory admissions, there were significant associations with the pollution bands of the index for all ages and those 65 years or older. The trends of increasing pollution bands in relation to increasing excess risks of cardiovascular and respiratory admissions were significant for the proposed index, the China AQI, the UK DAQI and the USEPA AQI (P value for test for linear trend < 0.0001), suggesting a dose-response relation. We have developed a simple and concise method to construct an air quality index that accounts for multiple pollutants to quantify air quality conditions for Hong Kong. Further developments are needed in order to support the extension of the method to other settings. Copyright © 2017 Elsevier GmbH. All rights reserved.
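A hedged sketch of the aggregation step described above: sub-indices normalize pollutant concentrations to their WHO AQG, and the overall index is the root-mean-power of the sub-indices with the optimal power (3.0 in the paper), scaled so that 100 corresponds to all pollutants sitting exactly at their guideline values. The guideline values used below are placeholders, not the exact WHO AQG figures.

```python
# Root-mean-power aggregation of pollutant sub-indices (placeholder AQG values).
import numpy as np

AQG = {"PM10": 50.0, "SO2": 20.0, "NO2": 200.0, "O3": 100.0}  # assumed values

def air_quality_index(conc, power=3.0):
    """conc: dict pollutant -> concentration. Returns the aggregated index."""
    sub = np.array([100.0 * conc[p] / AQG[p] for p in AQG])   # sub-indices
    return (np.mean(sub ** power)) ** (1.0 / power)           # root-mean-power

day = {"PM10": 80.0, "SO2": 15.0, "NO2": 120.0, "O3": 90.0}
print(f"AQI = {air_quality_index(day):.1f}")
```

Note the design choice the abstract motivates: as the power grows, the root-mean-power converges to the maximum sub-index (the conventional index), while finite powers let several moderately elevated pollutants raise the index additively.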
Jiang, Yan; Ye, Zeng pan-pan; You, Chao; Hu, Xin; Liu, Yi; Li, Hao; Lin, Sen; Li, Ji-Pin
2015-10-01
To determine an optimal head elevation degree to decrease intracranial pressure in postcraniotomy patients by meta-analysis. A change in head position can lead to a change in intracranial pressure; however, there are conflicting data regarding the optimal degree of elevation that decreases intracranial pressure in postcraniotomy patients. Quantitative systematic review with meta-analysis following Cochrane methods. The data were collected during 2014; three databases (PubMed, Embase and China National Knowledge Internet) were searched for published and unpublished studies in English. The bibliographies of the articles were also reviewed. The inclusion criteria referred to different elevation degrees and their effects on intracranial pressure in postcraniotomy patients. According to pre-determined inclusion and exclusion criteria, two reviewers extracted the eligible studies using a standard data form. A total of 237 participants were included in the meta-analysis. (1) Compared with 0 degrees: 10, 15, 30 and 45 degrees of head elevation resulted in lower intracranial pressure. (2) Intracranial pressure at 30 degrees was not significantly different in comparison to 45 degrees and was lower than that at 10 and 15 degrees. Patients with increased intracranial pressure significantly benefitted from a head elevation of 10, 15, 30 and 45 degrees compared with 0 degrees. A head elevation of 30 or 45 degrees is optimal for decreasing intracranial pressure. Research about the relationship of position changes and the outcomes of patients' primary diseases is absent. © 2015 John Wiley & Sons Ltd.
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
MULTIPLE PERSONALITY DISORDER FOLLOWING CONVERSION AND DISSOCIATIVE DISORDER NOS : A CASE REPORT
Jhingan, Harsh Prem; Aggarwal, Neeruj; Saxena, Shekhar; Gupta, Dhanesh K.
2000-01-01
A case progressing from symptoms of conversion disorder to dissociative disorder and then to multiple personality disorder as per DSM-III-R criteria is reported. The clinical implications are discussed. PMID:21407917
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
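To illustrate the robust M-estimation the abstract refers to, the sketch below fits a line by minimizing a Cauchy rho-function, which down-weights gross outliers relative to the usual L2 estimator. The linear "model", the scale constant c, and the synthetic data are illustrative assumptions, not the study's transport model.

```python
# Cauchy M-estimator vs. L2 on data with gross outliers (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 40)
y = 2.0 * t + 1.0 + rng.normal(0, 0.5, t.size)
y[::8] += 15.0                                   # inject gross outliers

def cauchy_objective(theta, c=2.385):
    r = y - (theta[0] * t + theta[1])            # residuals
    return np.sum(0.5 * c**2 * np.log1p((r / c) ** 2))   # Cauchy rho-function

def l2_objective(theta):
    return np.sum((y - (theta[0] * t + theta[1])) ** 2)

for name, f in [("Cauchy M-estimator", cauchy_objective), ("L2", l2_objective)]:
    res = minimize(f, x0=[0.0, 0.0])
    print(f"{name:20s} slope={res.x[0]:.3f} intercept={res.x[1]:.3f}")
```

The Cauchy fit stays near the true slope of 2 while the L2 fit is dragged by the outliers, which is the motivation for robust objectives when observational data carry errors and biases.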
29 CFR 1926.753 - Hoisting and rigging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... lift rigging procedure. (1) A multiple lift shall only be performed if the following criteria are met: (i) A multiple lift rigging assembly is used; (ii) A maximum of five members are hoisted per lift... multiple lift have been trained in these procedures in accordance with § 1926.761(c)(1). (v) No crane is...
Validation of a new modal performance measure for flexible controllers design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simo, J.B.; Tahan, S.A.; Kamwa, I.
1996-05-01
A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes in order to take into account their damping ratios while minimizing the performance index. This criterion is applied to the optimal design of flexible controllers, on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criterion and illustrate its effectiveness in decentralized control.
Primary progressive multiple sclerosis diagnostic criteria: a reappraisal.
Montalban, X; Sastre-Garriga, J; Filippi, M; Khaleeli, Z; Téllez, N; Vellinga, M M; Tur, C; Brochet, B; Barkhof, F; Rovaris, M; Miller, D H; Polman, C H; Rovira, A; Thompson, A J
2009-12-01
The diagnostic criteria used in primary progressive (PP) and relapsing-remitting (RR) multiple sclerosis (MS) show substantial differences. This introduces complexity in the diagnosis of MS which could be resolved if these criteria could be unified in terms of the requirements for dissemination in space (DIS). The aim of this study was to assess whether a single algorithm may be used to demonstrate DIS in all forms of MS. Five sets of RRMS criteria for DIS were applied to a cohort of 145 patients with established PPMS (mean disease duration: 11 years - PPMS-1): C1: Barkhof-Tintoré (as in 2005 McDonald's criteria); C2: Swanton et al. (as in JNNP 2006); C3: presence of oligoclonal bands plus two lesions (as in McDonald's criteria); C4 and C5: a two-step approach was also followed (patients not fulfilling C1 or C2 were then assessed for C3). Two sets of PPMS criteria for DIS were applied: C6: Thompson et al. (as in 2001 McDonald's criteria); C7: 2005 McDonald criteria. A second sample of 55 patients with less than 5 years of disease duration (PPMS-2) was also analysed using an identical approach. For PPMS-1/PPMS-2, fulfilment was: C1:73.8%/66.7%; C2:72.1%/59.3%; C3:89%/79.2%; C4:96%/92.3%; C5:96%/85.7%; C6:85.8%/78.7%; C7:91%/80.4%. Levels of fulfilment suggest that the use of a single set of criteria for DIS in RRMS and PPMS might be feasible, and reinforce the added value of cerebrospinal fluid (CSF) findings to increase fulfilment in PPMS. Unification of the DIS criteria for both RRMS and PPMS could be considered in further revisions of the MS diagnostic criteria.
Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals
2016-01-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
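A toy dynamic program in the spirit of the optimality benchmark above (the article's exact task parameters are not reproduced): two goals each need some remaining steps of progress within a fixed number of trials, each trial is spent on one goal and succeeds with an assumed probability, and the expected number of goals completed is computed exactly by backward recursion.

```python
# Dynamic-programming benchmark for two-goal prioritization (toy parameters).
from functools import lru_cache

P_SUCCESS = 0.6        # assumed per-trial progress probability

@lru_cache(maxsize=None)
def value(a, b, trials):
    """a, b: remaining steps for goals A and B. Returns (value, best_choice)."""
    done = (a == 0) + (b == 0)
    if trials == 0 or done == 2:
        return float(done), None
    options = {}
    if a > 0:   # spend this trial on goal A
        options["A"] = (P_SUCCESS * value(a - 1, b, trials - 1)[0]
                        + (1 - P_SUCCESS) * value(a, b, trials - 1)[0])
    if b > 0:   # spend this trial on goal B
        options["B"] = (P_SUCCESS * value(a, b - 1, trials - 1)[0]
                        + (1 - P_SUCCESS) * value(a, b, trials - 1)[0])
    best = max(options, key=options.get)
    return options[best], best

v, choice = value(2, 5, 8)
print(f"expected goals completed: {v:.3f}; optimal first priority: goal {choice}")
```

Against such a benchmark, the risk-averse bias the article describes would show up as participants prioritizing the goal in the best position (here the 2-step goal) more often than the DP policy recommends.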
The Optimal Forest Rotation: A Discussion and Annotated Bibliography
David H. Newman
1988-01-01
The literature contains six different criteria of the optimal forest rotation: (1) maximum single-rotation physical yield, (2) maximum single-rotation annual yield, (3) maximum single-rotation discounted net revenues, (4) maximum discounted net revenues from an infinite series of rotations, (5) maximum annual net revenues, and (6) maximum internal rate of return. First...
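Criterion (4) is the classical Faustmann rotation; for reference, a standard textbook statement of that criterion (added here, not quoted from the annotated bibliography) is:

```latex
% Faustmann land expectation value for rotation age T, with stumpage price p,
% merchantable yield f(T), regeneration cost c, and discount rate r:
\mathrm{LEV}(T) \;=\; \frac{p\, f(T)\, e^{-rT} - c}{1 - e^{-rT}},
\qquad
T^{*} \;=\; \arg\max_{T>0}\ \mathrm{LEV}(T).
```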
ERIC Educational Resources Information Center
Suh, Joyce; Orinstein, Alyssa; Barton, Marianne; Chen, Chi-Ming; Eigsti, Inge-Marie; Ramirez-Esparza, Nairan; Fein, Deborah
2016-01-01
The study examines whether "optimal outcome" (OO) children, despite no longer meeting diagnostic criteria for Autism Spectrum Disorder (ASD), exhibit personality traits often found in those with ASD. Nine zero acquaintance raters evaluated Broader Autism Phenotype (BAP) and Big Five personality traits of 22 OO individuals, 27 high…
ERIC Educational Resources Information Center
Ansari, Fazel; Seidenberg, Ulrich
2016-01-01
This paper discusses the complementarity of human and cyber physical production systems (CPPS). The discourse of complementarity is elaborated by defining five criteria for comparing the characteristics of human and CPPS. Finally, a management portfolio matrix is proposed for examining the feasibility of optimal collaboration between them. The…
Optimal allocation in annual plants and its implications for drought response
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Smith, Matthew; Purves, Drew
2015-04-01
The concept of plant optimality refers to the plastic behaviour of plants that results in lifetime and offspring fitness. Optimality concepts have been used in vegetation models for a variety of processes, including stomatal conductance, leaf phenology and biomass allocation. Including optimality in vegetation models has the advantage of creating process-based models with relatively low complexity in terms of parameter numbers, but which are capable of reproducing complex plant behaviour. We present a general model of plant growth for annual plants based on the hypothesis that plants allocate biomass to aboveground and belowground vegetative organs in order to maintain an optimal C:N ratio. The model also represents reproductive growth through a second optimality criterion, which states that plants flower when they reach peak nitrogen uptake. We apply this model to wheat and maize crops at 15 locations corresponding to FLUXNET cropland sites. The model parameters are constrained using a Bayesian fitting algorithm applied to eddy covariance data, satellite-derived vegetation indices (specifically the MODIS fAPAR product) and field-level crop yield data. We use the model to simulate the plant drought response under the assumption of plant optimality and show that the plants maintain unstressed total biomass levels under drought for a reduction in precipitation of up to 40%. Beyond that level, the plant response stops being plastic and growth decreases sharply. This behaviour results simply from the optimal allocation criterion, as the model includes no explicit drought sensitivity component. Models that use plant optimality concepts are a useful tool for simulating plant response to stress without the addition of artificial thresholds and parameters.
Shindoh, Junichi; Loyer, Evelyne M; Kopetz, Scott; Boonsirikamchai, Piyaporn; Maru, Dipen M; Chun, Yun Shin; Zimmitti, Giuseppe; Curley, Steven A; Charnsangavej, Chusilp; Aloia, Thomas A; Vauthey, Jean-Nicolas
2012-12-20
The purposes of this study were to confirm the prognostic value of an optimal morphologic response to preoperative chemotherapy in patients undergoing chemotherapy with or without bevacizumab before resection of colorectal liver metastases (CLM) and to identify predictors of the optimal morphologic response. The study included 209 patients who underwent resection of CLM after preoperative chemotherapy with oxaliplatin- or irinotecan-based regimens with or without bevacizumab. Radiologic responses were classified as optimal or suboptimal according to the morphologic response criteria. Overall survival (OS) was determined, and prognostic factors associated with an optimal response were identified in multivariate analysis. An optimal morphologic response was observed in 47% of patients treated with bevacizumab and 12% of patients treated without bevacizumab (P < .001). The 3- and 5-year OS rates were higher in the optimal response group (82% and 74%, respectively) compared with the suboptimal response group (60% and 45%, respectively; P < .001). On multivariate analysis, suboptimal morphologic response was an independent predictor of worse OS (hazard ratio, 2.09; P = .007). Receipt of bevacizumab (odds ratio, 6.71; P < .001) and largest metastasis before chemotherapy of ≤ 3 cm (odds ratio, 2.12; P = .025) were significantly associated with optimal morphologic response. The morphologic response showed no specific correlation with conventional size-based RECIST criteria, and it was superior to RECIST in predicting major pathologic response. Independent of preoperative chemotherapy regimen, optimal morphologic response is sufficiently correlated with OS to be considered a surrogate therapeutic end point for patients with CLM.
Thokala, Praveen; Devlin, Nancy; Marsh, Kevin; Baltussen, Rob; Boysen, Meindert; Kalo, Zoltan; Longrenn, Thomas; Mussen, Filip; Peacock, Stuart; Watkins, John; Ijzerman, Maarten
2016-01-01
Health care decisions are complex and involve confronting trade-offs between multiple, often conflicting, objectives. Using structured, explicit approaches to decisions involving multiple criteria can improve the quality of decision making and a set of techniques, known under the collective heading multiple criteria decision analysis (MCDA), are useful for this purpose. MCDA methods are widely used in other sectors, and recently there has been an increase in health care applications. In 2014, ISPOR established an MCDA Emerging Good Practices Task Force. It was charged with establishing a common definition for MCDA in health care decision making and developing good practice guidelines for conducting MCDA to aid health care decision making. This initial ISPOR MCDA task force report provides an introduction to MCDA - it defines MCDA; provides examples of its use in different kinds of decision making in health care (including benefit risk analysis, health technology assessment, resource allocation, portfolio decision analysis, shared patient clinician decision making and prioritizing patients' access to services); provides an overview of the principal methods of MCDA; and describes the key steps involved. Upon reviewing this report, readers should have a solid overview of MCDA methods and their potential for supporting health care decision making. Copyright © 2016. Published by Elsevier Inc.
Shams-Vahdati, Samad; Gholipour, Changiz; Jalilzadeh-Binazar, Mehran; Moharamzadeh, Payman; Sorkhabi, Rana; Jalilian, Respina
2015-07-01
Multiple trauma patients frequently suffer eye injuries, especially those patients with head traumas. We evaluated the accuracy of physical findings in determining the priorities of emergency ophthalmologic intervention in these patients. This study included all multiple trauma patients with ophthalmic trauma who had a GCS of 15 when they arrived at the emergency department during the period of March 2008-March 2009. First, we evaluated the patients according to the criteria of the study; then, an ophthalmologist evaluated them. From March 2008-March 2009, 306 multiple trauma patients with ocular trauma came to our ED. The sensitivity and accuracy of emergency physicians in diagnosing the priority of ophthalmologic treatment were comparable to those of an ophthalmologist (measure of agreement, kappa=0.967). The ability of an emergency physician or general surgeon to determine the actual need for early ophthalmologist intervention can improve decision making and save both time and money. Our study suggests that it is possible to determine from clinical findings whether a patient needs ophthalmologic intervention, without referring the patient for an ophthalmologist's examination. Defining specific criteria for ophthalmologic examinations can clarify the necessity of emergency ophthalmologic examination and intervention. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wu, Yunna; Xu, Chuanbo; Ke, Yiming; Chen, Kaifeng; Xu, Hu
2017-12-15
For tidal range power plants to be sustainable, the environmental impacts caused by the implementation of various tidal barrage schemes must be assessed before construction. However, several problems exist in current research: firstly, the evaluation criteria of tidal barrage scheme environmental impact assessment (EIA) are not adequate; secondly, the uncertainty of criteria information fails to be processed properly; thirdly, correlation among criteria is unreasonably measured. Hence the contributions of this paper are as follows: firstly, an evaluation criteria system is established from the three dimensions of hydrodynamic, biological and morphological aspects. Secondly, a cloud model is applied to describe the uncertainty of criteria information. Thirdly, the Choquet integral with respect to a λ-fuzzy measure is introduced to measure the correlation among criteria. On the above bases, a multi-criteria decision-making framework for tidal barrage scheme EIA is established to select the optimal scheme. Finally, a case study demonstrates the effectiveness of the proposed framework. Copyright © 2017 Elsevier Ltd. All rights reserved.
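The sketch below shows a discrete Choquet integral with respect to a λ-fuzzy (Sugeno) measure, the aggregation device cited above for handling correlated criteria. The criterion densities and scores are invented numbers, and the paper's cloud-model step is omitted.

```python
# Choquet integral w.r.t. a lambda-fuzzy (Sugeno) measure (illustrative data).
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve 1 + lam = prod(1 + lam*g_i) for the nontrivial root lam > -1."""
    f = lambda lam: np.prod([1 + lam * g for g in densities]) - (1 + lam)
    if abs(sum(densities) - 1.0) < 1e-12:
        return 0.0
    # root is negative when sum(g) > 1, positive when sum(g) < 1
    lo, hi = (-1 + 1e-9, -1e-9) if sum(densities) > 1 else (1e-9, 1e6)
    return brentq(f, lo, hi)

def choquet(scores, densities):
    lam = sugeno_lambda(densities)
    order = np.argsort(scores)[::-1]             # best criterion first
    g_prev, total = 0.0, 0.0
    for rank in range(len(order)):
        subset = order[:rank + 1]
        g = densities[subset[0]]
        for j in subset[1:]:                     # Sugeno union of the top set
            g = g + densities[j] + lam * g * densities[j]
        total += scores[order[rank]] * (g - g_prev)
        g_prev = g
    return total

scores = np.array([0.7, 0.5, 0.9])   # e.g. hydrodynamic, biological, morphological
dens = [0.4, 0.3, 0.5]               # assumed criterion importances
print(f"Choquet aggregate = {choquet(scores, dens):.3f}")
```

Because the densities here sum to more than 1, λ is negative and the measure is sub-additive, modeling redundancy (correlation) among the criteria rather than treating them as independent weights.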
Householder transformations and optimal linear combinations
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Smiley, W., III
1974-01-01
Several theorems related to the Householder transformation and separability criteria are proven. Orthogonal transformations, topology, divergence, mathematical matrices, and group theory are discussed.
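For reference, the Householder transformation at the core of the report above is H = I − 2vvᵀ/(vᵀv), the orthogonal, symmetric reflection across the hyperplane orthogonal to v; choosing v = x − ‖x‖e₁ maps x onto the first axis, the step used to zero sub-diagonal entries in QR-type factorizations. A minimal sketch:

```python
# Householder reflection: orthogonality and the x -> ||x|| e1 mapping.
import numpy as np

def householder(v):
    v = np.asarray(v, dtype=float)
    return np.eye(v.size) - 2.0 * np.outer(v, v) / (v @ v)

x = np.array([3.0, 1.0, 2.0])
v = x - np.linalg.norm(x) * np.eye(3)[0]    # reflector sending x to ||x|| e1
H = householder(v)
print(np.round(H @ x, 10))                  # -> [||x||, 0, 0]
print(np.allclose(H @ H.T, np.eye(3)))      # orthogonality check
```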
Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø
2015-12-01
Decision makers often need to simultaneously consider multiple criteria or outcomes when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness and harm-benefit ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and the uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Although a strategy may be efficient and feasible, both society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness before it can be considered optimal. Copyright © 2015. Published by Elsevier Inc.
Modeling urban air pollution with optimized hierarchical fuzzy inference system.
Tashayo, Behnam; Alimohammadi, Abbas
2016-10-01
Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple-objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are precisely evaluated by comparing five different consecutive models in terms of accuracy and complexity criteria. Fivefold cross-validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R²) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.
NASA Astrophysics Data System (ADS)
Madani, Kaveh; Hooshyar, Milad
2014-11-01
Reservoir systems with multiple operators can benefit from coordination of operation policies. To maximize the total benefit of these systems the literature has normally used the social planner's approach. Based on this approach operation decisions are optimized using a multi-objective optimization model with a compound system's objective. While the utility of the system can be increased this way, fair allocation of benefits among the operators remains challenging for the social planner who has to assign controversial weights to the system's beneficiaries and their objectives. Cooperative game theory provides an alternative framework for fair and efficient allocation of the incremental benefits of cooperation. To determine the fair and efficient utility shares of the beneficiaries, cooperative game theory solution methods consider the gains of each party in the status quo (non-cooperation) as well as what can be gained through the grand coalition (social planner's solution or full cooperation) and partial coalitions. Nevertheless, estimation of the benefits of different coalitions can be challenging in complex multi-beneficiary systems. Reinforcement learning can be used to address this challenge and determine the gains of the beneficiaries for different levels of cooperation, i.e., non-cooperation, partial cooperation, and full cooperation, providing the essential input for allocation based on cooperative game theory. This paper develops a game theory-reinforcement learning (GT-RL) method for determining the optimal operation policies in multi-operator multi-reservoir systems with respect to fairness and efficiency criteria. As the first step to underline the utility of the GT-RL method in solving complex multi-agent multi-reservoir problems without a need for developing compound objectives and weight assignment, the proposed method is applied to a hypothetical three-agent three-reservoir system.
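Once the gains of the non-cooperative, partial, and full coalitions are estimated (by reinforcement learning in the paper), a cooperative-game solution method splits the grand-coalition benefit fairly. The sketch below uses the Shapley value, one standard solution concept of the kind the abstract refers to (the paper does not necessarily use this exact method), with invented coalition values for three operators.

```python
# Shapley-value allocation from coalition benefits (invented numbers).
from itertools import permutations

players = ("A", "B", "C")
v = {frozenset(): 0.0,
     frozenset("A"): 10.0, frozenset("B"): 8.0, frozenset("C"): 6.0,
     frozenset("AB"): 24.0, frozenset("AC"): 20.0, frozenset("BC"): 16.0,
     frozenset("ABC"): 36.0}   # full cooperation = social planner's solution

def shapley(players, v):
    phi = dict.fromkeys(players, 0.0)
    perms = list(permutations(players))
    for order in perms:                      # average marginal contributions
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

print(shapley(players, v))   # efficient split of the 36-unit grand benefit
```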
Gupta, Samir; Wan, Flora T; Hall, Susan E; Straus, Sharon E
2012-01-01
Asthma action plans (AAPs), which decrease hospitalizations and improve symptom control, are recommended in guidelines, but are seldom delivered to patients. Existing AAPs have been developed by experts, without the inclusion of all stakeholders (such as patients with asthma) and without specifically addressing usability and visual design. Our objective was to develop a more usable AAP by involving all stakeholders and considering design preferences. We created a Wiki-based system for multiuser AAP development. Pulmonologists, primary care physicians, asthma educators and patients used the system to collaboratively compile a single AAP by making multiple online selections over 1 week. We combined common elements from 3 AAPs developed in this way into 1, optimized visual design features and tested face validity in focus groups. A total of 41 participants averaged 646 selections/week over a login-time of 28.8 h/week. Of 35 participants, 28 (80%) were satisfied with the final AAP and 32 (91%) perceived that they would be able to use it. The plans created by the 3 groups were very similar, with a unanimous or majority agreement in the handling of 100/110 (91%) AAP options. Inclusion of multiple stakeholders and focus on design preferences predict enhanced usability and uptake of medical tools. The validity of our AAP is further supported by the similarity between the AAPs created by each group, user engagement and satisfaction with the plan and agreement with existing validity criteria proposed by experts. This AAP can be implemented in care with a concurrent measurement of uptake and health impact. Copyright © 2012 S. Karger AG, Basel.
Development of a fast and feasible spectrum modeling technique for flattening filter free beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Woong; Bush, Karl; Mok, Ed
Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined over the range of field sizes to account for the variation of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization, and first-order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root-mean-square differences on the optimized PDDs were observed to be 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first-order derivatives from the functional form were found to improve computational speed by up to a factor of 20 compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
Opuntia in México: Identifying Priority Areas for Conserving Biodiversity in a Multi-Use Landscape
Illoldi-Rangel, Patricia; Ciarleglio, Michael; Sheinvar, Leia; Linaje, Miguel; Sánchez-Cordero, Victor; Sarkar, Sahotra
2012-01-01
Background: México is one of the world's centers of species diversity (richness) for Opuntia cacti. Yet, in spite of their economic and ecological importance, Opuntia species remain poorly studied and protected in México. Many of the species are sparsely but widely distributed across the landscape and are subject to a variety of human uses, so devising implementable conservation plans for them presents formidable difficulties. Multi-criteria analysis can be used to design a spatially coherent conservation area network while permitting sustainable human usage. Methods and Findings: Species distribution models were created for 60 Opuntia species using MaxEnt. Targets of representation within conservation area networks were assigned at 100% for the geographically rarest species and 10% for the most common ones. Three different conservation plans were developed to represent the species within these networks using total area, shape, and connectivity as relevant criteria. Multi-criteria analysis and a metaheuristic adaptive tabu search algorithm were used to search for optimal solutions. The plans were built on the existing protected areas of México and prioritized additional areas for management for the persistence of Opuntia species. All plans required around one-third of México's total area to be prioritized for attention for Opuntia conservation, underscoring the implausibility of Opuntia conservation through traditional land reservation. Tabu search turned out to be both computationally tractable and easily implementable for search problems of this kind. Conclusions: Opuntia conservation in México requires the management of large areas of land for multiple uses. The multi-criteria analyses identified priority areas and organized them in large contiguous blocks that can be effectively managed. A high level of connectivity was established among the prioritized areas, enhancing possible modes of plant dispersal and yielding only a small number of blocks recommended for conservation management. PMID:22606279
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xingyuan; He, Zhili; Zhou, Jizhong
2005-10-30
The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs use only one or two of these criteria, and may therefore choose 'false' specific oligonucleotides or miss 'true' optimal probes in a considerable proportion of cases. We have developed a software tool, called CommOligo, that uses new algorithms and all three criteria to select optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3'-untranslated region (3'-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. Sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for the identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request.
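A simplified sketch of the kind of filter cascade such probe-design tools apply (this is not CommOligo itself): each candidate oligo is checked for GC content, melting temperature, and the longest continuous stretch matching a non-target sequence. The Tm here is the basic Wallace rule and all thresholds are assumed values, deliberately cruder than the thermodynamic models real tools use.

```python
# Toy oligo filter cascade: GC content, Wallace-rule Tm, non-target stretch.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    # Wallace rule: 2(A+T) + 4(G+C), a rough Tm for short oligos
    return 2 * (seq.count("A") + seq.count("T")) + \
           4 * (seq.count("G") + seq.count("C"))

def longest_common_stretch(a, b):
    """Length of the longest continuous exact match between a and b."""
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

def passes_filters(oligo, non_target, gc=(0.40, 0.60), tm=(55, 80), max_stretch=15):
    return (gc[0] <= gc_content(oligo) <= gc[1]
            and tm[0] <= wallace_tm(oligo) <= tm[1]
            and longest_common_stretch(oligo, non_target) <= max_stretch)

candidate = "ATGCGTACGTTAGCGCATAGCTAG"
non_target = "TTACGCGTACGTTAGCAAATTTGGG"
print("passes:", passes_filters(candidate, non_target))
```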
Theoretical Foundation of Copernicus: A Unified System for Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Ocampo, Cesar; Senent, Juan S.; Williams, Jacob
2010-01-01
The fundamental methods are described for the general spacecraft trajectory design and optimization software system called Copernicus. The methods rely on a unified framework that is used to model, design, and optimize spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The trajectory model, with its associated equations of motion and maneuver models, are discussed.
System and method for optimal load and source scheduling in context aware homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shetty, Pradeep; Foslien Graber, Wendy; Mangsuli, Purnaprajna R.
A controller for controlling energy consumption in a home includes a constraints engine to define variables for multiple appliances in the home corresponding to various home modes and persona of an occupant of the home. A modeling engine models multiple paths of energy utilization of the multiple appliances to place the home into a desired state from a current context. An optimal scheduler receives the multiple paths of energy utilization and generates a schedule as a function of the multiple paths and a selected persona to place the home in a desired state.
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate such joint movements as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivided the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied our proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator by dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Gas Dynamic Modernization of Axial Uncooled Turbine by Means of CFD and Optimization Software
NASA Astrophysics Data System (ADS)
Marchukov, E. Yu; Egorov, I. N.
2018-01-01
The results of the multicriteria optimization of a three-stage low-pressure turbine are described in the paper. The aim of the optimization is to improve the turbine operating process with respect to three criteria: the turbine outlet flow angle, the value of residual swirl at the turbine outlet, and the turbine efficiency. Full reprofiling of all blade rows is carried out while solving the optimization problem. Reprofiling includes changes in both the shape of the flat blade sections (profiles) and the three-dimensional shape of the blades. The study is carried out with 3D numerical models of the turbine.
Motor unit recruitment by size does not provide functional advantages for motor performance
Dideriksen, Jakob L; Farina, Dario
2013-01-01
It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers. PMID:24144879
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. J.; Koltai, R. N.; McGowan, T. K.
The GATEWAY program followed two pedestrian-scale lighting projects that required multiple mockups – one at Stanford University in California and the other at Chautauqua Institution in upstate New York. The report provides insight into pedestrian lighting criteria, how they differ from street and area lighting criteria, and how solid-state lighting can be better applied in pedestrian applications.
van Duijl, Marjolein; Kleijn, Wim; de Jong, Joop
2013-09-01
As in many cultures, spirit possession is a common idiom of distress in Uganda. The DSM-IV contains experimental research criteria for dissociative and possession trance disorder (DTD and PTD), which are under review for the DSM-5. In the current proposed categories of the DSM-5, PTD is subsumed under dissociative identity disorder (DID) and DTD under dissociative disorders not elsewhere classified. Evaluation of these criteria is currently urgently required. This study explores the match between local symptoms of spirit possession in Uganda and experimental research criteria for PTD in the DSM-IV and proposed criteria for DID in the DSM-5. A mixed-method approach was used combining qualitative and quantitative research methods. Local symptoms were explored of 119 spirit possessed patients, using illness narratives and a cultural dissociative symptoms' checklist. Possible meaningful clusters of symptoms were inventoried through multiple correspondence analysis. Finally, local symptoms were compared with experimental criteria for PTD in the DSM-IV and proposed criteria for DID in the DSM-5. Illness narratives revealed different phases of spirit possession, with passive-influence experiences preceding the actual possession states. Multiple correspondence analysis of symptoms revealed two dimensions: 'passive' and 'active' symptoms. Local symptoms, such as changes in consciousness, shaking movements, and talking in a voice attributed to spirits, match with DSM-IV-PTD and DSM-5-DID criteria. Passive-influence experiences, such as feeling influenced or held by powers from outside, strange dreams, and hearing voices, deserve to be more explicitly described in the proposed criteria for DID in the DSM-5. The suggested incorporation of PTD in DID in the DSM-5 and the envisioned separation of DTD and PTD in two distinctive categories have disputable aspects.
Choi, Jae-Hyuk; Seo, Jeong-Min; Lee, Dong Hyun; Park, Kyungil; Kim, Young-Dae
2015-04-01
The aim of this study was to evaluate the clinical utility of the new bleeding criteria proposed by the Bleeding Academic Research Consortium (BARC), compared with the older criteria, in determining the actions of physicians confronted with bleeding events after percutaneous coronary intervention (PCI). The BARC criteria were independently associated with an increased risk of 1-year mortality after PCI, and provided predictive value with regard to 1-year mortality. The standardized bleeding definitions are expected to help the physician to correctly analyze bleeding events, to select an optimal treatment, and to objectively compare the results of multiple trials and registries. All patients undergoing PCI from June to September 2012 were prospectively enrolled. Patients who experienced a bleeding event were further classified based on three different bleeding severity criteria: BARC, Thrombolysis In Myocardial Infarction (TIMI), and Global Use of Strategies To Open coronary arteries (GUSTO). The primary outcome was the occurrence of bleeding events requiring interruption of antiplatelet therapy (IAT) by physicians. A total of 376 consecutive patients were included in this study. Total bleeding events occurred in 46 patients (12.2%). BARC type ≥2 bleeding occurred in 30 patients (8.0%); however, TIMI major or minor bleeding and GUSTO moderate or severe bleeding occurred in 6 (1.6%) and 11 patients (2.9%), respectively. Of the 46 patients, 28 (60.9%) required IAT. On receiver-operating characteristic curve analysis, bleeding defined as BARC type ≥2 effectively predicted IAT, with a sensitivity of 89.3% and a specificity of 98.5% (p<0.001), compared with TIMI (sensitivity, 21.4%; specificity, 100%; p<0.001) and GUSTO (sensitivity, 39.3%; specificity, 100%; p<0.001). Compared with TIMI and GUSTO, the BARC definition may be a more useful tool for the detection of clinically relevant bleeding in patients undergoing PCI. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG
2018-01-01
Mobile applications allow many users to access data from the application without being limited by space and time. Over time, the data population of such an application will increase. Data access time becomes a problem once a table reaches tens of thousands to millions of records. The objective of this research is to maintain data execution performance for large record counts. One way to maintain data access time performance is to apply a query optimization method. The optimization used in this research is the heuristic query optimization method. The application built is a mobile-based financial application using a MySQL database with stored procedures. This application is used by more than one business entity in one database, thus enabling rapid data growth. Within these stored procedures, queries are optimized using the heuristic method. Query optimization is performed on SELECT queries that involve more than one table and multiple clauses. Evaluation is done by calculating the average access time of optimized and unoptimized queries. Access time is also measured as the data population in the database grows. The evaluation shows that data execution with heuristic query optimization is faster than execution without query optimization.
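The sketch below illustrates the classic heuristic of applying restrictive selections before a join. The paper targets MySQL stored procedures; sqlite3 from the Python standard library is used here only so the example is self-contained, and the schema, table names, and row counts are invented. Both query forms return the same rows; note that modern planners may rewrite both to the same plan, so the measured gap is indicative at best.

```python
# Heuristic query rewrite demo: filter before joining (illustrative schema).
import sqlite3, time

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE account(id INTEGER PRIMARY KEY, entity_id INT);
    CREATE TABLE txn(id INTEGER PRIMARY KEY, account_id INT, amount REAL);
""")
con.executemany("INSERT INTO account VALUES (?, ?)",
                [(i, i % 50) for i in range(5000)])
con.executemany("INSERT INTO txn VALUES (?, ?, ?)",
                [(i, i % 5000, i * 0.01) for i in range(100000)])

unoptimized = """SELECT SUM(t.amount) FROM txn t JOIN account a
                 ON t.account_id = a.id WHERE a.entity_id = 7"""
# Heuristic: push the restrictive selection below the join.
optimized = """SELECT SUM(t.amount) FROM txn t
               JOIN (SELECT id FROM account WHERE entity_id = 7) a
               ON t.account_id = a.id"""

for name, sql in (("unoptimized", unoptimized), ("heuristic", optimized)):
    t0 = time.perf_counter()
    for _ in range(20):
        result = con.execute(sql).fetchone()[0]
    print(f"{name}: sum={result:.2f} in {time.perf_counter() - t0:.4f}s")
```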
From the experience of development of composite materials with desired properties
NASA Astrophysics Data System (ADS)
Garkina, I. A.; Danilov, A. M.
2017-04-01
Drawing on experience in the development of composite materials with desired properties, an algorithm for the synthesis of construction materials is given, based on representing them as a complex system. The feasibility of creating the composite and implementing the original technical specification is established at the cognitive modeling stage. From the developed cognitive map, hierarchical structures of quality criteria are defined; according to these, block diagrams of the system are specified for each identified large-scale level. Single-criterion optimization problems are then solved; using the optimal values found, the multi-criteria problem is formalized and solved (the optimal organization and properties of the system are determined). The emphasis is on methodological aspects of mathematical modeling (construction of generalized and partial models to optimize the properties and structure of materials, including those based on the concept of systemic homeostasis).
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three-dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer. The proposed XFEM variant is particularly well suited to the simulation of 3D fracture problems and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms is recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
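The flavor of the inverse problem can be conveyed with a toy sketch: a cheap analytic surrogate stands in for the XFEM forward solve, and SciPy's differential evolution stands in for the paper's multiscale recombination of heuristic optimizers. The flaw parameterization, sensor layout, and response function below are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

sensors = np.linspace(0.0, 1.0, 8)  # sparse boundary measurement points

def forward(params):
    """Toy 'boundary response' at the sensors for a flaw (x, y, r).
    In the paper this role is played by an XFEM solve."""
    x, y, r = params
    return r / ((sensors - x) ** 2 + y ** 2 + 0.05)

true_flaw = np.array([0.4, 0.25, 0.08])
measured = forward(true_flaw)       # synthetic measurements

def misfit(params):
    """Least-squares mismatch between simulated and measured responses."""
    return np.sum((forward(params) - measured) ** 2)

result = differential_evolution(misfit,
                                bounds=[(0, 1), (0.05, 0.5), (0.01, 0.2)],
                                seed=0, tol=1e-10)
print("identified flaw:", result.x, "true flaw:", true_flaw)
```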
[Obstetric care in Mali: effect of organization on in-hospital maternal mortality].
Zongo, A; Traoré, M; Faye, A; Gueye, M; Fournier, P; Dumont, A
2012-08-01
Maternal mortality is still too high in sub-Saharan Africa, particularly in referral hospitals. Solutions exist, but their implementation remains a major challenge in resource-poor settings. The objective of this study is to assess the effect of the organization of obstetric care services on maternal mortality in referral hospitals in Mali. This is a multicentric observational survey in 22 referral hospitals. Clinical data on 42,929 women delivering in the 22 hospitals within the 2007 to 2008 study period were collected. The evaluation of organization was based on explicit criteria defined by an expert committee. The effect of organization on in-hospital mortality, adjusted for individual and institutional characteristics, was estimated using multi-level logistic regression models. The results show that an optimal organization of obstetric care services based on eight explicit criteria reduced in-hospital maternal mortality by 41% compared with delivery in a referral hospital with sub-optimal organization, defined as non-compliance with at least one of the eight criteria (ORa=0.59; 95% CI=0.34-0.92). Furthermore, local policies that improved financial access to emergency obstetric care had a significant impact on maternal outcome. Criteria for optimal organization include the management of labor and childbirth by qualified personnel, an organization of human resources that allows timely management of obstetric emergencies, routine use of partography for all patients, and availability of guidelines for the management of complications. These conditions could easily be implemented in the context of Mali to reduce in-hospital maternal mortality. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.
Rathi, Shweta; Gupta, Rajesh
2014-04-01
Water quality must be monitored at salient locations in water distribution networks (WDNs) to assure the safe quality of water supplied to consumers. Such monitoring stations (MSs) provide warning against accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, and detection likelihood, have been considered independently or jointly in determining the optimal number and location of MSs in WDNs. "Demand coverage", defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on formulating a coverage matrix using a pre-specified coverage criterion and optimization have been suggested. The coverage criterion is defined as a minimum percentage of the total flow received at a monitoring station that has passed through an upstream node; nodes meeting this threshold are counted as covered by that station. The number of monitoring stations increases as the value of the coverage criterion increases, making the design of monitoring stations subjective. A simple methodology is proposed herein that iteratively selects MSs in priority order to achieve a targeted demand coverage. For an illustrative network, the proposed methodology provided the same number and locations of MSs as an optimization method. Further, the proposed method is simple and avoids the subjectivity that could arise from the choice of coverage criterion. The application of the methodology is also shown on the WDN of the Dharampeth zone (Nagpur city WDN in Maharashtra, India), which has 285 nodes and 367 pipes.
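A minimal sketch of the priority-wise iterative idea, under the assumption that it behaves like a greedy set-cover over nodal demands (the actual selection rule in the paper may differ); the demands and coverage relation below are hypothetical.

```python
# Nodal demands and a station -> covered-nodes relation (placeholder data).
demand = {"A": 40, "B": 25, "C": 20, "D": 10, "E": 5}
covers = {
    "A": {"A", "B"}, "B": {"B"}, "C": {"B", "C", "D"},
    "D": {"D"}, "E": {"D", "E"},
}

def select_stations(target_fraction=0.9):
    """Greedily add the station covering the most uncovered demand
    until the targeted fraction of total network demand is monitored."""
    total = sum(demand.values())
    covered, stations = set(), []
    while sum(demand[n] for n in covered) < target_fraction * total:
        best = max(covers, key=lambda s: sum(demand[n] for n in covers[s] - covered))
        gain = sum(demand[n] for n in covers[best] - covered)
        if gain == 0:            # nothing left to gain; target unreachable
            break
        stations.append(best)
        covered |= covers[best]
    return stations, sum(demand[n] for n in covered) / total

print(select_stations())         # e.g. (['A', 'C'], 0.95)
```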
Modelling inter-supply chain competition with resource limitation and demand disruption
NASA Astrophysics Data System (ADS)
Chen, Zhaobo; Teng, Chunxian; Zhang, Ding; Sun, Jiayi
2016-05-01
This paper proposes a comprehensive model for studying supply chain versus supply chain competition with resource limitation and demand disruption. We assume that there are supply chains with heterogeneous supply network structures that compete at multiple demand markets. Each supply chain is comprised of internal and external firms. The internal firms are coordinated in production and distribution and share some common but limited resources within the supply chain, whereas the external firms are independent and do not share the internal resources. The supply chain managers strive to develop optimal strategies in terms of production level and resource allocation in maximising their profit while facing competition at the end market. The Cournot-Nash equilibrium of this inter-supply chain competition is formulated as a variational inequality problem. We further study the case when there is demand disruption in the plan-execution phase. In such a case, the managers need to revise their planned strategy in order to maximise their profit with the new demand under disruption and minimise the cost of change. We present a bi-criteria decision-making model for supply chain managers and develop the optimal conditions in equilibrium, which again can be formulated by another variational inequality problem. Numerical examples are presented for illustrative purpose.
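As a toy illustration of how such variational inequality formulations are typically solved numerically, the sketch below computes a Cournot-Nash equilibrium for two capacity-constrained producers by a projected-gradient fixed-point iteration. The linear inverse demand, costs, and capacities are placeholders, not the paper's model.

```python
import numpy as np

# Inverse demand p(Q) = a - b*Q, constant marginal costs c_i, capacities u_i.
a, b = 100.0, 1.0
c = np.array([10.0, 14.0])      # marginal costs
u = np.array([60.0, 60.0])      # resource (capacity) limits
q = np.zeros(2)                 # production levels
step = 0.01

for _ in range(20_000):
    Q = q.sum()
    grad = a - b * Q - b * q - c           # d(profit_i)/d(q_i)
    q = np.clip(q + step * grad, 0.0, u)   # project onto the feasible box [0, u]

print("equilibrium quantities:", q)  # analytic check: q_i = (a - 2c_i + c_j)/(3b)
```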
The Effect of Carbonaceous Reductant Selection on Chromite Pre-reduction
NASA Astrophysics Data System (ADS)
Kleynhans, E. L. J.; Beukes, J. P.; Van Zyl, P. G.; Bunt, J. R.; Nkosi, N. S. B.; Venter, M.
2017-04-01
Ferrochrome (FeCr) production is an energy-intensive process. Currently, the pelletized chromite pre-reduction process, also referred to as solid-state reduction of chromite, is most likely the FeCr production process with the lowest specific electricity consumption, i.e., MWh/t FeCr produced. In this study, the effects of carbonaceous reductant selection on chromite pre-reduction and cured pellet strength were investigated. Multiple linear regression analysis was employed to evaluate the effect of reductant characteristics on these two parameters. This yielded mathematical solutions that FeCr producers can use to select reductants more optimally in the future. Additionally, the results indicated that hydrogen (H) content (24 pct) and volatile content (45.8 pct) were the most significant contributors to predicting variance in pre-reduction and compressive strength, respectively. The role of H in this context is postulated to be linked to the ability of a reductant to release H that can induce reduction. Therefore, contrary to the current operational selection criteria, the authors believe that thermally untreated reductants (e.g., anthracite, as opposed to coke or char), with volatile contents close to the currently applied specification (to ensure pellet strength), would be optimal, since they would maximize the H content that enhances pre-reduction.
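The regression step can be sketched as follows: an ordinary least-squares fit of pre-reduction degree against reductant characteristics, in the spirit of the multiple linear regression the authors employ. The feature set and all numbers are made-up placeholders, not data from the study.

```python
import numpy as np

# Columns: hydrogen content, volatile content, fixed carbon (all mass %).
# These rows are illustrative placeholders only.
X = np.array([[4.2, 8.1, 80.0],
              [3.1, 6.5, 84.0],
              [5.0, 9.8, 76.0],
              [2.5, 5.9, 86.0],
              [4.6, 8.9, 78.0]])
y = np.array([52.0, 45.0, 58.0, 41.0, 55.0])   # pre-reduction degree, %

A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
print("intercept and coefficients:", coef)

# Predict for a new (hypothetical) reductant: [1, H, volatiles, fixed C].
new_reductant = np.array([1.0, 4.0, 7.5, 81.0])
print("predicted pre-reduction:", new_reductant @ coef)
```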
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. Working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general and can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences without sacrificing coding efficiency.
Jones, R N; Barry, A L
1987-01-01
The ampicillin-sulbactam combination was evaluated in vitro to determine the optimal susceptibility testing conditions among five combination ratios and four fixed concentrations of sulbactam. The organisms tested were markedly resistant to aminopenicillins and most other beta-lactams. The ratio of 2:1 is recommended to assure recognition of the ampicillin-sulbactam spectrum and minimize false-susceptible results among strains known to be resistant to this combination. Proposed MIC breakpoint concentrations were compatible with levels in serum achieved with recommended clinical doses. Cross-resistance analyses comparing ampicillin-sulbactam and amoxicillin-clavulanate showed comparable activity and spectra. However, the major interpretive disagreement was sufficient to require separate testing of these aminopenicillin-inhibitor combinations. The recommended ampicillin-sulbactam MIC susceptibility breakpoints are as follows: (i) less than or equal to 8.0/4.0 micrograms/ml for tests against members of the family Enterobacteriaceae, anaerobes, nonenteric gram-negative bacilli, staphylococci, Haemophilus influenzae, and Branhamella catarrhalis; (ii) the ampicillin MICs alone interpreted by National Committee for Clinical Laboratory Standards criteria should predict ampicillin-sulbactam susceptibility for the enterococci, streptococci, and Listeria monocytogenes. MIC quality control ranges were determined by multiple laboratory broth microdilution trials for the ampicillin-sulbactam 1:1 and 2:1 ratio tests. PMID:3117843
Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication
ERIC Educational Resources Information Center
Wolf, Michael Maclean
2009-01-01
Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…
Management of a granulomatous lesion in a patient with Kindler's Syndrome
Bhatsange, Anuradha; Khadse, Yugandhara; Deshmukh, Sabina; Karwa, Swapnil
2018-01-01
Kindler's syndrome, first described by Kindler, is a rare vesiculobullous dermatological disorder that sometimes involves multiple organs. The differential diagnosis includes Rothmund-Thomson syndrome and epidermolysis bullosa. Fisher's criteria, with major and minor criteria, have simplified the diagnosis. Oral manifestations of this syndrome include multiple painful oral ulcers, periodontal attachment loss, gingival bleeding, and fragile mucosa. These manifestations may impair proper nutritional intake and cause growth and developmental problems. This case report deals with the management of oral and gingival manifestations in a 12-year-old female patient diagnosed with Kindler's syndrome. PMID:29568175
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Li, Haiyang; Baoyin, Hexi
2018-06-01
This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are one main challenge in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results may then be utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are surpassed, and a homotopic approach is employed to improve the convergence of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.
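The homotopic approach can be illustrated in miniature: to solve a hard shooting equation F(z) = 0, blend it with an easy problem G(z) = 0 and track the root as the blend parameter steps from the easy problem to the hard one, warm-starting each solve from the last. The residual functions below are toy stand-ins, not the paper's trajectory dynamics.

```python
import numpy as np
from scipy.optimize import fsolve

def F(z):   # 'hard' residual, standing in for a shooting function
    return np.array([np.tanh(5 * z[0]) + z[1] - 1.2,
                     z[0] ** 3 - z[1] + 0.3])

def G(z):   # 'easy' residual with a known root at (0.5, 0.5)
    return z - np.array([0.5, 0.5])

z = np.array([0.5, 0.5])
for eps in np.linspace(1.0, 0.0, 21):
    # Blend H(z, eps) = (1 - eps) F(z) + eps G(z), warm-started from the
    # previous solution as eps is stepped from 1 (easy) down to 0 (hard).
    z = fsolve(lambda v: (1 - eps) * F(v) + eps * G(v), z)

print("homotopy solution:", z, "residual:", F(z))
```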
Abbatiello, Susan E.; Mani, D. R.; Schilling, Birgit; MacLean, Brendan; Zimmerman, Lisa J.; Feng, Xingdong; Cusack, Michael P.; Sedransk, Nell; Hall, Steven C.; Addona, Terri; Allen, Simon; Dodder, Nathan G.; Ghosh, Mousumi; Held, Jason M.; Hedrick, Victoria; Inerowicz, H. Dorota; Jackson, Angela; Keshishian, Hasmik; Kim, Jong Won; Lyssand, John S.; Riley, C. Paige; Rudnick, Paul; Sadowski, Pawel; Shaddox, Kent; Smith, Derek; Tomazela, Daniela; Wahlander, Asa; Waldemarson, Sofia; Whitwell, Corbin A.; You, Jinsam; Zhang, Shucha; Kinsinger, Christopher R.; Mesri, Mehdi; Rodriguez, Henry; Borchers, Christoph H.; Buck, Charles; Fisher, Susan J.; Gibson, Bradford W.; Liebler, Daniel; MacCoss, Michael; Neubert, Thomas A.; Paulovich, Amanda; Regnier, Fred; Skates, Steven J.; Tempst, Paul; Wang, Mu; Carr, Steven A.
2013-01-01
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms, configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring of a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of and need for a SSP to establish robust and reliable system performance. Use of a SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities. PMID:23689285
Optimization of knowledge sharing through multi-forum using cloud computing architecture
NASA Astrophysics Data System (ADS)
Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar
2011-12-01
Knowledge sharing is done through various knowledge-sharing forums, which normally require multiple logins through multiple browser instances. Here a single multi-forum knowledge-sharing concept is introduced, which requires only one login session and lets the user connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time, using a cloud computing architecture.
WATER QUALITY IN SOURCE WATER, TREATMENT, AND DISTRIBUTION SYSTEMS
Most drinking water utilities practice the multiple-barrier concept as the guiding principle for providing safe water. This chapter discusses multiple barriers as they relate to the basic criteria for selecting and protecting source waters, including known and potential sources ...
Oltra-Cucarella, Javier; Sánchez-SanSegundo, Miriam; Lipnicki, Darren M; Sachdev, Perminder S; Crawford, John D; Pérez-Vicente, José A; Cabello-Rodríguez, Luis; Ferrer-Cascales, Rosario
2018-05-10
Objectives: To investigate the implications of obtaining one or more low scores on a battery of cognitive tests for diagnosing mild cognitive impairment (MCI). Design: Observational longitudinal study. Setting: Alzheimer's Disease Neuroimaging Initiative. Participants: Normal controls (NC, n = 280) and participants with MCI (n = 415) according to Petersen criteria, reclassified using the Jak/Bondi criteria and number of impaired tests (NIT) criteria. Measurements: Diagnostic statistics and hazard ratios of progression to Alzheimer's disease (AD) were compared according to diagnostic criteria. Results: The NIT criteria were a better predictor of progression to AD than the Petersen or Jak/Bondi criteria, with optimal sensitivity, specificity, and positive and negative predictive values. Conclusion: Considering normal variability in cognitive test performance when diagnosing MCI may help identify individuals at greatest risk of progression to AD with greater certainty. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.
Genetic Algorithms to Optimize Lecturer Assessment Criteria
NASA Astrophysics Data System (ADS)
Jollyta, Deny; Johan; Hajjah, Alyauma
2017-12-01
Lecturer assessment criteria are used to measure a lecturer's performance in a college environment. Determining the value for a criterion is complicated and often leads to doubt. The absence of a standard value for each assessment criterion affects the final results of the assessment and makes the data less presentable for college leadership when taking various policy decisions related to reward and punishment. The genetic algorithm is capable of solving such non-linear problems. Starting from chromosomes in a random initial population (one common representation is binary), it evaluates a fitness function and applies the crossover and mutation genetic operators to obtain the desired offspring. The aim is to obtain the most optimal criteria values in terms of the fitness function of each chromosome. The training results show that the genetic algorithm is able to produce optimal values of the lecturer assessment criteria, which the college can use as standard values for lecturer assessment.
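The GA loop described maps directly to a few lines of code. The sketch below uses a binary chromosome, a fitness evaluation, single-point crossover, and bit-flip mutation; the fitness function is a made-up placeholder, since the paper's actual scoring of lecturer criteria is not specified here.

```python
import random

GENES, POP, GENERATIONS, P_MUT = 16, 30, 100, 0.02

def decode(bits):
    """Map a 16-bit chromosome to a criterion value in [0, 100]."""
    return int("".join(map(str, bits)), 2) / (2 ** GENES - 1) * 100

def fitness(bits):
    """Hypothetical placeholder: prefer criterion values near 75."""
    return -(decode(bits) - 75.0) ** 2

def crossover(a, b):
    cut = random.randint(1, GENES - 1)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - g if random.random() < P_MUT else g for g in bits]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]            # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("best criterion value:", decode(max(pop, key=fitness)))
```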
Informed multi-objective decision-making in environmental management using Pareto optimality
Maureen C. Kennedy; E. David Ford; Peter Singleton; Mark Finney; James K. Agee
2008-01-01
Effective decision-making in environmental management requires the consideration of multiple objectives that may conflict. Common optimization methods use weights on the multiple objectives to aggregate them into a single value, neglecting valuable insight into the relationships among the objectives in the management problem.
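The alternative the authors advocate, keeping the full set of non-dominated solutions instead of collapsing objectives with weights, reduces to a simple dominance filter; the sketch below applies it to placeholder points with two objectives to be minimized.

```python
# Candidate alternatives as (objective1, objective2) pairs, both minimized.
points = [(2.0, 9.0), (3.0, 4.0), (5.0, 3.0), (6.0, 8.0), (8.0, 1.0)]

def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto = [p for p in points if not any(dominates(q, p) for q in points if q != p)]
print(pareto)   # [(2.0, 9.0), (3.0, 4.0), (5.0, 3.0), (8.0, 1.0)]
```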
We introduce a hierarchical optimization framework for spatially targeting green infrastructure (GI) incentive policies in order to meet objectives related to cost and environmental effectiveness. The framework explicitly simulates the interaction between multiple levels of polic...
Drenth-van Maanen, A Clara; Leendertse, Anne J; Jansen, Paul A F; Knol, Wilma; Keijsers, Carolina J P W; Meulendijk, Michiel C; van Marum, Rob J
2018-04-01
Inappropriate prescribing is a major health care issue, especially regarding older patients on polypharmacy. Multiple implicit and explicit prescribing tools have been developed to improve prescribing, but these have hardly ever been used in combination. The Systematic Tool to Reduce Inappropriate Prescribing (STRIP) combines implicit prescribing tools with the explicit Screening Tool to Alert physicians to the Right Treatment and Screening Tool of Older People's potentially inappropriate Prescriptions criteria and has shared decision-making with the patient as a critical step. This article describes the STRIP and its ability to identify potentially inappropriate prescribing. The STRIP improved general practitioners' and final-year medical students' medication review skills. The Web-application STRIP Assistant was developed to enable health care providers to use the STRIP in daily practice and will be incorporated in clinical decision support systems. It is currently being used in the European Optimizing thERapy to prevent Avoidable hospital admissions in the Multimorbid elderly (OPERAM) project, a multicentre randomized controlled trial involving patients aged 75 years and older using multiple medications for multiple medical conditions. In conclusion, the STRIP helps health care providers to systematically identify potentially inappropriate prescriptions and medication-related problems and to change the patient's medication regimen in accordance with the patient's needs and wishes. This article describes the STRIP and the available evidence so far. The OPERAM study is investigating the effect of STRIP use on clinical and economic outcomes. © 2017 John Wiley & Sons, Ltd.
Uncertainty analysis of trade-offs between multiple responses using hypervolume
Cao, Yongtao; Lu, Lu; Anderson-Cook, Christine M.
2017-08-04
When multiple responses are considered in process optimization, the degree to which they can be simultaneously optimized depends on the optimization objectives and the amount of trade-offs between the responses. The normalized hypervolume of the Pareto front is a useful summary to quantify the amount of trade-offs required to balance performance across the multiple responses. In order to quantify the impact of uncertainty of the estimated response surfaces and add realism to what future data to expect, 2 versions of the scaled normalized hypervolume of the Pareto front are presented. To demonstrate the variation of the hypervolume distributions, we explore a case study for a chemical process involving 3 responses, each with a different type of optimization goal. Our results show that the global normalized hypervolume characterizes the proximity to the ideal results possible, while the instance-specific summary considers the richness of the front and the severity of trade-offs between alternatives. Furthermore, the 2 scaling schemes complement each other and highlight different features of the Pareto front and hence are useful to quantify what solutions are possible for simultaneous optimization of multiple responses.
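For the two-objective minimization case, the normalized hypervolume the authors build on can be computed with a short slab decomposition; the front and reference point below are illustrative placeholders, and higher-dimensional fronts need more elaborate algorithms.

```python
import numpy as np

# A Pareto front for two minimized objectives, plus a reference point that
# marks the worst acceptable value per objective (placeholder values).
front = np.array([(0.1, 0.9), (0.3, 0.5), (0.6, 0.2), (0.9, 0.1)])
ref = np.array([1.0, 1.0])

def hypervolume_2d(front, ref):
    """Dominated hypervolume between a 2-objective front and the reference."""
    pts = front[np.argsort(front[:, 0])]   # sort by first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)  # slab between successive y-levels
        prev_y = y
    return hv

hv = hypervolume_2d(front, ref)
print("hypervolume:", hv, "normalized:", hv / np.prod(ref))
```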
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo Simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the design case for minimization of Design, Development, Test and Evaluation cost when compared to the weights determined by the minimization of Gross Liftoff Weight case. In turn, the Design, Development, Test and Evaluation cost was 53% higher for optimized Gross Liftoff Weight case when compared to the cost determined by case for minimization of Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors and how these parameters differ when cost and weight are optimized separately. A key finding was that for a Space Shuttle Main Engine thrust level the oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight rather than at 5.2 for the maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.
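The workflow of wrapping a vehicle-sizing model in Monte Carlo sampling can be sketched with a toy rocket-equation model of gross liftoff mass; the payload, delta-v, and uncertainty distributions below are invented placeholders, not values from the thesis.

```python
import math
import random

PAYLOAD = 11_000.0   # kg, placeholder payload mass
DELTA_V = 9_200.0    # m/s, placeholder delta-v to orbit including losses
G0 = 9.80665

def glow(isp, s):
    """Gross liftoff mass from the rocket equation for specific impulse isp
    and structural fraction s = m_structure / (m_structure + m_propellant)."""
    mass_ratio = math.exp(DELTA_V / (G0 * isp))
    denom = 1.0 - s * mass_ratio           # sizing closes only if positive
    return float("inf") if denom <= 0 else PAYLOAD * mass_ratio * (1 - s) / denom

samples = []
for _ in range(10_000):
    isp = random.gauss(440.0, 8.0)         # Isp uncertainty, s (placeholder)
    s = random.gauss(0.08, 0.005)          # structural-fraction uncertainty
    m0 = glow(isp, s)
    if math.isfinite(m0):
        samples.append(m0)

samples.sort()
print("median GLOW (kg):", samples[len(samples) // 2],
      "90th percentile:", samples[int(0.9 * len(samples))])
```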
Rapid optimization of multiple-burn rocket flights.
NASA Technical Reports Server (NTRS)
Brown, K. R.; Harrold, E. F.; Johnson, G. W.
1972-01-01
Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.
Theoretical investigation of the microwave electron gun
NASA Astrophysics Data System (ADS)
Gao, J.
1990-12-01
In this article the microwave electron gun (rf gun) is investigated theoretically in a general way. After a brief review of the sources of emittance growth in a cavity, optimization criteria are given and optimized electric field distributions on the axes of the cavities are found, from which cavities for an rf gun can be designed.
Comparative Properties of Collaborative Optimization and Other Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
1999-01-01
We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.
ERIC Educational Resources Information Center
Suh, Joyce; Eigsti, Inge-Marie; Naigles, Letitia; Barton, Marianne; Kelley, Elizabeth; Fein, Deborah
2014-01-01
Autism Spectrum Disorders (ASDs) have traditionally been considered a lifelong condition; however, a subset of people makes such significant improvements that they no longer meet diagnostic criteria for an ASD. The current study examines whether these "optimal outcome" (OO) children and adolescents continue to have subtle pragmatic…
A design procedure and handling quality criteria for lateral directional flight control systems
NASA Technical Reports Server (NTRS)
Stein, G.; Henke, A. H.
1972-01-01
A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.
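The quadratic-optimal-control core of such a procedure is a standard LQR synthesis, sketched below for a hypothetical two-state system; in the actual design, the weighting matrices would encode the handling-quality-oriented cost functional and the dynamics would be the F4C lateral-directional model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder two-state dynamics, not the F4C lateral-directional model.
A = np.array([[-1.0, 1.0],
              [0.0, -0.5]])     # state matrix
B = np.array([[0.0],
              [1.0]])           # control matrix
Q = np.diag([10.0, 1.0])        # state weights (handling-quality emphasis)
R = np.array([[1.0]])           # control-effort weight

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal feedback u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```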
Widhalm, Georg; Kiesel, Barbara; Woehrer, Adelheid; Traub-Weidinger, Tatjana; Preusser, Matthias; Marosi, Christine; Prayer, Daniela; Hainfellner, Johannes A.; Knosp, Engelbert; Wolfsberger, Stefan
2013-01-01
Background: Intraoperative identification of anaplastic foci in diffusely infiltrating gliomas (DIG) with non-significant contrast enhancement on MRI is indispensable to avoid histopathological undergrading and subsequent treatment failure. Recently, we found that 5-aminolevulinic acid (5-ALA) induced protoporphyrin IX (PpIX) fluorescence can intraoperatively visualize areas with increased proliferative and metabolic activity in such gliomas. As treatment of DIG is predominantly based on histopathological World Health Organisation (WHO) parameters, we analyzed whether PpIX fluorescence can detect anaplastic foci according to these criteria. Methods: We prospectively included DIG patients with non-significant contrast enhancement who received 5-ALA prior to resection. Intraoperatively, multiple samples from PpIX-positive and PpIX-negative intratumoral areas were collected using a modified neurosurgical microscope. In all samples, histopathological WHO criteria and proliferation rate were assessed and correlated with the PpIX fluorescence status. Results: A total of 215 tumor specimens were collected in 59 patients. Of 26 WHO grade III gliomas, 23 cases (85%) showed focal PpIX fluorescence, whereas 29 (91%) of 33 WHO grade II gliomas were PpIX negative. In intratumoral areas with focal PpIX fluorescence, mitotic rate, cell density, nuclear pleomorphism, and proliferation rate were significantly higher than in non-fluorescing areas. The positive predictive value of focal PpIX fluorescence for WHO grade III histology was 85%. Conclusions: Our study indicates that 5-ALA induced PpIX fluorescence is a powerful marker for intraoperative identification of anaplastic foci according to histopathological WHO criteria in DIG with non-significant contrast enhancement. Therefore, application of 5-ALA optimizes tissue sampling for precise histopathological diagnosis, independent of brain shift. PMID:24204718
Very-high-risk localized prostate cancer: definition and outcomes.
Sundi, D; Wang, V M; Pierorazio, P M; Han, M; Bivalacqua, T J; Ball, M W; Antonarakis, E S; Partin, A W; Schaeffer, E M; Ross, A E
2014-03-01
Outcomes in men with National Comprehensive Cancer Network (NCCN) high-risk prostate cancer (PCa) can vary substantially: some will have excellent cancer-specific survival, whereas others will experience early metastasis even after aggressive local treatments. Current nomograms, which yield continuous risk probabilities, do not separate high-risk PCa into distinct sub-strata. Here, we derive a binary definition of very-high-risk (VHR) localized PCa to aid in risk stratification at diagnosis and selection of therapy. We queried the Johns Hopkins radical prostatectomy database to identify 753 men with NCCN high-risk localized PCa (Gleason sum 8-10, PSA >20 ng ml(-1), or clinical stage ≥T3). Twenty-eight alternate permutations of adverse grade, stage and cancer volume were compared by their hazard ratios for metastasis and cancer-specific mortality. VHR criteria with top-ranking hazard ratios were further evaluated by multivariable analyses and inclusion of a clinically meaningful proportion of the high-risk cohort. The VHR cohort was best defined by primary pattern 5 present on biopsy, or ≥5 cores with Gleason sum 8-10, or multiple NCCN high-risk features. These criteria encompassed 15.1% of the NCCN high-risk cohort. Compared with other high-risk men, VHR men were at significantly higher risk for metastasis (hazard ratio 2.75) and cancer-specific mortality (hazard ratio 3.44) (P<0.001 for both). Among high-risk men, VHR men also had significantly worse 10-year metastasis-free survival (37% vs 78%) and cancer-specific survival (62% vs 90%). Men who meet VHR criteria form a subgroup within the current NCCN high-risk classification who have particularly poor oncological outcomes. Use of these characteristics to distinguish VHR localized PCa may help in counseling and selection of optimal candidates for multimodal treatments or clinical trials.
NASA Astrophysics Data System (ADS)
Kondrat'eva, O. E.; Roslyakov, P. V.; Burdyukov, D. A.; Khudolei, O. D.; Loktionov, O. A.
2017-10-01
According to Federal Law no. 219-FZ, dated July 21, 2014, all enterprises that have a significant negative impact on the environment shall continuously monitor and account for emissions of harmful substances into the atmospheric air. The choice of measuring equipment for continuous emission monitoring and accounting systems (CEM&ASs) is a complex technical problem; in particular, its solution requires a comparative analysis of gas analysis systems, each of which has its advantages and disadvantages. In addition, the choice of gas analysis systems for CEM&ASs should be maximally objective and not depend on the preferences of individual experts and specialists. The technique for choosing gas analysis equipment developed in previous years at Moscow Power Engineering Institute (MPEI) has been analyzed, and the applicability of the mathematical tools of multiple criteria analysis to the choice of measuring equipment for continuous emission monitoring and accounting systems has been evaluated. New approaches to the optimal choice of gas analysis equipment for systems of continuous monitoring and accounting of harmful emissions from thermal power plants are proposed, new criteria for the evaluation of gas analysis systems are introduced, and weight coefficients are determined for these criteria. The results of this study served as a basis for the Preliminary National Standard of the Russian Federation "Best Available Technologies. Automated Systems of Continuous Monitoring and Accounting of Emissions of Harmful (Polluting) Substances from Thermal Power Plants into the Atmospheric Air. Basic Requirements," which was developed by the Moscow Power Engineering Institute, National Research University, in cooperation with the Council of Power Producers and Strategic Electric Power Investors Association and the All-Russia Research Institute for Materials and Technology Standardization.
Aschner, Michael; Ceccatelli, Sandra; Daneshian, Mardas; Fritsche, Ellen; Hasiwa, Nina; Hartung, Thomas; Hogberg, Helena T.; Leist, Marcel; Li, Abby; Mundy, William R.; Padilla, Stephanie; Piersma, Aldert H.; Bal-Price, Anna; Seiler, Andrea; Westerink, Remco H.; Zimmer, Bastian; Lein, Pamela J.
2016-01-01
There is a paucity of information concerning the developmental neurotoxicity (DNT) hazard posed by industrial and environmental chemicals. New testing approaches will most likely be based on batteries of alternative and complementary (non-animal) tests. As DNT is assumed to result from the modulation of fundamental neurodevelopmental processes (such as neuronal differentiation, precursor cell migration or neuronal network formation) by chemicals, the first generation of alternative DNT tests target these processes. The advantage of such types of assays is that they capture toxicants with multiple targets and modes-of-action. Moreover, the processes modelled by the assays can be linked to toxicity endophenotypes, i.e. alterations in neural connectivity that form the basis for neurofunctional deficits in man. The authors of this review convened in a workshop to define criteria for the selection of positive/negative controls, to prepare recommendations on their use, and to initiate the setup of a directory of reference chemicals. For initial technical optimization of tests, a set of >50 endpoint-specific control compounds was identified. For further test development, an additional “test” set of 33 chemicals considered to act directly as bona fide DNT toxicants is proposed, and each chemical is annotated to the extent it fulfills these criteria. A tabular compilation of the original literature used to select the test set chemicals provides information on statistical procedures, and toxic/non-toxic doses (both for pups and dams). Suggestions are provided on how to use the >100 compounds (including negative controls) compiled here to address specificity, adversity and use of alternative test systems. PMID:27452664
Wang, Zhaoguo; Du, Xishihui
2016-07-01
Natural World Heritage Sites (NWHSs) are invaluable treasures due to the uniqueness of each site. Proper monitoring and management can guarantee their protection from multiple threats. In this study, geographic information system (GIS)-based multi-criteria decision analysis (GIS-MCDA) was used to assess criteria layers acquired from data available in the literature. A conceptual model for determining the priority area for monitoring in Bogda, China, was created based on outstanding universal values (OUV) and expert knowledge. Weights were assigned to each layer using the analytic hierarchy process (AHP) based on group decisions by three experts: a heritage site expert, a forest ranger, and a heritage site manager. Subsequently, evaluation layers and constraint layers were used to generate a priority map and to determine the feasibility of monitoring in Bogda. Finally, a monitoring suitability map of Bogda was obtained by combining the priority and feasibility maps. The high-priority monitoring area is located in the montane forest belt, which exhibits high biodiversity and is the main tourist area of Bogda. The northern buffer zone of Bogda comprises the concentrated feasible monitoring areas, and the area closest to roads and monitoring facilities is highly feasible for NWHS monitoring. The suitability of an area for monitoring is largely determined by the monitoring priority in that particular area. The majority of planned monitoring facilities are well distributed in both suitable and less suitable areas. The analysis results indicate that the protection of Bogda will be more scientifically based thanks to the effective, comprehensively planned monitoring system proposed in the declaration text of Xinjiang Tianshan, the essential file submitted to the World Heritage Centre for inscription as a NWHS.
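The AHP weighting step admits a compact sketch: criterion weights are the principal eigenvector of a pairwise-comparison matrix, with a consistency check; the 3x3 judgment matrix below is a placeholder, not the experts' actual comparisons.

```python
import numpy as np

# Saaty-scale pairwise judgments among three criteria (placeholder values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = 0.58                                # Saaty's random index for n = 3
print("weights:", weights.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```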