A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model
ERIC Educational Resources Information Center
Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.
2007-01-01
In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL) and the reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remaining error from the non-null interferometer, which is removed by an error-storage subtraction approach. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the system.
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
Unrean, Pornkamol; Khajeeram, Sutamat; Laoteng, Kobkul
2016-03-01
An integrative simultaneous saccharification and fermentation (SSF) model is a useful guiding tool for rapid process optimization to meet the techno-economic requirements of industrial-scale lignocellulosic ethanol production. In this work, we developed an SSF model comprising a metabolic network of a Saccharomyces cerevisiae cell coupled with fermentation kinetics and an enzyme hydrolysis model to quantitatively capture the dynamic responses of yeast cell growth and fermentation during SSF. By using model-based design of the feeding profiles for substrate and yeast cells in the fed-batch SSF process, efficient ethanol production with a high titer of up to 65 g/L and a high yield of 85% of the theoretical yield was accomplished. The ethanol titer and productivity were increased by 47% and 41%, respectively, in the optimized fed-batch SSF compared with the batch process. The developed integrative SSF model is therefore a promising approach for the systematic design of economical and sustainable SSF bioprocessing of lignocellulose.
Vilela, Paulina; Liu, Hongbin; Lee, SeungChul; Hwangbo, Soonho; Nam, KiJeon; Yoo, ChangKyoo
2018-08-15
The release of silver nanoparticles (AgNPs) to wastewater caused by over-generation and poor treatment of the remaining nanomaterial has raised the interest of researchers. AgNPs can have a negative impact on watersheds and degrade the effluent quality of wastewater treatment plants (WWTPs). The aim of this research is to design and analyze an integrated model system for the removal of AgNPs with high effluent quality in WWTPs, using a systematic approach that combines modeling of the removal mechanisms, optimization, and control. The activated sludge model 1 was modified to include AgNP removal mechanisms such as adsorption/desorption, dissolution, and inhibition of microbial organisms. Response surface methodology was performed to minimize the AgNP and total nitrogen concentrations in the effluent by optimizing the operating conditions of the system. The optimal operating conditions were then used to implement control strategies in the system for further analysis of the enhancement of AgNP removal efficiency. The overall AgNP removal efficiency was found to be slightly higher than 80%, an improvement of almost 7% compared to the BSM1 reference value. This study provides a systematic approach to find an optimal solution for enhancing AgNP removal efficiency in WWTPs and thereby prevent pollution in the environment.
Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity
NASA Astrophysics Data System (ADS)
Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.
2017-10-01
A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least-squares error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, stochastic noise with systematic and non-systematic components is introduced into the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.
Begon, Mickaël; Andersen, Michael Skipper; Dumas, Raphaël
2018-03-01
Multibody kinematics optimization (MKO) aims to reduce soft tissue artefact (STA) and is a key step in musculoskeletal modeling. The objective of this review was to identify the numerical methods, their validation and performance for the estimation of the human joint kinematics using MKO. Seventy-four papers were extracted from a systematized search in five databases and cross-referencing. Model-derived kinematics were obtained using either constrained optimization or Kalman filtering to minimize the difference between measured (i.e., by skin markers, electromagnetic or inertial sensors) and model-derived positions and/or orientations. While hinge, universal, and spherical joints prevail, advanced models (e.g., parallel and four-bar mechanisms, elastic joint) have been introduced, mainly for the knee and shoulder joints. Models and methods were evaluated using: (i) simulated data based, however, on oversimplified STA and joint models; (ii) reconstruction residual errors, ranging from 4 mm to 40 mm; (iii) sensitivity analyses which highlighted the effect (up to 36 deg and 12 mm) of model geometrical parameters, joint models, and computational methods; (iv) comparison with other approaches (i.e., single body kinematics optimization and nonoptimized kinematics); (v) repeatability studies that showed low intra- and inter-observer variability; and (vi) validation against ground-truth bone kinematics (with errors between 1 deg and 22 deg for tibiofemoral rotations and between 3 deg and 10 deg for glenohumeral rotations). Moreover, MKO was applied to various movements (e.g., walking, running, arm elevation). Additional validations, especially for the upper limb, should be undertaken and we recommend a more systematic approach for the evaluation of MKO. In addition, further model development, scaling, and personalization methods are required to better estimate the secondary degrees-of-freedom (DoF).
Quantifying properties of hot and dense QCD matter through systematic model-to-data comparison
Bernhard, Jonah E.; Marcy, Peter W.; Coleman-Smith, Christopher E.; ...
2015-05-22
We systematically compare an event-by-event heavy-ion collision model to data from the CERN Large Hadron Collider. Using a general Bayesian method, we probe multiple model parameters including fundamental quark-gluon plasma properties such as the specific shear viscosity η/s, calibrate the model to optimally reproduce experimental data, and extract quantitative constraints for all parameters simultaneously. Furthermore, the method is universal and easily extensible to other data and collision models.
NASA Astrophysics Data System (ADS)
Bascetin, A.
2007-04-01
The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. It is also found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
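As a hedged illustration of the AHP step described above, the sketch below derives priority weights and a consistency ratio from a pairwise comparison matrix; the matrix entries and the three alternatives are hypothetical, not values from the Seyitomer case study.

```python
# Minimal AHP sketch: derive priority weights for candidate reclamation
# methods from a pairwise comparison matrix (values are illustrative only).
import numpy as np

# Hypothetical pairwise comparisons of three reclamation alternatives.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.,  1.0, 2.0],
              [1/5.,  1/2., 1.0]])

# The principal eigenvector of A gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR) checks whether the judgments are coherent;
# CR < 0.1 is the usual acceptance threshold. RI is the random index for n = 3.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58
CR = CI / RI

print("priority weights:", weights, "consistency ratio:", CR)
```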
Treatment of systematic errors in land data assimilation systems
NASA Astrophysics Data System (ADS)
Crow, W. T.; Yilmaz, M.
2012-12-01
Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
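As a hedged numpy sketch of the commonly applied linear-rescaling techniques named above (variance matching and regression-based rescaling), the snippet below rescales a synthetic "observation" series to a synthetic "model" series; the series are stand-ins, not actual satellite retrievals or land-surface model output.

```python
# Sketch of two common pre-assimilation rescaling strategies on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.standard_normal(1000)
model = 0.8 * truth + 0.3 * rng.standard_normal(1000)        # model-derived state
obs = 1.5 * truth + 0.6 * rng.standard_normal(1000) + 0.2    # biased, noisier retrieval

def variance_matching(obs, model):
    """Rescale obs so its mean and variance match the model climatology."""
    return (obs - obs.mean()) * (model.std() / obs.std()) + model.mean()

def regression_rescaling(obs, model):
    """Rescale obs by the least-squares slope of model on obs."""
    cov = np.cov(model, obs)
    slope = cov[0, 1] / cov[1, 1]
    return model.mean() + slope * (obs - obs.mean())

obs_vm = variance_matching(obs, model)
obs_lr = regression_rescaling(obs, model)
print("std model / VM / regression:", model.std(), obs_vm.std(), obs_lr.std())
```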
An effective model for ergonomic optimization applied to a new automotive assembly line
NASA Astrophysics Data System (ADS)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-01
An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations involved in manual work performed under correct conditions. The model includes a schematic and systematic analysis method for the operations and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of the operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan
2013-01-01
A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions.
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high-precision numerical analysis methods with optimization algorithms to make a systematic exploration of a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of a high-precision analysis. By these means, the contradiction between precision and efficiency is solved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiments (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing underwater glider increases by 9.1%.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
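A toy sketch of the underlying idea, assuming a small hypothetical linear system rather than the engine model used in the paper: each candidate sensor subset is scored by the trace of the steady-state Kalman filter error covariance obtained from the discrete algebraic Riccati equation.

```python
# Toy sensor-selection sketch: score each candidate sensor subset by the
# trace of the steady-state Kalman error covariance (smaller is better).
# The state-space matrices below are hypothetical, not the engine model.
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 0.98]])          # state transition
C_full = np.eye(3)                          # three candidate sensors
Q = 0.01 * np.eye(3)                        # process noise covariance
R_full = np.diag([0.10, 0.05, 0.20])        # sensor noise covariances

def steady_state_error(sensor_idx):
    C = C_full[list(sensor_idx), :]
    R = R_full[np.ix_(sensor_idx, sensor_idx)]
    # Filter form of the DARE gives the prediction-error covariance P.
    P = solve_discrete_are(A.T, C.T, Q, R)
    return np.trace(P)

candidates = [combo for r in (1, 2) for combo in itertools.combinations(range(3), r)]
best = min(candidates, key=steady_state_error)
print("best sensor subset:", best, "trace(P) =", steady_state_error(best))
```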
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James
This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
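The report itself gives no equations here, so the snippet below is only a generic, hedged illustration of the idea: two hypothetical yield-curve forms are compared on the same synthetic data with the Bayesian information criterion, a common large-sample approximation to Bayesian model selection.

```python
# Hedged illustration of model selection: compare two hypothetical yield-curve
# forms with BIC, a large-sample approximation to the Bayesian evidence.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
strain = np.linspace(0.01, 0.2, 40)
stress = 500 + 800 * strain**0.5 + 10 * rng.standard_normal(strain.size)  # synthetic data

def linear_hardening(x, s0, h):
    return s0 + h * x

def power_hardening(x, s0, k, n):
    return s0 + k * x**n

def bic(model, p0):
    popt, _ = curve_fit(model, strain, stress, p0=p0, maxfev=10000)
    rss = np.sum((stress - model(strain, *popt))**2)
    n_obs, n_par = strain.size, len(popt)
    return n_obs * np.log(rss / n_obs) + n_par * np.log(n_obs)

print("BIC linear:", bic(linear_hardening, [500, 1000]))
print("BIC power :", bic(power_hardening, [500, 800, 0.5]))   # lower BIC preferred
```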
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning.
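A minimal sketch of the simple graphical example the authors describe, with hypothetical coefficients: choose how many "regular" and "severe" patients to treat so that total health benefit is maximized subject to time and budget constraints.

```python
# Toy constrained-optimization sketch: maximize total health benefit from
# treating "regular" and "severe" patients under time and budget constraints.
# All coefficients are hypothetical illustrations, not taken from the report.
from scipy.optimize import linprog

benefit = [-2.0, -5.0]        # negated benefit per (regular, severe) patient; linprog minimizes
A_ub = [[1.0, 3.0],           # clinician hours per patient
        [100.0, 400.0]]       # cost per patient
b_ub = [40.0,                 # available hours
        6000.0]               # available budget

res = linprog(c=benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("patients treated (regular, severe):", res.x, "total benefit:", -res.fun)
```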
Zwetsloot, P P; Kouwenberg, L H J A; Sena, E S; Eding, J E; den Ruijter, H M; Sluijter, J P G; Pasterkamp, G; Doevendans, P A; Hoefer, I E; Chamuleau, S A J; van Hout, G P J; Jansen Of Lorkeers, S J
2017-10-27
Large animal models are essential for the development of novel therapeutics for myocardial infarction (MI). To optimize translation, we need to assess the effect of experimental design on disease outcome and model experimental design to resemble the clinical course of MI. The aim of this study is therefore to systematically investigate how experimental decisions affect outcome measurements in large animal MI models. We used control-animal data from two independent meta-analyses of large animal MI models. All variables of interest were pre-defined. We performed univariable and multivariable meta-regression to analyze whether these variables influenced infarct size and ejection fraction. Our analyses incorporated 246 relevant studies. Multivariable meta-regression revealed that infarct size and cardiac function were influenced independently by the choice of species, sex, co-medication, occlusion type, occluded vessel, quantification method, ischemia duration and follow-up duration. We provide strong systematic evidence that commonly used endpoints significantly depend on study design and biological variation. This makes direct comparison of different study results difficult and calls for standardized models. Researchers should take this into account when designing large animal studies to most closely mimic the clinical course of MI and enable translational success.
The Most Effective Way of Delivering a Train-the-Trainers Program: A Systematic Review
ERIC Educational Resources Information Center
Pearce, Jennifer; Mann, Mala K.; Jones, Caryl; van Buschbach, Susanne; Olff, Miranda; Bisson, Jonathan I.
2012-01-01
Introduction: Previous literature has shown that multifaceted, interactive interventions may be the most effective way to train health and social care professionals. A Train-the-Trainer (TTT) model could incorporate all these components. We conducted a systematic review to determine the overall effectiveness and optimal delivery of TTT programs.…
NASA Astrophysics Data System (ADS)
Renner, Timothy
2011-12-01
A C++ framework was constructed with the explicit purpose of systematically generating string models using the Weakly Coupled Free Fermionic Heterotic String (WCFFHS) method. The software, optimized for speed, generality, and ease of use, has been used to conduct preliminary systematic investigations of WCFFHS vacua. Documentation for this framework is provided in the Appendix. After an introduction to theoretical and computational aspects of WCFFHS model building, a study of ten-dimensional WCFFHS models is presented. Degeneracies among equivalent expressions of each of the known models are investigated and classified. A study of more phenomenologically realistic four-dimensional models based on the well known "NAHE" set is then presented, with statistics being reported on gauge content, matter representations, and space-time supersymmetries. The final study is a parallel to the NAHE study in which a variation of the NAHE set is systematically extended and examined statistically. Special attention is paid to models with "mirroring"---identical observable and hidden sector gauge groups and matter representations.
Piezoresistive Cantilever Performance—Part II: Optimization
Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.
2010-01-01
Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is in principle extensible to doping methods other than ion implantation. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution and the sensitivity-noise tradeoff in optimal cantilever performance. We also performed a comparison between our optimization technique and existing models and demonstrated an eightfold improvement in force resolution over simplified models.
Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints
NASA Astrophysics Data System (ADS)
Kmet', Tibor; Kmet'ová, Mária
2009-09-01
A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed by [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.
A Systematic Software, Firmware, and Hardware Codesign Methodology for Digital Signal Processing
2014-03-01
Software and hardware partitioning is a very difficult challenge in the field of…
A systematic model to compare nurses' optimal and actual competencies in the clinical setting.
Meretoja, Riitta; Koponen, Leena
2012-02-01
This paper is a report of a study to develop a model to compare nurses' optimal and actual competencies in the clinical setting. Although a future challenge is to focus developmental and educational targets in health care, limited information is available on methods for predicting optimal competencies. A multidisciplinary group of 24 experts on perioperative care were recruited to this study. They anticipated the effects of future challenges on perioperative care and specified the level of optimal competencies by using the Nurse Competence Scale before and after group discussions. Expert group consensus discussions were held to achieve the highest possible agreement on the overall level of optimal competencies. Registered Nurses (n = 87) and their nurse managers from five different units conducted assessments of the actual level of nurse competence with the Nurse Competence Scale instrument. Data were collected in 2006-2007. Group consensus discussions solidified the experts' anticipations about the optimal competence level. This optimal competence level was significantly higher than the nurses' self-reported actual competence or the nurse managers' assessed level of actual competence. The study revealed some competence items that are key challenges for the future education of professional nursing practice. It is important that the multidisciplinary experts in a particular care context develop a shared understanding of the future competency requirements of patient care. Combining optimal competence profiles with systematic competence assessments contributes to targeted continual learning and educational interventions.
Optimizing Medical Kits for Space Flight
NASA Technical Reports Server (NTRS)
Minard, Charles G.; FreiredeCarvalho, Mary H.; Iyengar, M. Sriram
2010-01-01
The Integrated Medical Model (IMM) uses Monte Carlo methodologies to predict the occurrence of medical events, their mitigation, and the resources required during space flight. The model includes two modules that utilize output from a single model simulation to identify an optimized medical kit for a specified mission scenario. This poster describes two flexible optimization routines built in SAS 9.1. The first routine utilizes a systematic process of elimination to maximize (or minimize) outcomes subject to attribute constraints. The second routine uses a search-and-mutate approach to minimize medical kit attributes given a set of outcome constraints. The model currently identifies 273 unique resources that are used to treat at least one of the 83 medical conditions in the model.
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
NASA Technical Reports Server (NTRS)
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
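A schematic sketch of the solution-mapping idea under stated assumptions: a cheap stand-in plays the role of the expensive kinetic simulation, a two-level factorial design of computer experiments is run, and a simple algebraic expression in the parameters is fitted to serve as the surrogate response.

```python
# Schematic solution-mapping sketch: parameterize a model response by a simple
# algebraic expression fitted to a factorial design of computer experiments.
# The "expensive_model" here is a stand-in, not a combustion mechanism.
import itertools
import numpy as np

def expensive_model(k1, k2):              # hypothetical response, e.g. an ignition delay
    return np.exp(-0.8 * k1) * (1.0 + 0.3 * k2 + 0.1 * k1 * k2)

# Two-level factorial design (coded values -1/+1) plus a center point.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=2)) + [[0.0, 0.0]])
y = np.array([expensive_model(k1, k2) for k1, k2 in design])

# Fit response = b0 + b1*k1 + b2*k2 + b12*k1*k2 by least squares.
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted algebraic expression can now replace the model in a joint
# multi-response optimization over the rate parameters.
print("surrogate coefficients:", coeffs)
```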
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.
A Decision Support Model and Tool to Assist Financial Decision-Making in Universities
ERIC Educational Resources Information Center
Bhayat, Imtiaz; Manuguerra, Maurizio; Baldock, Clive
2015-01-01
In this paper, a model and tool is proposed to assist universities and other mission-based organisations to ascertain systematically the optimal portfolio of projects, in any year, meeting the organisations risk tolerances and available funds. The model and tool presented build on previous work on university operations and decision support systems…
Senço, Natasha M; Huang, Yu; D'Urso, Giordano; Parra, Lucas C; Bikson, Marom; Mantovani, Antonio; Shavitt, Roseli G; Hoexter, Marcelo Q; Miguel, Eurípedes C; Brunoni, André R
2015-07-01
Neuromodulation techniques for obsessive-compulsive disorder (OCD) treatment have expanded with greater understanding of the brain circuits involved. Transcranial direct current stimulation (tDCS) might be a potential new treatment for OCD, although the optimal montage is unclear. The aim was to perform a systematic review of meta-analyses of repetitive transcranial magnetic stimulation (rTMS) and deep brain stimulation (DBS) trials for OCD, in order to identify brain stimulation targets for future tDCS trials and to support the empirical evidence with computer head modeling analysis. Systematic reviews of rTMS and DBS trials on OCD in PubMed/MEDLINE were searched. For the tDCS computational analysis, we employed head models with the goal of optimally targeting current delivery to structures of interest. Only three references matched our eligibility criteria. We simulated four different electrode montages and analyzed current direction and intensity. Although DBS, rTMS and tDCS are not directly comparable and our theoretical model, based on DBS and rTMS targets, needs empirical validation, we found that the tDCS montage with the cathode over the pre-supplementary motor area and an extra-cephalic anode seems to activate most of the areas related to OCD.
Design of experiments applications in bioprocessing: concepts and approach.
Kumar, Vijesh; Bhalla, Akriti; Rathore, Anurag S
2014-01-01
Most biotechnology unit operations are complex in nature, with numerous process variables, feed material attributes, and raw material attributes that can have significant impact on the performance of the process. A design of experiments (DOE)-based approach offers a solution to this conundrum and allows for an efficient estimation of the main effects and the interactions with a minimal number of experiments. Numerous publications illustrate the application of DOE towards development of different bioprocessing unit operations. However, a systematic approach for evaluating the different DOE designs and choosing the optimal design for a given application has not been published yet. Through this work we have compared the I-optimal and D-optimal designs to the commonly used central composite and Box-Behnken designs for bioprocess applications. A systematic methodology is proposed for construction of the model and for precise prediction of the responses for three case studies involving some of the commonly used unit operations in downstream processing. Use of the Akaike information criterion for model selection has been examined and found to be suitable for the applications under consideration.
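As a hedged illustration of the Akaike-information-criterion step mentioned above, the snippet below fits first- and second-order response models to synthetic DOE data and keeps the one with the lower AIC; the data and model forms are illustrative only.

```python
# Hedged illustration of AIC-based selection between a first-order and a
# second-order response model fitted to synthetic DOE data.
import numpy as np

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
y = 5 + 2 * x1 - x2 + 1.5 * x1 * x2 + 0.8 * x2**2 + 0.1 * rng.standard_normal(30)

def aic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta)**2)
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + 2 * k

X1 = np.column_stack([np.ones(30), x1, x2])                 # first-order model
X2 = np.column_stack([X1, x1 * x2, x1**2, x2**2])           # second-order model
print("AIC first order :", aic(X1, y))
print("AIC second order:", aic(X2, y))    # lower AIC indicates the preferred model
```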
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
Zhao, Xiuli; Asante Antwi, Henry; Yiranbon, Ethel
2014-01-01
The idea of aggregating information is clearly recognizable in the daily lives of all entities, whether individuals or groups; since time immemorial, corporate organizations, governments, and individuals as economic agents have aggregated information to formulate decisions. Energy planning represents an investment-decision problem where information needs to be aggregated from credible sources to predict both demand and supply of energy. To do this there are varying methods, ranging from the use of portfolio theory to managing risk and maximizing portfolio performance under a variety of unpredictable economic outcomes. The future demand for energy and the need to use solar energy in order to avoid a future energy crisis in Jiangsu province in China require energy planners in the province to abandon their reliance on traditional, "least-cost," and stand-alone technology cost estimates and instead evaluate conventional and renewable energy supply on the basis of a hybrid of optimization models in order to ensure effective and reliable supply. Our task in this research is to propose measures towards optimal solar energy forecasting by employing a systematic optimization approach based on a hybrid of weather and energy forecast models. After giving an overview of the sustainable energy issues in China, we review and classify the various models that existing studies have used to predict the influence of weather and the output of solar energy production units. Further, we evaluate the performance of an exemplary ensemble model which combines the forecast output of two popular statistical prediction methods using a dynamic weighting factor.
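A minimal sketch, with synthetic data, of an ensemble combination using a dynamic weighting factor in the spirit of the exemplary model evaluated above: two member forecasts are blended with weights proportional to their inverse recent errors.

```python
# Minimal sketch of a dynamically weighted two-model ensemble forecast.
# The member forecasts are synthetic stand-ins for the statistical
# prediction methods evaluated in the study.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
actual = 50 + 30 * np.sin(2 * np.pi * t / 24)                 # synthetic solar output
model_a = actual + 5 * rng.standard_normal(t.size)            # member forecast A
model_b = actual + 3 * rng.standard_normal(t.size) + 2.0      # member forecast B (biased)

window = 24                                                    # hours used to update weights
ensemble = np.empty_like(actual)
for i in range(t.size):
    lo = max(0, i - window)
    if i == 0:
        w_a = 0.5
    else:
        err_a = np.mean(np.abs(actual[lo:i] - model_a[lo:i]))
        err_b = np.mean(np.abs(actual[lo:i] - model_b[lo:i]))
        w_a = (1 / err_a) / (1 / err_a + 1 / err_b)            # inverse-error weighting
    ensemble[i] = w_a * model_a[i] + (1.0 - w_a) * model_b[i]

print("MAE A:", np.mean(np.abs(actual - model_a)),
      "MAE B:", np.mean(np.abs(actual - model_b)),
      "MAE ensemble:", np.mean(np.abs(actual - ensemble)))
```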
Optimal control strategy for a novel computer virus propagation model on scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Chunming; Huang, Haitao
2016-06-01
This paper aims to study the combined impact of reinstalling the system and of network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when the spreading threshold is less than one, whereas the viral equilibrium is proved to be permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
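The SLBOS compartments are specific to the paper, but the role of scale-free topology in the spreading threshold can be illustrated generically: in degree-based mean-field theory the threshold scales as <k>/<k^2>, which the sketch below estimates on Barabási-Albert networks of increasing size.

```python
# Generic illustration (not the SLBOS model itself): estimate the degree-based
# mean-field spreading threshold <k>/<k^2> on scale-free networks, which
# shrinks as the network grows because <k^2> diverges.
import numpy as np
import networkx as nx

for n in (1000, 10000, 100000):
    G = nx.barabasi_albert_graph(n, m=3, seed=42)
    k = np.array([d for _, d in G.degree()], dtype=float)
    threshold = k.mean() / np.mean(k**2)
    print(f"n={n:6d}  <k>={k.mean():.2f}  <k^2>={np.mean(k**2):.1f}  "
          f"threshold ~ {threshold:.4f}")
```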
Systematic study of source mask optimization and verification flows
NASA Astrophysics Data System (ADS)
Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi
2012-06-01
Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding the end user of the technique. A systematic study of the different flows and their possible unification is missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, to understand their commonality and differences, and to provide a generic guideline for RET selection via SMO. The paper discusses three different types of variation that commonly arise in SMO, namely pattern preparation and selection, the availability of a relevant OPC recipe for a freeform source, and finally the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with DOF and MEEF as quality metrics is examined.
NASA Astrophysics Data System (ADS)
Wang, Fengwen
2018-05-01
This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [ - 0.78 , 0.00 ] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
Optimizing construction quality management of pavements using mechanistic performance analysis.
DOT National Transportation Integrated Search
2004-08-01
This report presents a statistically based algorithm that was developed to reconcile the results from several pavement performance models used in the state of practice with systematic process control techniques. These algorithms identify project-specif...
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
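A hedged one-dimensional sketch of the first-order correction at the heart of such model management: the low-fidelity model is shifted so that its value and slope match the high-fidelity model at the current iterate, and the corrected model is minimized within a trust region. Both model functions here are analytic stand-ins, not flow solvers.

```python
# Toy 1-D sketch of first-order approximation and model management: correct a
# cheap model so it matches the expensive model's value and gradient at the
# current point, then step toward the corrected model's minimizer.
import numpy as np
from scipy.optimize import minimize_scalar

f_hi = lambda x: (x - 1.2)**2 + 0.1 * np.sin(5 * x)       # "high fidelity"
f_lo = lambda x: (x - 0.8)**2                              # "low fidelity"

def grad(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x, radius = 3.0, 1.0
for it in range(6):
    # Additive first-order correction: matches f_hi value and slope at x.
    a = f_hi(x) - f_lo(x)
    b = grad(f_hi, x) - grad(f_lo, x)
    corrected = lambda s, a=a, b=b, x0=x: f_lo(s) + a + b * (s - x0)
    res = minimize_scalar(corrected, bounds=(x - radius, x + radius), method="bounded")
    # Accept the step only if the expensive model actually improved.
    if f_hi(res.x) < f_hi(x):
        x = res.x
    else:
        radius *= 0.5
    print(f"iter {it}: x = {x:.4f}, f_hi(x) = {f_hi(x):.5f}")
```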
Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions
Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...
2016-02-24
Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
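A hedged sketch of the linear forward-error-propagation step mentioned above: the covariance of the fitted constants is pushed through a finite-difference Jacobian of an observable. The observable and covariance below are hypothetical placeholders, not actual chiral-EFT quantities.

```python
# Hedged sketch of linear forward error propagation: push the covariance of
# fitted constants through the Jacobian of an observable. The "observable"
# and covariance below are hypothetical, not actual chiral-EFT quantities.
import numpy as np

def observable(lecs):
    c1, c2 = lecs
    return 3.0 * c1 - 1.5 * c2 + 0.4 * c1 * c2     # toy prediction

lec_best = np.array([1.2, -0.7])
lec_cov = np.array([[0.010, 0.002],
                    [0.002, 0.020]])               # statistical covariance of the fit

# Finite-difference Jacobian of the observable with respect to the constants.
eps = 1e-6
jac = np.array([(observable(lec_best + eps * np.eye(2)[i])
                 - observable(lec_best - eps * np.eye(2)[i])) / (2 * eps)
                for i in range(2)])

var_stat = jac @ lec_cov @ jac                      # propagated statistical variance
print("observable:", observable(lec_best), "+/-", np.sqrt(var_stat), "(statistical)")
```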
Multifidelity Analysis and Optimization for Supersonic Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory
2010-01-01
Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil, and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.
Optimal cooperative control synthesis of active displays
NASA Technical Reports Server (NTRS)
Garg, S.; Schmidt, D. K.
1985-01-01
A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.
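A compact sketch of optimal-control synthesis for the simple K/s^2 (double-integrator) plant mentioned above, using a continuous-time LQR gain from the algebraic Riccati equation; the weights are illustrative, and the paper's cooperative pilot/display synthesis is considerably more elaborate.

```python
# LQR synthesis for the simple K/s^2 (double-integrator) plant mentioned above.
# The quadratic weights are illustrative; the cooperative pilot/display
# synthesis in the paper involves an optimal control model of the human.
import numpy as np
from scipy.linalg import solve_continuous_are

K = 1.0
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # states: position, rate
B = np.array([[0.0],
              [K]])
Q = np.diag([10.0, 1.0])          # penalize position error and rate
R = np.array([[1.0]])             # penalize control effort

P = solve_continuous_are(A, B, Q, R)
K_lqr = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K_lqr x
closed_loop_poles = np.linalg.eigvals(A - B @ K_lqr)
print("LQR gain:", K_lqr, "closed-loop poles:", closed_loop_poles)
```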
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
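A minimal numpy sketch of the diffusion-map construction: build a Gaussian kernel over observed configurations, normalize it into a Markov matrix, and take the leading non-trivial eigenvectors as the coarse-grained coordinates. The data here are synthetic points on a noisy circle, not optimal-velocity-model trajectories.

```python
# Minimal diffusion-map sketch: kernel matrix -> Markov normalization ->
# leading non-trivial eigenvectors as coarse-grained coordinates.
import numpy as np

rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.standard_normal((300, 2))

# Pairwise squared distances and Gaussian kernel.
d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Row-normalize into a Markov transition matrix and diagonalize it.
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)

# The first eigenvector is trivial (constant); the next ones serve as the
# diffusion coordinates describing the macroscopic geometry of the data.
psi1 = eigvecs[:, order[1]].real
psi2 = eigvecs[:, order[2]].real
print("leading nontrivial eigenvalues:", eigvals.real[order[1:4]])
```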
Can we discover double Higgs production at the LHC?
NASA Astrophysics Data System (ADS)
Alves, Alexandre; Ghosh, Tathagata; Sinha, Kuver
2017-08-01
We explore double Higgs production via gluon fusion in the bb̄γγ channel at the high-luminosity LHC using machine learning tools. We first propose a Bayesian optimization approach to select cuts on kinematic variables, obtaining a 30%-50% increase in the significance compared to current results in the literature. We show that this improvement persists once systematic uncertainties are taken into account. We next use boosted decision trees (BDT) to further discriminate signal and background events. Our analysis shows that a joint optimization of kinematic cuts and BDT hyperparameters results in an appreciable improvement in the significance. Finally, we perform a multivariate analysis of the output scores of the BDT. We find that, assuming a very low level of systematics, the techniques proposed here will be able to confirm the production of a pair of standard model Higgs bosons at the 5σ level with 3 ab⁻¹ of data. Assuming a more realistic projection of the level of systematics, around 10%, the optimization of cuts to train BDTs combined with a multivariate analysis delivers a respectable significance of 4.6σ. Even assuming large systematics of 20%, our analysis predicts a 3.6σ significance, which represents at least strong evidence in favor of double Higgs production. We carefully incorporate background contributions coming from light-flavor jets or c jets being misidentified as b jets and jets being misidentified as photons in our analysis.
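A small helper implementing one common approximation for the significance in the presence of a fractional background systematic, which is the kind of figure of merit optimized above; the event counts are placeholders, not the paper's yields.

```python
# One common approximation for discovery significance with a fractional
# background systematic: Z = s / sqrt(b + (sys_frac * b)^2).
# The signal/background counts are placeholders, not the paper's yields.
import math

def significance(s, b, sys_frac):
    return s / math.sqrt(b + (sys_frac * b) ** 2)

s, b = 30.0, 120.0                     # hypothetical yields after the BDT selection
for sys_frac in (0.0, 0.10, 0.20):
    print(f"systematics {sys_frac:.0%}: Z = {significance(s, b, sys_frac):.2f}")
```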
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
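A minimal sketch of speckle-pattern generation as a filtered Poisson process, assuming Gaussian speckle profiles and illustrative values for the image size, speckle density, and radius:

```python
import numpy as np

def speckle_pattern(size=256, density=0.02, radius=3.0, seed=0):
    """Superpose a Poisson-distributed number of Gaussian speckles at random positions."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(density * size * size)       # Poisson count of speckles
    xc, yc = rng.uniform(0, size, n), rng.uniform(0, size, n)
    y, x = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for cx, cy in zip(xc, yc):                   # the "filter" applied to each Poisson point
        img += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * radius ** 2))
    return img / img.max()

pattern = speckle_pattern()
print(pattern.shape, round(float(pattern.mean()), 3))
```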
Li, Hui; Wang, Chuanxu; Shang, Meng; Ou, Wei
2017-01-01
In this paper, we examine the influences of vertical and horizontal cooperation models on the optimal decisions and performance of a low-carbon closed-loop supply chain (CLSC) with a manufacturer and two retailers, and study optimal operation in terms of competitive pricing, competitive low-carbon promotion, carbon emission reduction, used-product collection, and profits. We consider the completely decentralized model, the M-R vertical cooperation model, the R-R horizontal cooperation model, the M-R-R vertical and horizontal cooperation model, and the completely centralized model, and identify the optimal decision results and profits for each. A systematic comparison and numerical analysis show that the completely centralized model performs best in all optimal decision results among the models. In semi-cooperation, the M-R vertical cooperation model is positive, the R-R horizontal cooperation model is passive, and the positivity of the M-R-R vertical and horizontal cooperation model decreases with increasing competitive intensity in used-product returns, carbon emission reduction level, low-carbon promotion effort, and the profits of the manufacturer and the entire supply chain. PMID:29104268
Systematic design for trait introgression projects.
Cameron, John N; Han, Ye; Wang, Lizhi; Beavis, William D
2017-10-01
Using an Operations Research approach, we demonstrate the design of optimal trait introgression projects with respect to competing objectives. We demonstrate an innovative approach for designing Trait Introgression (TI) projects based on optimization principles from Operations Research. If the designs of TI projects are based on clear and measurable objectives, they can be formulated as mathematical models with decision variables and constraints, and the results can be summarized in Pareto optimality plots associated with any arbitrary selection strategy. The Pareto plots can be used to make rational decisions concerning the trade-offs between maximizing the probability of success and minimizing costs and time. The systematic rigor associated with a cost, time and probability of success (CTP) framework is well suited to designing TI projects that require dynamic decision making. The CTP framework also revealed that previously identified 'best' strategies can be improved to be at least twice as effective without increasing time or expenses.
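A small sketch, under invented candidate strategies, of how Pareto-optimal designs can be extracted from a set of candidates scored on cost, time, and probability of success (with probability negated so that all three objectives are minimized):

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated rows; every column is treated as 'smaller is better'."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(1)
cost = rng.uniform(10, 100, 50)        # cost of each candidate strategy
time = rng.integers(3, 10, 50)         # generations required
prob = rng.uniform(0.2, 0.99, 50)      # probability of success
pts = np.c_[cost, time, -prob]         # negate probability so all objectives are minimised
print("Pareto-optimal strategies:", pareto_front(pts))
```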
Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.
2016-01-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty of solving an optimization problem with two nested layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230
Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios
2018-05-02
Currently, the design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability and its potential as a systematic tool for optimal bioprocess design were demonstrated by its effective prediction of bioprocess performance, which agreed with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
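A minimal stand-in for the kind of sampling-based parameter study DAKOTA automates, written directly in Python against an assumed toy polarization model rather than the Penn State/Sandia PEMFC model:

```python
import numpy as np

def cell_voltage(i, e0=1.0, r_ohm=0.2, i0=1e-3, alpha=0.5):
    """Toy polarisation curve: open-circuit voltage minus activation and ohmic losses."""
    return e0 - (0.0257 / alpha) * np.log(i / i0) - r_ohm * i

rng = np.random.default_rng(0)
n = 2_000
samples = {                                       # assumed parameter ranges
    "r_ohm": rng.uniform(0.1, 0.3, n),
    "i0":    rng.uniform(5e-4, 5e-3, n),
    "alpha": rng.uniform(0.3, 0.7, n),
}
v = cell_voltage(0.8, r_ohm=samples["r_ohm"], i0=samples["i0"], alpha=samples["alpha"])

# Rank parameters by the magnitude of their correlation with the response
for name, x in samples.items():
    print(name, round(float(np.corrcoef(x, v)[0, 1]), 2))
```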
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
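A compact sketch of the genetic-algorithm core of such a selection process; the randomly generated fault-signature matrix and the summed signal-to-noise merit function are simplified stand-ins for the fidelity/discrimination metric described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_faults, suite_size = 30, 6, 8
signature = rng.normal(0, 1, (n_faults, n_sensors))   # expected sensor shift per fault
noise = rng.uniform(0.5, 2.0, n_sensors)              # sensor noise standard deviations

def merit(suite):
    """Quality measure: fault-detection signal-to-noise summed over faults and sensors."""
    return float((np.abs(signature[:, suite]) / noise[suite]).sum())

def random_suite():
    return rng.choice(n_sensors, suite_size, replace=False)

def mutate(suite):
    """Swap one sensor in the suite for one currently outside it."""
    child = suite.copy()
    outside = np.setdiff1d(np.arange(n_sensors), child)
    child[rng.integers(suite_size)] = rng.choice(outside)
    return child

population = [random_suite() for _ in range(40)]
for _ in range(100):                                   # simple mutation-only elitist GA
    population.sort(key=merit, reverse=True)
    population = population[:20] + [mutate(p) for p in population[:20]]
best = max(population, key=merit)
print("best suite:", sorted(best.tolist()), "merit:", round(merit(best), 2))
```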
Optimal control of epidemic information dissemination over networks.
Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng
2014-12-01
Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.
Simulation-Based Evaluation of Learning Sequences for Instructional Technologies
ERIC Educational Resources Information Center
McEneaney, John E.
2016-01-01
Instructional technologies critically depend on systematic design, and learning hierarchies are a commonly advocated tool for designing instructional sequences. But hierarchies routinely allow numerous sequences and choosing an optimal sequence remains an unsolved problem. This study explores a simulation-based approach to modeling learning…
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
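A minimal Wiener-filter (least-squares) decoder in the spirit of the baseline model above, mapping lagged synthetic firing rates to toy two-dimensional kinematics:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_lags = 2_000, 50, 10
rates = rng.poisson(2.0, (T, n_neurons)).astype(float)          # binned spike counts
kin = rates @ rng.normal(0, 0.1, (n_neurons, 2)) + 0.5 * rng.normal(size=(T, 2))  # toy 2-D velocity

def lagged(X, n_lags):
    """Stack the current and previous n_lags-1 bins into one design matrix plus a bias column."""
    Z = np.hstack([np.roll(X, k, axis=0) for k in range(n_lags)])[n_lags:]
    return np.hstack([Z, np.ones((Z.shape[0], 1))])

Z, Y = lagged(rates, n_lags), kin[n_lags:]
W, *_ = np.linalg.lstsq(Z, Y, rcond=None)                       # least-squares Wiener solution
pred = Z @ W
print("correlation (x-velocity):", round(float(np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]), 3))
```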
Design and study of water supply system for supercritical unit boiler in thermal power station
NASA Astrophysics Data System (ADS)
Du, Zenghui
2018-04-01
In order to design and optimize the boiler feedwater system of a supercritical unit, establishing a highly accurate model of the controlled object and characterizing its dynamics are prerequisites for developing an effective thermal control system. Mechanism-based modeling often leads to large systematic errors. In this paper, a modern intelligent identification method is therefore used to build a high-precision quantitative model from the information contained in the historical operating data of typical boiler thermal systems. This method avoids the difficulties of disturbance-experiment modeling on the actual system in the field and provides a strong reference for the design and optimization of thermal automation control systems in thermal power plants.
Optimal Sensor Selection for Health Monitoring Systems
NASA Technical Reports Server (NTRS)
Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.
2005-01-01
Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
Process-aware EHR BPM systems: two prototypes and a conceptual framework.
Webster, Charles; Copenhaver, Mark
2010-01-01
Systematic methods to improve the effectiveness and efficiency of electronic health record-mediated processes will be key to EHRs playing an important role in the positive transformation of healthcare. Business process management (BPM) systematically optimizes process effectiveness, efficiency, and flexibility. Therefore BPM offers relevant ideas and technologies. We provide a conceptual model based on EHR productivity and negative feedback control that links EHR and BPM domains, describe two EHR BPM prototype modules, and close with the argument that typical EHRs must become more process-aware if they are to take full advantage of BPM ideas and technology. A prediction: Future extensible clinical groupware will coordinate delivery of EHR functionality to teams of users by combining modular components with executable process models whose usability (effectiveness, efficiency, and user satisfaction) will be systematically improved using business process management techniques.
de Vries, Rob B M; Wever, Kimberley E; Avey, Marc T; Stephens, Martin L; Sena, Emily S; Leenaars, Marlies
2014-01-01
The question of how animal studies should be designed, conducted, and analyzed remains underexposed in societal debates on animal experimentation. This is not only a scientific but also a moral question. After all, if animal experiments are not appropriately designed, conducted, and analyzed, the results produced are unlikely to be reliable and the animals have in effect been wasted. In this article, we focus on one particular method to address this moral question, namely systematic reviews of previously performed animal experiments. We discuss how the design, conduct, and analysis of future (animal and human) experiments may be optimized through such systematic reviews. In particular, we illustrate how these reviews can help improve the methodological quality of animal experiments, make the choice of an animal model and the translation of animal data to the clinic more evidence-based, and implement the 3Rs. Moreover, we discuss which measures are being taken and which need to be taken in the future to ensure that systematic reviews will actually contribute to optimizing experimental design and thereby to meeting a necessary condition for making the use of animals in these experiments justified. © The Author 2014. Published by Oxford University Press.
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. V.; Yerazunis, S. W.
1973-01-01
Problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars are reported. Problem areas include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis, terrain modeling and path selection; and chemical analysis of specimens. These tasks are summarized: vehicle model design, mathematical model of vehicle dynamics, experimental vehicle dynamics, obstacle negotiation, electrochemical controls, remote control, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, and chromatograph model evaluation and improvement.
Recent developments of axial flow compressors under transonic flow conditions
NASA Astrophysics Data System (ADS)
Srinivas, G.; Raghunandana, K.; Satish Shenoy, B.
2017-05-01
The objective of this paper is to give a holistic view of the most advanced technology and procedures practiced in the field of turbomachinery design. The compressor flow solver is the core of the CFD model used to solve viscous transonic flow problems. Popular techniques like Jameson’s rotated difference scheme were used to solve the potential flow equation in transonic conditions for two-dimensional aerofoils and later three-dimensional wings. The gradient-based method is also popular, especially for compressor blade shape optimization. Other optimization techniques available include evolutionary algorithms (EAs) and response surface methodology (RSM). It is observed that, in order to improve the compressor flow solver and obtain agreeable results, careful attention needs to be paid to viscous relations, grid resolution, turbulence modeling, and artificial viscosity in CFD. Advanced techniques like Jameson’s rotated difference scheme have had the most substantial impact on aerofoil and wing design. For compressor blade shape optimization, evolutionary algorithms are simpler than gradient-based techniques because they can handle the parameters simultaneously by searching from multiple points in the given design space. Response surface methodology (RSM) is used to build empirical models of observed responses and to study experimental data systematically. This methodology analyses the relationship between expected responses (outputs) and design variables (inputs) through a series of mathematical and statistical procedures. RSM has recently been implemented successfully for turbomachinery blade optimization. Well-designed, high-performance axial flow compressors find application in air-breathing jet engines.
Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System
NASA Astrophysics Data System (ADS)
Huang, Long; Feng, Xiao; Chu, Khim H.
2010-11-01
Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach will also lead to cost savings that accrue due to reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits for any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that will yield the minimum total cost.
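A minimal sketch of the underlying cost trade-off, with invented cost coefficients and a toy constraint that only sufficiently clean regenerated water can offset fresh water; this is not the paper-mill cost model itself:

```python
from scipy.optimize import minimize

FRESH_COST, DISCHARGE_COST = 0.3, 0.5      # assumed $/tonne of fresh water and of discharge
DEMAND = 1000.0                            # tonnes/h of process water required

def total_cost(z):
    f_regen, efficiency = z                # regenerated flowrate (t/h), contaminant removal (0-1)
    usable = efficiency * f_regen          # only sufficiently treated water offsets fresh water
    freshwater = max(DEMAND - usable, 0.0)
    regen_cost = (0.1 + 0.8 * efficiency ** 2) * f_regen   # treatment cost, convex in efficiency
    return (FRESH_COST + DISCHARGE_COST) * freshwater + regen_cost

res = minimize(total_cost, x0=[200.0, 0.5], bounds=[(0.0, DEMAND), (0.3, 0.95)])
print("optimal regeneration flowrate %.0f t/h, efficiency %.2f" % tuple(res.x))
```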
Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei; Song, Houbing
2017-07-12
Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, hence the study in this paper centers on this topic and three relevant methods and models are put forward. Firstly, the normal body-centered cubic lattice to cross body-centered cubic lattice (CBCL) has been improved, and a deployment process and topology generation method are built. Then most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quad-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. Besides, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs has been presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in networking and performance calculation of UWSNs.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
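A minimal sketch of yield optimization as a linear-fractional program solved via the Charnes-Cooper transformation with scipy.optimize.linprog; the three-reaction toy network and its bounds are assumptions for illustration, not a genome-scale model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 substrate uptake -> A, R2 A -> product, R3 A -> byproduct
S = np.array([[1.0, -1.0, -1.0]])       # stoichiometry of internal metabolite A
c = np.array([0.0, 1.0, 0.0])           # numerator of the yield: product secretion rate
d = np.array([1.0, 0.0, 0.0])           # denominator of the yield: substrate uptake rate
lb = np.array([1.0, 0.0, 0.1])          # flux bounds (the byproduct flux is forced slightly on)
ub = np.array([10.0, 10.0, 10.0])

# Charnes-Cooper: variables x = [w, t] with w = t*v, d.w = 1 and t > 0
n = len(c)
A_eq = np.vstack([np.hstack([S, np.zeros((S.shape[0], 1))]),   # S w = 0
                  np.hstack([d, [0.0]])])                      # d.w = 1
b_eq = np.r_[np.zeros(S.shape[0]), 1.0]
A_ub = np.vstack([np.hstack([np.eye(n), -ub[:, None]]),        #  w - ub*t <= 0
                  np.hstack([-np.eye(n), lb[:, None]])])       # -w + lb*t <= 0
b_ub = np.zeros(2 * n)
res = linprog(c=-np.r_[c, 0.0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n + [(1e-9, None)])
w, t = res.x[:n], res.x[n]
print("optimal yield:", round(-res.fun, 3), "fluxes:", np.round(w / t, 3))
```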
Display/control requirements for automated VTOL aircraft
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Kleinman, D. L.; Young, L. R.
1976-01-01
A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.
Categorical Biases in Spatial Memory: The Role of Certainty
ERIC Educational Resources Information Center
Holden, Mark P.; Newcombe, Nora S.; Shipley, Thomas F.
2015-01-01
Memories for spatial locations often show systematic errors toward the central value of the surrounding region. The Category Adjustment (CA) model suggests that this bias is due to a Bayesian combination of categorical and metric information, which offers an optimal solution under conditions of uncertainty (Huttenlocher, Hedges, & Duncan,…
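A minimal sketch of the precision-weighted (Bayesian) combination at the heart of the CA model; all numerical values are illustrative assumptions:

```python
def ca_estimate(trace, sigma_trace, prototype, sigma_cat):
    """Posterior mean of location: precision-weighted mix of memory trace and category prototype."""
    w = sigma_cat ** 2 / (sigma_cat ** 2 + sigma_trace ** 2)   # weight on the fine-grained trace
    return w * trace + (1 - w) * prototype

true_loc, prototype = 20.0, 45.0            # e.g. degrees within a quadrant and its centre
for sigma_trace in (2.0, 10.0, 25.0):       # increasing memory uncertainty
    est = ca_estimate(true_loc, sigma_trace, prototype, sigma_cat=15.0)
    print(f"sigma_trace={sigma_trace:>4}: estimate {est:.1f} (bias toward prototype {est - true_loc:+.1f})")
```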
Artificial intelligent techniques for optimizing water allocation in a reservoir watershed
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung
2014-05-01
This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm and an adaptive-network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; then search for the optimal reservoir operating histogram using a genetic algorithm (GA), based on given demands and hydrological conditions, which serves as the optimal basis of input-output training patterns for modelling; and finally build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme reliably helps water managers determine a suitable discount rate on water supply for both the irrigation and public sectors, and thus can reduce drought risk and the compensation costs induced by restrictions on agricultural water use.
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.
1972-01-01
Investigation of problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars has been undertaken. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks have been under study: vehicle model design, mathematical modeling of a dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer sybsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement.
Explaining quantum correlations through evolution of causal models
NASA Astrophysics Data System (ADS)
Harper, Robin; Chapman, Robert J.; Ferrie, Christopher; Granade, Christopher; Kueng, Richard; Naoumenko, Daniel; Flammia, Steven T.; Peruzzo, Alberto
2017-04-01
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.
1979-01-01
A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined. Continuation methods are examined to obtain exact optimal trajectories starting from the singular perturbation solutions.
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.
1972-01-01
The problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars were investigated. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; navigation, terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks were studied: vehicle model design, mathematical modeling of dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement and transport parameter evaluation.
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. A response surface regression model was applied to determine optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
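A minimal sketch of the simultaneous-optimization step, combining individual desirabilities into a geometric-mean desirability that is maximized over a design grid; the fitted response surfaces and target ranges below are invented for illustration:

```python
import numpy as np

def desirability_target(y, low, target, high):
    """Derringer-style desirability for a 'hit the target' response."""
    d = np.where(y <= target, (y - low) / (target - low), (high - y) / (high - target))
    return np.clip(d, 0.0, 1.0)

def desirability_max(y, low, high):
    """Desirability for a 'larger is better' response."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

# Assumed fitted response surfaces over a grid of (NaCl %, weeks on diet)
nacl = np.linspace(0.5, 8.0, 40)[:, None]
weeks = np.linspace(4.0, 8.0, 40)[None, :]
sbp = 150 + 9 * nacl - 0.3 * nacl ** 2 + 2 * weeks
do2 = 4.5 - 0.05 * (nacl - 4) ** 2 - 0.02 * (weeks - 5) ** 2
mass = 320 - 4 * nacl - 2 * weeks

D = (desirability_target(sbp, 180, 208, 230)      # geometric mean of the three desirabilities
     * desirability_max(do2, 3.0, 4.5)
     * desirability_max(mass, 250, 320)) ** (1 / 3)
i, j = np.unravel_index(np.argmax(D), D.shape)
print(f"optimum near NaCl {nacl[i, 0]:.1f}% for {weeks[0, j]:.1f} weeks (D = {D[i, j]:.2f})")
```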
NASA Astrophysics Data System (ADS)
Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon
2011-01-01
In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. The Biome-BGC model was calibrated by adjusting the ecophysiological parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced the bias by reducing systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
Modeling and Reduction With Applications to Semiconductor Processing
1999-01-01
The reduction problem, then, is one of finding a systematic methodology within a given mathematical framework to produce an efficient or optimal trade-off of …
Chen, Yu; Dong, Fengqing; Wang, Yonghong
2016-09-01
With determined components and experimental reducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop a chemically defined medium supporting high-cell-density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining experimental techniques with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single-omission and single-addition techniques were employed to determine the essential and stimulatory compounds, before their concentrations were optimized by a statistical method. In addition, to improve growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed that support considerable biomass production by at least five B. coagulans strains, including the two model strains B. coagulans 36D1 and ATCC 7050.
Systematic Analysis of Hollow Fiber Model of Tuberculosis Experiments.
Pasipanodya, Jotam G; Nuermberger, Eric; Romero, Klaus; Hanna, Debra; Gumbo, Tawanda
2015-08-15
The in vitro hollow fiber system model of tuberculosis (HFS-TB), in tandem with Monte Carlo experiments, was introduced more than a decade ago. Since then, it has been used to perform a large number of tuberculosis pharmacokinetics/pharmacodynamics (PK/PD) studies that have not been subjected to systematic analysis. We performed a literature search to identify all HFS-TB experiments published between 1 January 2000 and 31 December 2012. There was no exclusion of articles by language. Bias minimization was according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Steps for reporting systematic reviews were followed. There were 22 HFS-TB studies published, of which 12 were combination therapy studies and 10 were monotherapy studies. There were 4 stand-alone Monte Carlo experiments that utilized quantitative output from the HFS-TB. All experiments reported drug pharmacokinetics, which recapitulated those encountered in humans. HFS-TB studies included log-phase growth studies under ambient air, semidormant bacteria at pH 5.8, and nonreplicating persisters at low oxygen tension of ≤ 10 parts per billion. The studies identified antibiotic exposures associated with optimal kill of Mycobacterium tuberculosis and suppression of acquired drug resistance (ADR) and informed predictions about optimal clinical doses, expected performance of standard doses and regimens in patients, and expected rates of ADR, as well as a proposal of new susceptibility breakpoints. The HFS-TB model offers the ability to perform PK/PD studies including humanlike drug exposures, to identify bactericidal and sterilizing effect rates, and to identify exposures associated with suppression of drug resistance. Because of the ability to perform repetitive sampling from the same unit over time, the HFS-TB vastly improves statistical power and facilitates the execution of time-to-event analyses and repeated event analyses, as well as dynamic system pharmacology mathematical models. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Basu, Sanjay; Kiernan, Michaela
2016-01-01
While increasingly popular among mid- to large-size employers, using financial incentives to induce health behavior change among employees has been controversial, in part due to poor quality and generalizability of studies to date. Thus, fundamental questions have been left unanswered: To generate positive economic returns on investment, what level of incentive should be offered for any given type of incentive program and among which employees? We constructed a novel modeling framework that systematically identifies how to optimize marginal return on investment from programs incentivizing behavior change by integrating commonly collected data on health behaviors and associated costs. We integrated "demand curves" capturing individual differences in response to any given incentive with employee demographic and risk factor data. We also estimated the degree of self-selection that could be tolerated: that is, the maximum percentage of already-healthy employees who could enroll in a wellness program while still maintaining positive absolute return on investment. In a demonstration analysis, the modeling framework was applied to data from 3000 worksite physical activity programs across the nation. For physical activity programs, the incentive levels that would optimize marginal return on investment ($367/employee/year) were higher than average incentive levels currently offered ($143/employee/year). Yet a high degree of self-selection could undermine the economic benefits of the program; if more than 17% of participants came from the top 10% of the physical activity distribution, the cost of the program would be expected to always be greater than its benefits. Our generalizable framework integrates individual differences in behavior and risk to systematically estimate the incentive level that optimizes marginal return on investment. © The Author(s) 2015.
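A minimal sketch of the marginal-return calculation, with an assumed logistic demand curve and invented savings figures rather than the worksite data used in the study:

```python
import numpy as np

def enrolment(incentive, half_point=300.0, slope=0.01):
    """Assumed logistic demand curve: fraction who change behaviour at a given incentive ($/yr)."""
    return 1.0 / (1.0 + np.exp(-slope * (incentive - half_point)))

def net_return(incentive, savings_per_changer=600.0, healthy_fraction=0.10):
    p = enrolment(incentive)
    benefit = savings_per_changer * p * (1.0 - healthy_fraction)  # no savings from already-healthy enrollees
    cost = incentive * p                                          # incentive is paid to every enrollee
    return benefit - cost

grid = np.linspace(0, 800, 801)
best = grid[np.argmax(net_return(grid))]
print(f"incentive maximising net return: ${best:.0f}/employee/year")
```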
A systematic reactor design approach for the synthesis of active pharmaceutical ingredients.
Emenike, Victor N; Schenkendorf, René; Krewer, Ulrike
2018-05-01
Today's highly competitive pharmaceutical industry is in dire need of an accelerated transition from the drug development phase to the drug production phase. At the heart of this transition are chemical reactors that facilitate the synthesis of active pharmaceutical ingredients (APIs) and whose design can affect subsequent processing steps. Inspired by this challenge, we present a model-based approach for systematic reactor design. The proposed concept is based on the elementary process functions (EPF) methodology to select an optimal reactor configuration from existing state-of-the-art reactor types or can possibly lead to the design of novel reactors. As a conceptual study, this work summarizes the essential steps in adapting the EPF approach to optimal reactor design problems in the field of API syntheses. Practically, the nucleophilic aromatic substitution of 2,4-difluoronitrobenzene was analyzed as a case study of pharmaceutical relevance. Here, a small-scale tubular coil reactor with controlled heating was identified as the optimal set-up reducing the residence time by 33% in comparison to literature values. Copyright © 2017 Elsevier B.V. All rights reserved.
Finding Optimal Apertures in Kepler Data
NASA Astrophysics Data System (ADS)
Smith, Jeffrey C.; Morris, Robert L.; Jenkins, Jon M.; Bryson, Stephen T.; Caldwell, Douglas A.; Girouard, Forrest R.
2016-12-01
With the loss of two spacecraft reaction wheels precluding further data collection for the Kepler primary mission, even greater pressure is placed on the processing pipeline to eke out every last transit signal in the data. To that end, we have developed a new method to optimize the Kepler Simple Aperture Photometry (SAP) photometric apertures for both planet detection and minimization of systematic effects. The approach uses a per cadence modeling of the raw pixel data and then performs an aperture optimization based on signal-to-noise ratio and the Kepler Combined Differential Photometric Precision (CDPP), which is a measure of the noise over the duration of a reference transit signal. We have found the new apertures to be superior to the previous Kepler apertures. We can now also find a per cadence flux fraction in aperture and crowding metric. The new approach has also been proven to be robust at finding apertures in K2 data that help mitigate the larger motion-induced systematics in the photometry. The method further allows us to identify errors in the Kepler and K2 input catalogs.
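As a rough illustration of the aperture-growing idea (not the actual Kepler pipeline, which optimizes CDPP per cadence), the sketch below greedily adds the pixel that most improves a simple signal-to-noise ratio on a made-up pixel grid.

```python
# Illustrative sketch of SNR-driven aperture growth on a toy pixel grid.
# The flux image, noise model, and seed pixel are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
flux = rng.normal(loc=5.0, scale=1.0, size=(7, 7))       # toy background
flux[3, 3] += 200.0; flux[3, 4] += 80.0; flux[2, 3] += 60.0  # toy star
noise_var = np.full_like(flux, 4.0) + flux.clip(min=0)   # shot + background noise

def snr(mask):
    return flux[mask].sum() / np.sqrt(noise_var[mask].sum())

mask = np.zeros(flux.shape, dtype=bool)
mask[3, 3] = True                                        # seed at the brightest pixel
improved = True
while improved:
    improved = False
    best_gain, best_pix = 0.0, None
    for i, j in zip(*np.where(~mask)):
        trial = mask.copy(); trial[i, j] = True
        gain = snr(trial) - snr(mask)
        if gain > best_gain:
            best_gain, best_pix = gain, (i, j)
    if best_pix is not None:
        mask[best_pix] = True
        improved = True
print("aperture pixels:", int(mask.sum()), "SNR:", round(float(snr(mask)), 2))
```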
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
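A minimal example of the optimal-estimator decomposition for a single input parameter is sketched below; the data are synthetic, the bin count is arbitrary, and the linear fit stands in for an arbitrary model whose total error can be compared against the irreducible error.

```python
# Sketch of an optimal-estimator analysis in one input parameter: the
# irreducible error is the variance of q around the conditional mean
# E[q | phi], estimated here with a simple histogram (binning) technique.
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 1.0, 50_000)                          # input parameter samples
q = np.sin(2 * np.pi * phi) + 0.3 * rng.normal(size=phi.size)  # target quantity

bins = np.linspace(0.0, 1.0, 41)
idx = np.digitize(phi, bins) - 1
cond_mean = np.zeros(phi.size)
for b in range(len(bins) - 1):
    sel = idx == b
    if sel.any():
        cond_mean[sel] = q[sel].mean()

irreducible = np.mean((q - cond_mean) ** 2)   # error of the best possible model of phi
total_linear = np.mean((q - np.polyval(np.polyfit(phi, q, 1), phi)) ** 2)
print(f"irreducible error ~ {irreducible:.3f}, total error of a linear model ~ {total_linear:.3f}")
```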
Systematic parameter estimation in data-rich environments for cell signalling dynamics
Nim, Tri Hieu; Luo, Le; Clément, Marie-Véronique; White, Jacob K.; Tucker-Kellogg, Lisa
2013-01-01
Motivation: Computational models of biological signalling networks, based on ordinary differential equations (ODEs), have generated many insights into cellular dynamics, but the model-building process typically requires estimating rate parameters based on experimentally observed concentrations. New proteomic methods can measure concentrations for all molecular species in a pathway; this creates a new opportunity to decompose the optimization of rate parameters. Results: In contrast with conventional parameter estimation methods that minimize the disagreement between simulated and observed concentrations, the SPEDRE method fits spline curves through observed concentration points, estimates derivatives and then matches the derivatives to the production and consumption of each species. This reformulation of the problem permits an extreme decomposition of the high-dimensional optimization into a product of low-dimensional factors, each factor enforcing the equality of one ODE at one time slice. Coarsely discretized solutions to the factors can be computed systematically. Then the discrete solutions are combined using loopy belief propagation, and refined using local optimization. SPEDRE has unique asymptotic behaviour with runtime polynomial in the number of molecules and timepoints, but exponential in the degree of the biochemical network. SPEDRE performance is comparatively evaluated on a novel model of Akt activation dynamics including redox-mediated inactivation of PTEN (phosphatase and tensin homologue). Availability and implementation: Web service, software and supplementary information are available at www.LtkLab.org/SPEDRE Supplementary information: Supplementary data are available at Bioinformatics online. Contact: LisaTK@nus.edu.sg PMID:23426255
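The derivative-matching reformulation can be illustrated on a one-species toy model; the sketch below is not the SPEDRE software, and the decay model, noise level, and rate constant are invented for illustration.

```python
# Sketch of the derivative-matching idea on a toy model dA/dt = -k*A:
# fit a spline through noisy concentrations, differentiate it, and choose
# k so the spline derivative matches -k*A in a least-squares sense.
import numpy as np
from scipy.interpolate import CubicSpline

k_true = 0.7
t = np.linspace(0.0, 5.0, 12)
rng = np.random.default_rng(2)
A_obs = np.exp(-k_true * t) * (1 + 0.03 * rng.normal(size=t.size))

spline = CubicSpline(t, A_obs)
t_dense = np.linspace(t[0], t[-1], 200)
A_hat = spline(t_dense)
dA_hat = spline.derivative()(t_dense)

# Least-squares match of dA/dt to -k*A gives k in closed form.
k_est = -np.sum(dA_hat * A_hat) / np.sum(A_hat ** 2)
print(f"true k = {k_true}, derivative-matched estimate = {k_est:.3f}")
```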
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
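A toy version of the multi-objective weighting idea is sketched below: candidate weight vectors for three invented models are scored under two conflicting error metrics and only the Pareto-nondominated weightings are kept. The models, observations, and metrics are placeholders, not RCMES output.

```python
# Toy multi-objective trade-off for ensemble weights: enumerate weight
# vectors on a simplex, score the weighted ensemble under two conflicting
# error metrics (bias and RMSE against synthetic observations), and keep
# the Pareto-nondominated weightings.
import itertools
import numpy as np

rng = np.random.default_rng(3)
obs = np.sin(np.linspace(0, 2 * np.pi, 100))
models = np.stack([obs + 0.3,                               # biased model
                   obs + 0.4 * rng.normal(size=obs.size),   # noisy model
                   0.8 * obs])                              # damped model

def metrics(w):
    ens = w @ models
    return abs(np.mean(ens - obs)), np.sqrt(np.mean((ens - obs) ** 2))  # (bias, rmse)

grid = [w for w in itertools.product(np.linspace(0, 1, 21), repeat=3)
        if abs(sum(w) - 1.0) < 1e-9]
scores = [(w, metrics(np.array(w))) for w in grid]
pareto = [(w, m) for w, m in scores
          if not any(o[0] <= m[0] and o[1] <= m[1] and o != m for _, o in scores)]

w0, m0 = pareto[0]
print(len(pareto), "Pareto-optimal weightings; example:",
      tuple(round(float(x), 2) for x in w0), "-> (bias, rmse) =",
      tuple(round(float(x), 3) for x in m0))
```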
Genome-scale biological models for industrial microbial systems.
Xu, Nan; Ye, Chao; Liu, Liming
2018-04-01
The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, predicting the formation of an interaction among genetic materials, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize the microbial growth and production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
Nuclear Hybrid Energy Systems Initial Integrated Case Study Development and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Thomas J.; Greenwood, Michael Scott
The US Department of Energy Office of Nuclear Energy established the Nuclear Hybrid Energy System (NHES) project to develop a systematic, rigorous, technically accurate set of methods to model, analyze, and optimize the integration of dispatchable nuclear, fossil, and electric storage with an industrial customer. Ideally, the optimized integration of these systems will provide economic and operational benefits to the overall system compared to independent operation, and it will enhance the stability and responsiveness of the grid as intermittent, nondispatchable, renewable resources provide a greater share of grid power.
Verrest, Luka; Dorlo, Thomas P C
2017-06-01
Neglected tropical diseases (NTDs) affect more than one billion people, mainly living in developing countries. For most of these NTDs, treatment is suboptimal. To optimize treatment regimens, clinical pharmacokinetic studies are required where they have not previously been conducted, enabling the application of pharmacometric modeling and simulation techniques, which can provide substantial advantages. Our aim was to provide a systematic overview and summary of all clinical pharmacokinetic studies in NTDs and to assess the use of pharmacometrics in these studies, as well as to identify which of the NTDs or which treatments have not been sufficiently studied. PubMed was systematically searched for all clinical trials and case reports published until the end of 2015 that described the pharmacokinetics of a drug in the context of treating any of the NTDs in patients or healthy volunteers. Eighty-two pharmacokinetic studies were identified. Most studies included small patient numbers (only five studies included >50 subjects) and only nine (11%) included pediatric patients. Many of the studies were not recent; 56% were published before 2000. Most studies applied non-compartmental analysis methods for pharmacokinetic analysis (62%). Twelve studies used population-based compartmental analysis (15%) and eight (10%) additionally performed simulations or extrapolation. For ten of the 17 NTDs, no or only very few pharmacokinetic studies could be identified. For most NTDs, adequate pharmacokinetic studies are lacking and population-based modeling and simulation techniques have not generally been applied. Pharmacokinetic clinical trials that enable population pharmacokinetic modeling are needed to make better use of the available data. Simulation-based studies should be employed to enable the design of improved dosing regimens and to use the limited resources more optimally to provide effective therapy in this neglected area.
A systematic optimization for graphene-based supercapacitors
NASA Astrophysics Data System (ADS)
Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho
2017-08-01
Increasing the energy-storage density for supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing their mixing rate for maximizing the performance of each candidate material have been insufficient, which hinders the progress in their technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be (rGO: AB: PVDF = 0.95: 0.00: 0.05). The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.
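The mixture-design workflow can be sketched with a Scheffé quadratic model fitted to invented capacitance values at a handful of three-component blends; the design points, responses, and component bounds below are illustrative assumptions, not the study's measurements.

```python
# Sketch of the mixture-design idea: fit a Scheffe quadratic model to
# synthetic responses at a few blends of three components summing to 1,
# then pick the blend with the highest predicted response on a simplex grid.
import numpy as np

# blends: columns are fractions of rGO, AB, PVDF (each row sums to 1)
X = np.array([[0.95, 0.00, 0.05],
              [0.80, 0.15, 0.05],
              [0.70, 0.20, 0.10],
              [0.85, 0.05, 0.10],
              [0.75, 0.10, 0.15],
              [0.90, 0.05, 0.05],
              [0.60, 0.30, 0.10]])
y = np.array([210., 180., 150., 175., 140., 195., 120.])   # made-up responses

def scheffe_terms(x):
    a, b, c = x.T
    return np.column_stack([a, b, c, a * b, a * c, b * c])  # quadratic mixture model

coef, *_ = np.linalg.lstsq(scheffe_terms(X), y, rcond=None)

# search a simplex grid restricted to plausible component bounds
grid = np.array([(a, b, 1 - a - b) for a in np.arange(0.5, 1.001, 0.01)
                 for b in np.arange(0.0, 0.5, 0.01) if 0.0 <= 1 - a - b <= 0.3])
pred = scheffe_terms(grid) @ coef
print("predicted best blend (rGO, AB, PVDF):", np.round(grid[np.argmax(pred)], 2))
```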
NASA Astrophysics Data System (ADS)
Nuh, M. Z.; Nasir, N. F.
2017-08-01
Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oils and animal fats. Biodiesel production is a complex process that requires systematic design and optimization. However, no case study has applied the process system engineering (PSE) element of superstructure optimization to the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the components of the plant for cases A, B, and C using published kinetic data; to determine the economic analysis for biodiesel production, focusing on a heterogeneous catalyst; and to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer nonlinear model is solved and the economics estimated using MATLAB software. The optimization, with the objective function of minimizing the annual production cost of the batch process, yields a cost of 23.2587 million USD for case C. Overall, the application of PSE has streamlined modelling, design, and cost estimation, and the optimization addresses the complexity of batch biodiesel production and processing.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
Active model-based balancing strategy for self-reconfigurable batteries
NASA Astrophysics Data System (ADS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2016-08-01
This paper describes a novel balancing strategy for self-reconfigurable batteries in which the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach incorporates optimization theory: we develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an advantage that is absent from conventional approaches. Our algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its short computation time and its demonstrated low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.
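The network-program formulation itself is not reproduced here, but the dynamic-programming idea can be illustrated on a two-cell toy battery; the load profile, state-of-charge units, and imbalance cost below are invented for the sketch.

```python
# Minimal dynamic-programming sketch of cell balancing in a reconfigurable
# battery: at each step a 1-unit load must be served by exactly one of two
# cells, and we pick the discharge schedule that minimizes the final
# state-of-charge imbalance. Scales and cost function are illustrative only.
from functools import lru_cache

DEMAND_STEPS = 6          # number of load steps in the toy drive cycle
START_SOC = (10, 6)       # initial state of charge of the two cells (units)

@lru_cache(maxsize=None)
def best(step, soc):
    """Return (minimal final imbalance, discharge schedule) from this state onward."""
    if step == DEMAND_STEPS:
        return abs(soc[0] - soc[1]), ()
    options = []
    for cell in (0, 1):
        if soc[cell] > 0:                      # the chosen cell must have charge left
            nxt = list(soc); nxt[cell] -= 1
            cost, plan = best(step + 1, tuple(nxt))
            options.append((cost, (cell,) + plan))
    return min(options)

imbalance, schedule = best(0, START_SOC)
print("discharge schedule (cell index per step):", schedule)
print("final SOC imbalance:", imbalance)
```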
NASA Astrophysics Data System (ADS)
Nadort, Annemarie; Liang, Liuen; Grebenik, Ekaterina; Guller, Anna; Lu, Yiqing; Qian, Yi; Goldys, Ewa; Zvyagin, Andrei
2015-12-01
Nanoparticle-based delivery of drugs and contrast agents holds great promise in cancer research because of the increased delivery efficiency compared to 'free' drugs and dyes. A versatile platform to investigate nanotechnology is the chick embryo chorioallantoic membrane tumour model, due to its availability (easy, cheap) and accessibility (interventions, imaging). In our group, we developed this model using several tumour cell lines (e.g. breast cancer, colon cancer). In addition, we have synthesized in-house silica-coated photoluminescent upconversion nanoparticles (UCNPs) with several functional groups (COOH, NH2, PEG). In this work we will present the systematic assessment of their in vivo blood circulation times. To this end, we injected chick embryos grown ex ovo with the functionalized UCNPs and obtained a small amount of blood at several time points after injection to create blood smears. The UCNP signal from the blood smears was quantified using a modified inverted microscope imaging set-up. The results of this systematic study are valuable for optimizing biochemistry protocols and guiding nanomedicine advancement in the versatile chick embryo tumour model.
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factor sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and can potentially be applied to other ordinary differential equation models. PMID:25682959
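Step iii of the protocol can be illustrated in isolation. The sketch below fits two parameters of a toy Monod growth model to synthetic data using SciPy's differential evolution as a stand-in for the genetic-algorithm step; the model, bounds, and data are assumptions, not the paper's ASM setup.

```python
# Sketch of the parameter-estimation step only, on a toy Monod growth model
# with synthetic observations; an evolutionary global optimizer stands in
# for the genetic algorithm used in the protocol.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

t_obs = np.linspace(0, 10, 15)
true = (0.5, 2.0)                       # (mu_max, K_s), hypothetical values

def simulate(mu_max, K_s):
    def rhs(t, y):
        x, s = y
        growth = mu_max * s / (K_s + s) * x
        return [growth, -growth / 0.6]  # biomass grows, substrate is consumed (yield 0.6)
    sol = solve_ivp(rhs, (0, 10), [0.1, 5.0], t_eval=t_obs)
    return sol.y[0]

rng = np.random.default_rng(4)
x_obs = simulate(*true) + 0.02 * rng.normal(size=t_obs.size)

def sse(theta):
    return np.sum((simulate(*theta) - x_obs) ** 2)

result = differential_evolution(sse, bounds=[(0.05, 2.0), (0.1, 10.0)],
                                seed=4, maxiter=60, tol=1e-6)
print("estimated (mu_max, K_s):", np.round(result.x, 3))
```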
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N
2018-02-13
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
Impact of Chaos Functions on Modern Swarm Optimizers.
Emary, E; Zawbaa, Hossam M
2016-01-01
Exploration and exploitation are two essential components of any optimization algorithm. Too much exploration leads to oscillation and premature convergence, while too much exploitation slows down the optimization algorithm and the optimizer may become stuck in local minima. Therefore, balancing the rates of exploration and exploitation over the optimization lifetime is a challenge. This study evaluates the impact of using chaos-based control of exploration/exploitation rates against using the systematic native control. Three modern algorithms were used in the study, namely the grey wolf optimizer (GWO), antlion optimizer (ALO) and moth-flame optimizer (MFO), in the domain of machine learning for feature selection. Results on a set of standard machine learning data using a set of assessment indicators show an advance in optimization algorithm performance when using variational repeated periods of declined exploration rates over using systematically decreased exploration rates.
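The contrast between the native and chaos-based control can be sketched by generating the two exploration-coefficient schedules; the logistic map, its seed, and the GWO-style 2-to-0 decay below are illustrative choices, and the full optimizer loop is omitted.

```python
# Sketch contrasting a systematically decreasing exploration coefficient,
# as used in GWO-style optimizers, with a chaos-driven schedule produced
# by a logistic map. Only the schedules are shown.
import numpy as np

ITERS = 100
linear_a = 2.0 * (1.0 - np.arange(ITERS) / ITERS)   # native: 2 -> 0 linearly

chaotic_a = np.empty(ITERS)
x = 0.7                                             # logistic-map seed in (0, 1)
for t in range(ITERS):
    x = 4.0 * x * (1.0 - x)                         # logistic map, r = 4
    chaotic_a[t] = 2.0 * x * (1.0 - t / ITERS)      # chaos modulates the decay

print("linear schedule (first 5):", np.round(linear_a[:5], 3))
print("chaotic schedule (first 5):", np.round(chaotic_a[:5], 3))
```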
Parameter optimization for the visco-hyperelastic constitutive model of tendon using FEM.
Tang, C Y; Ng, G Y F; Wang, Z W; Tsui, C P; Zhang, G
2011-01-01
Numerous constitutive models describing the mechanical properties of tendons have been proposed during the past few decades. However, few have been widely used owing to the lack of implementation in general finite element (FE) software, and very few systematic studies have been done on selecting the most appropriate parameters for these constitutive laws. In this work, the visco-hyperelastic constitutive model of the tendon, implemented using a three-parameter Mooney-Rivlin form and a sixty-four-parameter Prony series, was first analyzed using ANSYS FE software. An integrated optimization scheme was then developed by coupling the optimization toolboxes (OPTs) of ANSYS and MATLAB to estimate the unknown constitutive parameters of the tendon. Finally, a group of Sprague-Dawley rat tendons was used for experimental and numerical simulation investigation. The simulated results showed good agreement with the experimental data. An important finding was that a large number of Maxwell elements is not necessary to assure the accuracy of the model, a point often neglected in the open literature. These results demonstrate that the constitutive parameter optimization scheme is reliable and highly efficient. Furthermore, the approach can be extended to study other tendons or ligaments, as well as any visco-hyperelastic solid materials.
Klous, Miriam; Klous, Sander
2010-07-01
The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
Aljaberi, Ahmad; Chatterji, Ashish; Dong, Zedong; Shah, Navnit H; Malick, Waseem; Singhal, Dharmendra; Sandhu, Harpreet K
2013-01-01
To evaluate and optimize sodium lauryl sulfate (SLS) and magnesium stearate (Mg.St) levels, with respect to dissolution and compaction, in a high dose, poorly soluble drug tablet formulation. A model poorly soluble drug was formulated using high shear aqueous granulation. A D-optimal design was used to evaluate and model the effect of granulation conditions, size of milling screen, SLS and Mg.St levels on tablet compaction and ejection. The compaction profiles were generated using a Presster(©) compaction simulator. Dissolution of the kernels was performed using a USP dissolution apparatus II and intrinsic dissolution was determined using a stationary disk system. Unlike kernels dissolution which failed to discriminate between tablets prepared with various SLS contents, the intrinsic dissolution rate showed that a SLS level of 0.57% was sufficient to achieve the required release profile while having minimal effect on compaction. The formulation factors that affect tablet compaction and ejection were identified and satisfactorily modeled. The design space of best factor setting to achieve optimal compaction and ejection properties was successfully constructed by RSM analysis. A systematic study design helped identify the critical factors and provided means to optimize the functionality of key excipient to design robust drug product.
Musci, Marilena; Yao, Shicong
2017-12-01
Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea, aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time, allowed for the analysis of a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
The Dilution Effect and Information Integration in Perceptual Decision Making
Hotaling, Jared M.; Cohen, Andrew L.; Shiffrin, Richard M.; Busemeyer, Jerome R.
2015-01-01
In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as the quantitative or linguistic information (commonly found in judgment studies), may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of sub-optimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced farther from optimal behavior and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects. PMID:26406323
NASA Astrophysics Data System (ADS)
Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.
2018-07-01
Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance for a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multiyear Kepler photometry. We explore the internal systematics on the stellar properties that are associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from (i) the inclusion of the diffusion of helium and heavy elements; (ii) the uncertainty in the solar metallicity mixture; and (iii) different surface correction methods used in optimization/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5 per cent, 0.8 per cent, 2.1 per cent, and 16 per cent in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7 per cent in mean density, 0.5 per cent in radius, 1.4 per cent in mass, and 6.7 per cent in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely ~1 per cent, ~1 per cent, ~2 per cent, and ~8 per cent in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
ERIC Educational Resources Information Center
Weaver, Christopher
2011-01-01
This study presents a systematic investigation concerning the performance of different rating scales used in the English section of a university entrance examination to assess 1,287 Japanese test takers' ability to write a third-person introduction speech. Although the rating scales did not conform to all of the expectations of the Rasch model,…
NASA Astrophysics Data System (ADS)
Perdigão, R. A. P.
2017-12-01
Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.
NASA Astrophysics Data System (ADS)
Lei, Meizhen; Wang, Liqiang
2018-01-01
To reduce manufacturing difficulty and increase the magnetic thrust density, a moving-magnet linear oscillatory motor (MMLOM) without inner stators was proposed. To obtain the optimal design of maximum electromagnetic thrust with minimal permanent-magnet material, a 3D finite element analysis (FEA) model of the MMLOM was first built and verified by comparison with prototype experimental results. The influence of the design parameters of the permanent magnet (PM) on the electromagnetic thrust was then systematically analyzed by the 3D FEA to obtain the design parameters. Secondly, response surface methodology (RSM) was employed to build a response surface model of the new MMLOM, which yields an analytical model of the PM volume and thrust. A multi-objective optimization method for the design parameters of the PM, using RSM with a quantum-behaved PSO (QPSO) operator, was then proposed, together with a way to choose the best PM design parameters from the multi-objective optimization solution sets. Finally, the 3D FEA results of the optimal design candidates were compared. The comparison showed that the proposed method can obtain the best combination of geometric parameters, reducing the PM volume while increasing the thrust.
NASA Astrophysics Data System (ADS)
Wallow, Thomas I.; Zhang, Chen; Fumar-Pici, Anita; Chen, Jun; Laenens, Bart; Spence, Christopher A.; Rio, David; van Adrichem, Paul; Dillen, Harm; Wang, Jing; Yang, Peng-Cheng; Gillijns, Werner; Jaenen, Patrick; van Roey, Frieda; van de Kerkhove, Jeroen; Babin, Sergey
2017-03-01
In the course of assessing OPC compact modeling capabilities and future requirements, we chose to investigate the interface between CD-SEM metrology methods and OPC modeling in some detail. Two linked observations motivated our study: 1) OPC modeling is, in principle, agnostic of metrology methods and best practice implementation. 2) Metrology teams across the industry use a wide variety of equipment, hardware settings, and image/data analysis methods to generate the large volumes of CD-SEM measurement data that are required for OPC in advanced technology nodes. Initial analyses led to the conclusion that many independent best practice metrology choices based on systematic study as well as accumulated institutional knowledge and experience can be reasonably made. Furthermore, these choices can result in substantial variations in measurement of otherwise identical model calibration and verification patterns. We will describe several experimental 2D test cases (i.e., metal, via/cut layers) that examine how systematic changes in metrology practice impact both the metrology data itself and the resulting full chip compact model behavior. Assessment of specific methodology choices will include: • CD-SEM hardware configurations and settings: these may range from SEM beam conditions (voltage, current, etc.,) to magnification, to frame integration optimizations that balance signal-to-noise vs. resist damage. • Image and measurement optimization: these may include choice of smoothing filters for noise suppression, threshold settings, etc. • Pattern measurement methodologies: these may include sampling strategies, CD- and contour- based approaches, and various strategies to optimize the measurement of complex 2D shapes. In addition, we will present conceptual frameworks and experimental methods that allow practitioners of OPC metrology to assess impacts of metrology best practice choices on model behavior. Finally, we will also assess requirements posed by node scaling on OPC model accuracy, and evaluate potential consequences for CD-SEM metrology capabilities and practices.
Dosage optimization in positron emission tomography: state-of-the-art methods and future prospects
Karakatsanis, Nicolas A; Fokou, Eleni; Tsoumpas, Charalampos
2015-01-01
Positron emission tomography (PET) is widely used nowadays for tumor staging and therapy response in the clinic. However, average PET radiation exposure has increased due to higher PET utilization. This study aims to review state-of-the-art PET tracer dosage optimization methods after accounting for the effects of human body attenuation and scan protocol parameters on the counting rate. In particular, the relationship between the noise equivalent count rate (NECR) and the dosage (NECR-dosage curve) for a range of clinical PET systems and body attenuation sizes will be systematically studied to prospectively estimate the minimum dosage required for sufficiently high NECR. The optimization criterion can be determined either as a function of the peak of the NECR-dosage curve or as a fixed NECR score when NECR uniformity across a patient population is important. In addition, the systematic NECR assessments within a controllable environment of realistic simulations and phantom experiments can lead to a NECR-dosage response model, capable of predicting the optimal dosage for every individual PET scan. Unlike conventional guidelines suggesting considerably large dosage levels for obese patients, NECR-based optimization recommends: i) moderate dosage to achieve 90% of peak NECR for obese patients, ii) considerable dosage reduction for slimmer patients such that uniform NECR is attained across the patient population, and iii) prolongation of scans for PET/MR protocols, where longer PET acquisitions are affordable due to lengthy MR sequences, with motion compensation becoming important then. Finally, the need for continuous adaptation of dosage optimization to emerging technologies will be discussed. PMID:26550543
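A toy NECR-dosage curve makes the optimization criterion concrete; the count-rate model and its parameters below are invented solely to show how the peak and the lower dose reaching 90% of peak NECR are located.

```python
# Toy NECR-dosage curve: trues roll off with activity because of dead time
# while randoms grow quadratically, and NECR = T^2 / (T + S + 2R). The
# parameter values are invented purely for illustration.
import numpy as np

dose = np.linspace(0.1, 30, 600)                  # injected activity (arbitrary units)
trues = 50.0 * dose * np.exp(-dose / 15.0)        # dead-time-like roll-off
scatter = 0.3 * trues
randoms = 0.4 * dose ** 2
necr = trues ** 2 / (trues + scatter + 2 * randoms)

peak_idx = int(np.argmax(necr))
target = 0.9 * necr[peak_idx]
dose_90 = dose[np.argmax(necr >= target)]         # first dose reaching 90% of peak
print(f"peak NECR at dose {dose[peak_idx]:.1f}; 90% of peak already at dose {dose_90:.1f}")
```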
Sijwali, P S; Brinen, L S; Rosenthal, P J
2001-06-01
The Plasmodium falciparum cysteine protease falcipain-2 is a potential new target for antimalarial chemotherapy. In order to obtain large quantities of active falcipain-2 for biochemical and structural analysis, a systematic assessment of optimal parameters for the expression and refolding of the protease was carried out. High-yield expression was achieved using M15(pREP4) Escherichia coli transformed with the pQE-30 plasmid containing a truncated profalcipain-2 construct. Recombinant falcipain-2 was expressed as inclusion bodies, solubilized, and purified by nickel affinity chromatography. A systematic approach was then used to optimize refolding parameters. This approach utilized 100-fold dilutions of reduced and denatured falcipain-2 into 203 different buffers in a microtiter plate format. Refolding efficiency varied markedly. Optimal refolding was obtained in an alkaline buffer containing glycerol or sucrose and equal concentrations of reduced and oxidized glutathione. After optimization of the expression and refolding protocols and additional purification with anion-exchange chromatography, 12 mg of falcipain-2 was obtained from 5 liters of E. coli, and crystals of the protease were grown. The systematic approach described here allowed the rapid evaluation of a large number of expression and refolding conditions and provided milligram quantities of recombinant falcipain-2. Copyright 2001 Academic Press.
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
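For the complete graph, the invariant subspace can be exhibited numerically without knowing the symmetry in advance; the sketch below stacks normalized Krylov vectors of an assumed search Hamiltonian with the common choice gamma = 1/N and reports their numerical rank.

```python
# Sketch of detecting the invariant subspace that confines spatial search
# on the complete graph: stack Krylov vectors s, Hs, H^2 s, ... and check
# their numerical rank. For K_N with one marked vertex the dynamics lives
# in a 2-dimensional subspace, regardless of N.
import numpy as np

N, marked = 16, 3
A = np.ones((N, N)) - np.eye(N)                  # adjacency of the complete graph
oracle = np.zeros((N, N)); oracle[marked, marked] = 1.0
H = -(1.0 / N) * A - oracle                      # assumed search Hamiltonian, gamma = 1/N

s = np.ones(N) / np.sqrt(N)                      # uniform initial state
krylov = [s]
for _ in range(N - 1):
    v = H @ krylov[-1]
    krylov.append(v / np.linalg.norm(v))         # normalize to keep columns comparable
rank = np.linalg.matrix_rank(np.column_stack(krylov), tol=1e-10)
print("dimension of the invariant (Krylov) subspace:", rank)
```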
A systematic review of innovative diabetes care models in low-and middle-income countries (LMICs).
Esterson, Yonah B; Carey, Michelle; Piette, John D; Thomas, Nihal; Hawkins, Meredith
2014-02-01
Over 70% of the world's patients with diabetes reside in low-and middle-income countries (LMICs), where adequate infrastructure and resources for diabetes care are often lacking. Therefore, academic institutions, health care organizations, and governments from Western nations and LMICs have worked together to develop a variety of effective diabetes care models for resource-poor settings. A focused search of PubMed was conducted with the goal of identifying reports that addressed the implementation of diabetes care models or initiatives to improve clinical and/or biochemical outcomes in patients with diabetes mellitus. A total of 15 published manuscripts comprising nine diabetes care models in 16 locations in sub-Saharan Africa, Latin America, and Asia identified by the above approach were systematically reviewed. The reviewed models shared a number of principles including collaboration, education, standardization, resource optimization, and technological innovation. The most comprehensive models used a number of these principles, which contributed to their success. Reviewing the principles shared by these successful programs may help guide the development of effective future models for diabetes care in low-income settings.
Daga, Pankaj R; Bolger, Michael B; Haworth, Ian S; Clark, Robert D; Martin, Eric J
2018-03-05
When medicinal chemists need to improve bioavailability (%F) within a chemical series during lead optimization, they synthesize new series members with systematically modified properties mainly by following experience and general rules of thumb. More quantitative models that predict %F of proposed compounds from chemical structure alone have proven elusive. Global empirical %F quantitative structure-property (QSPR) models perform poorly, and projects have too little data to train local %F QSPR models. Mechanistic oral absorption and physiologically based pharmacokinetic (PBPK) models simulate the dissolution, absorption, systemic distribution, and clearance of a drug in preclinical species and humans. Attempts to build global PBPK models based purely on calculated inputs have not achieved the <2-fold average error needed to guide lead optimization. In this work, local GastroPlus PBPK models are instead customized for individual medchem series. The key innovation was building a local QSPR for a numerically fitted effective intrinsic clearance (CL_loc). All inputs are subsequently computed from structure alone, so the models can be applied in advance of synthesis. Training CL_loc on the first 15-18 rat %F measurements gave adequate predictions, with clear improvements up to about 30 measurements, and incremental improvements beyond that.
Feng, Lei; Peng, Fuduan; Li, Shanfei; Jiang, Li; Sun, Hui; Ji, Anquan; Zeng, Changqing; Li, Caixia; Liu, Fan
2018-03-23
Estimating individual age from biomarkers may provide key information facilitating forensic investigations. Recent progress has shown DNA methylation at age-associated CpG sites as the most informative biomarkers for estimating the individual age of an unknown donor. Optimal feature selection plays a critical role in determining the performance of the final prediction model. In this study we investigate methylation levels at 153 age-associated CpG sites from 21 previously reported genomic regions using the EpiTYPER system for their predictive power on individual age in 390 Han Chinese males ranging from 15 to 75 years of age. We conducted a systematic feature selection using a stepwise backward multiple linear regression analysis as well as an exhaustive searching algorithm. Both approaches identified the same subset of 9 CpG sites, which in linear combination provided the optimal model fitting with a mean absolute deviation (MAD) of 2.89 years of age and explainable variance (R²) of 0.92. The final model was validated in two independent Han Chinese male samples (validation set 1, N = 65, MAD = 2.49, R² = 0.95, and validation set 2, N = 62, MAD = 3.36, R² = 0.89). Other competing models such as support vector machine and artificial neural network did not outperform the linear model to any noticeable degree. Validation set 1 was additionally analyzed using pyrosequencing technology for cross-platform validation and was termed validation set 3. Directly applying our model, in which the methylation levels were detected by the EpiTYPER system, to the data from pyrosequencing technology showed, however, less accurate results in terms of MAD (validation set 3, N = 65 Han Chinese males, MAD = 4.20, R² = 0.93), suggesting the presence of a batch effect between different data generation platforms. This batch effect could be partially overcome by a z-score transformation (MAD = 2.76, R² = 0.93). Overall, our systematic feature selection identified 9 CpG sites as the optimal subset for forensic age estimation and the prediction model consisting of these 9 markers demonstrated high potential in forensic practice. An age estimator implementing our prediction model allowing missing markers is freely available at http://liufan.big.ac.cn/AgePrediction. Copyright © 2018 Elsevier B.V. All rights reserved.
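The two main ingredients, a small multiple linear regression and a per-marker z-score adjustment across platforms, can be sketched on simulated data; the marker effects, noise levels, and platform shift below are invented and not the study's EpiTYPER or pyrosequencing values.

```python
# Sketch of the age-prediction workflow: ordinary multiple linear regression
# on a small set of CpG methylation levels, plus a per-marker z-score
# transformation to reduce a cross-platform batch effect before prediction.
import numpy as np

rng = np.random.default_rng(5)
n_train, n_cpg = 390, 9
age = rng.uniform(15, 75, n_train)
beta_true = rng.normal(0, 0.004, n_cpg)
meth = 0.5 + np.outer(age, beta_true) + 0.02 * rng.normal(size=(n_train, n_cpg))

X = np.column_stack([np.ones(n_train), meth])
coef, *_ = np.linalg.lstsq(X, age, rcond=None)      # intercept + 9 CpG weights

# New samples measured on another platform: same biology, shifted/scaled values.
meth_new = (0.45 + np.outer(age[:60], beta_true) * 1.2
            + 0.02 * rng.normal(size=(60, n_cpg)))
# Z-score each marker on the new platform, then map back to the training scale.
z_new = (meth_new - meth_new.mean(axis=0)) / meth_new.std(axis=0)
meth_adj = z_new * meth.std(axis=0) + meth.mean(axis=0)

pred = np.column_stack([np.ones(60), meth_adj]) @ coef
print("MAD after z-score adjustment:", round(float(np.mean(np.abs(pred - age[:60]))), 2))
```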
Optimism and Physical Health: A Meta-analytic Review
Rasmussen, Heather N.; Greenhouse, Joel B.
2010-01-01
Background Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated. Purpose The purpose of this study is to conduct a meta-analytic review to determine the strength of the association between optimism and physical health. Methods The findings from 83 studies, with 108 effect sizes (ESs), were included in the analyses, using random-effects models. Results Overall, the mean ES characterizing the relationship between optimism and physical health outcomes was 0.17, p<.001. ESs were larger for studies using subjective (versus objective) measures of physical health. Subsidiary analyses were also conducted grouping studies into those that focused solely on mortality, survival, cardiovascular outcomes, physiological markers (including immune function), immune function only, cancer outcomes, outcomes related to pregnancy, physical symptoms, or pain. In each case, optimism was a significant predictor of health outcomes or markers, all p<.001. Conclusions Optimism is a significant predictor of positive physical health outcomes. PMID:19711142
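For readers unfamiliar with the random-effects pooling used in such meta-analyses, a minimal DerSimonian-Laird computation of a pooled effect size from per-study effect sizes and variances looks like this (toy numbers, not the 108 effect sizes analyzed in the review):

```python
import numpy as np

def random_effects_pool(es, var):
    """DerSimonian-Laird random-effects pooled effect size and its variance."""
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect weights
    fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)       # between-study variance
    w_star = 1.0 / (var + tau2)                    # random-effects weights
    pooled = np.sum(w_star * es) / np.sum(w_star)
    return pooled, 1.0 / np.sum(w_star)

# Toy per-study effect sizes (on a common metric) and their variances.
es = [0.12, 0.25, 0.08, 0.30, 0.17]
var = [0.004, 0.010, 0.006, 0.015, 0.008]
pooled, pooled_var = random_effects_pool(es, var)
print(f"pooled ES = {pooled:.3f} (SE = {pooled_var ** 0.5:.3f})")
```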
Borggren, Marie; Vinner, Lasse; Andresen, Betina Skovgaard; Grevstad, Berit; Repits, Johanna; Melchers, Mark; Elvang, Tara Laura; Sanders, Rogier W; Martinon, Frédéric; Dereuddre-Bosquet, Nathalie; Bowles, Emma Joanne; Stewart-Jones, Guillaume; Biswas, Priscilla; Scarlatti, Gabriella; Jansson, Marianne; Heyndrickx, Leo; Grand, Roger Le; Fomsgaard, Anders
2013-07-19
HIV-1 DNA vaccines have many advantageous features. Evaluation of HIV-1 vaccine candidates often starts in small animal models before macaque and human trials. Here, we selected and optimized DNA vaccine candidates through systematic testing in rabbits for the induction of broadly neutralizing antibodies (bNAb). We compared three different animal models: guinea pigs, rabbits and cynomolgus macaques. Envelope genes from the prototype isolate HIV-1 Bx08 and two elite neutralizers were included. Codon-optimized genes, encoding secreted gp140 or membrane-bound gp150, were modified for expression of stabilized soluble trimer gene products, and delivered individually or mixed. Specific IgG after repeated i.d. inoculations with electroporation confirmed in vivo expression and immunogenicity. Evaluations in rabbits and guinea pigs displayed similar results. The superior DNA construct in rabbits was a trivalent mix of non-modified codon-optimized gp140 envelope genes. Despite NAb responses with some potency and breadth in guinea pigs and rabbits, the DNA-vaccinated macaques displayed less bNAb activity. It was concluded that a trivalent mix of non-modified gp140 genes from rationally selected clinical isolates was, in this study, the best option to induce high and broad NAb in the rabbit model, but this optimization does not directly translate into similar responses in cynomolgus macaques.
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not been systematically explored to date. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate its two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
Muñoz, P; Pastor, D; Capmany, J; Martínez, A
2003-09-22
In this paper, the procedure to optimize flat-top Arrayed Waveguide Grating (AWG) devices in terms of transmission and dispersion properties is presented. The systematic procedure consists of the stigmatization and minimization of the Light Path Function (LPF) used in classic planar spectrograph theory. The resulting geometry arrangement for the Arrayed Waveguides (AW) and the Output Waveguides (OW) is not the classical Rowland mounting, but an arbitrary geometry arrangement. Simulations using previously published enhanced modeling show how this geometry reduces the passband ripple, asymmetry, and dispersion in a design example.
Zhou, Nanjia; Dudnik, Alexander S; Li, Ting I N G; Manley, Eric F; Aldrich, Thomas J; Guo, Peijun; Liao, Hsueh-Chung; Chen, Zhihua; Chen, Lin X; Chang, Robert P H; Facchetti, Antonio; Olvera de la Cruz, Monica; Marks, Tobin J
2016-02-03
The influence of the number-average molecular weight (Mn) on the blend film morphology and photovoltaic performance of all-polymer solar cells (APSCs) fabricated with the donor polymer poly[5-(2-hexyldodecyl)-1,3-thieno[3,4-c]pyrrole-4,6-dione-alt-5,5-(2,5-bis(3-dodecylthiophen-2-yl)thiophene)] (PTPD3T) and acceptor polymer poly{[N,N'-bis(2-octyldodecyl)naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5'-(2,2'-bithiophene)} (P(NDI2OD-T2); N2200) is systematically investigated. The Mn effect analysis of both PTPD3T and N2200 is enabled by implementing a polymerization strategy which produces conjugated polymers with tunable Mns. Experimental and coarse-grain modeling results reveal that systematic Mn variation greatly influences both intrachain and interchain interactions and ultimately the degree of phase separation and morphology evolution. Specifically, increasing Mn for both polymers shrinks blend film domain sizes and enhances donor-acceptor polymer-polymer interfacial areas, affording increased short-circuit current densities (Jsc). However, the greater disorder and intermixed feature proliferation accompanying increasing Mn promotes charge carrier recombination, reducing cell fill factors (FF). The optimized photoactive layers exhibit well-balanced exciton dissociation and charge transport characteristics, ultimately providing solar cells with a 2-fold PCE enhancement versus devices with nonoptimal Mns. Overall, it is shown that proper and precise tuning of both donor and acceptor polymer Mns is critical for optimizing APSC performance. In contrast to reports where maximum power conversion efficiencies (PCEs) are achieved for the highest Mns, the present two-dimensional Mn optimization matrix strategy locates a PCE "sweet spot" at intermediate Mns of both donor and acceptor polymers. This study provides synthetic methodologies to predictably access conjugated polymers with desired Mn and highlights the importance of optimizing Mn for both polymer components to realize the full potential of APSC performance.
Simulation models in population breast cancer screening: A systematic review.
Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H
2015-08-01
The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed that incorporated model type; input parameters; modeling approach, transparency of input data sources/assumptions, sensitivity analyses, and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except one model) with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) compared with the 10% MR (95% CI: -2% to 21%) estimated from optimal RCTs. Only recently, potential harms due to regular breast cancer screening were reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making and the critical analysis revealed high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
Consistent integration of experimental and ab initio data into molecular and coarse-grained models
NASA Astrophysics Data System (ADS)
Vlcek, Lukas
As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
SPIDER OPTIMIZATION. II. OPTICAL, MAGNETIC, AND FOREGROUND EFFECTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Dea, D. T.; Clark, C. N.; Contaldi, C. R.
2011-09-01
SPIDER is a balloon-borne instrument designed to map the polarization of the cosmic microwave background (CMB) with degree-scale resolution over a large fraction of the sky. SPIDER's main goal is to measure the amplitude of primordial gravitational waves through their imprint on the polarization of the CMB if the tensor-to-scalar ratio, r, is greater than 0.03. To achieve this goal, instrumental systematic errors must be controlled with unprecedented accuracy. Here, we build on previous work to use simulations of SPIDER observations to examine the impact of several systematic effects that have been characterized through testing and modeling of various instrument components. In particular, we investigate the impact of the non-ideal spectral response of the half-wave plates, coupling between focal-plane components and Earth's magnetic field, and beam mismatches and asymmetries. We also present a model of diffuse polarized foreground emission based on a three-dimensional model of the Galactic magnetic field and dust, and study the interaction of this foreground emission with our observation strategy and instrumental effects. We find that the expected level of foreground and systematic contamination is sufficiently low for SPIDER to achieve its science goals.
Weaver, Christopher
2011-01-01
This study presents a systematic investigation concerning the performance of different rating scales used in the English section of a university entrance examination to assess 1,287 Japanese test takers' ability to write a third-person introduction speech. Although the rating scales did not conform to all of the expectations of the Rasch model, they successfully defined a meaningful continuum of English communicative competence. In some cases, the expectations of the Rasch model needed to be weighed against the specific assessment needs of the university entrance examination. This investigation also found that the degree of compatibility between the number of points allotted to the different rating scales and the various requirements of an introduction speech played a considerable role in determining the extent to which the different rating scales conformed to the expectations of the Rasch model. Compatibility thus becomes an important factor to consider for optimal rating scale performance.
Data Assimilation by delay-coordinate nudging
NASA Astrophysics Data System (ADS)
Pazo, Diego; Lopez, Juan Manuel; Carrassi, Alberto
2016-04-01
A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing driving the model evolution at each time-step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, even when using an un-optimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, the intuitive functioning, and the reduced computational cost of standard nudging, making it a potential alternative especially in the field of seasonal-to-decadal predictions with large Earth system models that limit the use of more sophisticated data assimilation procedures.
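A rough, self-contained illustration of the mechanics on the Lorenz-63 system is sketched below: standard nudging forces the observed component with the current innovation, while a simplified delay variant additionally uses one past innovation. The gain, delay, noise level, and integration scheme are arbitrary choices, and this toy is not expected to reproduce the paper's quantitative results.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(nudge_delayed=False, dt=0.01, steps=4000, k=5.0, delay=20):
    rng = np.random.default_rng(0)
    truth = np.array([1.0, 1.0, 1.0])
    model = np.array([5.0, -5.0, 20.0])           # wrong initial condition
    innovations = []                               # history of past innovations
    err = []
    for _ in range(steps):
        truth = truth + dt * lorenz(truth)         # Euler step of the "truth"
        obs = truth[0] + rng.normal(0, 0.1)        # noisy observation of x only
        innov = obs - model[0]
        innovations.append(innov)
        forcing = np.zeros(3)
        forcing[0] = k * innov                     # standard nudging term
        if nudge_delayed and len(innovations) > delay:
            # simplified delay-coordinate term: also force with a past innovation
            forcing[0] += k * innovations[-1 - delay]
        model = model + dt * (lorenz(model) + forcing)
        err.append(abs(truth[0] - model[0]))
    return np.mean(err[steps // 2:])               # mean error after spin-up

print("standard nudging error:", run(nudge_delayed=False))
print("delayed  nudging error:", run(nudge_delayed=True))
```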
Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?
Newberry, Mitchell G.; Savage, Van M.
2016-01-01
Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691
Song, Hyeon Gi; Byeon, Seon Yeong; Chung, Goo Yong; Jung, Sang-Myung; Choi, Jung Il; Shin, Hwa Sung
2018-05-28
Microalgal carotenoids are attractive health ingredients, but their production should be optimized to improve cost-effectiveness. Understanding cellular physiology centered on carotenoid synthesis is the prerequisite for this work. Therefore, systematic correlation analyses were conducted among chlorophyll, carotenoids, non-pigmented cell mass, and cell number of Dunaliella salina in a specified condition over a relatively long culture time. First, an integrated correlation was performed: a temporal profile of the carotenoids was correlated with those of other factors, including chlorophyll, non-pigmented cell mass, and cell number. Pearson and Spearman correlation analyses were performed to identify linearity and monotonicity of the correlation, respectively, and then cross-correlation was executed to determine if the correlation had a time lag. Second, to understand the cellular potential of metabolism, the procedure was repeated to provide a data set composed of the specific synthesis rates of the factors or growth rate, which additionally provided kinetic correlations among the constituting components of the cell, excluding the effect of cell number. This systematic approach could generate a blueprint model that is composed of only what it needs, which could make it possible to efficiently control and optimize the process.
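The three correlation analyses named here (Pearson for linearity, Spearman for monotonicity, cross-correlation for a possible time lag) can be reproduced schematically with standard tools; the two series below are synthetic stand-ins for the measured time profiles of the cellular components.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = np.arange(30)                                   # sampling times (toy)
chlorophyll = np.sin(t / 5.0) + rng.normal(0, 0.1, t.size)
carotenoids = np.sin((t - 3) / 5.0) + rng.normal(0, 0.1, t.size)  # lagged response

# Linearity (Pearson) and monotonicity (Spearman) of the correlation.
r, _ = stats.pearsonr(chlorophyll, carotenoids)
rho, _ = stats.spearmanr(chlorophyll, carotenoids)

# Cross-correlation to check whether the correlation carries a time lag
# (the sign of the lag follows numpy.correlate's convention).
a = (chlorophyll - chlorophyll.mean()) / chlorophyll.std()
b = (carotenoids - carotenoids.mean()) / carotenoids.std()
xcorr = np.correlate(a, b, mode="full") / t.size
lags = np.arange(-t.size + 1, t.size)
best_lag = lags[np.argmax(xcorr)]

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, best lag = {best_lag} samples")
```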
Drug Target Optimization in Chronic Myeloid Leukemia Using Innovative Computational Platform
Chuang, Ryan; Hall, Benjamin A.; Benque, David; Cook, Byron; Ishtiaq, Samin; Piterman, Nir; Taylor, Alex; Vardi, Moshe; Koschmieder, Steffen; Gottgens, Berthold; Fisher, Jasmin
2015-01-01
Chronic Myeloid Leukemia (CML) represents a paradigm for the wider cancer field. Despite the fact that tyrosine kinase inhibitors have established targeted molecular therapy in CML, patients often face the risk of developing drug resistance, caused by mutations and/or activation of alternative cellular pathways. To optimize drug development, one needs to systematically test all possible combinations of drug targets within the genetic network that regulates the disease. The BioModelAnalyzer (BMA) is a user-friendly computational tool that allows us to do exactly that. We used BMA to build a CML network-model composed of 54 nodes linked by 104 interactions that encapsulates experimental data collected from 160 publications. While previous studies were limited by their focus on a single pathway or cellular process, our executable model allowed us to probe dynamic interactions between multiple pathways and cellular outcomes, suggest new combinatorial therapeutic targets, and highlight previously unexplored sensitivities to Interleukin-3. PMID:25644994
Drug Target Optimization in Chronic Myeloid Leukemia Using Innovative Computational Platform
NASA Astrophysics Data System (ADS)
Chuang, Ryan; Hall, Benjamin A.; Benque, David; Cook, Byron; Ishtiaq, Samin; Piterman, Nir; Taylor, Alex; Vardi, Moshe; Koschmieder, Steffen; Gottgens, Berthold; Fisher, Jasmin
2015-02-01
Chronic Myeloid Leukemia (CML) represents a paradigm for the wider cancer field. Despite the fact that tyrosine kinase inhibitors have established targeted molecular therapy in CML, patients often face the risk of developing drug resistance, caused by mutations and/or activation of alternative cellular pathways. To optimize drug development, one needs to systematically test all possible combinations of drug targets within the genetic network that regulates the disease. The BioModelAnalyzer (BMA) is a user-friendly computational tool that allows us to do exactly that. We used BMA to build a CML network-model composed of 54 nodes linked by 104 interactions that encapsulates experimental data collected from 160 publications. While previous studies were limited by their focus on a single pathway or cellular process, our executable model allowed us to probe dynamic interactions between multiple pathways and cellular outcomes, suggest new combinatorial therapeutic targets, and highlight previously unexplored sensitivities to Interleukin-3.
Kurumbang, Nagendra Prasad; Dvorak, Pavel; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-03-21
Anthropogenic halogenated compounds were unknown to nature until the industrial revolution, and microorganisms have not had sufficient time to evolve enzymes for their degradation. The lack of efficient enzymes and natural pathways can be addressed through a combination of protein and metabolic engineering. We have assembled a synthetic route for conversion of the highly toxic and recalcitrant 1,2,3-trichloropropane to glycerol in Escherichia coli, and used it for a systematic study of pathway bottlenecks. Optimal ratios of enzymes for the maximal production of glycerol, and minimal toxicity of metabolites were predicted using a mathematical model. The strains containing the expected optimal ratios of enzymes were constructed and characterized for their viability and degradation efficiency. Excellent agreement between predicted and experimental data was observed. The validated model was used to quantitatively describe the kinetic limitations of currently available enzyme variants and predict improvements required for further pathway optimization. This highlights the potential of forward engineering of microorganisms for the degradation of toxic anthropogenic compounds.
NASA Astrophysics Data System (ADS)
Lee, H.
2016-12-01
Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.
Optimization of vehicle deceleration to reduce occupant injury risks in frontal impact.
Mizuno, Koji; Itakura, Takuya; Hirabayashi, Satoko; Tanaka, Eiichi; Ito, Daisuke
2014-01-01
In vehicle frontal impacts, vehicle acceleration has a large effect on occupant loadings and injury risks. In this research, an optimal vehicle crash pulse was determined systematically to reduce injury measures of rear seat occupants by using mathematical simulations. The vehicle crash pulse was optimized based on a vehicle deceleration-deformation diagram under the conditions that the initial velocity and the maximum vehicle deformation were constant. Initially, a spring-mass model was used to understand the fundamental parameters for optimization. In order to investigate the optimization under a more realistic situation, the vehicle crash pulse was also optimized using a multibody model of a Hybrid III dummy seated in the rear seat for the objective functions of chest acceleration and chest deflection. A sled test using a Hybrid III dummy was carried out to confirm the simulation results. Finally, the optimal crash pulses determined from the multibody simulation were applied to a human finite element (FE) model. The optimized crash pulse to minimize the occupant deceleration had a concave shape: a high deceleration in the initial phase, low in the middle phase, and high again in the final phase. This crash pulse shape depended on the occupant restraint stiffness. The optimized crash pulse determined from the multibody simulation was comparable to that from the spring-mass model. From the sled test, it was demonstrated that the optimized crash pulse was effective for the reduction of chest acceleration. The crash pulse was also optimized for the objective function of chest deflection. The optimized crash pulse in the final phase was lower than that obtained for the minimization of chest acceleration. In the FE analysis of the human FE model, the optimized pulse for the objective function of the Hybrid III chest deflection was effective in reducing rib fracture risks. The optimized crash pulse has a concave shape and is dependent on the occupant restraint stiffness and maximum vehicle deformation. The shapes of the optimized crash pulse in the final phase were different for the objective functions of chest acceleration and chest deflection due to the inertial forces of the head and upper extremities. From the human FE model analysis it was found that the optimized crash pulse for the Hybrid III chest deflection can substantially reduce the risk of rib cage fractures. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.
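The spring-mass idealization used as the starting point can be written down in a few lines: a single occupant mass coupled to the decelerating vehicle through a linear restraint, integrated under a prescribed deceleration pulse. The mass, stiffness, speed, and pulse shapes below are illustrative assumptions, not the optimized values of the study; the comparison of the two pulses depends on the restraint stiffness, as the abstract notes.

```python
import numpy as np

DT = 1e-4                                            # 0.1 ms time step
N = 1500                                             # simulate 150 ms total

def occupant_peak_decel(pulse, m=75.0, k=40e3, v0=15.6):
    """1-DOF occupant (mass m) coupled to the vehicle by a linear restraint
    (stiffness k); returns peak occupant deceleration [g] under a prescribed
    vehicle deceleration pulse [m/s^2]."""
    v_veh = v_occ = v0
    x_veh = x_occ = 0.0
    peak = 0.0
    for a_veh in pulse:
        v_veh -= a_veh * DT                          # vehicle kinematics
        x_veh += v_veh * DT
        f = k * max(0.0, x_occ - x_veh)              # restraint engages when the
        a_occ = -f / m                               #   occupant moves forward
        v_occ += a_occ * DT
        x_occ += v_occ * DT
        peak = max(peak, abs(a_occ))
    return peak / 9.81

t = np.arange(N) * DT
square = np.where(t < 0.08, 20 * 9.81, 0.0)          # constant 20 g for 80 ms
concave = np.where(t < 0.08,
                   np.where((t < 0.016) | (t > 0.064), 30 * 9.81, 13.3 * 9.81),
                   0.0)                              # high-low-high, same delta-V

print("square pulse  -> peak occupant decel [g]:", round(occupant_peak_decel(square), 1))
print("concave pulse -> peak occupant decel [g]:", round(occupant_peak_decel(concave), 1))
```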
Ultimate explanations and suboptimal choice.
Vasconcelos, Marco; Machado, Armando; Pandeirada, Josefa N S
2018-07-01
Researchers have unraveled multiple cases in which behavior deviates from rationality principles. We propose that such deviations are valuable tools to understand the adaptive significance of the underpinning mechanisms. To illustrate, we discuss in detail an experimental protocol in which animals systematically incur substantial foraging losses by preferring a lean but informative option over a rich but non-informative one. To understand how adaptive mechanisms may fail to maximize food intake, we review a model inspired by optimal foraging principles that reconciles sub-optimal choice with the view that current behavioral mechanisms were pruned by the optimizing action of natural selection. To move beyond retrospective speculation, we then review critical tests of the model, regarding both its assumptions and its (sometimes counterintuitive) predictions, all of which have been upheld. The overall contention is that (a) known mechanisms can be used to develop better ultimate accounts and that (b) to understand why mechanisms that generate suboptimal behavior evolved, we need to consider their adaptive value in the animal's characteristic ecology. Copyright © 2018 Elsevier B.V. All rights reserved.
Furlan, Andréa D; Irvin, Emma; Munhall, Claire; Giraldo-Prieto, Mario; Fullerton, Laura; McMaster, Robert; Danak, Shivang; Costante, Alicia; Pitzul, Kristen B; Bhide, Rohit P; Marchenko, Stanislav; Mahood, Quenby; David, Judy A; Flannery, John F; Bayley, Mark
2018-04-03
To compare models of rehabilitation services for people with mental and/or physical disability in order to determine optimal models for therapy and interventions in low- to middle-income countries. CINAHL, EMBASE, MEDLINE, CENTRAL, PsycINFO, Business Source Premier, HINARI, CEBHA and PubMed. Systematic reviews, randomized controlled trials and observational studies comparing >2 models of rehabilitation care in any language. Data extraction: Standardized forms were used. Methodological quality was assessed using AMSTAR and quality of evidence was assessed using GRADE. Twenty-four systematic reviews, which included 578 studies and 202,307 participants, were selected. In addition, four primary studies were included to complement the gaps in the systematic reviews. The studies were conducted in various countries. Moderate- to high-quality evidence supports the following models of rehabilitation services: psychological intervention in primary care settings for people with major depression; admission into an inpatient, multidisciplinary, specialized rehabilitation unit for those with recent onset of a severe disabling condition; outpatient rehabilitation with multidisciplinary care in the community, hospital or home is recommended for less severe conditions; however, a model of rehabilitation service that includes early discharge is not recommended for elderly patients with severe stroke, chronic obstructive pulmonary disease, hip fracture and total joints. Models of rehabilitation care in inpatient, multidisciplinary and specialized rehabilitation units are recommended for the treatment of severe conditions with recent onset, as they reduce mortality and the need for institutionalized care, especially among elderly patients, stroke patients, or those with chronic back pain. Results are expected to be generalizable for brain/spinal cord injury and complex fractures.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.
1984-01-01
A method is described for the systematic analysis and optimization of large engineering systems by decomposition of a large task into a set of smaller subtasks that are solved concurrently. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency of the frequency range used, respectively.
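The core of the fitting procedure — minimizing the squared difference between measured and modeled conductance/resistance curves with a gradient-based optimizer — can be sketched as follows. A lumped Butterworth-Van Dyke equivalent circuit stands in for the axisymmetric FEM model and L-BFGS-B stands in for MMA; both substitutions, and all numerical values, are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

f = np.linspace(1.5e5, 2.5e5, 400)                   # frequency sweep [Hz]
w = 2 * np.pi * f

def admittance(params, w):
    """Butterworth-Van Dyke equivalent circuit: static capacitance C0 in
    parallel with a motional R-L-C branch."""
    r, l, c, c0 = params
    y_motional = 1.0 / (r + 1j * w * l + 1.0 / (1j * w * c))
    return y_motional + 1j * w * c0

# Synthetic "experimental" admittance generated from known parameters plus noise.
true = np.array([50.0, 0.1, 6.3e-12, 2.0e-9])        # R [ohm], L [H], C [F], C0 [F]
rng = np.random.default_rng(3)
y_exp = admittance(true, w) + rng.normal(0, 1e-6, w.size) * (1 + 1j)

g_exp = np.real(y_exp)                               # conductance curve
r_exp = np.real(1.0 / y_exp)                         # resistance curve

nominal = np.array([100.0, 0.05, 1.0e-11, 1.0e-9])   # rough initial guesses

def loss(scale):
    """Quadratic misfit of conductance and resistance; parameters are fitted as
    dimensionless multipliers of the nominal values to keep the search well scaled."""
    y = admittance(scale * nominal, w)
    g_err = (np.real(y) - g_exp) / np.std(g_exp)
    r_err = (np.real(1.0 / y) - r_exp) / np.std(r_exp)
    return np.sum(g_err ** 2) + np.sum(r_err ** 2)

# Gradient-based fit; as the paper itself notes, restarting from new initial
# values may be needed if a run ends in a poor local minimum.
res = minimize(loss, np.ones(4), method="L-BFGS-B", bounds=[(0.01, 100.0)] * 4)
print("fitted R, L, C, C0:", res.x * nominal)
```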
Model-data integration to improve the LPJmL dynamic global vegetation model
NASA Astrophysics Data System (ADS)
Forkel, Matthias; Thonicke, Kirsten; Schaphoff, Sibyll; Thurner, Martin; von Bloh, Werner; Dorigo, Wouter; Carvalhais, Nuno
2017-04-01
Dynamic global vegetation models show large uncertainties regarding the development of the land carbon balance under future climate change conditions. This uncertainty is partly caused by differences in how vegetation carbon turnover is represented in global vegetation models. Model-data integration approaches might help to systematically assess and improve model performances and thus to potentially reduce the uncertainty in terrestrial vegetation responses under future climate change. Here we present several applications of model-data integration with the LPJmL (Lund-Potsdam-Jena managed Lands) dynamic global vegetation model to systematically improve the representation of processes or to estimate model parameters. In a first application, we used global satellite-derived datasets of FAPAR (fraction of absorbed photosynthetic activity), albedo and gross primary production to estimate phenology- and productivity-related model parameters using a genetic optimization algorithm. Thereby we identified major limitations of the phenology module and implemented an alternative empirical phenology model. The new phenology module and optimized model parameters resulted in a better performance of LPJmL in representing global spatial patterns of biomass, tree cover, and the temporal dynamic of atmospheric CO2. Therefore, we used in a second application additionally global datasets of biomass and land cover to estimate model parameters that control vegetation establishment and mortality. The results demonstrate the ability to improve simulations of vegetation dynamics but also highlight the need to improve the representation of mortality processes in dynamic global vegetation models. In a third application, we used multiple site-level observations of ecosystem carbon and water exchange, biomass and soil organic carbon to jointly estimate various model parameters that control ecosystem dynamics. This exercise demonstrates the strong role of individual data streams on the simulated ecosystem dynamics which consequently changed the development of ecosystem carbon stocks and fluxes under future climate and CO2 change. In summary, our results demonstrate challenges and the potential of using model-data integration approaches to improve a dynamic global vegetation model.
A systematic FPGA acceleration design for applications based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao
2018-04-01
Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and neglect the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications such as real-time video object detection. We propose a new systematic FPGA acceleration design to solve this problem. This design takes the data path between the inner accelerator and the outer system into consideration and optimizes it using techniques such as hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. Together, these measures yield a final system performance of about 10 times that of the original system.
The impact on midlevel vision of statistically optimal divisive normalization in V1.
Coen-Cagli, Ruben; Schwartz, Odelia
2013-07-15
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
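As a reference point for the computation being compared, a minimal canonical divisive-normalization stage over a population of linear filter responses is sketched below; the statistically optimal, image-dependent extension studied in the paper is not reproduced, and the constants are arbitrary.

```python
import numpy as np

def divisive_normalization(drive, sigma=0.1, exponent=2.0):
    """Canonical normalization: each unit's response is divided by a pooled
    measure of activity across the (surround) population.

    drive : array of shape (n_units,) holding linear filter outputs.
    """
    drive = np.asarray(drive, float)
    pooled = sigma ** exponent + np.sum(np.abs(drive) ** exponent)
    return np.sign(drive) * (np.abs(drive) ** exponent) / pooled

# Toy linear responses of a small V1-like population to one image patch.
linear = np.array([0.9, 0.2, -0.4, 0.05, 0.6])
print(divisive_normalization(linear))
```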
Prediction of silicon oxynitride plasma etching using a generalized regression neural network
NASA Astrophysics Data System (ADS)
Kim, Byungwhan; Lee, Byung Teak
2005-08-01
A prediction model of silicon oxynitride (SiON) etching was constructed using a neural network. Model prediction performance was improved by means of a genetic algorithm. The etching was conducted in a C2F6 inductively coupled plasma. A 2^4 full factorial experiment was employed to systematically characterize parameter effects on SiON etching. The process parameters include radio frequency source power, bias power, pressure, and C2F6 flow rate. To test the appropriateness of the trained model, an additional 16 experiments were conducted. For comparison, four types of statistical regression models were built. Compared to the best regression model, the optimized neural network model demonstrated an improvement of about 52%. The optimized model was used to infer etch mechanisms as a function of parameters. The pressure effect was noticeably large only when relatively large ion bombardment was maintained in the process chamber. Ion-bombardment-activated polymer deposition played the most significant role in interpreting the complex effect of bias power or C2F6 flow rate. Moreover, [CF2] was expected to be the predominant precursor to polymer deposition.
People adopt optimal policies in simple decision-making, after practice and guidance.
Evans, Nathan J; Brown, Scott D
2017-04-01
Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
O'Hanley, Jesse R; Wright, Jed; Diebel, Matthew; Fedora, Mark A; Soucy, Charles L
2013-08-15
Systematic methods for prioritizing the repair and removal of fish passage barriers, while growing of late, have hitherto focused almost exclusively on meeting the needs of migratory fish species (e.g., anadromous salmonids). An important but as of yet unaddressed issue is the development of new modeling approaches which are applicable to resident fish species habitat restoration programs. In this paper, we develop a budget constrained optimization model for deciding which barriers to repair or remove in order to maximize habitat availability for stream resident fish. Habitat availability at the local stream reach is determined based on the recently proposed C metric, which accounts for the amount, quality, distance and level of connectivity to different stream habitat types. We assess the computational performance of our model using geospatial barrier and stream data collected from the Pine-Popple Watershed, located in northeast Wisconsin (USA). The optimization model is found to be an efficient and practical decision support tool. Optimal solutions, which are useful in informing basin-wide restoration planning efforts, can be generated on average in only a few minutes. Copyright © 2013 Elsevier Ltd. All rights reserved.
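The basic shape of the decision problem — choose a subset of barrier repairs that maximizes a habitat-gain score within a budget — can be illustrated with a plain 0-1 knapsack formulation. The costs, gains, and dynamic program below are placeholders and omit the connectivity-aware C metric of the actual optimization model.

```python
def select_barriers(costs, gains, budget):
    """0-1 knapsack by dynamic programming over integer costs: returns the
    (total_gain, chosen_indices) pair maximizing habitat gain within budget."""
    n = len(costs)
    best = [(0.0, frozenset())] * (budget + 1)       # best[b] = result with cost <= b
    for i in range(n):
        new_best = list(best)
        for b in range(costs[i], budget + 1):
            cand_gain = best[b - costs[i]][0] + gains[i]
            if cand_gain > new_best[b][0]:
                new_best[b] = (cand_gain, best[b - costs[i]][1] | {i})
        best = new_best
    return best[budget]

# Hypothetical repair costs (k$) and habitat-gain scores for six barriers.
costs = [40, 25, 60, 15, 50, 30]
gains = [8.0, 5.5, 9.0, 3.0, 7.5, 6.0]
total_gain, chosen = select_barriers(costs, gains, budget=100)
print("chosen barriers:", sorted(chosen), "total gain:", total_gain)
```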
Fragment-Based Drug Discovery of Potent Protein Kinase C Iota Inhibitors.
Kwiatkowski, Jacek; Liu, Boping; Tee, Doris Hui Ying; Chen, Guoying; Ahmad, Nur Huda Binte; Wong, Yun Xuan; Poh, Zhi Ying; Ang, Shi Hua; Tan, Eldwin Sum Wai; Ong, Esther Hq; Nurul Dinie; Poulsen, Anders; Pendharkar, Vishal; Sangthongpitag, Kanda; Lee, May Ann; Sepramaniam, Sugunavathi; Ho, Soo Yei; Cherian, Joseph; Hill, Jeffrey; Keller, Thomas H; Hung, Alvin W
2018-05-24
Protein kinase C iota (PKC-ι) is an atypical kinase implicated in the promotion of different cancer types. A biochemical screen of a fragment library has identified several hits from which an azaindole-based scaffold was chosen for optimization. Driven by a structure-activity relationship and supported by molecular modeling, a weakly bound fragment was systematically grown into a potent and selective inhibitor against PKC-ι.
NASA Astrophysics Data System (ADS)
Mole, Tracey Lawrence
In this work, an effective and systematic model is devised to synthesize the optimal formulation for an explicit engineering application in the nuclear industry, i.e. radioactive decontamination and waste reduction. Identification of an optimal formulation that is suitable for the desired system requires integration of all the interlacing behaviors of the product constituents. This work is unique not only in product design, but also in these design techniques. The common practice of new product development is to design the optimized product for a particular industrial niche and then subsequent research for the production process is conducted, developed and optimized separately from the product formulation. In this proposed optimization design technique, the development process, disposal technique and product formulation are optimized simultaneously to improve production profit, product behavior and disposal emissions. This "cradle to grave" optimization approach allowed a complex product formulation development process to be drastically simplified. The utilization of these modeling techniques took an industrial idea to full scale testing and production in under 18 months by reducing the number of subsequent laboratory trials required to optimize the formula, production and waste treatment aspects of the product simultaneously. This particular development material involves the use of a polymer matrix that is applied to surfaces as part of a decontamination system. The polymer coating serves to initially "fix" the contaminants in place for detection and ultimate elimination. Upon mechanical entrapment and removal, the polymer coating containing the radioactive isotopes can be dissolved in a solvent processor, where separation of the radioactive metallic particles can take place. Ultimately, only the collection of divided solids should be disposed of as nuclear waste. This creates an attractive alternative to direct land filling or incineration. This philosophy also provides waste generators a way to significantly reduce waste and associated costs, and help meet regulatory, safety and environmental requirements. In order for the polymeric film to exhibit the desired performance, a combination of discrete constraints must be fulfilled. These interacting characteristics include the choice of polymer used for construction, drying time, storage constraints, decontamination ability, removal behavior, application process, coating strength and dissolvability processes. Identification of an optimized formulation that is suitable for this entire decontamination system requires integration of all the interlacing characteristics of the coating composition that affect the film behavior. A novel systematic method for developing quantitative values for these qualitative characteristics is being developed in order to simultaneously optimize the design formulation subject to the discrete product specifications. This synthesis procedure encompasses intrinsic characteristics vital to successful product development, which allows for implementation of the derived model optimizations to operate independently of the polymer film application. This contribution illustrates the optimized synthesis example by which a large range of polymeric compounds and mixtures can be completed. (Abstract shortened by UMI.)
MULTI-OBJECTIVE OPTIMIZATION OF MICROSTRUCTURE IN WROUGHT MAGNESIUM ALLOYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radhakrishnan, Balasubramaniam; Gorti, Sarma B; Simunovic, Srdjan
2013-01-01
The microstructural features that govern the mechanical properties of wrought magnesium alloys include grain size, crystallographic texture, and twinning. Several processes based on shear deformation have been developed that promote grain refinement, weakening of the basal texture, as well as the shift of the peak intensity away from the center of the basal pole figure - features that promote room temperature ductility in Mg alloys. At ORNL, we are currently exploring the concept of introducing nano-twins within sub-micron grains as a possible mechanism for simultaneously improving strength and ductility by exploiting a potential dislocation glide along the twin-matrix interface, a mechanism that was originally proposed for face-centered cubic materials. Specifically, we have developed an integrated modeling and optimization framework in order to identify the combinations of grain size, texture and twin spacing that can maximize strength-ductility combinations. A micromechanical model that relates microstructure to material strength is coupled with a failure model that relates ductility to a critical shear strain and a critical hydrostatic stress. The micro-mechanical model is combined with an optimization tool based on a genetic algorithm. A multi-objective optimization technique is used to explore the strength-ductility space in a systematic fashion and identify optimum combinations of the microstructural parameters that will simultaneously maximize the strength-ductility in the alloy.
Karri, Rama Rao; Sahu, J N
2018-01-15
Zn(II) is one of the common heavy metal pollutants found in industrial effluents. Removal of pollutants from industrial effluents can be accomplished by various techniques, of which adsorption was found to be an efficient method. Application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell based agricultural waste is examined for its efficiency in removing Zn(II) from wastewater and aqueous solution. The influence of independent process variables such as initial concentration, pH, residence time, activated carbon (AC) dosage, and process temperature on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design-of-experiments matrix, 50 experimental runs are performed with each process variable in the experimental range. The optimal values of the process variables to achieve maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model, consisting of first-order and second-order regression terms, is developed using analysis of variance within the RSM-CCD framework. Particle swarm optimization (PSO), a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network describes the testing and validation data well, with R² equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of ANN-PSO-based model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
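The response-surface step can be sketched with an ordinary least-squares fit of a full second-order polynomial in coded process variables, followed by a simple search of the fitted surface for a promising operating point. The design, response values, and coefficients below are synthetic, not the 50-run data of the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Coded process variables: concentration, pH, time, dosage, temperature (toy design).
X = rng.uniform(-1, 1, size=(50, 5))
# Synthetic removal efficiency with curvature and an interaction, plus noise.
y = (80 + 5 * X[:, 0] - 6 * X[:, 1] ** 2 + 3 * X[:, 0] * X[:, 2]
     + rng.normal(0, 1.0, 50))

# Full second-order (linear + interaction + squared) response-surface model.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
rsm.fit(X, y)
print("R^2 on the design points:", round(rsm.score(X, y), 3))

# The fitted surface can then be searched (here, crudely on random points) for
# the operating condition that maximizes predicted removal efficiency.
grid = rng.uniform(-1, 1, size=(5000, 5))
best = grid[np.argmax(rsm.predict(grid))]
print("approximate optimum (coded units):", np.round(best, 2))
```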
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
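The forward-modeling step amounts to a weighted least-squares projection of gridded observations onto a set of predefined basin patterns. A generic version with a made-up design matrix and diagonal weights is sketched below; the real basin kernels, GRACE error covariances, and leakage treatment are not represented.

```python
import numpy as np

rng = np.random.default_rng(5)

n_grid, n_basins = 500, 4
# Design matrix: column j is the (smoothed) spatial pattern of basin j sampled
# on the grid; random patterns stand in for the real basin kernels here.
A = rng.normal(size=(n_grid, n_basins))

# Synthetic "observed" field built from known basin mass changes plus noise
# whose variance differs by grid point (hence the weighting).
truth = np.array([3.0, -1.5, 0.5, 2.0])
noise_var = rng.uniform(0.5, 2.0, n_grid)
y = A @ truth + rng.normal(0, np.sqrt(noise_var))

# Weighted least squares: x = (A^T W A)^(-1) A^T W y, with W = diag(1/var).
W = np.diag(1.0 / noise_var)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print("recovered basin mass changes:", np.round(x_hat, 2))
```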
Treatment Planning and Image Guidance for Radiofrequency Ablations of Large Tumors
Ren, Hongliang; Campos-Nanez, Enrique; Yaniv, Ziv; Banovac, Filip; Abeledo, Hernan; Hata, Nobuhiko; Cleary, Kevin
2014-01-01
This article addresses the two key challenges in computer-assisted percutaneous tumor ablation: planning multiple overlapping ablations for large tumors while avoiding critical structures, and executing the prescribed plan. Towards semi-automatic treatment planning for image-guided surgical interventions, we develop a systematic approach to the needle-based ablation placement task, ranging from pre-operative planning algorithms to an intra-operative execution platform. The planning system incorporates clinical constraints on ablations and trajectories using a multiple objective optimization formulation, which consists of optimal path selection and ablation coverage optimization based on integer programming. The system implementation is presented and validated in phantom studies and on an animal model. The presented system can potentially be further extended for other ablation techniques such as cryotherapy. PMID:24235279
Schory, Abbey; Bidinger, Erik; Wolf, Joshua
2016-01-01
Purpose The purpose of this systematic review was to determine the exercises that optimize muscle ratios of the periscapular musculature for scapular stability and isolated strengthening. Methods A systematic search was performed in PubMed, CINAHL, SPORTDiscus, Scopus, and Discovery Layer. Studies were included if they examined the muscle activation of the upper trapezius compared to the middle trapezius, lower trapezius, or serratus anterior using EMG during open chain exercises. The participants were required to have healthy, nonpathological shoulders. Information obtained included maximal voluntary isometric contraction (MVIC) values, ratios, standard deviations, exercises, and exercise descriptions. The outcome of interest was determining exercises that create optimal muscle activation ratios between the scapular stabilizers. Results Fifteen observational studies met the inclusion criteria for the systematic review. Exercises with optimal ratios were eccentric exercises in the frontal and sagittal planes, especially flexion between 180° and 60°. External rotation exercises with the elbow flexed to 90° also had optimal ratios for activating the middle trapezius in prone and side-lying positions. Exercises with optimal ratios for the lower trapezius were prone flexion, high scapular retraction, and prone external rotation with the shoulder abducted to 90° and elbow flexed. Exercises with optimal ratios for the serratus anterior were the diagonal exercises and scapular protraction. Conclusion This review has identified optimal positions and exercises for periscapular stability exercises. Standing exercises tend to activate the upper trapezius at a higher ratio, especially during the 60-120° range. The upper trapezius was the least active while performing exercises in prone, side-lying, and supine positions. More studies need to be conducted to examine these exercises in greater detail and confirm their consistency in producing the optimal ratios determined in this review. Level of evidence 1a PMID:27274418
Formation and mechanism of nanocrystalline AZ91 powders during HDDR processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yafen; Fan, Jianfeng, E-mail: fanjianfeng@tyu
2017-03-15
Grain sizes of AZ91 alloy powders were markedly refined from 100-160 μm to about 15 nm by an optimized hydrogenation-disproportionation-desorption-recombination (HDDR) process. The effect of temperature, hydrogen pressure and processing time on the phase and microstructure evolution of AZ91 alloy powders during the HDDR process was investigated systematically by X-ray diffraction, optical microscopy, scanning electron microscopy and transmission electron microscopy, respectively. The optimal HDDR process for preparing nanocrystalline Mg alloy powders is hydriding at a temperature of 350 °C under 4 MPa hydrogen pressure for 12 h and dehydriding at 350 °C for 3 h in vacuum. A modified unreacted core model was introduced to describe the mechanism of grain refinement during the HDDR process. - Highlights: • Grain size of the AZ91 alloy powders was significantly refined from 100 μm to 15 nm. • The optimal HDDR technology for nano Mg alloy powders is obtained. • A modified unreacted core model of the grain refinement mechanism was proposed.
NASA Astrophysics Data System (ADS)
Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.
2016-05-01
This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) setup that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in the required execution time when compared to its software-based counterpart model.
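As a generic illustration of the LQG ingredient (not the maglev suspension model or the paper's FPGA implementation), the state-feedback and estimator gains can be obtained from two algebraic Riccati equations; the plant matrices and weights below are toy values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linear plant (not the maglev model): x' = A x + B u + w, y = C x + v
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])      # LQR state/input weights
W, V = np.diag([0.01, 0.01]), np.array([[0.001]])   # process/measurement noise covariances

# LQR state-feedback gain: K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman filter gain (dual Riccati problem): L = S C^T V^-1
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

print("LQR gain K:", K)
print("Kalman gain L:", L.ravel())
```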
Ma, Ning; Yu, Angela J
2016-01-01
Inhibitory control, the ability to stop or modify preplanned actions under changing task conditions, is an important component of cognitive functions. Two lines of models of inhibitory control have previously been proposed for human response in the classical stop-signal task, in which subjects must inhibit a default go response upon presentation of an infrequent stop signal: (1) the race model, which posits two independent go and stop processes that race to determine the behavioral outcome, go or stop; and (2) an optimal decision-making model, which posits that the observer decides whether and when to go based on continually (Bayesian) updated information about both the go and stop stimuli. In this work, we probe the relationship between go and stop processing by explicitly manipulating the discrimination difficulty of the go stimulus. While the race model assumes the go and stop processes are independent, and therefore go stimulus discriminability should not affect stop stimulus processing, we simulate the optimal model to show that it predicts that harder go discrimination should result in a longer go reaction time (RT), a lower stop error rate, as well as a faster stop-signal RT. We then present novel behavioral data that validate these model predictions. The results thus favor a fundamentally inseparable account of go and stop processing, in a manner consistent with the optimal model, and contradicting the independence assumption of the race model. More broadly, our findings contribute to the growing evidence that the computations underlying inhibitory control are systematically modulated by cognitive influences in a Bayes-optimal manner, thus opening new avenues for interpreting neural responses underlying inhibitory control.
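A minimal Monte Carlo of the independent race model (with illustrative finishing-time distributions, not fit to the data) produces the quantities the paper contrasts with the optimal model: inhibition probability and signal-respond RT as a function of stop-signal delay. The go-discriminability manipulation is outside this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def race_trial(ssd, go_mu=400.0, go_sd=80.0, stop_mu=200.0, stop_sd=40.0):
    """One stop trial under the independent race model.

    Go and stop finishing times are drawn independently; the response is
    inhibited iff the stop process (started at the stop-signal delay, SSD)
    finishes before the go process. Returns (inhibited, go_rt).
    """
    go_rt = rng.normal(go_mu, go_sd)
    stop_rt = ssd + rng.normal(stop_mu, stop_sd)
    return stop_rt < go_rt, go_rt

for ssd in (100, 200, 300):
    outcomes = [race_trial(ssd) for _ in range(20000)]
    p_inhibit = np.mean([ok for ok, _ in outcomes])
    # RTs on failed-stop trials are faster than the unconditional go RT,
    # a classic race-model signature.
    failed_rts = [rt for ok, rt in outcomes if not ok]
    print(f"SSD={ssd} ms: P(inhibit)={p_inhibit:.2f}, "
          f"mean signal-respond RT={np.mean(failed_rts):.0f} ms")
```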
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
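A small numerical illustration of the central claim, under assumed values: with a Gaussian belief over a task parameter and a piecewise-linear cost that penalizes overestimation more heavily than underestimation, the cost-minimizing estimate is shifted away from the mean/maximum-likelihood value.

```python
import numpy as np

# Gaussian belief over an environmental parameter (mean equals the ML estimate).
mu, sigma = 0.0, 1.0
samples = np.random.default_rng(3).normal(mu, sigma, size=100_000)

def expected_cost(estimate, c_under=1.0, c_over=5.0):
    """Asymmetric piecewise-linear cost: overestimation is 5x costlier."""
    err = estimate - samples
    return np.mean(np.where(err > 0, c_over * err, -c_under * err))

grid = np.linspace(-2, 2, 401)
costs = [expected_cost(e) for e in grid]
best = grid[int(np.argmin(costs))]
print(f"ML / mean estimate: {mu:.2f}, cost-minimizing estimate: {best:.2f}")
# The optimal estimate sits below the mean, away from the costly error direction.
```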
Efficient computation paths for the systematic analysis of sensitivities
NASA Astrophysics Data System (ADS)
Greppi, Paolo; Arato, Elisabetta
2013-01-01
A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues to efficiently perform such analyses on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
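The paper's exact constructions are not reproduced here, but the key property of a mixed-radix Gray path (successive grid points differ in a single input by one step, so each solve can be warm-started from its neighbor) can be sketched as a reflected Gray enumeration over illustrative radices.

```python
def gray_product(radices):
    """Reflected mixed-radix Gray enumeration of range(r0) x range(r1) x ...

    Consecutive tuples differ in exactly one coordinate, and only by +/- 1,
    so each grid point is adjacent to the previously solved one.
    """
    if not radices:
        yield ()
        return
    head, rest = radices[0], radices[1:]
    forward = True
    for tail in gray_product(rest):
        rng = range(head) if forward else range(head - 1, -1, -1)
        for v in rng:
            yield (v,) + tail
        forward = not forward

path = list(gray_product((3, 2, 4)))          # a 3 x 2 x 4 sensitivity grid
assert len(path) == len(set(path)) == 3 * 2 * 4
assert all(sum(abs(a - b) for a, b in zip(p, q)) == 1
           for p, q in zip(path, path[1:]))   # one unit step between neighbors
print(path[:6], "...")
```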
NASA Astrophysics Data System (ADS)
Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.
2008-03-01
We investigated the role and the weight of the parameters involved in intensity modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that the proper value of the weight factor, while still guaranteeing planning treatment volume coverage, produced similar organ-at-risk (OAR) dose-volume (DV) histograms for different gEUDmax with a fixed a = 1. Most of all, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we evaluated plans obtained with the gEUD-based optimization against ones based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose and the V40 and V50 values to the rectal wall were decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Beyond OAR sparing, we underline the halving of the OAR optimization time with the implementation of the gEUD-based cost function. Using NTCP models, we highlighted differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
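For reference, the gEUD of a dose distribution is the a-th power mean of the voxel doses, gEUD = (1/N Σ d_i^a)^(1/a); a minimal implementation with illustrative doses is shown below (a = 1 reduces to the mean dose, as used here for the OARs).

```python
import numpy as np

def geud(doses, a):
    """Generalized equivalent uniform dose: the a-th power mean of voxel doses."""
    doses = np.asarray(doses, dtype=float)
    return np.mean(doses ** a) ** (1.0 / a)

voxel_doses = np.array([10.0, 25.0, 40.0, 55.0, 70.0])  # Gy, illustrative only
print(geud(voxel_doses, a=1))   # a = 1: mean dose (parallel-organ-like behavior)
print(geud(voxel_doses, a=8))   # large a: approaches the maximum dose (serial-like)
```

A gEUD-based OAR cost term in the optimizer is then typically a weighted penalty on the excess over the reference value, e.g. weight * max(0, gEUD - gEUDmax)^2, which is where the weight factor studied here enters.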
NASA Astrophysics Data System (ADS)
Holtz, Ronald; Matic, Peter; Mott, David
2013-03-01
Warfighter performance can be adversely affected by heat load and weight of equipment. Current tactical vest designs are good insulators and lack ventilation, thus do not provide effective management of metabolic heat generated. NRL has undertaken a systematic study of tactical vest thermal management, leading to physics-based strategies that provide improved cooling without undesirable consequences such as added weight, added electrical power requirements, or compromised protection. The approach is based on evaporative cooling of sweat produced by the wearer of the vest, in an air flow provided by ambient wind or ambulatory motion of the wearer. Using an approach including thermodynamic analysis, computational fluid dynamics modeling, air flow measurements of model ventilated vest architectures, and studies of the influence of fabric aerodynamic drag characteristics, materials and geometry were identified that optimize passive cooling of tactical vests. Specific architectural features of the vest design allow for optimal ventilation patterns, and selection of fabrics for vest construction optimize evaporation rates while reducing air flow resistance. Cooling rates consistent with the theoretical and modeling predictions were verified experimentally for 3D mockups.
Wavelet decomposition and radial basis function networks for system monitoring
NASA Astrophysics Data System (ADS)
Ikonomopoulos, A.; Endou, A.
1998-10-01
Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
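A minimal sketch of the PCA-based ROM idea under simplifying assumptions: snapshot outputs from a handful of (here synthetic) CFD runs are compressed with PCA, and the reduced coefficients are regressed on the process inputs so that new evaluations cost milliseconds instead of CPU-hours. The data, mode count, and linear regression below are illustrative, not the paper's workflow.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training data: 40 CFD runs mapping 3 process inputs
# (e.g. flow rate, inlet temperature, pressure) to a 2000-point field output.
X = rng.uniform(-1, 1, size=(40, 3))
modes_true = rng.normal(size=(3, 2000))
Y = X @ modes_true + 0.01 * rng.normal(size=(40, 2000))

# 1) PCA of the mean-centered snapshot matrix, keeping k principal components.
Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
k = 3
basis = Vt[:k]                      # principal components (spatial modes)
coeffs = (Y - Y_mean) @ basis.T     # reduced coordinates of each training run

# 2) Regress the reduced coordinates on the inputs (linear map with intercept).
A = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(A, coeffs, rcond=None)

def rom_predict(x_new):
    """Evaluate the reduced order model at new process inputs."""
    a = np.append(np.asarray(x_new, dtype=float), 1.0)
    return (a @ W) @ basis + Y_mean

x_test = np.array([0.2, -0.5, 0.7])
print("ROM field prediction (first 5 points):", rom_predict(x_test)[:5])
```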
Optimal integrated abundances for chemical tagging of extragalactic globular clusters
NASA Astrophysics Data System (ADS)
Sakari, Charli M.; Venn, Kim; Shetrone, Matthew; Dotter, Aaron; Mackey, Dougal
2014-09-01
High-resolution integrated light (IL) spectroscopy provides detailed abundances of distant globular clusters whose stars cannot be resolved. Abundance comparisons with other systems (e.g. for chemical tagging) require understanding the systematic offsets that can occur between clusters, such as those due to uncertainties in the underlying stellar population. This paper analyses high-resolution IL spectra of the Galactic globular clusters 47 Tuc, M3, M13, NGC 7006, and M15 to (1) quantify potential systematic uncertainties in Fe, Ca, Ti, Ni, Ba, and Eu and (2) identify the most stable abundance ratios that will be useful in future analyses of unresolved targets. When stellar populations are well modelled, uncertainties are ˜0.1-0.2 dex based on sensitivities to the atmospheric parameters alone; in the worst-case scenarios, uncertainties can rise to 0.2-0.4 dex. The [Ca I/Fe I] ratio is identified as the optimal integrated [α/Fe] indicator (with offsets ≲ 0.1 dex), while [Ni I/Fe I] is also extremely stable to within ≲ 0.1 dex. The [Ba II/Eu II] ratios are also stable when the underlying populations are well modelled and may also be useful for chemical tagging.
Wu, Ruidong; Long, Yongcheng; Malanson, George P.; Garber, Paul A.; Zhang, Shuang; Li, Diqiang; Zhao, Peng; Wang, Longzhu; Duo, Hairui
2014-01-01
By addressing several key features overlooked in previous studies, i.e. human disturbance, integration of ecosystem- and species-level conservation features, and principles of complementarity and representativeness, we present the first national-scale systematic conservation planning for China to determine the optimized spatial priorities for biodiversity conservation. We compiled a spatial database on the distributions of ecosystem- and species-level conservation features, and modeled a human disturbance index (HDI) by aggregating information using several socioeconomic proxies. We ran Marxan with two scenarios (HDI-ignored and HDI-considered) to investigate the effects of human disturbance, and explored the geographic patterns of the optimized spatial conservation priorities. Compared to when HDI was ignored, the HDI-considered scenario resulted in (1) a marked reduction (∼9%) in the total HDI score and a slight increase (∼7%) in the total area of the portfolio of priority units, (2) a significant increase (∼43%) in the total irreplaceable area and (3) more irreplaceable units being identified in almost all environmental zones and highly-disturbed provinces. Thus the inclusion of human disturbance is essential for cost-effective priority-setting. Attention should be targeted to the areas that are characterized as moderately-disturbed, <2,000 m in altitude, and/or intermediately- to extremely-rugged in terrain to identify potentially important regions for implementing cost-effective conservation. We delineated 23 primary large-scale priority areas that are significant for conserving China's biodiversity, but those isolated priority units in disturbed regions are in more urgent need of conservation actions so as to prevent immediate and severe biodiversity loss. This study presents a spatially optimized national-scale portfolio of conservation priorities – effectively representing the overall biodiversity of China while minimizing conflicts with economic development. Our results offer critical insights for current conservation and strategic land-use planning in China. The approach is transferable and easy to implement by end-users, and applicable for national- and local-scale systematic conservation prioritization practices. PMID:25072933
Optimal information networks: Application for data-driven integrated health in populations
Servadio, Joseph L.; Convertino, Matteo
2018-01-01
Development of composite indicators for integrated health in populations typically relies on a priori assumptions rather than model-free, data-driven evidence. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, instead considering only individual correlations. In addition, a unified method for assessing integrated health statuses of populations is lacking, making systematic comparison among populations impossible. We propose the use of maximum entropy networks (MENets) that use transfer entropy to assess interrelatedness among selected variables considered for inclusion in a composite indicator. We also define optimal information networks (OINs) that are scale-invariant MENets, which use the information in constructed networks for optimal decision-making. Health outcome data from multiple cities in the United States are applied to this method to create a systemic health indicator, representing integrated health in a city. PMID:29423440
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-07-01
This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-12-01
This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
Planning and Scheduling for Fleets of Earth Observing Satellites
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)
2001-01-01
We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
NASA Astrophysics Data System (ADS)
Miclosina, C. O.; Balint, D. I.; Campian, C. V.; Frunzaverde, D.; Ion, I.
2012-11-01
This paper deals with the optimization of axial hydraulic turbines of the Kaplan type. The optimization of the runner blade is presented systematically from two points of view: hydrodynamic and constructive. Combining these aspects is attempted in order to gain safer operation when unsteady effects occur in the runner of the turbine. The design and optimization of the runner blade are performed with the QTurbo3D software developed at the Center for Research in Hydraulics, Automation and Thermal Processes (CCHAPT) of "Eftimie Murgu" University of Resita, Romania. The QTurbo3D software offers possibilities to design the meridian channel of hydraulic turbines, design the blades, and optimize the runner blade. 3D modeling and motion analysis of the runner blade operating mechanism are accomplished using SolidWorks software. The purpose of the motion study is to obtain the forces, torques or stresses in the runner blade operating mechanism, which are necessary to estimate its lifetime. This paper clearly states the importance of combining hydrodynamics with structural design in the optimization procedure for the runner of hydraulic turbines.
Active Learning to Understand Infectious Disease Models and Improve Policy Making
Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-01-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387
Active learning to understand infectious disease models and improve policy making.
Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-04-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha
2016-05-01
A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and shows the software performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values agree well with extensive experimental investigations and were found to be consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of the Willmott d-index (d_will = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
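For reference, the two agreement statistics quoted can be computed as below (the exact relative-error definition used by the authors may differ); the observed and predicted values are invented.

```python
import numpy as np

def willmott_d(observed, predicted):
    """Willmott's index of agreement: d = 1 - SSE / potential error."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    o_bar = o.mean()
    return 1.0 - np.sum((p - o) ** 2) / np.sum((np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)

def relative_error(observed, predicted):
    """Mean absolute relative error (one common definition; the paper's may differ)."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return np.mean(np.abs(p - o) / np.abs(o))

obs = np.array([52.0, 61.0, 70.0, 64.0, 58.0])   # e.g. measured permeate flux (illustrative)
pred = np.array([50.0, 63.0, 68.0, 66.0, 57.0])  # model-predicted values (illustrative)
print(f"d = {willmott_d(obs, pred):.3f}, RE = {relative_error(obs, pred):.3f}")
```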
Mathematical modeling for novel cancer drug discovery and development.
Zhang, Ping; Brusic, Vladimir
2014-10-01
Mathematical modeling enables the in silico classification of cancers, the prediction of disease outcomes, optimization of therapy, identification of promising drug targets and prediction of resistance to anticancer drugs. In silico pre-screened drug targets can be validated by a small number of carefully selected experiments. This review discusses the basics of mathematical modeling in cancer drug discovery and development. The topics include in silico discovery of novel molecular drug targets, optimization of immunotherapies, personalized medicine and guiding preclinical and clinical trials. Breast cancer has been used to demonstrate the applications of mathematical modeling in cancer diagnostics, the identification of high-risk populations, cancer screening strategies, prediction of tumor growth and guiding cancer treatment. Mathematical models are the key components of the toolkit used in the fight against cancer. The combinatorial complexity of new drug discovery is enormous, making systematic drug discovery by experimentation alone difficult, if not impossible. The biggest challenges include seamless integration of growing data, information and knowledge, and making them available for a multiplicity of analyses. Mathematical models are essential for bringing cancer drug discovery into the era of Omics, Big Data and personalized medicine.
Belciug, Smaranda; Gorunescu, Florin
2015-02-01
Scarce healthcare resources require carefully made policies ensuring optimal bed allocation, quality healthcare service, and adequate financial support. This paper proposes a complex analysis of the resource allocation in a hospital department by integrating in the same framework a queuing system, a compartmental model, and an evolutionary-based optimization. The queuing system shapes the flow of patients through the hospital, the compartmental model offers a feasible structure of the hospital department in accordance with the queuing characteristics, and the evolutionary paradigm provides the means to optimize the bed-occupancy management and the resource utilization using a genetic algorithm approach. The paper also focuses on a "What-if analysis" providing a flexible tool to explore the effects on the outcomes of the queuing system and resource utilization through systematic changes in the input parameters. The methodology was illustrated using a simulation based on real data collected from a geriatric department of a hospital in London, UK. In addition, the paper explores the possibility of adapting the methodology to different medical departments (surgery, stroke, and mental illness). Moreover, the paper also focuses on the practical use of the model from the healthcare point of view, by presenting a simulated application. Copyright © 2014 Elsevier Inc. All rights reserved.
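The paper's integrated queuing, compartmental and genetic-algorithm framework is not reproduced here; as a minimal illustration of the queuing ingredient alone, an M/M/s (Erlang C) calculation shows how the probability that an arriving patient must wait for a bed falls as the bed count grows, the kind of quantity such a framework trades off against resource cost. The arrival and service rates are invented.

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """P(wait > 0) in an M/M/s queue (Erlang C), e.g. admissions/day vs. beds."""
    a = arrival_rate / service_rate          # offered load in erlangs
    rho = a / servers                        # utilization, must be < 1 for stability
    if rho >= 1:
        return 1.0
    summation = sum(a ** k / factorial(k) for k in range(servers))
    top = a ** servers / (factorial(servers) * (1 - rho))
    return top / (summation + top)

arrival, service = 4.0, 0.1                  # 4 admissions/day, mean stay of 10 days
for beds in (45, 50, 55, 60):
    print(f"{beds} beds -> P(patient waits) = {erlang_c(arrival, service, beds):.3f}")
```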
The design and pre-clinical evaluation of knee replacements for osteoarthritis.
Walker, Peter S
2015-03-18
One of the concepts that Rik Huiskes promoted was that implants such as knee and hip replacements could be analyzed and optimized using numerical models such as finite element analysis, or by experimental testing, an area he called pre-clinical testing. The design itself could be formulated or improved by defining a specific goal or asking a key question. These propositions are examined in the light of almost five decades of experience with knee implants. The required laxity and stability were achieved by attempting to reproduce anatomical values through suitable radii of curvature and selective ligament retention. Obtaining durable fixation was based on testing many configurations to obtain the most uniform stress distribution at the implant-bone interface. Achieving the best overall kinematics has yet to be fully solved due to the variations in activities and patients. These and many other factors have usually been addressed individually rather than as a composite, although as time has gone on, successful features have gradually been assimilated into most designs. But even a systematic approach has been flawed because some unrecognized response was not accounted for in the pre-clinical model, a limitation of models in general. In terms of the design process, so far no method has emerged for systematically reaching an optimal solution from all aspects, although this is possible in principle. Overall, however, predictive numerical or physical models should be an essential element in the design of new or improved knee replacements, a part of the design process itself. Copyright © 2015. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Chen, Yi-Chieh; Li, Tsung-Han; Lin, Hung-Yu; Chen, Kao-Tun; Wu, Chun-Sheng; Lai, Ya-Chieh; Hurat, Philippe
2018-03-01
As process technology advances and integrated circuit (IC) design complexity increases, the failure rate caused by optical effects becomes higher in semiconductor manufacturing. In order to enhance chip quality, optical proximity correction (OPC) plays an indispensable role in the manufacturing industry. However, OPC, which includes model creation, correction, simulation and verification, is a bottleneck from design to manufacture due to the multiple iterations and the advanced mathematical description of physical behavior. Thus, this paper presents a pattern-based design technology co-optimization (PB-DTCO) flow that cooperates with OPC to find patterns that will negatively affect yield and fix them automatically in advance, reducing the run time of the OPC operation. The PB-DTCO flow can generate plenty of test patterns for model creation and yield gain, classify candidate patterns systematically, and quickly build up banks of paired matching and optimization patterns. Those banks can be used for hotspot fixing and layout optimization, and can also be referenced for the next technology node. Therefore, the combination of the PB-DTCO flow with OPC not only reduces time-to-market but is also flexible and can easily be adapted to diverse OPC flows.
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Optimal Elastomeric Scaffold Leaflet Shape for Pulmonary Heart Valve Leaflet Replacement
Fan, Rong; Bayoumi, Ahmed S.; Chen, Peter; Hobson, Christopher M.; Wagner, William R.; Mayer, John E.; Sacks, Michael S.
2012-01-01
Surgical replacement of the pulmonary valve (PV) is a common treatment option for congenital pulmonary valve defects. Engineered tissue approaches to develop novel PV replacements are intrinsically complex, and will require methodical approaches for their development. Single leaflet replacement utilizing an ovine model is an attractive approach in that candidate materials can be evaluated under valve level stresses in blood contact without the confounding effects of a particular valve design. In the present study an approach for optimal leaflet shape design based on finite element (FE) simulation of a mechanically anisotropic, elastomeric scaffold for PV replacement is presented. The scaffold was modeled as an orthotropic hyperelastic material using a generalized Fung-type constitutive model. The optimal shape of the fully loaded PV replacement leaflet was systematically determined by minimizing the difference between the deformed shape obtained from FE simulation and an ex-vivo microCT scan of a native ovine PV leaflet. Effects of material anisotropy, dimensional changes of PV root, and fiber orientation on the resulting leaflet deformation were investigated. In-situ validation demonstrated that the approach could guide the design of the leaflet shape for PV replacement surgery. PMID:23294966
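As an illustration of the constitutive ingredient only (not the fitted scaffold parameters), a generalized two-dimensional Fung-type strain energy of the form W = c/2 (exp(Q) - 1) and its membrane stresses can be coded as below; the material constants are placeholders.

```python
import numpy as np

def fung_energy(E11, E22, E12, c=10.0, A=(20.0, 5.0, 3.0, 2.0)):
    """2D generalized Fung-type strain energy W = c/2 * (exp(Q) - 1), with
    Q = A1*E11^2 + A2*E22^2 + 2*A3*E11*E22 + A4*E12^2 (illustrative constants)."""
    A1, A2, A3, A4 = A
    Q = A1 * E11**2 + A2 * E22**2 + 2.0 * A3 * E11 * E22 + A4 * E12**2
    return 0.5 * c * (np.exp(Q) - 1.0)

def second_pk_stress(E11, E22, E12, c=10.0, A=(20.0, 5.0, 3.0, 2.0)):
    """Second Piola-Kirchhoff stresses S_ij = dW/dE_ij (analytic derivatives)."""
    A1, A2, A3, A4 = A
    Q = A1 * E11**2 + A2 * E22**2 + 2.0 * A3 * E11 * E22 + A4 * E12**2
    common = 0.5 * c * np.exp(Q)
    S11 = common * (2.0 * A1 * E11 + 2.0 * A3 * E22)
    S22 = common * (2.0 * A2 * E22 + 2.0 * A3 * E11)
    S12 = common * (2.0 * A4 * E12)
    return S11, S22, S12

# Equibiaxial Green-Lagrange strain of 5% with no shear; with A1 > A2 the
# stresses differ between the two directions, reflecting material anisotropy.
print(fung_energy(0.05, 0.05, 0.0), second_pk_stress(0.05, 0.05, 0.0))
```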
The impact on midlevel vision of statistically optimal divisive normalization in V1
Coen-Cagli, Ruben; Schwartz, Odelia
2013-01-01
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950
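A minimal sketch of canonical divisive normalization as commonly written for model V1 units, with illustrative drives, pooling weights, exponent, and semisaturation constant; the statistically optimal, flexible variant studied in the paper additionally gates the surround pool and is not reproduced here.

```python
import numpy as np

def normalize(drives, weights, sigma=0.1, n=2.0):
    """Canonical divisive normalization:
    R_i = drive_i^n / (sigma^n + sum_j w_ij * drive_j^n)."""
    d = np.asarray(drives, float) ** n
    pool = sigma ** n + weights @ d     # row i of `weights` pools the surround of unit i
    return d / pool

# Five hypothetical V1 units; each unit is normalized by the mean of the others.
drives = np.array([0.2, 1.0, 0.8, 0.1, 0.5])
W = (np.ones((5, 5)) - np.eye(5)) / 4.0
print(np.round(normalize(drives, W), 3))
```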
Optimal atomic structure of amorphous silicon obtained from density functional theory calculations
NASA Astrophysics Data System (ADS)
Pedersen, Andreas; Pizzagalli, Laurent; Jónsson, Hannes
2017-06-01
Atomic structure of amorphous silicon consistent with several reported experimental measurements has been obtained from annealing simulations using electron density functional theory calculations and a systematic removal of weakly bound atoms. The excess energy and density with respect to the crystal are well reproduced in addition to radial distribution function, angular distribution functions, and vibrational density of states. No atom in the optimal configuration is locally in a crystalline environment as deduced by ring analysis and common neighbor analysis, but coordination defects are present at a level of 1%-2%. The simulated samples provide structural models of this archetypal disordered covalent material without preconceived notion of the atomic ordering or fitting to experimental data.
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Hu, Xiaosong; Moura, Scott; Yin, Xiaofeng; Pickert, Volker
2016-11-01
Energy management strategies are instrumental in the performance and economy of smart homes integrating renewable energy and energy storage. This article focuses on stochastic energy management of a smart home with PEV (plug-in electric vehicle) energy storage and photovoltaic (PV) array. It is motivated by the challenges associated with sustainable energy supplies and the local energy storage opportunity provided by vehicle electrification. This paper seeks to minimize a consumer's energy charges under a time-of-use tariff, while satisfying home power demand and PEV charging requirements, and accommodating the variability of solar power. First, the random-variable models are developed, including a Markov chain model of PEV mobility, as well as predictive models of home power demand and PV power supply. Second, a stochastic optimal control problem is mathematically formulated for managing the power flow among energy sources in the smart home. Finally, based on a time-varying electricity price, we systematically examine the performance of the proposed control strategy. As a result, the electricity cost is 493.6% less for a Tesla Model S with optimal stochastic dynamic programming (SDP) control relative to the case without optimal control, and 175.89% less for a Nissan Leaf.
Processing of angular motion and gravity information through an internal model.
Laurens, Jean; Straumann, Dominik; Hess, Bernhard J M
2010-09-01
The vestibular organs in the base of the skull provide important information about head orientation and motion in space. Previous studies have suggested that both angular velocity information from the semicircular canals and information about head orientation and translation from the otolith organs are centrally processed in an internal model of head motion, using the principles of optimal estimation. This concept has been successfully applied to model behavioral responses to classical vestibular motion paradigms. This study measured the dynamics of the vestibulo-ocular reflex (VOR) during postrotatory tilt, tilt during the optokinetic afternystagmus, and off-vertical axis rotation. The influence of the otolith signal on the VOR was systematically varied by using a series of tilt angles. We found that the time constants of responses varied almost identically as a function of gravity in these paradigms. We show that Bayesian modeling could predict the experimental results in an accurate and consistent manner. In contrast to other approaches, the Bayesian model also provides a plausible explanation of why these vestibulo-oculomotor responses occur as a consequence of an internal process of optimal motion estimation.
Optimal patient education for cancer pain: a systematic review and theory-based meta-analysis.
Marie, N; Luckett, T; Davidson, P M; Lovell, M; Lal, S
2013-12-01
Previous systematic reviews have found patient education to be moderately efficacious in decreasing the intensity of cancer pain, but variation in results warrants analysis aimed at identifying which strategies are optimal. A systematic review and meta-analysis was undertaken using a theory-based approach to classifying and comparing educational interventions for cancer pain. The reference lists of previous reviews and MEDLINE, PsycINFO, and CENTRAL were searched in May 2012. Studies had to be published in a peer-reviewed English language journal and compare the effect on cancer pain intensity of education with usual care. Meta-analyses used standardized effect sizes (ES) and a random effects model. Subgroup analyses compared intervention components categorized using the Michie et al. (Implement Sci 6:42, 2011) capability, opportunity, and motivation behavior (COM-B) model. Fifteen randomized controlled trials met the criteria. As expected, meta-analysis identified a small-to-moderate ES favoring education versus usual care (ES, -0.27 [-0.47, -0.07]; P = 0.007) with substantial heterogeneity (I² = 71%). Subgroup analyses based on the taxonomy found that interventions using "enablement" were efficacious (ES, -0.35 [-0.63, -0.08]; P = 0.01), whereas those lacking this component were not (ES, -0.18 [-0.46, 0.10]; P = 0.20). However, the subgroup effect was nonsignificant (P = 0.39), and heterogeneity was not reduced. Factoring in the variable of individualized versus non-individualized influenced neither efficacy nor heterogeneity. The current meta-analysis follows a trend in using theory to understand the mechanisms of complex interventions. We suggest that future efforts focus on interventions that target patient self-efficacy. Authors are encouraged to report comprehensive details of interventions and methods to inform synthesis, replication, and refinement.
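As a sketch of the pooling step only (not the authors' data), the DerSimonian-Laird random-effects calculation behind a pooled ES, its 95% confidence interval, and I² looks like the following; the study-level effects and variances are invented.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized effect sizes."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)             # heterogeneity statistic
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Illustrative study-level standardized mean differences and their variances.
es = [-0.10, -0.45, -0.30, -0.05, -0.60]
var = [0.02, 0.04, 0.03, 0.05, 0.06]
pooled, ci, i2 = random_effects_meta(es, var)
print(f"pooled ES = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```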
PSPICE Hybrid Modeling and Simulation of Capacitive Micro-Gyroscopes
Su, Yan; Tong, Xin; Liu, Nan; Han, Guowei; Si, Chaowei; Ning, Jin; Li, Zhaofeng; Yang, Fuhua
2018-01-01
With an aim to reduce the cost of prototype development, this paper establishes a PSPICE hybrid model for the simulation of capacitive microelectromechanical systems (MEMS) gyroscopes. This is achieved by modeling gyroscopes in different modules, then connecting them in accordance with the corresponding principle diagram. Systematic simulations of this model are implemented along with a consideration of details of MEMS gyroscopes, including a capacitance model without approximation, mechanical thermal noise, and the effect of ambient temperature. The temperature compensation scheme and optimization of interface circuits are achieved based on the hybrid closed-loop simulation of MEMS gyroscopes. The simulation results show that the final output voltage is proportional to the angular rate input, which verifies the validity of this model. PMID:29597284
Faria, Rita; Barbieri, Marco; Light, Kate; Elliott, Rachel A.; Sculpher, Mark
2014-01-01
Background This review scopes the evidence on the effectiveness and cost-effectiveness of interventions to improve suboptimal use of medicines in order to determine the evidence gaps and help inform research priorities. Sources of data Systematic searches of the National Health Service (NHS) Economic Evaluation Database, the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects. Areas of agreement The majority of the studies evaluated interventions to improve adherence, inappropriate prescribing and prescribing errors. Areas of controversy Interventions tend to be specific to a particular stage of the pathway and/or to a particular disease and have mostly been evaluated for their effect on intermediate or process outcomes. Growing points Medicines optimization offers an opportunity to improve health outcomes and efficiency of healthcare. Areas timely for developing research The available evidence is insufficient to assess the effectiveness and cost-effectiveness of interventions to address suboptimal medicine use in the UK NHS. Decision modelling, evidence synthesis and elicitation have the potential to address the evidence gaps and help prioritize research. PMID:25190760
Current advances in mathematical modeling of anti-cancer drug penetration into tumor tissues.
Kim, Munju; Gillies, Robert J; Rejniak, Katarzyna A
2013-11-18
Delivery of anti-cancer drugs to tumor tissues, including their interstitial transport and cellular uptake, is a complex process involving various biochemical, mechanical, and biophysical factors. Mathematical modeling provides a means through which to understand this complexity better, as well as to examine interactions between contributing components in a systematic way via computational simulations and quantitative analyses. In this review, we present the current state of mathematical modeling approaches that address phenomena related to drug delivery. We describe how various types of models were used to predict spatio-temporal distributions of drugs within the tumor tissue, to simulate different ways to overcome barriers to drug transport, or to optimize treatment schedules. Finally, we discuss how integration of mathematical modeling with experimental or clinical data can provide better tools to understand the drug delivery process, in particular to examine the specific tissue- or compound-related factors that limit drug penetration through tumors. Such tools will be important in designing new chemotherapy targets and optimal treatment strategies, as well as in developing non-invasive diagnosis to monitor treatment response and detect tumor recurrence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Nanjia; Dudnik, Alexander S.; Li, Ting I. N. G.
2015-12-31
ABSTRACT: The influence of the number-average molecular weight (Mn) on the blend film morphology and photovoltaic performance of all-polymer solar cells (APSCs) fabricated with the donor polymer poly[5-(2-hexyldodecyl)-1,3-thieno[3,4-c]pyrrole-4,6-dione-alt-5,5-(2,5-bis(3-dodecylthiophen-2-yl)-thiophene)] (PTPD3T) and acceptor polymer poly{[N,N'-bis(2-octyldodecyl)naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5'-(2,2'-bithiophene)} (P(NDI2OD-T2); N2200) is systematically investigated. The Mn effect analysis of both PTPD3T and N2200 is enabled by implementing a polymerization strategy which produces conjugated polymers with tunable Mns. Experimental and coarse-grain modeling results reveal that systematic Mn variation greatly influences both intrachain and interchain interactions and ultimately the degree of phase separation and morphology evolution. Specifically, increasing Mn for both polymers shrinks blend film domain sizes and enhances donor-acceptor polymer-polymer interfacial areas, affording increased short-circuit current densities (Jsc). However, the greater disorder and intermixed feature proliferation accompanying increasing Mn promotes charge carrier recombination, reducing cell fill factors (FF). The optimized photoactive layers exhibit well-balanced exciton dissociation and charge transport characteristics, ultimately providing solar cells with a 2-fold PCE enhancement versus devices with nonoptimal Mns. Overall, it is shown that proper and precise tuning of both donor and acceptor polymer Mns is critical for optimizing APSC performance. In contrast to reports where maximum power conversion efficiencies (PCEs) are achieved for the highest Mns, the present two-dimensional Mn optimization matrix strategy locates a PCE "sweet spot" at intermediate Mns of both donor and acceptor polymers. This study provides synthetic methodologies to predictably access conjugated polymers with desired Mn and highlights the importance of optimizing Mn for both polymer components to realize the full potential of APSC performance.
NASA Astrophysics Data System (ADS)
Utama, D. N.; Triana, Y. S.; Iqbal, M. M.; Iksal, M.; Fikri, I.; Dharmawan, T.
2018-03-01
For Muslims, a mosque is not only a place for daily worship but also a center of culture. It is an important and valuable building that must be well managed. For a responsible department or institution (such as the Religion or Planning Department in Indonesia), managing a large number of mosques in practice is not a simple task. The challenge lies in the volume and characteristics of the data to be handled. Specifically, for renovation and rehabilitation, deciding which damaged mosque should be given first priority is problematic. Using two types of optimization method, simulated annealing and hill climbing, a decision support model for mosque renovation and rehabilitation was systematically constructed. Fuzzy logic was also used to establish the priority of eleven selected parameters. The constructed model is able to compare the efficiency of the two optimization methods and to suggest the most objective decision from 196 generated alternatives.
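The decision model itself is not reproduced here; the sketch below contrasts hill climbing with simulated annealing on a toy, multimodal priority-score objective to illustrate why the two optimizers can return different answers. All scores and parameters are hypothetical, not the paper's fuzzy-weighted criteria.

```python
import math
import random

random.seed(5)

# Toy objective over 100 candidate decisions (e.g. priority scores with many
# local optima); the values are invented.
score = [math.sin(0.3 * i) + 0.5 * math.sin(1.7 * i) + i / 200.0 for i in range(100)]

def neighbors(i):
    return [j for j in (i - 1, i + 1) if 0 <= j < 100]

def hill_climb(start):
    i = start
    while True:
        better = [j for j in neighbors(i) if score[j] > score[i]]
        if not better:
            return i                       # stuck at a local optimum
        i = max(better, key=lambda j: score[j])

def simulated_annealing(start, t0=2.0, cooling=0.995, steps=5000):
    i, t = start, t0
    for _ in range(steps):
        j = random.choice(neighbors(i))
        delta = score[j] - score[i]
        if delta > 0 or random.random() < math.exp(delta / t):
            i = j                          # accept worse moves with prob. exp(delta/t)
        t *= cooling
    return i

start = 10
hc, sa = hill_climb(start), simulated_annealing(start)
print(f"hill climbing  -> index {hc}, score {score[hc]:.3f}")
print(f"sim. annealing -> index {sa}, score {score[sa]:.3f}")
```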
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
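A minimal sketch of the correction idea described above, with synthetic numbers rather than GFS output: the mean 6-hour analysis increment is converted to a bias tendency and added as an extra forcing term during the forecast. Array sizes and increment values are illustrative assumptions.

```python
# Minimal sketch with synthetic numbers (not GFS code): estimate a bias tendency
# from time-averaged 6-hour analysis increments and add it as an extra forcing
# term in the model tendency during the forecast.
import numpy as np

np.random.seed(0)
n_cycles, nlat, nlon = 120, 10, 20                               # illustrative sizes
increments = 0.3 + np.random.randn(n_cycles, nlat, nlon)         # synthetic 6-h analysis increments

dt_hours = 6.0
bias_tendency = increments.mean(axis=0) / dt_hours               # mean correction per hour

def corrected_tendency(model_tendency, online=True):
    """Add the estimated bias tendency to the model's own tendency."""
    return model_tendency + bias_tendency if online else model_tendency

state = np.zeros((nlat, nlon))
for _ in range(8):                                               # a 48-h forecast in 6-h steps
    physics_dynamics = np.zeros_like(state)                      # placeholder for the real tendencies
    state += dt_hours * corrected_tendency(physics_dynamics)

print("mean accumulated correction after 48 h:", round(float(state.mean()), 3))
```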
Effects of a vertical magnetic field on particle confinement in a magnetized plasma torus.
Müller, S H; Fasoli, A; Labit, B; McGrath, M; Podestà, M; Poli, F M
2004-10-15
The particle confinement in a magnetized plasma torus with superimposed vertical magnetic field is modeled and measured experimentally. The formation of an equilibrium characterized by a parallel plasma current canceling out the grad B and curvature drifts is described using a two-fluid model. Characteristic response frequencies and relaxation rates are calculated. The predictions for the particle confinement time as a function of the vertical magnetic field are verified in a systematic experimental study on the TORPEX device, including the existence of an optimal vertical field and the anticorrelation between confinement time and density.
Guideline validation in multiple trauma care through business process modeling.
Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen
2003-07-01
Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed the structure with respect to formal errors. Several errors were detected in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process modeling tools, which check the content in comparison to a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure sustainability of guideline development, a representation independent of specific applications or providers is necessary. Clinical guidelines could then additionally be used for eLearning, process optimization and workflow management.
Immortalized endothelial cell lines for in vitro blood-brain barrier models: A systematic review.
Rahman, Nurul Adhwa; Rasil, Alifah Nur'ain Haji Mat; Meyding-Lamade, Uta; Craemer, Eva Maria; Diah, Suwarni; Tuah, Ani Afiqah; Muharram, Siti Hanna
2016-07-01
Endothelial cells play the most important role in construction of the blood-brain barrier. Many studies have opted to use commercially available, easily transfected or immortalized endothelial cell lines as in vitro blood-brain barrier models. Numerous endothelial cell lines are available, but we do not currently have strong evidence for which cell lines are optimal for establishment of such models. This review aimed to investigate the application of immortalized endothelial cell lines as in vitro blood-brain barrier models. The databases used for this review were PubMed, OVID MEDLINE, ProQuest, ScienceDirect, and SpringerLink. A narrative systematic review was conducted and identified 155 studies. As a result, 36 immortalized endothelial cell lines of human, mouse, rat, porcine and bovine origins were found for the establishment of in vitro blood-brain barrier and brain endothelium models. This review provides a summary of immortalized endothelial cell lines as a guideline for future studies and improvements in the establishment of in vitro blood-brain barrier models. It is important to establish a good and reproducible model that has the potential for multiple applications, in particular a model of a compartment as complex as the blood-brain barrier. Copyright © 2016 Elsevier B.V. All rights reserved.
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examine evolved models, and pick out the best-performing programs for further analysis.
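The sketch below illustrates two ingredients named above, the moving-average pre-processing of the streamflow series and the Pareto-front selection over (complexity, error) pairs; it is not the authors' MGGP code, and the candidate models and their scores are invented.

```python
# Illustration of two ingredients named above (not the authors' MGGP code): a
# simple moving-average filter for the daily flow series and a Pareto-front
# extraction over hypothetical (complexity, error) pairs for candidate models.
import numpy as np

def moving_average(q, window=3):
    """Trailing moving average used to smooth the lagged-prediction effect."""
    kernel = np.ones(window) / window
    return np.convolve(q, kernel, mode="valid")

def pareto_front(models):
    """Keep models not dominated in both complexity and error (both minimized)."""
    front = []
    for c, e, name in models:
        dominated = any(c2 <= c and e2 <= e and (c2, e2) != (c, e) for c2, e2, _ in models)
        if not dominated:
            front.append((c, e, name))
    return front

flow = np.array([12.0, 15.0, 40.0, 22.0, 18.0, 16.0, 30.0])       # synthetic daily streamflow
print(moving_average(flow, 3))

candidates = [(5, 0.30, "GP"), (12, 0.22, "MGGP"), (9, 0.21, "MA-MGGP"), (20, 0.20, "MGGP-large")]
print(pareto_front(candidates))
```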
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in detection of remote target-template similarities. The size of the structure involved in the alignment is important for the success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for structure alignment-based docking applications. The results showed that structural areas corresponding to the cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher-accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for the large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for the docking approaches, including high-throughput applications to modeled structures.
Rousset, Nassim; Monet, Frédéric; Gervais, Thomas
2017-03-21
This work focuses on modelling the design and operation of "microfluidic sample traps" (MSTs). MSTs regroup a widely used class of microdevices that incorporate wells, recesses or chambers adjacent to a channel to individually trap, culture and/or release submicroliter 3D tissue samples ranging from simple cell aggregates and spheroids, to ex vivo tissue samples and other submillimetre-scale tissue models. Numerous MST designs employing various trapping mechanisms have been proposed in the literature, spurring the development of 3D tissue models for drug discovery and personalized medicine. Yet a general framework to optimize trapping stability, trapping time, shear stress, and sample metabolism is lacking. Herein, the effects of hydrodynamics and diffusion-reaction on tissue viability and device operation are investigated using analytical and finite element methods with systematic parametric sweeps over independent design variables chosen to correspond to the four design degrees of freedom. Combining different results, we show that, for a spherical tissue of diameter d < 500 μm, the simplest, closest-to-optimal trap shape is a cube of dimensions w equal to twice the tissue diameter: w = 2d. Furthermore, to sustain tissues without perfusion, the available medium volume per trap needs to be 100× the tissue volume to ensure optimal metabolism for at least 24 hours.
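Applying the two design rules quoted above (w = 2d, and roughly 100 times the tissue volume of medium per trap) to an example spheroid gives the numbers below; the 400 μm diameter is an assumed example, not a value from the paper.

```python
# Applying the two design rules quoted above to an example spheroid.
# The 400 um diameter is an assumed example.
import math

d = 400e-6                          # spheroid diameter, m (assumed)
w = 2 * d                           # near-optimal cubic trap edge length (w = 2d)
v_tissue = math.pi * d**3 / 6.0     # spheroid volume, m^3
v_medium = 100 * v_tissue           # medium per trap for ~24 h without perfusion

print(f"trap edge w            : {w * 1e6:.0f} um")
print(f"tissue volume          : {v_tissue * 1e9:.3f} uL")   # 1 m^3 = 1e9 uL
print(f"medium volume per trap : {v_medium * 1e9:.2f} uL")
```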
Physiome-model-based state-space framework for cardiac deformation recovery.
Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng
2007-11-01
To more reliably recover cardiac information from noise-corrupted, patient-specific measurements, it is essential to employ meaningful constraining models and adopt appropriate optimization criteria to couple the models with the measurements. Although biomechanical models have been extensively used for myocardial motion recovery with encouraging results, the passive nature of such constraints limits their ability to fully account for the deformation caused by active forces of the myocytes. To overcome such limitations, we propose to adopt a cardiac physiome model as the prior constraint for cardiac motion analysis. The cardiac physiome model comprises an electric wave propagation model, an electromechanical coupling model, and a biomechanical model, which are connected through a cardiac system dynamics for a more complete description of the macroscopic cardiac physiology. Embedded within a multiframe state-space framework, the uncertainties of the model and the patient's measurements are systematically dealt with to arrive at optimal cardiac kinematic estimates and possibly beyond. Experiments have been conducted to compare our proposed cardiac-physiome-model-based framework with the solely biomechanical-model-based framework. The results show that our proposed framework recovers more accurate cardiac deformation from synthetic data and obtains more sensible estimates from real magnetic resonance image sequences. With the active components introduced by the cardiac physiome model, cardiac deformations recovered from patient's medical images are more physiologically plausible.
NASA Astrophysics Data System (ADS)
Riccio, A.; Giunta, G.; Galmarini, S.
2007-04-01
In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
Optimal strategies to consider when peer reviewing a systematic review and meta-analysis.
Moher, David
2015-11-02
Systematic reviews are popular. A recent estimate indicates that 11 new systematic reviews are published daily. Nevertheless, evidence indicates that the quality of reporting of systematic reviews is not optimal. One likely reason is that the authors' reports have received inadequate peer review. There are now many different types of systematic reviews, and peer reviewing them can be enhanced by using a reporting guideline to supplement whatever template the journal editors have asked you, as a peer reviewer, to use. Additionally, keeping up with the current literature, whether as a content expert or by staying aware of advances in systematic review methods, is likely to make for a more comprehensive and effective peer review. Providing a brief summary of what the systematic review has reported is an important first step in the peer review process (and not performed frequently enough). At its core, it provides the authors with some sense of what the peer reviewer believes was performed (Methods) and found (Results). Importantly, it also provides clarity regarding any potential problems in the methods, including statistical approaches for meta-analysis, results, and interpretation of the systematic review, for which the peer reviewer can seek explanations from the authors; these clarifications are best presented as questions to the authors.
Delnoy, Peter Paul; Ritter, Philippe; Naegele, Herbert; Orazi, Serafino; Szwed, Hanna; Zupan, Igor; Goscinska-Bis, Kinga; Anselme, Frederic; Martino, Maria; Padeletti, Luigi
2013-08-01
The long-term clinical value of the optimization of atrioventricular (AVD) and interventricular (VVD) delays in cardiac resynchronization therapy (CRT) remains controversial. We studied retrospectively the association between the frequency of AVD and VVD optimization and 1-year clinical outcomes in the 199 CRT patients who completed the Clinical Evaluation on Advanced Resynchronization study. From the 199 patients assigned to CRT-pacemaker (CRT-P) (New York Heart Association, NYHA, class III/IV, left ventricular ejection fraction <35%), two groups were composed retrospectively on the basis of the frequency of their AVD and VVD optimization: Group 1 (n = 66) was composed of patients 'systematically' optimized at implant, at 3 and 6 months; Group 2 (n = 133) was composed of all other patients optimized 'non-systematically' (less than three times) during the 1 year study. The primary endpoint was a composite of all-cause mortality, heart failure-related hospitalization, NYHA functional class, and Quality of Life score, at 1 year. Systematic CRT optimization was associated with a higher percentage of improved patients based on the composite endpoint (85% in Group 1 vs. 61% in Group 2, P < 0.001), with fewer deaths (3% in Group 1 vs. 14% in Group 2, P = 0.014) and fewer hospitalizations (8% in Group 1 vs. 23% in Group 2, P = 0.007), at 1 year. These results further suggest that frequent AVD and VVD optimization (at implant, at 3 and 6 months) is associated with improved long-term clinical response in CRT-P patients.
Mitigating Provider Uncertainty in Service Provision Contracts
NASA Astrophysics Data System (ADS)
Smith, Chris; van Moorsel, Aad
Uncertainty is an inherent property of open, distributed and multiparty systems. The viability of the mutually beneficial relationships which motivate these systems relies on rational decision-making by each constituent party under uncertainty. Service provision in distributed systems is one such relationship. Uncertainty is experienced by the service provider in his ability to deliver a service with selected quality level guarantees due to inherent non-determinism, such as load fluctuations and hardware failures. Statistical estimators utilized to model this non-determinism introduce additional uncertainty through sampling error. Inability of the provider to accurately model and analyze uncertainty in the quality level guarantees can result in the formation of sub-optimal service provision contracts. Emblematic consequences include loss of revenue, inefficient resource utilization and erosion of reputation and consumer trust. We propose a utility model for contract-based service provision to provide a systematic approach to optimal service provision contract formation under uncertainty. Performance prediction methods to enable the derivation of statistical estimators for quality level are introduced, with analysis of their resultant accuracy and cost.
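A minimal sketch of the sampling-error point made above: the probability of meeting a quality-level guarantee is estimated from a finite sample of service times and reported with a normal-approximation confidence interval, which a provider could fold into contract formation. The threshold and the synthetic data are assumptions.

```python
# Sketch of the sampling-error point: estimate the probability of meeting a
# response-time guarantee from a finite sample and attach a normal-approximation
# confidence interval. The threshold and the synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(3)
response_times = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # observed service times, s (synthetic)
threshold = 2.0                                                 # contractual quality-level guarantee, s

p_hat = float(np.mean(response_times <= threshold))
stderr = np.sqrt(p_hat * (1.0 - p_hat) / len(response_times))
print(f"estimated compliance probability: {p_hat:.3f} +/- {1.96 * stderr:.3f} (95% CI)")
```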
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
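The sketch below mimics the search strategy described above using SciPy's dual_annealing in place of the authors' implementation; the quadratic "regression surrogate" standing in for the speech-distortion objective and the parameter bounds are purely illustrative.

```python
# Sketch of the parameter search (not the paper's code): optimize the two
# recursion/smoothing parameters with simulated annealing via SciPy's
# dual_annealing. The surrogate objective and bounds are purely illustrative.
import numpy as np
from scipy.optimize import dual_annealing

def surrogate_cost(params):
    """Hypothetical regression model of output distortion vs. the two smoothing factors."""
    a, b = params
    return (a - 0.92) ** 2 + 2.0 * (b - 0.98) ** 2 + 0.1 * np.sin(25.0 * a) ** 2

bounds = [(0.5, 0.999), (0.5, 0.999)]          # both recursion parameters assumed to lie in (0, 1)
result = dual_annealing(surrogate_cost, bounds, seed=0, maxiter=500)
print("optimal smoothing parameters:", np.round(result.x, 3), " cost:", round(result.fun, 4))
```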
Autonomous Energy Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey
With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real-time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.
Systematic parameter inference in stochastic mesoscopic modeling
NASA Astrophysics Data System (ADS)
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulations cannot be derived from the microscopic level in a straightforward way.
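As a rough stand-in for the sparse gPC fit described above, the sketch below fits a polynomial response surface for a target property as a function of two force-field parameters using an L1-penalized regression (scikit-learn's Lasso), which keeps only the dominant terms. The synthetic "simulation" data, parameter names and coefficients are assumptions, not values from the paper.

```python
# Stand-in for the sparse gPC fit (not the authors' code): fit a polynomial
# response surface for a target property from a few sampled "simulations",
# with an L1 penalty keeping only the dominant terms.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
theta = rng.uniform(-1, 1, size=(40, 2))                       # sampled, scaled (gamma, cutoff) pairs
viscosity = 1.0 + 0.8 * theta[:, 0] - 0.3 * theta[:, 1] ** 2   # synthetic simulation results
viscosity = viscosity + 0.01 * rng.standard_normal(40)

basis = PolynomialFeatures(degree=4, include_bias=True)        # stands in for a gPC basis
X = basis.fit_transform(theta)
model = Lasso(alpha=1e-3, max_iter=50000).fit(X, viscosity)

kept = [(name, round(c, 3))
        for name, c in zip(basis.get_feature_names_out(), model.coef_)
        if abs(c) > 1e-3]
print("dominant terms:", kept)
```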
Cooney, Lewis; Loke, Yoon K; Golder, Su; Kirkham, Jamie; Jorgensen, Andrea; Sinha, Ian; Hawcutt, Daniel
2017-06-02
Many medicines are dosed to achieve a particular therapeutic range, and monitored using therapeutic drug monitoring (TDM). The evidence base for a therapeutic range can be evaluated using systematic reviews, to ensure it continues to reflect current indications, doses, routes and formulations, as well as updated adverse effect data. There is no consensus on the optimal methodology for systematic reviews of therapeutic ranges. An overview of systematic reviews of therapeutic ranges was undertaken. The following databases were used: Cochrane Database of Systematic Reviews (CDSR), Database of Abstracts and Reviews of Effects (DARE) and MEDLINE. The published methodologies used when systematically reviewing the therapeutic range of a drug were analyzed. Step by step recommendations to optimize such systematic reviews are proposed. Ten systematic reviews that investigated the correlation between serum concentrations and clinical outcomes encompassing a variety of medicines and indications were assessed. There were significant variations in the methodologies used (including the search terms used, data extraction methods, assessment of bias, and statistical analyses undertaken). Therapeutic ranges should be population and indication specific and based on clinically relevant outcomes. Recommendations for future systematic reviews based on these findings have been developed. Evidence based therapeutic ranges have the potential to improve TDM practice. Current systematic reviews investigating therapeutic ranges have highly variable methodologies and there is no consensus of best practice when undertaking systematic reviews in this field. These recommendations meet a need not addressed by standard protocols.
NASA Technical Reports Server (NTRS)
Volk, Tyler
1992-01-01
The goal of this research is to develop a progressive series of mathematical models for the CELSS hydroponic crops. These models will systematize the experimental findings from the crop researchers in the CELSS Program into a form useful to investigate system-level considerations, for example, dynamic studies of the CELSS Initial Reference Configurations. The crop models will organize data from different crops into a common modeling framework. This is the fifth semiannual report for this project. The following topics are discussed: (1) use of field crop models to explore phasic control of CELSS crops for optimizing yield; (2) seminar presented at Purdue CELSS NSCORT; and (3) paper submitted on analysis of bioprocessing of inedible plant materials.
Optimization of entanglement witnesses
NASA Astrophysics Data System (ADS)
Lewenstein, M.; Kraus, B.; Cirac, J. I.; Horodecki, P.
2000-11-01
An entanglement witness (EW) is an operator that allows the detection of entangled states. We give necessary and sufficient conditions for such operators to be optimal, i.e., to detect entangled states in an optimal way. We show how to optimize general EW, and then we particularize our results to the nondecomposable ones; the latter are those that can detect positive partial transpose entangled states (PPTES's). We also present a method to systematically construct and optimize this last class of operators based on the existence of "edge" PPTES's, i.e., states that violate the range separability criterion [Phys. Lett. A 232, 333 (1997)] in an extreme manner. This method also permits a systematic construction of nondecomposable positive maps (PM's). Our results lead to a sufficient condition for entanglement in terms of nondecomposable EW's and PM's. Finally, we illustrate our results by constructing optimal EW acting on H = C^2 ⊗ C^4. The corresponding PM's constitute examples of PM's with minimal "qubit" domains, or, equivalently, minimal Hermitian conjugate codomains.
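For a concrete (and much simpler) numerical illustration of how a witness detects entanglement, the sketch below evaluates the standard two-qubit witness W = I/2 - |Φ+⟩⟨Φ+| on an entangled and a separable state; this is textbook material, not the C^2 ⊗ C^4 construction of the paper.

```python
# Textbook 2x2 illustration (not the C^2 (x) C^4 construction of the paper): the
# witness W = I/2 - |Phi+><Phi+| satisfies Tr(W rho) >= 0 on separable states and
# is negative on the Bell state it is built from.
import numpy as np

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)   # (|00> + |11>)/sqrt(2)
W = np.eye(4) / 2.0 - np.outer(phi_plus, phi_plus)

rho_bell = np.outer(phi_plus, phi_plus)                     # entangled state
rho_mixed = np.eye(4) / 4.0                                 # separable (maximally mixed) state

print("Tr(W rho_bell)  =", round(float(np.trace(W @ rho_bell)), 3))   # -0.5 -> entanglement detected
print("Tr(W rho_mixed) =", round(float(np.trace(W @ rho_mixed)), 3))  #  0.25 -> no detection
```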
NASA Astrophysics Data System (ADS)
Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich
2007-03-01
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts:
1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules. A 10% to 15% area reduction is achieved by using more aggressive design rules. DFM/recommended design rules are used only if there is no impact on cell size.
2. Identify and fix hotspots using a model-based layout printability checker. Model-based litho and etch simulation are done at the cell level to identify hotspots. Violations of recommended rules may cause additional hotspots, which are then fixed. The resulting design is ready for step 3.
3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect. Contours of diffusion, poly and metal layers are used for parametric analysis.
In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology. Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.
Bio-inspired "jigsaw"-like interlocking sutures: Modeling, optimization, 3D printing and testing
NASA Astrophysics Data System (ADS)
Malik, I. A.; Mirkhalaf, M.; Barthelat, F.
2017-05-01
Structural biological materials such as bone, teeth or mollusk shells draw their remarkable performance from a sophisticated interplay of architectures and weak interfaces. Pushed to the extreme, this concept leads to sutured materials, which contain thin lines with complex geometries. Sutured materials are prominent in nature, and have recently served as bioinspiration for toughened ceramics and glasses. Sutures can generate large deformations, toughness and damping in otherwise all brittle systems and materials. In this study we examine the design and optimization of sutures with a jigsaw puzzle-like geometry, focusing on the non-linear traction behavior generated by the frictional pullout of the jigsaw tabs. We present analytical models which accurately predict the entire pullout response. Pullout strength and energy absorption increase with higher interlocking angles and for higher coefficients of friction, but the associated high stresses in the solid may fracture the tabs. Systematic optimization reveals a counter-intuitive result: the best pullout performance is achieved with interfaces with low coefficient of friction and high interlocking angle. We finally use 3D printing and mechanical testing to verify the accuracy of the models and of the optimization. The models and guidelines we present here can be extended to other types of geometries and sutured materials subjected to other loading/boundary conditions. The nonlinear responses of sutures are particularly attractive to augment the properties and functionalities of inherently brittle materials such as ceramics and glasses.
Thanki, Kaushik; Zeng, Xianghui; Justesen, Sarah; Tejlmann, Sarah; Falkenberg, Emily; Van Driessche, Elize; Mørck Nielsen, Hanne; Franzyk, Henrik; Foged, Camilla
2017-11-01
Safety and efficacy of therapeutics based on RNA interference, e.g., small interfering RNA (siRNA), are dependent on the optimal engineering of the delivery technology, which is used for intracellular delivery of siRNA to the cytosol of target cells. We investigated the hypothesis that commonly used and poorly tolerated cationic lipids might be replaced with more efficacious and safe lipidoids as the lipid component of siRNA-loaded lipid-polymer hybrid nanoparticles (LPNs) for achieving more efficient gene silencing at lower and safer doses. However, the design of such a complex formulation is highly challenging due to a strong interplay between several contributing factors. Hence, critical formulation variables, i.e. the lipidoid content and siRNA:lipidoid ratio, were initially identified, followed by a systematic quality-by-design approach to define the optimal operating space (OOS), eventually resulting in the identification of a robust, highly efficacious and safe formulation. A 17-run design of experiment with an I-optimal approach was performed to systematically assess the effect of selected variables on critical quality attributes (CQAs), i.e. physicochemical properties (hydrodynamic size, zeta potential, siRNA encapsulation/loading) and the biological performance (in vitro gene silencing and cell viability). Model fitting of the obtained data to construct predictive models revealed non-linear relationships for all CQAs, which can be readily overlooked in one-factor-at-a-time optimization approaches. The response surface methodology further enabled the identification of an OOS that met the desired quality target product profile. The optimized lipidoid-modified LPNs revealed more than 50-fold higher in vitro gene silencing at well-tolerated doses and approx. a twofold increase in siRNA loading as compared to reference LPNs modified with the commonly used cationic lipid dioleyltrimethylammonium propane (DOTAP). Thus, lipidoid-modified LPNs show highly promising prospects for efficient and safe intracellular delivery of siRNA. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimization and performance of bifacial solar modules: A global perspective
Sun, Xingshu; Khan, Mohammad Ryyan; Deline, Chris; ...
2018-02-06
With the rapidly growing interest in bifacial photovoltaics (PV), a worldwide map of their potential performance can help assess and accelerate the global deployment of this emerging technology. However, the existing literature only highlights optimized bifacial PV for a few geographic locations or develops worldwide performance maps for very specific configurations, such as the vertical installation. It is still difficult to translate these location- and configuration-specific conclusions to a general optimized performance of this technology. In this paper, we present a global study and optimization of bifacial solar modules using a rigorous and comprehensive modeling framework. Our results demonstrate that with a low albedo of 0.25, the bifacial gain of ground-mounted bifacial modules is less than 10% worldwide. However, increasing the albedo to 0.5 and elevating modules 1 m above the ground can boost the bifacial gain to 30%. Moreover, we derive a set of empirical design rules, which optimize bifacial solar modules across the world and provide the groundwork for rapid assessment of the location-specific performance. We find that ground-mounted, vertical, east-west-facing bifacial modules will outperform their south-north-facing, optimally tilted counterparts by up to 15% below the latitude of 30 degrees, for an albedo of 0.5. The relative energy output is reversed in latitudes above 30 degrees. A detailed and systematic comparison with data from Asia, Africa, Europe, and North America validates the model presented in this paper.
Mincarone, Pierpaolo; Leo, Carlo Giacomo; Trujillo-Martín, Maria Del Mar; Manson, Jan; Guarino, Roberto; Ponzini, Giuseppe; Sabina, Saverio
2018-04-01
The importance of working toward quality improvement in healthcare implies an increasing interest in analysing, understanding and optimizing process logic and the sequences of activities embedded in healthcare processes. Their graphical representation promotes faster learning, higher retention and better compliance. The study identifies standardized graphical languages and notations applied to patient care processes and investigates their usefulness in the healthcare setting. Peer-reviewed literature up to 19 May 2016. Information complemented by a questionnaire sent to the authors of selected studies. Systematic review conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Five authors extracted results of selected studies. Ten articles met the inclusion criteria. One notation and one modelling language applied to patient care processes were identified: Business Process Model and Notation and Unified Modeling Language™. One of the authors of every selected study completed the questionnaire. Users' comprehensibility and facilitation of inter-professional analysis of processes have been recognized, in the completed questionnaires, as major strengths of process modelling in healthcare. Both the notation and the language could increase the clarity of presentation thanks to their visual properties, the capacity of easily managing macro and micro scenarios, and the possibility of clearly and precisely representing the process logic. Both could increase guideline/pathway applicability by representing complex scenarios through charts and algorithms, hence contributing to reducing unjustified practice variations which negatively impact quality of care and patient safety.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
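A crude Monte Carlo stand-in for the set-sizing idea described above (the paper treats it rigorously): for a hypothetical requirement g(p) <= 0 and an assumed nominal parameter value, bisection finds the largest hyper-sphere radius whose sampled worst case still satisfies the constraint.

```python
# Crude Monte Carlo stand-in for sizing the largest admissible uncertainty set
# (the paper treats this rigorously). The requirement g(p) <= 0 and the nominal
# parameter value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
p_nominal = np.array([1.0, 2.0])

def g(p):
    """Hard design requirement: must be <= 0 for every admissible parameter p."""
    return p[0] ** 2 + 0.5 * p[1] - 2.5

def worst_case(radius, n_samples=20000):
    u = rng.standard_normal((n_samples, 2))
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # directions on the unit sphere
    return g((p_nominal + radius * u).T).max()        # sampled worst case on the ball boundary

lo, hi = 0.0, 5.0                                     # bisection on the admissible radius
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if worst_case(mid) <= 0.0 else (lo, mid)

print(f"largest hyper-sphere radius meeting the requirement: about {lo:.3f}")
```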
NASA Astrophysics Data System (ADS)
Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.
2012-12-01
Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of a half ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change in switching to a newer version of the TCCON retrieval software.
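A minimal sketch of the bias-correction step described above, with synthetic numbers rather than real retrievals: the retrieval-minus-model differences are regressed on a few retrieval diagnostics (aerosol amount, surface albedo, a column-mass correction) and the fitted part is subtracted from the retrievals. Variable names and magnitudes are illustrative assumptions.

```python
# Minimal bias-correction sketch with synthetic numbers (not the ACOS/NIES
# processing): regress retrieval-minus-model XCO2 differences on a few retrieval
# diagnostics and subtract the fitted part from the retrievals.
import numpy as np

rng = np.random.default_rng(1)
n = 500
aerosol = rng.uniform(0.0, 1.0, n)
albedo = rng.uniform(0.1, 0.6, n)
dpsurf = rng.normal(0.0, 1.0, n)                         # column-mass (surface pressure) correction
xco2_model = 395.0 + rng.normal(0.0, 0.5, n)             # prior-model column CO2, ppm
xco2_gosat = (xco2_model + 1.5 * aerosol - 0.8 * albedo
              + 0.3 * dpsurf + rng.normal(0.0, 0.4, n))  # synthetic "retrievals" with bias

X = np.column_stack([np.ones(n), aerosol, albedo, dpsurf])
coeffs, *_ = np.linalg.lstsq(X, xco2_gosat - xco2_model, rcond=None)
xco2_corrected = xco2_gosat - X @ coeffs

print("fitted bias coefficients:", np.round(coeffs, 2))
print("residual std after correction:", round(float(np.std(xco2_corrected - xco2_model)), 2), "ppm")
```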
Exponential H∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm
Hsiao, Feng-Hsiag
2015-01-01
This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432
Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P
2016-03-01
Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was considered to be reached when each voxel of tissue was within 1 mm of the ground-truth deformation. The authors' analyses showed that a ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of the fast reconstruction of breast tissue elasticity using supine/prone patient postures.
Model-based setup assistant for progressive tools
NASA Astrophysics Data System (ADS)
Springer, Robert; Gräler, Manuel; Homberg, Werner; Henke, Christian; Trächtler, Ansgar
2018-05-01
In the field of production systems, globalization and technological progress lead to increasing requirements regarding part quality, delivery time and costs. Hence, today's production is challenged much more than a few years ago: it has to be very flexible and produce economically small batch sizes to satisfy consumers' demands and avoid unnecessary stock. Furthermore, a trend towards increasing functional integration continues to lead to an ongoing miniaturization of sheet metal components. In the electric connectivity industry, for example, the miniaturized connectors are manufactured by progressive tools, which are usually used for very large batches. These tools are installed in mechanical presses and then set up by a technician, who has to manually adjust a wide range of punch-bending operations. Disturbances like material thickness, temperatures, lubrication or tool wear complicate the setup procedure. In view of the increasing demand for production flexibility, this time-consuming process has to be handled more and more often. In this paper, a new approach for a model-based setup assistant is proposed as a solution, which is exemplarily applied in combination with a progressive tool. First, progressive tools and, more specifically, their setup process are described, and on that basis the challenges are pointed out. As a result, a systematic process to set up the machines is introduced. Subsequently, the process is investigated with an FE analysis regarding the effects of the disturbances. In the next step, design of experiments is used to systematically develop a regression model of the system's behaviour. This model is integrated into an optimization in order to calculate optimal machine parameters and the adjustments of the progressive tool required to compensate for the disturbances. Finally, the assistant is tested in a production environment and the results are discussed.
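The sketch below illustrates the "regression model plus optimization" step in isolation, not the authors' FE-based assistant: a second-order response surface is fitted to synthetic design-of-experiments data over two coded press settings and then minimized to recommend a setting. The settings, the response and all coefficients are assumptions.

```python
# Sketch of the "regression model + optimization" step with synthetic DoE data,
# not the authors' FE-based assistant.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
settings = rng.uniform(-1, 1, size=(25, 2))                        # coded DoE levels (e.g. stroke, force)
deviation = (0.4 + 0.3 * settings[:, 0] ** 2
             + 0.2 * (settings[:, 1] - 0.3) ** 2
             - 0.1 * settings[:, 0]
             + 0.02 * rng.standard_normal(25))                     # synthetic measured response

def design_matrix(s):
    x1, x2 = s[:, 0], s[:, 1]
    return np.column_stack([np.ones(len(s)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(design_matrix(settings), deviation, rcond=None)
predict = lambda s: design_matrix(np.atleast_2d(s)) @ beta          # second-order response surface

res = minimize(lambda s: predict(s)[0], x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("recommended coded settings:", np.round(res.x, 2),
      "predicted deviation:", round(float(res.fun), 3))
```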
An optimization model for the US Air-Traffic System
NASA Technical Reports Server (NTRS)
Mulvey, J. M.
1986-01-01
A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large-scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which will depict the entire high level (above 29000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility, a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program --NLPNETG-- was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.
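As a simplified, linear-cost stand-in for the nonlinear network model described above, the sketch below builds a tiny time-space network in networkx, with delay arcs and capacitated route arcs, and solves a min-cost flow; all nodes, capacities and costs are invented.

```python
# Simplified, linear-cost stand-in for the nonlinear time-space network model
# (illustrative only): nodes are (fix, time period), delay arcs keep aircraft at
# a fix for one period, flight arcs move them between fixes. All numbers invented.
import networkx as nx

G = nx.DiGraph()
G.add_node(("IND", 0), demand=-10)        # 10 aircraft enter at Indianapolis in period 0
G.add_node(("CHI", 2), demand=10)         # all must be at Chicago by period 2

G.add_edge(("IND", 0), ("CHI", 1), capacity=6, weight=3)   # direct route, limited capacity
G.add_edge(("IND", 0), ("IND", 1), capacity=10, weight=1)  # delay arc (wait one period)
G.add_edge(("IND", 1), ("CHI", 2), capacity=10, weight=3)
G.add_edge(("CHI", 1), ("CHI", 2), capacity=10, weight=0)  # wait at the destination

flow = nx.min_cost_flow(G)                 # linear surrogate for the nonlinear risk/cost objective
for u, targets in flow.items():
    for v, f in targets.items():
        if f:
            print(u, "->", v, ":", f, "aircraft")
```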
NASA Astrophysics Data System (ADS)
Borhan, Hoseinali
Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility which is higher than their individual components. The power or energy management control, "brain" of these "hybrid" systems, determines adaptively and based on the power demand the power split between multiple subsystems and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management controls. More importantly the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real-time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focus is on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "else-then-if" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solution of this optimal control problem in real-time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying and nonlinear hard constraints that system variables should satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by step improvements in fuel economy while maintaining the algorithmic computational requirements in a real-time implementable framework. More specifically a linear time-varying model predictive control approach is employed first which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next the objective function is further refined and broken into a short and a long horizon segments; the latter approximated as a function of the state using the connection between the Pontryagin minimum principle and Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver and the fuel economy is further improved. 
Typical simplifying academic assumptions are minimal throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing techniques. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).
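As a rough illustration of the receding-horizon power-split idea, the sketch below solves one horizon of a linearized power management problem as a linear program: a linear fuel cost, a battery state-of-charge integrator, and SOC bounds. It is a minimal stand-in under assumed models and numbers, not the dissertation's nonlinear MPC.

```python
# Minimal receding-horizon (MPC-style) power-split sketch for a series hybrid,
# assuming a linear fuel model and a simple battery SOC integrator. All numbers
# and variable names are illustrative, not the dissertation's actual model.
import numpy as np
from scipy.optimize import linprog

dt, N = 1.0, 10                          # step [s], horizon length
P_dem = np.full(N, 30.0)                 # demanded power [kW]
E_cap = 1000.0                           # battery energy capacity [kJ]
soc0, soc_min, soc_max = 0.6, 0.4, 0.8

# Decision variables: engine power P_eng[0..N-1]; the battery covers the rest.
c = 0.08 * dt * np.ones(N)               # fuel cost ~ proportional to engine energy

# SOC[k+1] = SOC[k] - dt*(P_dem[k] - P_eng[k])/E_cap must stay in [soc_min, soc_max]
L = np.tril(np.ones((N, N))) * dt / E_cap          # cumulative effect of P_eng
soc_drift = soc0 - np.cumsum(dt * P_dem) / E_cap   # SOC trajectory if engine were off
A_ub = np.vstack([L, -L])                          # upper then lower SOC bounds
b_ub = np.concatenate([soc_max - soc_drift, soc_drift - soc_min])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 50.0)] * N, method="highs")
# In a receding-horizon scheme only the first move would be applied and the
# problem re-solved at the next step with updated measurements.
print("engine-power plan [kW]:", np.round(res.x, 1))
```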
Program Model Checking: A Practitioner's Guide
NASA Technical Reports Server (NTRS)
Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.
2008-01-01
Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.
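The toy sketch below illustrates the core of explicit-state model checking: it enumerates every interleaving of two threads that each perform a non-atomic read-increment-write on a shared counter and reports terminal states that violate the expected postcondition. It is a generic illustration, not the algorithm of any particular tool discussed in the guidebook.

```python
# Toy explicit-state exploration in the spirit of program model checking:
# enumerate all interleavings of two "threads" that each do read-increment-write
# on a shared counter, and check the assertion counter == 2 in final states.
from collections import deque

def step(state, tid):
    """Advance thread `tid` by one atomic step; return the new state or None."""
    pc, regs, counter = state
    if pc[tid] == 0:                      # read shared counter into a register
        regs = regs[:tid] + (counter,) + regs[tid + 1:]
    elif pc[tid] == 1:                    # write register + 1 back
        counter = regs[tid] + 1
    else:
        return None                       # thread finished
    pc = pc[:tid] + (pc[tid] + 1,) + pc[tid + 1:]
    return (pc, regs, counter)

init = ((0, 0), (0, 0), 0)                # (program counters, registers, counter)
seen, frontier, violations = {init}, deque([init]), []
while frontier:
    s = frontier.popleft()
    succs = [t for t in (step(s, 0), step(s, 1)) if t is not None]
    if not succs and s[2] != 2:           # terminal state violating the assertion
        violations.append(s)
    for t in succs:
        if t not in seen:
            seen.add(t)
            frontier.append(t)
print("explored states:", len(seen), "assertion violations:", len(violations))
```

The violations found correspond to the classic lost-update race, where both threads read the counter before either writes it back.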
Economic and environmental optimization of a multi-site utility network for an industrial complex.
Kim, Sang Hun; Yoon, Sung-Geun; Chae, Song Hwa; Park, Sunwon
2010-01-01
Most chemical companies consume large amounts of steam, water and electrical resources in their production processes. Given recent record fuel costs, utility networks must be optimized to reduce the overall cost of production. Environmental concerns must also be considered when preparing modifications to satisfy the requirements for industrial utilities, since wastes discharged from the utility networks are restricted by environmental regulations. Construction of Eco-Industrial Parks (EIPs) has drawn attention as a promising approach for retrofitting existing industrial parks to improve energy efficiency. The optimization of the utility network within an industrial complex is one of the most important undertakings to minimize energy consumption and waste loads in the EIP. In this work, a systematic approach to optimize the utility network of an industrial complex is presented. An important issue in the optimization of a utility network is the desire of the companies to achieve high profits while complying with the environmental regulations. Therefore, the proposed optimization was performed with consideration of both economic and environmental factors. The proposed approach consists of unit modeling using thermodynamic principles, mass and energy balances, development of a multi-period Mixed Integer Linear Programming (MILP) model for the integration of utility systems in an industrial complex, and an economic/environmental analysis of the results. This approach is applied to the Yeosu Industrial Complex, considering seasonal utility demands. The results show that both the total utility cost and waste load are reduced by optimizing the utility network of an industrial complex. 2009 Elsevier Ltd. All rights reserved.
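A minimal sketch of the kind of MILP that underlies such utility-network models is given below: two boilers with on/off binaries and continuous steam outputs must cover demand in two periods at minimum cost. It uses SciPy's milp interface (available in SciPy 1.9 or later) and entirely made-up data; the real multi-period model is far richer.

```python
# Tiny MILP sketch in the spirit of a multi-period utility-network model:
# two boilers (on/off binaries plus steam output) must meet the steam demand
# in two periods at minimum fixed + fuel cost. Data are illustrative only.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# variables per period p: [y1, y2, q1, q2] -> on/off flags and steam output [t/h]
demand = [120.0, 80.0]
fuel_cost = [3.0, 3.5]          # cost per tonne of steam for boiler 1, 2
startup_cost = [50.0, 40.0]     # fixed cost of running a boiler in a period
qmax = [100.0, 90.0]

c, A, lb, ub, integrality = [], [], [], [], []
for p in range(2):
    c += startup_cost + fuel_cost                     # costs for [y1, y2, q1, q2]
    row = [0.0] * 8
    row[4 * p + 2], row[4 * p + 3] = 1.0, 1.0         # q1 + q2 >= demand[p]
    A.append(row); lb.append(demand[p]); ub.append(np.inf)
    for b in range(2):                                # q_b <= qmax_b * y_b
        row = [0.0] * 8
        row[4 * p + 2 + b], row[4 * p + b] = 1.0, -qmax[b]
        A.append(row); lb.append(-np.inf); ub.append(0.0)
    integrality += [1, 1, 0, 0]                       # y binary, q continuous

res = milp(c=np.array(c),
           constraints=LinearConstraint(np.array(A), lb, ub),
           integrality=np.array(integrality),
           bounds=Bounds(0, [1, 1, qmax[0], qmax[1]] * 2))
print("status:", res.status, "total cost:", res.fun)
```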
Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.
Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P
2017-03-01
We present an integrated framework for online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro-kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model for new strains, mutants, or products. In biosciences, this is especially important as model identification is a long and laborious process which continues to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully automated liquid handling robots: one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running, using the information generated by periodic parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards more efficient computer-aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.
Molecular dynamics of reversible self-healing materials
NASA Astrophysics Data System (ADS)
Madden, Ian; Luijten, Erik
Hydrolyzable polymers have numerous industrial applications as degradable materials. Recent experimental work by Cheng and co-workers has introduced the concept of hindered urea bond (HUB) chemistry to design self-healing systems. Important control parameters are the steric hindrance of the HUB structures, which is used to tune the hydrolytic degradation kinetics, and their density. We employ molecular dynamics simulations of polymeric interfaces to systematically explore the role of these properties in a coarse-grained model, and make direct comparison to experimental data. Our model provides direct insight into the self-healing process, permitting optimization of the control parameters.
Kinetic Study of Acetone-Butanol-Ethanol Fermentation in Continuous Culture
Buehler, Edward A.; Mesbah, Ali
2016-01-01
Acetone-butanol-ethanol (ABE) fermentation by clostridia has shown promise for industrial-scale production of biobutanol. However, the continuous ABE fermentation suffers from low product yield, titer, and productivity. Systems analysis of the continuous ABE fermentation will offer insights into its metabolic pathway as well as into optimal fermentation design and operation. For the ABE fermentation in continuous Clostridium acetobutylicum culture, this paper presents a kinetic model that includes the effects of key metabolic intermediates and enzymes as well as culture pH, product inhibition, and glucose inhibition. The kinetic model is used for elucidating the behavior of the ABE fermentation under the conditions that are most relevant to continuous cultures. To this end, dynamic sensitivity analysis is performed to systematically investigate the effects of culture conditions, reaction kinetics, and enzymes on the dynamics of the ABE production pathway. The analysis provides guidance for future metabolic engineering and fermentation optimization studies. PMID:27486663
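As a hedged illustration of the kind of dynamic sensitivity analysis described, the sketch below perturbs one kinetic parameter of a toy continuous-culture (chemostat) model and compares the simulated product trajectory with the nominal one. The Monod-type model and all parameter values are placeholders, not the paper's ABE pathway model.

```python
# Illustrative local sensitivity check for a toy continuous-culture kinetic
# model (Monod growth with product formation): perturb one kinetic parameter
# and compare the simulated product trajectory against the nominal one.
import numpy as np
from scipy.integrate import solve_ivp

def chemostat(t, y, mu_max, Ks, Yxs, Ypx, D, S_in):
    X, S, P = y                               # biomass, substrate, product
    mu = mu_max * S / (Ks + S)
    dX = (mu - D) * X
    dS = D * (S_in - S) - mu * X / Yxs
    dP = Ypx * mu * X - D * P
    return [dX, dS, dP]

base = dict(mu_max=0.3, Ks=0.5, Yxs=0.4, Ypx=0.3, D=0.05, S_in=60.0)
y0, t_eval = [0.1, 60.0, 0.0], np.linspace(0, 200, 201)

def product_curve(params):
    sol = solve_ivp(chemostat, (0, 200), y0, t_eval=t_eval, args=tuple(params.values()))
    return sol.y[2]

P_base = product_curve(base)
perturbed = dict(base, mu_max=base["mu_max"] * 1.05)      # +5% in mu_max
sens = (product_curve(perturbed) - P_base) / (0.05 * base["mu_max"])
print("max |dP/d(mu_max)| along the trajectory:", float(np.max(np.abs(sens))))
```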
Insight into carrier lifetime impact on band-modulation devices
NASA Astrophysics Data System (ADS)
Parihar, Mukta Singh; Lee, Kyung Hwa; Park, Hyung Jin; Lacord, Joris; Martinie, Sébastien; Barbé, Jean-Charles; Xu, Yue; El Dirani, Hassan; Taur, Yuan; Cristoloveanu, Sorin; Bawedin, Maryline
2018-05-01
A systematic study to model and characterize the band-modulation Z2-FET device is developed, shedding light on the influence of carrier lifetime. This work provides guidelines to optimize the Z2-FETs for sharp switching, ESD protection, and 1T-DRAM applications. A lower carrier lifetime in the Z2-FET helps in attaining the sharp switch. We provide new insights into the correlation between generation/recombination, diffusion, electrostatic barriers and carrier lifetime.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Grøn, A O; Dalsgaard, E-M; Ribe, A R; Seidu, S; Mora, G; Cebrián-Cuenca, A M; Charles, M
2018-04-27
Individuals with severe mental illness (SMI) who suffer from type 2 diabetes (T2DM) are likely to be sub-optimally treated for their physical condition. This study aimed to review the effect of interventions in this population. A systematic search in five databases was conducted in July 2017. Seven studies on multi-faceted interventions were included. These comprised nutrition and exercise counselling, behavioural modelling and increased disease awareness, aiming to reduce HbA1c, fasting plasma glucose, body mass index and weight. Non-pharmacologic interventions in individuals with SMI and T2DM could possibly improve measures of diabetes care, although with limited clinical impact. Copyright © 2018 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
High-resolution computed tomography of single breast cancer microcalcifications in vivo.
Inoue, Kazumasa; Liu, Fangbing; Hoppin, Jack; Lunsford, Elaine P; Lackas, Christian; Hesterman, Jacob; Lenkinski, Robert E; Fujii, Hirofumi; Frangioni, John V
2011-08-01
Microcalcification is a hallmark of breast cancer and a key diagnostic feature for mammography. We recently described the first robust animal model of breast cancer microcalcification. In this study, we hypothesized that high-resolution computed tomography (CT) could potentially detect the genesis of a single microcalcification in vivo and quantify its growth over time. Using a commercial CT scanner, we systematically optimized acquisition and reconstruction parameters. Two ray-tracing image reconstruction algorithms were tested: a voxel-driven "fast" cone beam algorithm (FCBA) and a detector-driven "exact" cone beam algorithm (ECBA). By optimizing acquisition and reconstruction parameters, we were able to achieve a resolution of 104 μm full width at half-maximum (FWHM). At an optimal detector sampling frequency, the ECBA provided a 28 μm (21%) FWHM improvement in resolution over the FCBA. In vitro, we were able to image a single 300 μm × 100 μm hydroxyapatite crystal. In a syngeneic rat model of breast cancer, we were able to detect the genesis of a single microcalcification in vivo and follow its growth longitudinally over weeks. Taken together, this study provides an in vivo "gold standard" for the development of calcification-specific contrast agents and a model system for studying the mechanism of breast cancer microcalcification.
Teixeira, Ana P; Carinhas, Nuno; Dias, João M L; Cruz, Pedro; Alves, Paula M; Carrondo, Manuel J T; Oliveira, Rui
2007-12-01
Systems biology is an integrative science that aims at the global characterization of biological systems. Huge amounts of data regarding gene expression, protein activity and metabolite concentrations are collected by designing systematic genetic or environmental perturbations. The challenge is then to integrate such data into a global model in order to provide a global picture of the cell. The analysis of these data is largely dominated by nonparametric modelling tools. In contrast, classical bioprocess engineering has been primarily founded on first-principles models, but it has systematically overlooked the details of the embedded biological system. The full complexity of biological systems is currently taken on by systems biology, and this knowledge can now be used by engineers to decide how to optimally design and operate their processes. This paper discusses possible methodologies for the integration of systems biology and bioprocess engineering with emphasis on applications involving animal cell cultures. At the mathematical systems level, the discussion is focused on hybrid semi-parametric systems as a way to bridge systems biology and bioprocess engineering.
Simple construction and performance of a conical plastic cryocooler
NASA Technical Reports Server (NTRS)
Lambert, N.
1985-01-01
Low power cryocoolers with conical displacers offer several advantages over stepped displacers. The described fabrication process allows quick and reproducible manufacturing of plastic conical displacer units. This could be of commercial interest, but it also makes systematic optimization feasible by constructing a number of different models. The process allows for a wide range of displacer profiles. Low temperature performance is dominated by regenerator losses, and several effects are discussed. A simple device which controls gas flow during expansion is described.
Wang, Ruifei; Unrean, Pornkamol; Franzén, Carl Johan
2016-01-01
High content of water-insoluble solids (WIS) is required for simultaneous saccharification and co-fermentation (SSCF) operations to reach the high ethanol concentrations that meet the techno-economic requirements of industrial-scale production. The fundamental challenges of such processes are related to the high viscosity and inhibitor contents of the medium. Poor mass transfer and inhibition of the yeast lead to decreased ethanol yield, titre and productivity. In the present work, high-solid SSCF of pre-treated wheat straw was carried out by multi-feed SSCF, a fed-batch process with additions of substrate, enzymes and cells, integrated with yeast propagation and adaptation on the pre-treatment liquor. The combined feeding strategies were systematically compared and optimized using experiments and simulations. For a high-solid SSCF process of SO2-catalyzed steam pre-treated wheat straw, the boosted solubilisation of WIS achieved by loading all enzyme at the beginning of the process is crucial for increased rates of both enzymatic hydrolysis and SSCF. A kinetic model was adapted to simulate the release of sugars during separate hydrolysis as well as during SSCF. Feeding of solid substrate to reach an instantaneous WIS content of 13% (w/w) was carried out when 60% of the cellulose was hydrolysed, according to simulation results. With this approach, accumulated WIS additions reached more than 20% (w/w) without encountering mixing problems in a standard bioreactor. Feeding fresh cells to the SSCF reactor maintained the fermentation activity, which otherwise ceased when the ethanol concentration reached 40-45 g/L. At lab scale, the optimized multi-feed SSCF produced 57 g/L ethanol in 72 h. The process was reproducible and resulted in 52 g/L ethanol at 10 m³ scale at the SP Biorefinery Demo Plant. SSCF at WIS contents up to 22% (w/w) is reproducible and scalable with the multi-feed SSCF configuration and model-aided process design. For simultaneous saccharification and fermentation, the overall efficiency relies on balanced rates of substrate feeding and conversion. Multi-feed SSCF provides the possibility to balance interdependent rates by systematic optimization of the feeding strategies. The optimization routine presented in this work can easily be adapted for optimization of other lignocellulose-based fermentation systems.
Optimized pulses for the control of uncertain qubits
Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...
2012-05-18
The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, post-facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.
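The sketch below illustrates, under strong simplifying assumptions, how robustness of a pulse to drift uncertainty can be evaluated: a piecewise-constant control acts on a two-level system and the fidelity of a π/2 rotation is computed for several drift amplitudes. The Hamiltonian, the naive constant pulse, and all numbers are illustrative and not taken from the paper.

```python
# Minimal sketch: piecewise-constant control of a two-level system and the
# fidelity of a pi/2 rotation when the drift amplitude is uncertain.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(controls, drift_scale=1.0, dt=0.05):
    """Propagator for H(t) = drift_scale*sz + u(t)*sx with piecewise-constant u."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        U = expm(-1j * dt * (drift_scale * sz + u * sx)) @ U
    return U

def gate_fidelity(U, U_target):
    return abs(np.trace(U_target.conj().T @ U)) ** 2 / 4.0

U_target = expm(-1j * (np.pi / 4) * sx)          # pi/2 rotation about x
controls = np.full(40, np.pi / 4 / (40 * 0.05))  # naive constant-amplitude pulse

for scale in (0.9, 1.0, 1.1):                    # +/-10% drift uncertainty
    F = gate_fidelity(evolve(controls, drift_scale=scale), U_target)
    print(f"drift scale {scale:.1f}: fidelity {F:.4f}")
```

An optimal-control search would shape the control sequence to keep the fidelity high across the whole range of drift scales rather than only at the nominal value.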
Design of clinical trials involving multiple hypothesis tests with a common control.
Schou, I Manjula; Marschner, Ian C
2017-07-01
Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm; however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for the optimal design of such single-control multiple-comparator studies. We consider variance-optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance-optimal designs are also discussed, which, like power-optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
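For intuition about how allocation ratios follow from an optimality criterion, the sketch below numerically minimizes an A-optimality criterion (the summed variance of the k treatment-versus-control contrasts) under equal outcome variances. This is a simplified special case assumed for illustration, not the paper's general heteroscedastic framework.

```python
# Numerical sketch of variance-optimal allocation for a single-control,
# k-comparator trial with a continuous outcome and equal variances
# (an illustrative A-optimality computation only).
import numpy as np
from scipy.optimize import minimize

k = 3   # number of experimental arms sharing one control

def a_criterion(w):
    """Sum of variances of the k treatment-vs-control contrasts (total n fixed)."""
    w = np.clip(np.abs(w), 1e-9, None)
    w = w / w.sum()                            # allocation proportions
    w_ctrl, w_trt = w[0], w[1:]
    return float(np.sum(1.0 / w_trt) + k / w_ctrl)

res = minimize(a_criterion, x0=np.ones(k + 1) / (k + 1), method="Nelder-Mead")
w_opt = np.abs(res.x) / np.sum(np.abs(res.x))
print("control share:", round(w_opt[0], 3),
      "each treatment share:", np.round(w_opt[1:], 3))
# Expected pattern: the control arm receives roughly sqrt(k) times the
# allocation of each treatment arm (about sqrt(3) : 1 here).
```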
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not yet been modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results, obtained with both time- and frequency-domain noise generation models for completeness, are presented to validate the theory.
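A back-of-the-envelope simulation of the digital CDS principle is sketched below: the reset and signal pedestals of a pixel are oversampled, each window is averaged digitally, and the difference estimates the signal. Noise levels and window lengths are arbitrary assumptions; the point is only the noise-averaging behaviour, not the paper's closed-form SNR model.

```python
# Illustrative digital CDS sketch: oversample the reset and signal pedestals of
# a simulated CCD video waveform, average each window digitally, and difference
# them. Numbers are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_samples, signal_e, read_noise = 64, 500.0, 8.0      # samples/window, e-, e- rms

def read_pixel():
    reset_level = rng.normal(0.0, read_noise, n_samples)        # reset pedestal
    signal_level = rng.normal(signal_e, read_noise, n_samples)  # signal pedestal
    # DCDS estimate: difference of the two digitally averaged windows
    return signal_level.mean() - reset_level.mean()

estimates = np.array([read_pixel() for _ in range(5000)])
print("mean estimate [e-]:", round(estimates.mean(), 1))
print("effective read noise [e- rms]:", round(estimates.std(), 2))
# Averaging N uncorrelated samples per window reduces white read noise by
# roughly sqrt(N) relative to a single-sample CDS difference.
```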
Zhao, Jane Y.; Song, Buer; Anand, Edwin; Schwartz, Diane; Panesar, Mandip; Jackson, Gretchen P.; Elkin, Peter L.
2017-01-01
Patient portal and personal health record adoption and usage rates have been suboptimal. A systematic review of the literature was performed to capture all published studies that specifically addressed barriers, facilitators, and solutions to optimal patient portal and personal health record enrollment and use. Consistent themes emerged from the review. Patient attitudes were critical as either barrier or facilitator. Institutional buy-in, information technology support, and aggressive tailored marketing were important facilitators. Interface redesign was a popular solution. Quantitative studies identified many barriers to optimal patient portal and personal health record enrollment and use, and qualitative and mixed methods research revealed thoughtful explanations for why they existed. Our study demonstrated the value of qualitative and mixed research methodologies in understanding the adoption of consumer health technologies. Results from the systematic review should be used to guide the design and implementation of future patient portals and personal health records, and ultimately, close the digital divide. PMID:29854263
Jiang, Ludi; Chen, Jiahua; He, Yusu; Zhang, Yanling; Li, Gongyu
2016-02-01
The blood-brain barrier (BBB), a highly selective barrier between the central nervous system (CNS) and the blood stream, restricts and regulates the penetration of compounds from the blood into the brain. Drugs that affect the CNS interact with the BBB prior to reaching their target site, so research on the prediction of BBB permeability is a fundamental and significant direction in neuropharmacology. In this study, we combed through the available data and then, with the help of support vector machines (SVM), established an experimental process for discovering potential CNS compounds and investigating the mechanisms of their BBB permeability to advance research in this field. Four types of prediction models, referring to CNS activity, BBB permeability, passive diffusion and efflux transport, were obtained in the experimental process. The first two models were used to discover compounds which may have CNS activity and also cross the BBB at the same time; the latter two were used to elucidate the mechanism of BBB permeability of those compounds. Three parameter optimization methods, Grid Search, Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), were used to optimize the SVM models. Then, four optimal models were selected with excellent evaluation indexes (the accuracy, sensitivity and specificity of each model were all above 85%). Furthermore, the discrimination models were utilized to study the BBB properties of known CNS-active compounds in Chinese herbs, which may guide CNS drug development. With this relatively systematic and rapid approach, the rational clinical application of traditional Chinese medicines for treating nervous system diseases can be improved.
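One ingredient of such a workflow, an RBF-kernel SVM classifier tuned by grid search, can be sketched with scikit-learn as below. Synthetic data stand in for the molecular descriptors, and the parameter grid is an assumption for illustration; GA or PSO tuning would replace the grid search in the other variants.

```python
# Hedged sketch: train an SVM classifier for a binary permeability-style label
# and tune (C, gamma) by cross-validated grid search. Data are synthetic
# stand-ins for molecular descriptors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X_tr, y_tr)

print("best parameters:", search.best_params_)
print("held-out accuracy:", round(search.score(X_te, y_te), 3))
```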
New Vistas in Chemical Product and Process Design.
Zhang, Lei; Babi, Deenesh K; Gani, Rafiqul
2016-06-07
Design of chemicals-based products is broadly classified into those that are process centered and those that are product centered. In this article, the designs of both classes of products are reviewed from a process systems point of view; developments related to the design of the chemical product, its corresponding process, and its integration are highlighted. Although significant advances have been made in the development of systematic model-based techniques for process design (also for optimization, operation, and control), much work is needed to reach the same level for product design. Timeline diagrams illustrating key contributions in product design, process design, and integrated product-process design are presented. The search for novel, innovative, and sustainable solutions must be matched by consideration of issues related to the multidisciplinary nature of problems, the lack of data needed for model development, solution strategies that incorporate multiscale options, and reliability versus predictive power. The need for an integrated model-experiment-based design approach is discussed together with benefits of employing a systematic computer-aided framework with built-in design templates.
Mapping and correcting the influence of gaze position on pupil size measurements
Petrov, Alexander A.
2015-01-01
Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cognitive literature, researchers often underestimate the methodological challenges associated with controlling for confounds that can result in misinterpretation of their data. One serious confound that is often not properly controlled is pupil foreshortening error (PFE)—the foreshortening of the pupil image as the eye rotates away from the camera. Here we systematically map PFE using an artificial eye model and then apply a geometric model correction. Three artificial eyes with different fixed pupil sizes were used to systematically measure changes in pupil size as a function of gaze position with a desktop EyeLink 1000 tracker. A grid-based map of pupil measurements was recorded with each artificial eye across three experimental layouts of the eye-tracking camera and display. Large, systematic deviations in pupil size were observed across all nine maps. The measured PFE was corrected by a geometric model that expressed the foreshortening of the pupil area as a function of the cosine of the angle between the eye-to-camera axis and the eye-to-stimulus axis. The model reduced the root mean squared error of pupil measurements by 82.5 % when the model parameters were pre-set to the physical layout dimensions, and by 97.5 % when they were optimized to fit the empirical error surface. PMID:25953668
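The cosine correction described above can be sketched in a few lines: compute the angle between the eye-to-camera and eye-to-stimulus vectors and divide the measured pupil area by its cosine. The geometry values below are illustrative, and the study's full procedure additionally fits free parameters to the measured error surface.

```python
# Minimal sketch of the cosine-based foreshortening correction: the measured
# pupil area shrinks with the angle between the eye-to-camera axis and the
# eye-to-stimulus axis, so dividing by that cosine restores the true area.
import numpy as np

def foreshortening_angle(eye_pos, camera_pos, stimulus_pos):
    """Angle between the eye->camera and eye->stimulus vectors (radians)."""
    v_cam = np.asarray(camera_pos) - np.asarray(eye_pos)
    v_stim = np.asarray(stimulus_pos) - np.asarray(eye_pos)
    cosang = np.dot(v_cam, v_stim) / (np.linalg.norm(v_cam) * np.linalg.norm(v_stim))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def correct_pupil_area(measured_area, eye_pos, camera_pos, stimulus_pos):
    theta = foreshortening_angle(eye_pos, camera_pos, stimulus_pos)
    return measured_area / np.cos(theta)

# Example: camera below the screen centre, gaze at the screen's upper corner
# (all coordinates in cm, eye 60 cm from the screen plane).
eye, camera = (0.0, 0.0, 60.0), (0.0, -20.0, 0.0)
gaze_target = (25.0, 15.0, 0.0)
print("corrected area:", round(correct_pupil_area(900.0, eye, camera, gaze_target), 1))
```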
Excess electron localization in solvated DNA bases.
Smyth, Maeve; Kohanoff, Jorge
2011-06-10
We present a first-principles molecular dynamics study of an excess electron in condensed phase models of solvated DNA bases. Calculations on increasingly large microsolvated clusters taken from liquid phase simulations show that adiabatic electron affinities increase systematically upon solvation, as for optimized gas-phase geometries. Dynamical simulations after vertical attachment indicate that the excess electron, which is initially found delocalized, localizes around the nucleobases within a 15 fs time scale. This transition requires small rearrangements in the geometry of the bases.
The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials
Witman, Matthew; Ling, Sanliang; Jawahery, Sudi; ...
2017-03-30
For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material that is not captured when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility even though other nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.
NASA Astrophysics Data System (ADS)
Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan
2017-09-01
Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements, but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds, which is not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters, and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can also be obtained with complex multimodal clusters and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.
Gorahava, Kaushik K; Rosenberger, Jay M; Mubayi, Anuj
2015-07-01
Visceral leishmaniasis (VL) is the most deadly form of the leishmaniasis family of diseases, which affects numerous developing countries. The Indian state of Bihar has the highest prevalence and mortality rate of VL in the world. Insecticide spraying is believed to be an effective vector control program for controlling the spread of VL in Bihar; however, it is expensive and less effective if not implemented systematically. This study develops and analyzes a novel optimization model for VL control in Bihar that identifies an optimal (best possible) allocation of a chosen insecticide (dichlorodiphenyltrichloroethane [DDT] or deltamethrin) based on the sizes of human and cattle populations in the region. The model maximizes the insecticide-induced sandfly death rate in human and cattle dwellings while staying within the current state budget for VL vector control efforts. The model results suggest that deltamethrin might not be a good replacement for DDT because insecticide-induced sandfly deaths are 3.72 times higher with DDT, even 90 days post spray. Different insecticide allocation strategies between the two types of sites (houses and cattle sheds) are suggested based on the state VL-control budget and have direct implications for VL elimination efforts in a resource-limited region. © The American Society of Tropical Medicine and Hygiene.
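In the spirit of the allocation model described, the sketch below splits a fixed spraying budget between houses and cattle sheds to maximize a total insecticide-induced sandfly death-rate index, posed as a linear program. All coefficients (costs, kill-rate gains, site counts, budget) are invented for illustration and do not reflect the paper's calibrated values.

```python
# Illustrative budget-allocation sketch: choose how many houses and cattle
# sheds to spray so that the total sandfly death-rate index is maximized
# within a fixed budget. Coefficients are made up for the example.
from scipy.optimize import linprog

budget = 100_000.0                       # spraying budget for the district
cost_per_site = [4.0, 6.0]               # spraying cost: house, cattle shed
kill_per_site = [1.0, 1.8]               # relative sandfly death-rate gain
n_sites = [15_000, 8_000]                # houses, cattle sheds available

# maximize kill  <=>  minimize -kill, subject to cost <= budget and x <= n_sites
res = linprog(c=[-k for k in kill_per_site],
              A_ub=[cost_per_site], b_ub=[budget],
              bounds=list(zip([0, 0], n_sites)), method="highs")
houses, sheds = res.x
print(f"spray {houses:.0f} houses and {sheds:.0f} cattle sheds; "
      f"death-rate index = {-res.fun:.0f}")
```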
Ingvarsson, Pall Thor; Yang, Mingshi; Mulvad, Helle; Nielsen, Hanne Mørck; Rantanen, Jukka; Foged, Camilla
2013-11-01
The purpose of this study was to identify and optimize spray drying parameters of importance for the design of an inhalable powder formulation of a cationic liposomal adjuvant composed of dimethyldioctadecylammonium (DDA) bromide and trehalose-6,6'-dibehenate (TDB). A quality by design (QbD) approach was applied to identify and link critical process parameters (CPPs) of the spray drying process to critical quality attributes (CQAs) using risk assessment and design of experiments (DoE), followed by identification of an optimal operating space (OOS). A central composite face-centered design was carried out, followed by multiple linear regression analysis. Four CQAs were identified: the mass median aerodynamic diameter (MMAD), the liposome stability (size) during processing, the moisture content and the yield. Five CPPs (drying airflow, feed flow rate, feedstock concentration, atomizing airflow and outlet temperature) were identified and tested in a systematic way. The MMAD and the yield were successfully modeled. For the liposome size stability, the ratio between the size after and before spray drying was modeled successfully. The model for the residual moisture content was poor, although the moisture content was below 3% in the entire design space. Finally, the OOS was drafted from the constructed models for the spray drying of trehalose-stabilized DDA/TDB liposomes. The QbD approach for the spray drying process should include a careful consideration of the quality target product profile. This approach, implementing risk assessment and DoE, was successfully applied to optimize the spray drying of an inhalable DDA/TDB liposomal adjuvant designed for pulmonary vaccination.
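A small sketch of the DoE machinery is given below: a central composite face-centred design in three coded factors is assembled and a quadratic response-surface model is fitted by least squares to a simulated response. Factor names are left abstract and the response is synthetic; the sketch mirrors only the structure of the analysis, not the study's data.

```python
# Sketch: build a central composite face-centred (CCF) design for three coded
# factors and fit a quadratic response-surface model by ordinary least squares.
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
axial = np.array([row for i in range(3) for row in
                  (np.eye(3)[i], -np.eye(3)[i])])          # alpha = 1 (face-centred)
center = np.zeros((3, 3))
X = np.vstack([factorial, axial, center])                  # coded design matrix (17 runs)

rng = np.random.default_rng(1)
y = (5 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] - 1.5 * X[:, 2] ** 2
     + rng.normal(0, 0.2, len(X)))                         # simulated CQA response

def expand(X):
    """Quadratic model terms: intercept, linear, two-factor interactions, squares."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(expand(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```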
Ekwunife, Obinna I; Grote, Andreas Gerber; Mosch, Christoph; O'Mahony, James F; Lhachimi, Stefan K
2015-05-12
Cervical cancer poses a huge health burden, both to developed and developing nations, making prevention and control strategies necessary. However, the challenges of designing and implementing prevention strategies differ for low- and middle-income countries (LMICs) as compared to countries with fully developed health care systems. Moreover, for many LMICs, much of the data needed for decision analytic modelling, such as prevalence, will most likely only be partly available or measured with much larger uncertainty. Lastly, imperfect implementation of human papillomavirus (HPV) vaccination may influence the effectiveness of cervical cancer prevention in unpredictable ways. This systematic review aims to assess how decision analytic modelling studies of HPV cost-effectiveness in LMICs accounted for the particular challenges faced in such countries. Specifically, the study will assess the following: (1) whether the existing literature on cost-effectiveness modelling of HPV vaccines acknowledges the distinct challenges of LMICs, (2) how these challenges were accommodated in the models, (3) whether certain parameters systematically exhibited large degrees of uncertainty due to lack of data and how influential these parameters were on model-based recommendations, and (4) whether the choice of modelling herd immunity influences model-based recommendations, especially when coverage of an HPV vaccination program is not optimal. We will conduct a systematic review to identify suitable studies from MEDLINE (via PubMed), EMBASE, NHS Economic Evaluation Database (NHS EED), EconLit, Web of Science, and CEA Registry. Searches will be conducted for studies of interest published since 2006. The searches will be supplemented by hand searching of the most relevant papers found in the search. Studies will be critically appraised using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement checklist. We will undertake a descriptive, narrative, and interpretative synthesis of data to address the study objectives. The proposed systematic review will assess how the cost-effectiveness studies of HPV vaccines accounted for the distinct challenges of LMICs. The gaps identified will expose areas for additional research as well as challenges that need to be accounted for in future modelling studies. PROSPERO CRD42015017870.
Systematic Propulsion Optimization Tools (SPOT)
NASA Technical Reports Server (NTRS)
Bower, Mark; Celestian, John
1992-01-01
This paper describes a computer program written by senior-level Mechanical Engineering students at the University of Alabama in Huntsville which is capable of optimizing user-defined delivery systems for carrying payloads into orbit. The custom propulsion system is designed by the user through the input of configuration, payload, and orbital parameters. The primary advantages of the software, called Systematic Propulsion Optimization Tools (SPOT), are a user-friendly interface and a modular FORTRAN 77 code designed for ease of modification. The optimization of variables in an orbital delivery system is of critical concern in the propulsion environment. The mass of the overall system must be minimized within the maximum stress, force, and pressure constraints. SPOT utilizes the Design Optimization Tools (DOT) program for the optimization techniques. The SPOT program is divided into a main program and five modules: aerodynamic losses, orbital parameters, liquid engines, solid engines, and nozzles. The program is designed to be upgraded easily and expanded to meet specific user needs. A user's manual and a programmer's manual are currently being developed to facilitate implementation and modification.
NASA Astrophysics Data System (ADS)
Kim, Young-Min; Jung, In-Ho
2015-06-01
A complete literature review, critical evaluation, and thermodynamic optimization of phase equilibrium and thermodynamic properties of all available oxide phases in the MnO-B2O3 and MnO-B2O3-SiO2 systems at 1 bar pressure are presented. Due to the lack of experimental data for these systems, the systematic trends of the CaO- and MgO-containing systems were taken into account in the optimization. The molten oxide phase is described by the Modified Quasichemical Model. A set of optimized model parameters of all phases is obtained which reproduces all available and reliable thermodynamic and phase equilibrium data. The unexplored binary and ternary phase diagrams of the MnO-B2O3 and MnO-B2O3-SiO2 systems have been predicted for the first time. Thermodynamic calculations relevant to the oxidation of advanced high-strength steels containing boron were performed, showing that B can form a liquid B2O3-SiO2-rich phase in the annealing furnace under a reducing N2-H2 atmosphere, which can significantly influence the wetting behavior of liquid Zn in the Zn galvanizing process.
Systematicity and a Categorical Theory of Cognitive Architecture: Universal Construction in Context
Phillips, Steven; Wilson, William H.
2016-01-01
Why does the capacity to think certain thoughts imply the capacity to think certain other, structurally related, thoughts? Despite decades of intensive debate, cognitive scientists have yet to reach a consensus on an explanation for this property of cognitive architecture—the basic processes and modes of composition that together afford cognitive capacity—called systematicity. Systematicity is generally considered to involve a capacity to represent/process common structural relations among the equivalently cognizable entities. However, the predominant theoretical approaches to the systematicity problem, i.e., classical (symbolic) and connectionist (subsymbolic), require arbitrary (ad hoc) assumptions to derive systematicity. That is, their core principles and assumptions do not provide the necessary and sufficient conditions from which systematicity follows, as required of a causal theory. Hence, these approaches fail to fully explain why systematicity is a (near) universal property of human cognition, albeit in restricted contexts. We review an alternative, category theory approach to the systematicity problem. As a mathematical theory of structure, category theory provides necessary and sufficient conditions for systematicity in the form of universal construction: each systematically related cognitive capacity is composed of a common component and a unique component. Moreover, every universal construction can be viewed as the optimal construction in the given context (category). From this view, universal constructions are derived from learning, as an optimization. The ultimate challenge, then, is to explain the determination of context. If context is a category, then a natural extension toward addressing this question is higher-order category theory, where categories themselves are the objects of construction. PMID:27524975
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.
2016-12-01
Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain and in many cases, the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance under the current remediation plan is assessed, and depending on the results, operational variables of the current system are re-optimized or alternative designs are considered. We compare remediation duration and cost for the stepwise re-optimization approach with single-stage optimization as well as with a non-optimized design based on typical engineering practice.
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of model-based control for toroidal plasmas have shown better performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential methods for estimating the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.
1984-01-01
This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.
Flight test trajectory control analysis
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1983-01-01
Recent extensions to optimal control theory applied to meaningful linear models with sufficiently flexible software tools provide powerful techniques for designing flight test trajectory controllers (FTTCs). This report describes the principal steps for systematic development of flight trajectory controllers, which can be summarized as planning, modeling, designing, and validating a trajectory controller. The techniques have been kept as general as possible and should apply to a wide range of problems where quantities must be computed and displayed to a pilot to improve pilot effectiveness and to reduce workload and fatigue. To illustrate the approach, a detailed trajectory guidance law is developed and demonstrated for the F-15 aircraft flying the zoom-and-pushover maneuver.
Optimization of Price and Quality in Service Systems,
Price and service quality are important variables in the design of optimal service systems. Price is important because of the strong consumption...priorities of service offered. The paper takes a systematic view of this problem, and presents techniques for quantitative determination of the optimal prices and service quality in a wide class of systems. (Author)
Lee, Chanwoo; Kim, Sung Tae; Jeong, Byeong Geun; Yun, Seok Joon; Song, Young Jae; Lee, Young Hee; Park, Doo Jae; Jeong, Mun Seok
2017-01-13
We successfully obtain tip-enhanced nano-Raman scattering images of a tungsten disulfide monolayer by optimizing a fabrication method for gold nanotips, controlling the concentration of etchant in an electrochemical etching process. By applying a square-wave voltage supplied from an arbitrary waveform generator to a gold wire, which is immersed in a hydrochloric acid solution diluted with ethanol at various ratios, we find that both the conical angle and radius of curvature of the tip apex can be varied by changing the ratio of hydrochloric acid and ethanol. We also suggest a model to explain the origin of these variations in the tip shape. From the systematic study, we find an optimal condition for achieving a yield of ~60% with a radius of ~34 nm and a cone angle of ~35°. Using representative tips fabricated under the optimal etching condition, we demonstrate tip-enhanced Raman scattering of a tungsten disulfide monolayer grown by chemical vapor deposition, with a spatial resolution of ~40 nm and a Raman enhancement factor of ~4,760.
NASA Technical Reports Server (NTRS)
Kuchynka, P.; Laskar, J.; Fienga, A.
2011-01-01
Mars ranging observations are available over the past 10 years with an accuracy of a few meters. Such precise measurements of the Earth-Mars distance provide valuable constraints on the masses of the asteroids perturbing both planets. Today more than 30 asteroid masses have thus been estimated from planetary ranging data (see [1] and [2]). Obtaining unbiased mass estimations is nevertheless difficult. Various systematic errors can be introduced by imperfect reduction of spacecraft tracking observations to planetary ranging data. The large number of asteroids and the limited a priori knowledge of their masses are also obstacles to parameter selection. Fitting the mass of a negligible perturber in a model, or conversely omitting a significant perturber, will introduce significant bias into the determined asteroid masses. In this communication, we investigate a simplified version of the mass determination problem. Instead of planetary ranging observations from spacecraft or radar data, we consider synthetic ranging observations generated with the INPOP [2] ephemeris for a test model containing 25000 asteroids. We then suggest a method for optimal parameter selection and estimation in this simplified framework.
Systematic parameter inference in stochastic mesoscopic modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Li, Zhen
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
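The sparse response-surface idea can be sketched, under simplifying assumptions, as an L1-penalized fit of polynomial coefficients from a small number of parameter samples; here a Lasso regression plays the role of the compressive-sensing recovery and a toy function stands in for the DPD simulator.

```python
# Hedged sketch of sparse response-surface recovery: approximate a target
# property as a degree-2 polynomial in the model parameters and recover the
# (sparse) coefficients from few samples with an L1-penalized fit.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
d, n_samples = 5, 40                         # parameter dimension, sample budget
theta = rng.uniform(-1, 1, size=(n_samples, d))

def simulator(t):                            # stand-in for the mesoscopic simulator
    return 1.0 + 2.0 * t[:, 0] - 0.7 * t[:, 1] * t[:, 3] + 0.1 * rng.normal(size=len(t))

Phi = PolynomialFeatures(degree=2, include_bias=True).fit_transform(theta)
model = Lasso(alpha=0.02, fit_intercept=False, max_iter=50_000).fit(Phi, simulator(theta))
print("non-zero polynomial terms recovered:", int(np.sum(np.abs(model.coef_) > 1e-3)))
```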
Defining the optimal animal model for translational research using gene set enrichment analysis.
Weidner, Christopher; Steinfath, Matthias; Opitz, Elisa; Oelgeschläger, Michael; Schönfelder, Gilbert
2016-08-01
The mouse is the main model organism used to study the functions of human genes because most biological processes in the mouse are highly conserved in humans. Recent reports that compared identical transcriptomic datasets of human inflammatory diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. To reduce susceptibility to biased interpretation, all genes of interest for the biological question under investigation should be considered. Thus, standardized approaches for systematic data analysis are needed. We analyzed the same datasets using gene set enrichment analysis focusing on pathways assigned to inflammatory processes in either humans or mice. The analyses revealed a moderate overlap between all human and mouse datasets, with average positive and negative predictive values of 48% and 57% for significant correlations. Subgroups of the septic mouse models (i.e., Staphylococcus aureus injection) correlated very well with most human studies. These findings support the applicability of targeted strategies to identify the optimal animal model and protocol to improve the success of translational research. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.
Schumann, Marcel; Armen, Roger S
2013-05-30
Molecular docking of small molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
A predictive machine learning approach for microstructure optimization and materials design
NASA Astrophysics Data System (ADS)
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok
2015-06-01
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
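A hedged sketch of the generic framework described above (random data generation, feature selection, classification) on synthetic data; the property model, feature count, and thresholds are placeholders and do not represent the Fe-Ga microstructure physics used in the paper.

```python
# Sketch of the generic "random data generation -> feature selection -> classification"
# loop, on synthetic data. The property model is a toy placeholder, not the Fe-Ga
# microstructure physics used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# 1) Random data generation: candidate "microstructures" as feature vectors.
X = rng.uniform(0.0, 1.0, size=(5000, 20))

# Toy property constraints standing in for elastic/plastic/magnetostrictive targets.
prop_a = X[:, 0] + 0.5 * X[:, 3] ** 2
prop_b = np.sin(np.pi * X[:, 7]) - 0.2 * X[:, 12]
y = ((prop_a > 0.9) & (prop_b > 0.4)).astype(int)   # 1 = satisfies design targets

# 2) Feature selection and 3) classification, wrapped in one pipeline.
clf = make_pipeline(SelectKBest(f_classif, k=8),
                    RandomForestClassifier(n_estimators=200, random_state=0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Use the trained classifier to screen a fresh random batch for promising designs.
candidates = rng.uniform(0.0, 1.0, size=(100000, 20))
promising = candidates[clf.predict(candidates) == 1]
print("candidates predicted to satisfy the property constraints:", len(promising))
```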
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.
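The following is a generic illustration of propagating signal uncertainty to reconstructed parameters through the Jacobian of a forward model (linearized least squares); the forward model, signal set, and noise levels are invented stand-ins, not the V3FIT equilibrium model.

```python
# Generic sketch of propagating measurement uncertainty to reconstructed
# parameters via the Jacobian of a forward model (linearized least squares).
# The forward model below is a toy stand-in, not the V3FIT equilibrium model.
import numpy as np

def forward_model(p, x):
    """Hypothetical signal model: two parameters -> predicted diagnostic signals."""
    a, b = p
    return a * np.exp(-b * x)

x = np.linspace(0.0, 1.0, 12)                     # diagnostic "channels"
p_true = np.array([1.5, 2.0])
sigma = 0.03 * np.ones_like(x)                    # per-signal measurement uncertainty

rng = np.random.default_rng(2)
y = forward_model(p_true, x) + rng.normal(0.0, sigma)

# Gauss-Newton fit on whitened residuals.
p = np.array([1.0, 1.0])
for _ in range(50):
    r = (y - forward_model(p, x)) / sigma
    J = np.column_stack([np.exp(-p[1] * x) / sigma,
                         -p[0] * x * np.exp(-p[1] * x) / sigma])
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]

# Propagated parameter covariance: (J^T J)^{-1} for whitened residuals.
cov_p = np.linalg.inv(J.T @ J)
print("reconstructed parameters:", p)
print("propagated 1-sigma uncertainties:", np.sqrt(np.diag(cov_p)))
```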
Lange, Karin; Ziegler, Ralph; Neu, Andreas; Reinehr, Thomas; Daab, Iris; Walz, Marion; Maraun, Michael; Schnell, Oliver; Kulzer, Bernhard; Reichel, Andreas; Heinemann, Lutz; Parkin, Christopher G; Haak, Thomas
2015-03-01
Use of continuous subcutaneous insulin infusion (CSII) therapy improves glycemic control, reduces hypoglycemia and increases treatment satisfaction in individuals with diabetes. As a number of patient- and clinician-related factors can hinder the effectiveness and optimal usage of CSII therapy, new approaches are needed to address these obstacles. Ceriello and colleagues recently proposed a model of care that incorporates the collaborative use of structured self-monitoring of blood glucose (SMBG) into a formal approach to personalized diabetes management within all diabetes populations. We adapted this model for use in CSII-treated patients in order to enable the implementation of a workflow structure that enhances patient-physician communication and supports patients' diabetes self-management skills. We recognize that time constraints and current reimbursement policies pose significant challenges to healthcare providers integrating the Personalised Diabetes Management (PDM) process into clinical practice. We believe, however, that the time invested in modifying practice workflow and learning to apply the various steps of the PDM process will be offset by improved workflow and more effective patient consultations. This article describes how to implement PDM into clinical practice as a systematic, standardized process that can optimize CSII therapy.
Optimization and purification of l-asparaginase from fungi: A systematic review.
Souza, Paula Monteiro; de Freitas, Marcela Medeiros; Cardoso, Samuel Leite; Pessoa, Adalberto; Guerra, Eliete Neves Silva; Magalhães, Pérola Oliveira
2017-12-01
The purpose of this systematic review was to identify the available literature on l-asparaginase-producing fungi. This study followed the Preferred Reporting Items for Systematic Reviews. The search was conducted on five databases: LILACS, PubMed, Science Direct, Scopus and Web of Science up until July 20th, 2016, with no time or language restrictions. The reference list of the included studies was crosschecked and a partial gray literature search was undertaken. The methodology of the selected studies was evaluated using GRADE. Asparaginase production, optimization using statistical design, purification and characterization were the main evaluated outcomes. Of the 1686 initially gathered studies, 19 met the inclusion criteria after a two-step selection process. Nine species of fungi were reported in the selected studies, out of which 13 studies optimized the medium composition using statistical design for enhanced asparaginase production and six reported purification and characterization of the enzyme. The genus Aspergillus was identified as a producer of asparaginase in both solid and submerged fermentation, and l-asparagine was the amino acid most used as the nitrogen source. This systematic review demonstrated that different fungi produce l-asparaginase, which has potential in leukemia treatment. However, further investigations are required to confirm the promising effect of these fungal enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.
Automatic design of fiber-reinforced soft actuators for trajectory matching
NASA Astrophysics Data System (ADS)
Connolly, Fionnuala; Walsh, Conor J.; Bertoldi, Katia
2017-01-01
Soft actuators are the components responsible for producing motion in soft robots. Although soft actuators have allowed for a variety of innovative applications, there is a need for design tools that can help to efficiently and systematically design actuators for particular functions. Mathematical modeling of soft actuators is an area that is still in its infancy but has the potential to provide quantitative insights into the response of the actuators. These insights can be used to guide actuator design, thus accelerating the design process. Here, we study fluid-powered fiber-reinforced actuators, because these have previously been shown to be capable of producing a wide range of motions. We present a design strategy that takes a kinematic trajectory as its input and uses analytical modeling based on nonlinear elasticity and optimization to identify the optimal design parameters for an actuator that will follow this trajectory upon pressurization. We experimentally verify our modeling approach, and finally we demonstrate how the strategy works, by designing actuators that replicate the motion of the index finger and thumb.
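A minimal sketch of the design strategy in spirit only: given a target trajectory, fit design parameters so that a placeholder analytical actuator model reproduces it upon pressurization. The bend-versus-pressure relation, parameter names, and bounds below are assumptions, not the nonlinear-elasticity model from the paper.

```python
# Sketch of the design strategy: given a target kinematic trajectory, search for
# actuator design parameters whose (placeholder) analytical model best reproduces it.
# The model below is a toy bend/pressure relation, not the nonlinear-elasticity
# model used in the paper.
import numpy as np
from scipy.optimize import least_squares

pressures = np.linspace(0.0, 100.0, 20)           # kPa, actuation input

def actuator_model(design, p):
    """Hypothetical model: fiber angle and wall thickness -> bend angle vs. pressure."""
    fiber_angle_deg, wall_thickness_mm = design
    gain = np.cos(np.radians(fiber_angle_deg)) / wall_thickness_mm
    return gain * p / (1.0 + 0.02 * wall_thickness_mm * p)   # saturating bend (deg)

# Target trajectory, e.g. digitized from index-finger motion (synthetic here).
target = actuator_model([30.0, 2.0], pressures)

def residuals(design):
    return actuator_model(design, pressures) - target

fit = least_squares(residuals, x0=[60.0, 1.0],
                    bounds=([0.0, 0.5], [85.0, 5.0]))
print("fitted fiber angle (deg), wall thickness (mm):", fit.x)
```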
Ataman, Meric
2017-01-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to different criteria and detail, which can compromise transferability of the findings and also integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability and gene essentiality. The development of these “consistently-reduced” models will help to clarify and facilitate integration of different experimental data to draw new understanding that is directly extendable to genome-scale models. PMID:28727725
A research model--forecasting incident rates from optimized safety program intervention strategies.
Iyer, P S; Haight, J M; Del Castillo, E; Tink, B W; Hawkins, P W
2005-01-01
INTRODUCTION/PROBLEM: Property damage incidents, workplace injuries, and the safety programs designed to prevent them are expensive aspects of doing business in contemporary industry. The National Safety Council (2002) estimated that workplace injuries cost $146.6 billion per year. Because companies are resource limited, optimizing intervention strategies to decrease incidents with less costly programs can contribute to improved productivity. Systematic data collection methods were employed and the forecasting ability of a time-lag relationship between interventions and incident rates was studied using various statistical methods (an intervention is not expected to have an immediate or an infinitely lasting effect on the incident rate). As a follow-up to the initial work, researchers developed two models designed to forecast incident rates. One is based on past incident rate performance and the other on the configuration and level of effort applied to the safety and health program. Researchers compared actual incident performance to the prediction capability of each model over 18 months in the forestry operations at an electricity distribution company and found the models to allow accurate prediction of incident rates. These models potentially have powerful implications as a business-planning tool for human resource allocation and for designing an optimized safety and health intervention program to minimize incidents. Depending on the mathematical relationship, one can determine what interventions, where and how much to apply them, and when to increase or reduce human resource input as determined by the forecasted performance.
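A toy sketch of the time-lag idea: regress incident rates on lagged intervention effort and forecast one step ahead. The data, lag structure, and coefficients are synthetic and purely illustrative; they are not the models or data from the forestry operations study.

```python
# Toy sketch of forecasting incident rates from lagged safety-intervention effort
# using a distributed-lag linear regression. Data and lag structure are synthetic.
import numpy as np

rng = np.random.default_rng(7)
months = 60
effort = rng.uniform(0.0, 1.0, size=months)               # program effort index

# Synthetic "true" process: interventions act with 2- and 3-month lags.
incident_rate = 5.0 - 1.5 * np.roll(effort, 2) - 1.0 * np.roll(effort, 3)
incident_rate += rng.normal(0.0, 0.2, size=months)

max_lag = 6
rows = np.arange(max_lag, months)
X = np.column_stack([np.ones(len(rows))] +
                    [effort[rows - lag] for lag in range(1, max_lag + 1)])
y = incident_rate[max_lag:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated lag weights (months 1..6):", np.round(coef[1:], 2))

# One-step-ahead forecast for the next month from the most recent effort levels.
recent = np.array([1.0] + [effort[months - lag] for lag in range(1, max_lag + 1)])
print("forecast incident rate next month: %.2f" % (recent @ coef))
```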
Optimization of bottom-hinged flap-type wave energy converter for a specific wave rose
NASA Astrophysics Data System (ADS)
Behzad, Hamed; Panahi, Roozbeh
2017-06-01
In this paper, we conducted a numerical analysis on the bottom-hinged flap-type Wave Energy Convertor (WEC). The basic model, implemented through the study using ANSYS-AQWA, has been validated by a three-dimensional physical model of a pitching vertical cylinder. Then, a systematic parametric assessment has been performed on stiffness, damping, and WEC direction against an incoming wave rose, resulting in an optimized flap-type WEC for a specific spot in the Persian Gulf. Here, stiffness is tuned to have a near-resonance condition considering the wave rose, while damping is modified to capture the highest energy for each device direction. Moreover, such sets of specifications have been checked at different directions to present the best combination of stiffness, damping, and device heading. It has been shown that for a real condition, including different wave heights, periods, and directions, it is very important to implement the methodology introduced here to guarantee device performance.
NASA Astrophysics Data System (ADS)
Kumar, S.; Kaushal, D. R.; Gosain, A. K.
2017-12-01
Urban hydrology will have an increasing role to play in the sustainability of human settlements. Expansion of urban areas brings significant changes in the physical characteristics of landuse. Problems with administration of urban flooding have their roots in the concentration of population within a relatively small area. As watersheds are urbanized, infiltration decreases and the pattern of surface runoff changes, generating high peak flows and large runoff volumes from urban areas. Conceptual rainfall-runoff models have become a foremost tool for predicting surface runoff and flood forecasting. Manual calibration is often time consuming and tedious because of the subjectivity involved, which makes an automatic approach more preferable. The calibration of parameters usually includes numerous criteria for evaluating performance with respect to the observed data. Moreover, derivation of the objective function associated with the calibration of model parameters is quite challenging. Various studies dealing with optimization methods have steered the embracement of evolution-based optimization algorithms. In this paper, a systematic comparison of two evolutionary approaches to multi-objective optimization, namely the shuffled frog leaping algorithm (SFLA) and genetic algorithms (GA), is done. SFLA is a population-based cooperative search metaphor inspired by natural memetics, while GA is based on the principle of survival of the fittest and natural evolution. SFLA and GA have been employed for optimizing the major parameters, i.e. width, imperviousness, Manning's coefficient and depression storage, for the highly urbanized catchment of Delhi, India. The study summarizes the auto-tuning of a widely used storm water management model (SWMM), by internal coupling of SWMM with SFLA and GA separately. The values of statistical parameters such as Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) were found to lie within the acceptable limits, indicating reasonably good model performance. Overall, this study proved promising for assessing risk in urban drainage systems and should prove useful for improving the integrity and reliability of the urban system and for guiding inundation preparedness.
Keywords: Hydrologic model, SWMM, Urbanization, SFLA and GA.
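A toy sketch of the auto-calibration loop using a simple genetic algorithm that maximizes Nash-Sutcliffe efficiency; a linear-reservoir toy model stands in for SWMM (no SWMM coupling or SFLA variant is shown), and the parameter names and ranges are purely illustrative.

```python
# Toy sketch of GA-based auto-calibration maximizing Nash-Sutcliffe efficiency (NSE).
# A simple linear-reservoir runoff model stands in for SWMM; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
rain = rng.gamma(shape=0.4, scale=5.0, size=200)          # synthetic rainfall series

def runoff_model(params, rain):
    """Placeholder model: runoff coefficient + linear-reservoir recession."""
    runoff_coeff, recession = params
    q, store = np.zeros_like(rain), 0.0
    for t, r in enumerate(rain):
        store += runoff_coeff * r
        q[t] = (1.0 - recession) * store
        store *= recession
    return q

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = runoff_model([0.45, 0.8], rain) + rng.normal(0.0, 0.1, size=rain.size)

lo, hi = np.array([0.05, 0.5]), np.array([0.95, 0.99])
pop = rng.uniform(lo, hi, size=(40, 2))                    # initial population

for generation in range(60):
    fitness = np.array([nse(obs, runoff_model(p, rain)) for p in pop])
    order = np.argsort(fitness)[::-1]
    parents = pop[order[:20]]                              # selection (truncation)
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)        # uniform crossover
        child += rng.normal(0.0, 0.02, size=2)             # mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmax([nse(obs, runoff_model(p, rain)) for p in pop])]
print("calibrated parameters:", best, "NSE:", nse(obs, runoff_model(best, rain)))
```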
Topographical optimization of structures for use in musical instruments and other applications
NASA Astrophysics Data System (ADS)
Kirkland, William Brandon
Mallet percussion instruments such as the xylophone, marimba, and vibraphone have been produced and tuned since their inception by arduously grinding the keys to achieve harmonic ratios between their 1st, 2nd, and 3rd transverse modes. In consideration of this, it would be preferable to have defined mathematical models such that the keys of these instruments can be produced quickly and reliably. Additionally, physical modeling of these keys or beams provides a useful application of non-uniform beam vibrations as studied by Euler-Bernoulli and Timoshenko beam theories. This thesis work presents a literature review of previous studies regarding mallet percussion instrument design and optimization of non-uniform keys. The progression of previous research from strictly mathematical approaches to finite element methods is shown, ultimately arriving at the most current optimization techniques used by other authors. However, previous research varies slightly in the relative degree of accuracy to which a non-uniform beam can be modeled. Typically, accuracies are shown in the literature as 1% to 2% error. While this seems attractive, musical tolerances require 0.25% error and beams are otherwise unsuitable. This research seeks to build on and add to the previous field research by optimizing beam topology and machining keys within tolerances such that no further tuning is required. The optimization methods relied on finite element analysis and used harmonic modal frequencies as constraints rather than arguments of an error function to be optimized. Instead, the beam mass was minimized while the modal frequency constraints were required to be satisfied within 0.25% tolerance. The final optimized and machined keys of an A4 vibraphone were shown to be accurate within the required musical tolerances, with strong resonance at the designed frequencies. The findings solidify a systematic method for designing musical structures for accuracy and repeatability upon manufacture.
Multi-objective optimization integrated with life cycle assessment for rainwater harvesting systems
NASA Astrophysics Data System (ADS)
Li, Yi; Huang, Youyi; Ye, Quanliang; Zhang, Wenlong; Meng, Fangang; Zhang, Shanxue
2018-03-01
The major limitation of optimization models applied previously to rainwater harvesting (RWH) systems is the lack of systematic evaluation of environmental and human health impacts across all lifecycle stages. This study integrated life cycle assessment (LCA) into a multi-objective optimization model to optimize the construction areas of green rooftops, porous pavements and green lands in Beijing, China, considering the trade-offs among 24 h-interval RWH volume (QR), stormwater runoff volume control ratio (R), economic cost (EC), and environmental impacts (EI). Eleven life cycle impact indicators were assessed with a functional unit of 10,000 m2 of RWH construction area. The LCA results showed that green lands had the smallest lifecycle impacts for all assessment indicators; in contrast, porous pavements showed the largest impact values except for Abiotic Depletion Potential (ADP) elements. Based on the standardization results, ADP fossil was chosen as the representative indicator for the calculation of the EI objective in the multi-objective optimization model because it had the largest value across the lifecycle of all RWH systems. The optimization results for QR, R, EC and EI were 238.80 million m3, 78.5%, 66.68 billion RMB Yuan, and 1.05E+16 MJ, respectively. After the construction of the optimal RWH system, 14.7% of annual domestic water consumption and 78.5% of maximum daily rainfall would be supplied and controlled in Beijing, respectively, which would make a great contribution to reducing the stress of water scarcity and waterlogging problems. Green lands are the first choice for RWH in Beijing on account of their rainwater harvesting capacity and lower environmental and human health impacts. Porous pavements played a good role in waterlogging alleviation (R of 67.5%); however, they were not allocated a large construction area in this study due to the huge ADP fossil across the lifecycle. Sensitivity analysis revealed the daily maximum precipitation to be the key factor for the robustness of the results for the construction of the three RWH systems in this study.
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Hu, Xiaosong; Teng, Yanqiong; Qian, Shide; Cheng, Rui
2017-09-01
A hybrid solar-battery power source is essential in the nexus of plug-in electric vehicles (PEVs), renewables, and smart buildings. This paper devises an optimization framework for efficient energy management and component sizing of a single smart home with a home battery, a PEV, and photovoltaic (PV) arrays. We seek to maximize the home economy while satisfying home power demand and PEV driving. Based on the structure and system models of the smart home nanogrid, a convex programming (CP) problem is formulated to rapidly and efficiently optimize both the control decisions and the parameters of the home battery energy storage system (BESS). Considering different optimization time horizons, home BESS prices, and types and control modes of PEVs, the parameters of the home BESS and the electricity cost are systematically investigated. Under the developed CP control law in home-to-vehicle (H2V) and vehicle-to-home (V2H) modes, the home with a BESS does not buy electric energy from the grid during peak electricity price periods.
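A minimal convex-programming sketch in the spirit of the formulation described: scheduling a home battery against time-of-use prices subject to demand and storage limits, using cvxpy. The price, demand, and PV profiles are synthetic assumptions, and the PEV/V2H coupling and sizing variables are omitted.

```python
# Minimal sketch of a convex program for home battery scheduling against
# time-of-use prices. Prices, demand, and PV profiles are synthetic; the PEV,
# V2H coupling, and component-sizing variables from the paper are omitted.
import cvxpy as cp
import numpy as np

T = 24                                                   # hourly horizon
hours = np.arange(T)
price = np.where((hours >= 17) & (hours <= 21), 0.30, 0.12)                 # $/kWh
demand = 1.0 + 0.8 * np.sin(np.pi * hours / 24.0)                           # kW
pv = np.clip(2.5 * np.sin(np.pi * (hours - 6) / 12.0), 0.0, None)           # kW

cap, p_max, eta = 10.0, 3.0, 0.95                        # kWh, kW, round-trip split

charge = cp.Variable(T, nonneg=True)
discharge = cp.Variable(T, nonneg=True)
grid = cp.Variable(T, nonneg=True)                       # power bought from grid
soc = cp.Variable(T + 1)

constraints = [soc[0] == 0.5 * cap, soc[T] >= 0.5 * cap,
               soc >= 0, soc <= cap,
               charge <= p_max, discharge <= p_max]
for t in range(T):
    constraints += [soc[t + 1] == soc[t] + eta * charge[t] - discharge[t] / eta,
                    grid[t] + pv[t] + discharge[t] >= demand[t] + charge[t]]

problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(price, grid))), constraints)
problem.solve()
print("daily electricity cost: $%.2f" % problem.value)
print("battery discharge during peak hours (kW):", np.round(discharge.value[17:22], 2))
```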
Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon
2011-04-04
A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.
Identification of terrain cover using the optimum polarimetric classifier
NASA Technical Reports Server (NTRS)
Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.
1988-01-01
A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.
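A hedged sketch of a quadratic-distance (zero-mean complex Gaussian) classifier for polarimetric feature vectors [HH, HV, VV] on synthetic data; the class covariance matrices are invented stand-ins rather than those computed from random-medium scattering models, and the class names are illustrative.

```python
# Sketch of a quadratic-distance (zero-mean complex Gaussian) classifier for
# polarimetric feature vectors [HH, HV, VV]. The class covariance matrices are
# synthetic stand-ins, not those computed from random-medium scattering models.
import numpy as np

rng = np.random.default_rng(4)

def random_hpd(d, scale):
    """Generate a synthetic Hermitian positive-definite covariance matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return scale * (A @ A.conj().T + d * np.eye(d))

covs = {"trees": random_hpd(3, 1.0),
        "grass": random_hpd(3, 0.3)}

def sample(cov, n):
    """Draw zero-mean circular complex Gaussian samples with covariance cov."""
    L = np.linalg.cholesky(cov)
    z = (rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3))) / np.sqrt(2.0)
    return z @ L.conj().T

def quadratic_distance(x, cov):
    """d(x) = x^H C^{-1} x + ln|C| for a zero-mean complex Gaussian class."""
    return np.real(x.conj() @ np.linalg.solve(cov, x)) + np.log(np.linalg.det(cov)).real

X = np.vstack([sample(covs["trees"], 500), sample(covs["grass"], 500)])
labels = np.array(["trees"] * 500 + ["grass"] * 500)

predictions = [min(covs, key=lambda k: quadratic_distance(x, covs[k])) for x in X]
accuracy = np.mean(np.array(predictions) == labels)
print("classification accuracy with the full polarimetric feature vector:", accuracy)
```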
Capsule Performance Optimization in the National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O L; MacGowan, B J; Haan, S W
2009-10-13
A capsule performance optimization campaign will be conducted at the National Ignition Facility to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
Bavorova, Miroslava; Imamverdiyev, Nizami; Ponkina, Elena
2018-01-01
In the agricultural Altai Krai in Russian Siberia, soil degradation problems are prevalent. Agronomists recommend "reduced tillage systems," especially no-till, as a sustainable way to cultivate land that is threatened by soil degradation. In the Altai Krai, however, little is known about which tillage technologies are used in practice. In this paper, we provide information on the plant cultivation technologies used in the Altai Krai and on selected factors preventing farm managers in this region from adopting no-till technology, based on our own quantitative survey conducted across 107 farms in 2015 and 2016. The results of the quantitative survey show that farm managers face high uncertainty regarding the use of no-till technology, including its economics. To close this gap, we provide a systematic analysis of factors influencing the economics of the plant production systems by using a farm optimization model (linear programming) for a real farm, together with expert estimations. The farm-specific results of the optimization model show that under optimal management and climatic conditions, the expert Modern Canadian no-till technology outperforms the farm's min-till technology, but this is not the case under suboptimal conditions with lower yields.
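A minimal sketch of a farm-level linear program of the kind referred to above: allocating land between two tillage systems to maximize gross margin under land and labor limits. All coefficients are illustrative assumptions, not the Altai Krai survey data or expert estimates.

```python
# Minimal sketch of a farm-level linear program: allocate land between two tillage
# systems to maximize gross margin subject to land and labor limits. All coefficients
# are illustrative, not the Altai Krai survey or expert estimates.
from scipy.optimize import linprog

# Decision variables: hectares under min-till wheat, hectares under no-till wheat.
gross_margin = [120.0, 140.0]           # $/ha (illustrative)
c = [-g for g in gross_margin]          # linprog minimizes, so negate

A_ub = [[1.0, 1.0],                     # total land used (ha)
        [1.2, 0.8]]                     # labor hours required per ha
b_ub = [1000.0, 1050.0]                 # available land (ha), available labor (h)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal area (min-till, no-till):", res.x)
print("maximum gross margin: $%.0f" % -res.fun)
```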
Optimal dynamic control of invasions: applying a systematic conservation approach.
Adams, Vanessa M; Setterfield, Samantha A
2015-06-01
The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when there is a constrained budget. Under a constrained budget, applying the most efficient optimization scenario (control, minloss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited it is more efficient to fund eradication efforts and reduces spread by ~65% compared to no control.
NASA Astrophysics Data System (ADS)
Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.
2016-12-01
Modeling the charge-generation process is highly important to understand device physics and optimize power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with the state-of-the-art high-performance organic solar cells. It is demonstrated that the increase of exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process, and thus, significantly improves the device efficiency, especially for the short-circuit current. By changing the exciton delocalization ratio, OSC performances are comprehensively investigated under the conditions of short-circuit and open-circuit. Particularly, bulk recombination dependent fill factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important to understand and optimize the high-performance OSCs.
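For reference, a generic one-dimensional drift-diffusion system for a bulk-heterojunction solar cell, of the kind such modified models build on, can be written as below; the notation is an assumption rather than the paper's, and the exciton-delocalization modification itself is not shown.

```latex
% Generic 1-D drift-diffusion system for an organic solar cell (notation assumed):
% Poisson equation, carrier continuity, drift-diffusion currents, exciton balance.
\begin{align}
  \frac{\partial^2 \psi}{\partial x^2} &= \frac{q}{\varepsilon}\,(n - p), \qquad E = -\frac{\partial \psi}{\partial x}, \\
  \frac{\partial n}{\partial t} &= \frac{1}{q}\frac{\partial J_n}{\partial x} + G - R, \qquad
  \frac{\partial p}{\partial t} = -\frac{1}{q}\frac{\partial J_p}{\partial x} + G - R, \\
  J_n &= q\,\mu_n n E + q D_n \frac{\partial n}{\partial x}, \qquad
  J_p = q\,\mu_p p E - q D_p \frac{\partial p}{\partial x}, \\
  \frac{\partial X}{\partial t} &= D_X \frac{\partial^2 X}{\partial x^2}
    + G_{\mathrm{photo}} - \frac{X}{\tau_X} - k_d X, \qquad G = k_d X,
\end{align}
% where X is the exciton density, k_d the interfacial dissociation rate, and G the
% free-carrier generation rate; a delocalization term would add a prompt fraction to G.
```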
Barlow, Andrew; Klima, Matej; Shashkov, Mikhail
2018-04-02
In hydrocodes, voids are used to represent vacuum and model free boundaries between vacuum and real materials. We give a systematic description of a new treatment of void closure in the framework of the multimaterial arbitrary Lagrangian–Eulerian (ALE) methods. This includes a new formulation of the interface-aware sub-scale-dynamics (IA-SSD) closure model for multimaterial cells with voids, which is used in the Lagrangian stage of our indirect ALE scheme. The results of the comprehensive testing of the new model are presented for one- and two-dimensional multimaterial calculations in the presence of voids. Finally, we also present a sneak peek of amore » realistic shaped charge calculation in the presence of voids and solids.« less
Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon
2012-09-01
A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors x₁ and x₂: one was a formulation factor (the amount of magnesium stearate) and the other was a processing factor (mixing time), respectively. Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate an effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g, 2.76 min (mixing time) for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g, 7.99 min for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence to obtain optimum formulations allowing for a systematic and reliable experimental design method.
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
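A compact sketch of ordinary kriging point prediction with an assumed exponential variogram on synthetic 2-D data; this is a generic illustration of the OK predictor and kriging variance, not the KED models with height-above-stream and distance-to-stream covariates reported in the study.

```python
# Compact sketch of ordinary kriging (OK) point prediction with an assumed
# exponential variogram, on synthetic 2-D data. Generic illustration only,
# not the KED models with topographic covariates reported in the study.
import numpy as np

rng = np.random.default_rng(5)

def variogram(h, nugget=0.1, sill=1.0, corr_range=50.0):
    """Assumed exponential variogram model (distances in meters)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_range))

# Synthetic sample locations and mean maximum air temperature (Tair) values.
xy = rng.uniform(0.0, 200.0, size=(30, 2))
temp = 20.0 + 0.02 * xy[:, 0] + rng.normal(0.0, 0.5, size=30)

def ordinary_kriging(x0, xy, z):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    G = variogram(d)
    np.fill_diagonal(G, 0.0)              # semivariance is zero at zero lag
    A = np.ones((n + 1, n + 1))           # OK system with Lagrange multiplier
    A[:n, :n] = G
    A[n, n] = 0.0
    b = np.append(variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    prediction = w[:n] @ z
    kriging_variance = w @ b              # includes the Lagrange multiplier term
    return prediction, kriging_variance

pred, kvar = ordinary_kriging(np.array([100.0, 100.0]), xy, temp)
print("predicted Tair: %.2f C, kriging variance: %.3f" % (pred, kvar))
```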
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Machine Learning and Neurosurgical Outcome Prediction: A Systematic Review.
Senders, Joeky T; Staples, Patrick C; Karhade, Aditya V; Zaki, Mark M; Gormley, William B; Broekman, Marike L D; Smith, Timothy R; Arnaout, Omar
2018-01-01
Accurate measurement of surgical outcomes is highly desirable to optimize surgical decision-making. An important element of surgical decision making is identification of the patient cohort that will benefit from surgery before the intervention. Machine learning (ML) enables computers to learn from previous data to make accurate predictions on new data. In this systematic review, we evaluate the potential of ML for neurosurgical outcome prediction. A systematic search in the PubMed and Embase databases was performed to identify all potential relevant studies up to January 1, 2017. Thirty studies were identified that evaluated ML algorithms used as prediction models for survival, recurrence, symptom improvement, and adverse events in patients undergoing surgery for epilepsy, brain tumor, spinal lesions, neurovascular disease, movement disorders, traumatic brain injury, and hydrocephalus. Depending on the specific prediction task evaluated and the type of input features included, ML models predicted outcomes after neurosurgery with a median accuracy and area under the receiver operating curve of 94.5% and 0.83, respectively. Compared with logistic regression, ML models performed significantly better and showed a median absolute improvement in accuracy and area under the receiver operating curve of 15% and 0.06, respectively. Some studies also demonstrated a better performance in ML models compared with established prognostic indices and clinical experts. In the research setting, ML has been studied extensively, demonstrating an excellent performance in outcome prediction for a wide range of neurosurgical conditions. However, future studies should investigate how ML can be implemented as a practical tool supporting neurosurgical care. Copyright © 2017 Elsevier Inc. All rights reserved.
Psychological First Aid: A Consensus-Derived, Empirically Supported, Competency-Based Training Model
McCabe, O Lee; Everly, George S; Brown, Lisa M; Wendelboe, Aaron M; Abd Hamid, Nor Hashidah; Tallchief, Vicki L; Links, Jonathan M
2014-04-01
Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities.
Modeling U-shaped dose-response curves for manganese using categorical regression.
Milton, Brittany; Krewski, Daniel; Mattison, Donald R; Karyakina, Nataliya A; Ramoju, Siva; Shilnikova, Natalia; Birkett, Nicholas; Farrell, Patrick J; McGough, Doreen
2017-01-01
Manganese is an essential nutrient which can cause adverse effects if ingested to excess or in insufficient amounts, leading to a U-shaped exposure-response relationship. Methods have recently been developed to describe such relationships by simultaneously modeling the exposure-response curves for excess and deficiency. These methods incorporate information from studies with diverse adverse health outcomes within the same analysis by assigning severity scores to achieve a common response metric for exposure-response modeling. We aimed to provide an estimate of the optimal dietary intake of manganese to balance adverse effects from deficient or excess intake. We undertook a systematic review of the literature from 1930 to 2013 and extracted information on adverse effects from manganese deficiency and excess to create a database on manganese toxicity following oral exposure. Although data were available for seven different species, only the data from rats were sufficiently comprehensive to support analytical modelling. The toxicological outcomes were standardized on an 18-point severity scale, allowing for a common analysis of all available toxicological data. Logistic regression modelling was used to simultaneously estimate the exposure-response profiles for dietary deficiency and excess of manganese and generate a U-shaped exposure-response curve for all outcomes. Data were available on the adverse effects observed in 6113 rats. The nadir of the U-shaped joint response curve occurred at a manganese intake of 2.70 mg/kg bw/day with a 95% confidence interval of 2.51-3.02. The extremes of both deficient and excess intake were associated with a 90% probability of some measurable adverse event. The manganese database supports estimation of optimal intake based on combining information on adverse effects from a systematic review of published experiments. There is a need for more studies in humans. Translation of our results from rats to humans will require adjustment for interspecies differences in sensitivity to manganese. Copyright © 2016 Elsevier B.V. All rights reserved.
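An illustrative sketch of the U-shaped construction: combine a decreasing (deficiency) and an increasing (excess) logistic exposure-response curve and locate the nadir of the joint risk. All parameter values below are invented for illustration and are not the fitted manganese results.

```python
# Illustrative sketch: combine a decreasing (deficiency) and an increasing (excess)
# logistic exposure-response curve and locate the nadir of the joint risk.
# Parameter values are invented for illustration, not the fitted manganese results.
import numpy as np
from scipy.optimize import minimize_scalar

def logistic(x, intercept, slope):
    return 1.0 / (1.0 + np.exp(-(intercept + slope * x)))

def p_deficiency(dose_log10):
    return logistic(dose_log10, intercept=-1.0, slope=-4.0)   # risk falls with dose

def p_excess(dose_log10):
    return logistic(dose_log10, intercept=-8.0, slope=5.0)    # risk rises with dose

def joint_risk(dose_log10):
    """P(adverse effect from deficiency OR excess), assuming independence."""
    pd_, pe = p_deficiency(dose_log10), p_excess(dose_log10)
    return pd_ + pe - pd_ * pe

# Find the nadir of the U-shaped curve over a plausible log10(dose) range.
res = minimize_scalar(joint_risk, bounds=(-1.0, 3.0), method="bounded")
print("optimal log10(intake): %.2f, joint risk at nadir: %.3f"
      % (res.x, joint_risk(res.x)))
```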
Optimal monetary policy and oil price shocks
NASA Astrophysics Data System (ADS)
Kormilitsina, Anna
This dissertation is comprised of two chapters. In the first chapter, I investigate the role of systematic U.S. monetary policy in the presence of oil price shocks. The second chapter is devoted to studying different approaches to modeling energy demand. In an influential paper, Bernanke, Gertler, and Watson (1997) and (2004) argue that systematic monetary policy exacerbated the recessions the U.S. economy experienced in the aftermath of post-World War II oil price shocks. In the first chapter of this dissertation, I critically evaluate this claim in the context of an estimated medium-scale model of the U.S. business cycle. Specifically, I solve for the Ramsey optimal monetary policy in the medium-scale dynamic stochastic general equilibrium model (henceforth DSGE) of Schmitt-Grohe and Uribe (2005). To model the demand for oil, I use the approach of Finn (2000). According to this approach, the utilization of capital services requires oil usage. In the related literature on the macroeconomic effects of oil price shocks, it is common to calibrate the structural parameters of the model. In contrast to this literature, I estimate the parameters of my DSGE model. The estimation strategy involves matching the impulse responses from the theoretical model to the responses predicted by an empirical model. For estimation, I use the Laplace-type estimator proposed by Chernozhukov and Hong (2003) as an alternative to classical extremum estimation. To obtain the empirical impulse responses, I identify an oil price shock in a structural VAR (SVAR) model of the U.S. business cycle. The SVAR model predicts that, in response to an oil price increase, GDP, investment, hours, capital utilization, and the real wage fall, while the nominal interest rate and inflation rise. These findings are economically intuitive and in line with the existing empirical evidence. Comparing the actual and the Ramsey optimal monetary policy responses to an oil price shock, I find that the optimal policy allows for more inflation, a larger drop in wages, and a rise in hours compared to those actually observed. The central finding of this chapter is that the optimal policy is associated with a smaller drop in GDP and other macroeconomic variables. The latter results therefore confirm the claim of Bernanke, Gertler, and Watson that monetary policy was to a large extent responsible for the recessions that followed the oil price shocks. However, under the optimal policy, interest rates are tightened even more than what is predicted by the empirical model. This result contrasts sharply with the claim of Bernanke, Gertler, and Watson that the Federal Reserve exacerbated recessions by excessively tightening interest rates in response to oil price increases. In contrast to related studies that focus on output stabilization, I find that eliminating the negative response of GDP to an oil price shock is not desirable. In the second chapter of this dissertation, I compare two approaches to modeling the energy sector. Because the share of energy in GDP is small, models of energy have been criticized for their inability to explain the sizeable effects of energy price increases on economic activity. I find that if the price of energy is an exogenous AR(1) process, then the two modeling approaches produce GDP responses similar in size to those observed in most empirical studies, but fail to reproduce the timing and the shape of the response.
The DSGE framework can capture the timing and the shape of the impulse responses but fails to replicate their size. Thus, in DSGE frameworks, amplifying mechanisms for the effect of the energy price shock and estimation-based calibration of model parameters are needed to reproduce the size of the GDP response to the energy price shock.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Kim, Byoungsu; Takechi, Kensuke; Ma, Sichao; Verma, Sumit; Fu, Shiqi; Desai, Amit; Pawate, Ashtamurthy S; Mizuno, Fuminori; Kenis, Paul J A
2017-09-22
A primary Li-air battery has been developed with a flowing Li-ion free ionic liquid as the recyclable electrolyte, boosting power capability by promoting superoxide diffusion and enhancing discharge capacity through separately stored discharge products. Experimental and computational tools are used to analyze the cathode properties, leading to a set of parameters that improve the discharge current density of the non-aqueous Li-air flow battery. The structure and configuration of the cathode gas diffusion layers (GDLs) are systematically modified by using different levels of hot pressing and the presence or absence of a microporous layer (MPL). These experiments reveal that the use of thinner but denser MPLs is key for performance optimization; indeed, this leads to an improvement in discharge current density. Also, computational results indicate that the extent of electrolyte immersion and porosity of the cathode can be optimized to achieve higher current density. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fundamental Principles of Tremor Propagation in the Upper Limb.
Davidson, Andrew D; Charles, Steven K
2017-04-01
Although tremor is the most common movement disorder, there exist few effective tremor-suppressing devices, in part because the characteristics of tremor throughout the upper limb are unknown. More specifically, optimally suppressing tremor requires knowledge of the mechanical origin, propagation, and distribution of tremor throughout the upper limb. Here we present the first systematic investigation of how tremor propagates between the shoulder, elbow, forearm, and wrist. We simulated tremor propagation using a linear, time-invariant, lumped-parameter model relating joint torques and the resulting joint displacements. The model focused on the seven main degrees of freedom from the shoulder to the wrist and included coupled joint inertia, damping, and stiffness. We deliberately implemented a simple model to focus first on the most basic effects. Simulating tremorogenic joint torque as a sinusoidal input, we used the model to establish fundamental principles describing how input parameters (torque location and frequency) and joint impedance (inertia, damping, and stiffness) affect tremor propagation. We expect that the methods and principles presented here will serve as the groundwork for future refining studies to understand the origin, propagation, and distribution of tremor throughout the upper limb in order to enable the future development of optimal tremor-suppressing devices.
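For readers who want to experiment with the propagation idea, the following Python sketch evaluates the steady-state response of a generic lumped-parameter chain to a sinusoidal joint torque; the 3-DOF inertia, damping, and stiffness values are illustrative placeholders, not the seven-DOF parameters used in the study.

import numpy as np

# Toy 3-DOF model: M q'' + B q' + K q = tau, driven at a single tremor frequency.
M = np.diag([0.30, 0.15, 0.05])                 # inertia (kg m^2)
B = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.5, 0.1],
              [0.0, 0.1, 0.3]])                 # damping (N m s/rad)
K = np.array([[12.0, 2.0, 0.0],
              [2.0,  8.0, 1.0],
              [0.0,  1.0, 4.0]])                # stiffness (N m/rad)

f = 5.0                                         # tremor frequency (Hz)
w = 2.0 * np.pi * f
tau = np.array([0.1, 0.0, 0.0])                 # sinusoidal torque amplitude at joint 1

# Steady-state displacement amplitudes: q(w) = (K + i w B - w^2 M)^(-1) tau
Z = K + 1j * w * B - (w ** 2) * M
q = np.linalg.solve(Z, tau)
print("joint displacement amplitudes (rad):", np.abs(q))
print("propagation to joints 2 and 3 relative to joint 1:", np.abs(q[1:]) / np.abs(q[0]))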
Kasap, Bahadir; van Opstal, A John
2017-08-01
Single-unit recordings suggest that the midbrain superior colliculus (SC) acts as an optimal controller for saccadic gaze shifts. The SC is proposed to be the site within the visuomotor system where the nonlinear spatial-to-temporal transformation is carried out: the population encodes the intended saccade vector by its location in the motor map (spatial), and its trajectory and velocity by the distribution of firing rates (temporal). The neurons' burst profiles vary systematically with their anatomical positions and intended saccade vectors, to account for the nonlinear main-sequence kinematics of saccades. Yet, the underlying collicular mechanisms that could result in these firing patterns are inaccessible to current neurobiological techniques. Here, we propose a simple spiking neural network model that reproduces the spike trains of saccade-related cells in the intermediate and deep SC layers during saccades. The model assumes that SC neurons have distinct biophysical properties for spike generation that depend on their anatomical position in combination with a center-surround lateral connectivity. Both factors are needed to account for the observed firing patterns. Our model offers a basis for neuronal algorithms for spatiotemporal transformations and bio-inspired optimal controllers.
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
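The core comparison can be mimicked with a few lines of Python: rank cells by predicted density or by occurrence probability and count the fraction of individuals captured in the top-priority fraction of the landscape. The synthetic cell values below are invented and stand in for the study's bird models and the Zonation algorithm.

import numpy as np

rng = np.random.default_rng(1)
density = rng.gamma(shape=2.0, scale=1.0, size=1000)                 # individuals per cell
occurrence = 1.0 - np.exp(-0.5 * density * rng.uniform(0.5, 1.5, size=1000))

def individuals_protected(score, top_fraction):
    """Fraction of all individuals falling inside the highest-ranked cells."""
    n_top = int(top_fraction * len(score))
    top = np.argsort(score)[::-1][:n_top]
    return density[top].sum() / density.sum()

for frac in (0.1, 0.2, 0.4):
    print(frac,
          "density-ranked:", round(individuals_protected(density, frac), 3),
          "occurrence-ranked:", round(individuals_protected(occurrence, frac), 3))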
Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchheit, Thomas E.; Wilcox, Ian Zachary; Sandoval, Andrew J
This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations from this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experimental setups for generating an optimal dataset for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
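The flavor of such an extraction-plus-optimization step can be illustrated with a much simpler, hypothetical example: fitting a three-parameter forward-bias diode model to synthetic I-V data by nonlinear least squares. This is only a sketch of the idea; the report's 9-parameter Zener model and the Xyce-Dakota workflow are not reproduced here.

import numpy as np
from scipy.special import lambertw
from scipy.optimize import least_squares

VT = 0.02585  # thermal voltage at ~300 K (V)

def diode_current(v, i_s, n, r_s):
    """Explicit Lambert-W solution of i = Is*(exp((v - i*Rs)/(n*Vt)) - 1)."""
    a = n * VT
    w = lambertw((i_s * r_s / a) * np.exp((v + i_s * r_s) / a)).real
    return (a / r_s) * w - i_s

# Synthetic forward-bias "measurements" (stand-ins for real device data).
rng = np.random.default_rng(0)
v_meas = np.linspace(0.3, 0.7, 25)
i_meas = diode_current(v_meas, 1e-12, 1.05, 2.0) * (1 + 0.02 * rng.normal(size=25))

def residuals(p):
    i_s, n, r_s = p
    return np.log(diode_current(v_meas, i_s, n, r_s)) - np.log(i_meas)

fit = least_squares(residuals, x0=[1e-10, 1.5, 1.0],
                    bounds=([1e-16, 0.8, 1e-3], [1e-6, 3.0, 50.0]))
print("extracted Is, n, Rs:", fit.x)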
A design space exploration for control of Critical Quality Attributes of mAb.
Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu
2016-10-15
A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual models of CQA aligning the description. The R(2), Q(2), Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and accurate DSps were created for each scenario, establishing critical functional relationship between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.
System-level modeling of acetone-butanol-ethanol fermentation.
Liao, Chen; Seo, Seung-Oh; Lu, Ting
2016-05-01
Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next, we survey modeling efforts, including early simple models, models with a systematic metabolic description, and those incorporating simple gene regulation alongside metabolism. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Perioperative feedback in surgical training: A systematic review.
McKendy, Katherine M; Watanabe, Yusuke; Lee, Lawrence; Bilgic, Elif; Enani, Ghada; Feldman, Liane S; Fried, Gerald M; Vassiliou, Melina C
2017-07-01
Changes in surgical training have raised concerns about residents' operative exposure and preparedness for independent practice. One way of addressing this concern is by optimizing teaching and feedback in the operating room (OR). The objective of this study was to perform a systematic review on perioperative teaching and feedback. A systematic literature search identified articles from 1994 to 2014 that addressed teaching, feedback, guidance, or debriefing in the perioperative period. Data were extracted according to ENTREQ guidelines, and a qualitative analysis was performed. Thematic analysis of the 26 included studies identified four major topics. Observation of teaching behaviors in the OR described current teaching practices. Identification of effective teaching strategies analyzed teaching behaviors, differentiating positive and negative teaching strategies. Perceptions of teaching behaviors described resident and attending satisfaction with teaching in the OR. Finally, models for delivering structured feedback cited examples of feedback strategies and measured their effectiveness. This study provides an overview of perioperative teaching and feedback for surgical trainees and identifies a need for improved quality and quantity of structured feedback. Copyright © 2016 Elsevier Inc. All rights reserved.
Global dietary calcium intake among adults: a systematic review
USDA-ARS?s Scientific Manuscript database
Purpose: Low calcium intake may adversely affect bone health in adults. Recognizing the presence of low calcium intake is necessary to develop national strategies to optimize intake. To highlight regions where calcium intake should be improved, we systematically searched for the most representative ...
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
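A minimal sketch of the underlying objective, assuming synthetic two-class trials rather than the motor imagery data set used in the paper: the spatial filter w is chosen to maximize Fisher's ratio of the log-variance features it produces.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_tr, n_ch, n_s = 40, 8, 200
mix1, mix2 = np.eye(n_ch), np.eye(n_ch)
mix1[0, 0], mix2[0, 0] = 2.0, 0.5            # the two classes differ in one source's variance
X1 = np.array([mix1 @ rng.normal(size=(n_ch, n_s)) for _ in range(n_tr)])
X2 = np.array([mix2 @ rng.normal(size=(n_ch, n_s)) for _ in range(n_tr)])

def features(w, trials):
    # log-variance of the spatially filtered signal, one feature per trial
    return np.log(np.array([w @ x @ x.T @ w / n_s for x in trials]))

def neg_fisher_ratio(w):
    f1, f2 = features(w, X1), features(w, X2)
    return -((f1.mean() - f2.mean()) ** 2) / (f1.var() + f2.var())

res = minimize(neg_fisher_ratio, x0=rng.normal(size=n_ch), method="Nelder-Mead")
w_opt = res.x / np.linalg.norm(res.x)
print("Fisher ratio of the optimized filter:", -res.fun)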
Kasper, Sigrid M; Dueholm, Margit; Marinovskij, Edvard; Blaakær, Jan
2017-03-01
To analyze the ability of magnetic resonance imaging (MRI) and systematic evaluation at surgery to predict optimal cytoreduction in primary advanced ovarian cancer, and to develop a preoperative scoring system for cancer staging. Preoperative MRI and standard laparotomy were performed in 99 women with either ovarian or primary peritoneal cancer. Using univariate and multivariate logistic regression analysis of a systematic description of the tumor in nine abdominal compartments obtained by MRI and during surgery, plus clinical parameters, a scoring system was designed that predicted non-optimal cytoreduction. Non-optimal cytoreduction at operation was predicted by the following: (A) presence of comorbidities (ASA group 3 or 4); (B) tumor presence in multiple compartments; and (C) the number of specified sites of organ involvement. The score includes: number of compartments involved (1-9 points); >1 subdiaphragmatic location with presence of tumor (1 point); deep organ involvement of liver (1 point), porta hepatis (1 point), spleen (1 point), mesentery/vessel (1 point), cecum/ileocecal (1 point), rectum/vessels (1 point); and ASA group 3 or 4 (2 points). Use of the scoring system based on operative findings gave an area under the curve (AUC) of 91% (85-98%) for patients in whom optimal cytoreduction could not be achieved. The score AUC obtained by MRI was 84% (76-92%), and 43% of non-optimal cytoreduction patients were identified, with only 8% of potentially operable patients falsely classified as non-optimal cytoreduction at the most optimal cut-off value. Tumor in individual locations did not predict operability. This systematic scoring system based on operative findings and MRI may predict non-optimal cytoreduction. MRI is able to assess ovarian cancer with peritoneal carcinomatosis with satisfactory concordance with laparotomic findings. This scoring system could be useful as a clinical guideline and should be evaluated and developed further in larger studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Assessment of regional management strategies for controlling seawater intrusion
Reichard, E.G.; Johnson, T.A.
2005-01-01
Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematic varying of the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.
NASA Astrophysics Data System (ADS)
Cheng, Xi; He, Li; Lu, Hongwei; Chen, Yizhong; Ren, Lixia
2016-09-01
A major concern associated with current shale-gas extraction is its high consumption of water resources. However, decision-making problems regarding water consumption and shale-gas extraction have not yet been solved through systematic approaches. This study develops a new bilevel optimization problem based on goals at two different levels: minimization of water demands at the lower level and maximization of system benefit at the upper level. The model is used to solve a real-world case across Pennsylvania and West Virginia. Results show that surface water would be the largest contributor to gas production (over 80.00% from 2015 to 2030) and that groundwater accounts for the smallest proportion (less than 2.00% from 2015 to 2030) in both districts over the planning span. Comparative analysis between the proposed model and conventional single-level models indicates that the bilevel model could provide coordinated schemes to comprehensively attain the goals of both water resources authorities and energy sectors. Sensitivity analysis shows that a change in the water use per unit of gas production (WU) has significant effects on system benefit, gas production and pollutant (i.e., barium, chloride and bromide) discharge, but does not significantly change water demands.
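A toy version of the bilevel structure, with invented numbers and a deliberately simplified water-allocation subproblem, can be written as a nested optimization: the lower level allocates water among surface, reclaimed, and ground sources for a given production level, and the upper level searches over production to maximize net benefit.

import numpy as np
from scipy.optimize import linprog, minimize_scalar

WU = 0.02                                 # water use per unit gas production (placeholder)
price, op_cost = 5.0, 2.0                 # revenue and operating cost per unit gas
capacity = np.array([2.0, 0.5, 0.1])      # source capacities: surface, reclaimed, ground
weights = np.array([1.0, 0.8, 3.0])       # lower-level withdrawal weights per source

def lower_level(water_demand):
    """Minimize weighted water withdrawal subject to meeting the demand."""
    return linprog(c=weights,
                   A_eq=[[1.0, 1.0, 1.0]], b_eq=[water_demand],
                   bounds=list(zip([0.0, 0.0, 0.0], capacity)), method="highs")

def neg_upper_objective(production):
    water = lower_level(WU * production)
    if not water.success:                 # infeasible water supply -> reject this choice
        return np.inf
    return -(price * production - op_cost * production - water.fun)

best = minimize_scalar(neg_upper_objective, bounds=(0.0, 130.0), method="bounded")
alloc = lower_level(WU * best.x)
print("production:", round(best.x, 2), "water allocation:", np.round(alloc.x, 3))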
Boelaert, Marleen; Matlashewski, Greg; Mondal, Dinesh; Arana, Byron; Kroeger, Axel; Olliaro, Piero
2016-01-01
Background: As Bangladesh, India and Nepal progress towards visceral leishmaniasis (VL) elimination, it is important to understand the role of asymptomatic Leishmania infection (ALI), VL treatment relapse and post kala-azar dermal leishmaniasis (PKDL) in transmission. Methodology/Principal Findings: We reviewed evidence systematically on ALI, relapse and PKDL. We searched multiple databases to include studies on burden, risk factors, biomarkers, natural history, and infectiveness of ALI, PKDL and relapse. After screening 292 papers, 98 were included covering the years 1942 through 2016. ALI, PKDL and relapse studies lacked a reference standard and an appropriate biomarker. The prevalence of ALI was 4–17-fold that of VL. The risk of ALI was higher in VL case contacts. Most infections remained asymptomatic or resolved spontaneously. The proportion of ALI that progressed to VL disease within a year was 1.5–23%, and was higher amongst those with high antibody titres. The natural history of PKDL showed variability; 3.8–28.6% had no past history of VL treatment. The infectiveness of PKDL was 32–53%. The risk of VL relapse was higher with HIV co-infection. Modelling studies predicted a range of scenarios. One model predicted VL elimination was unlikely in the long term with early diagnosis. Another model estimated that ALI contributed to 82% of the overall transmission, VL to 10% and PKDL to 8%. Another model predicted that VL cases were the main driver of transmission. Different models predicted VL elimination if the sandfly density was reduced by 67% by killing the sandflies or by 79% by reducing their breeding sites, or with 4–6 years of optimal IRS or 10 years of sub-optimal IRS, and only in low-endemicity settings. Conclusions/Significance: There is a need for xenodiagnostic and longitudinal studies to understand the potential of ALI and PKDL as reservoirs of infection. PMID:27490264
Toward the First Data Acquisition Standard in Synthetic Biology.
Sainz de Murieta, Iñaki; Bultelle, Matthieu; Kitney, Richard I
2016-08-19
This paper describes the development of a new data acquisition standard for synthetic biology. This comprises the creation of a methodology that is designed to capture all the data, metadata, and protocol information associated with biopart characterization experiments. The new standard, called DICOM-SB, is based on the highly successful Digital Imaging and Communications in Medicine (DICOM) standard in medicine. A data model is described which has been specifically developed for synthetic biology. The model is a modular, extensible data model for the experimental process, which can optimize data storage for large amounts of data. DICOM-SB also includes services orientated toward the automatic exchange of data and information between modalities and repositories. DICOM-SB has been developed in the context of systematic design in synthetic biology, which is based on the engineering principles of modularity, standardization, and characterization. The systematic design approach utilizes the design, build, test, and learn design cycle paradigm. DICOM-SB has been designed to be compatible with and complementary to other standards in synthetic biology, including SBOL. In this regard, the software provides effective interoperability. The new standard has been tested by experiments and data exchange between Nanyang Technological University in Singapore and Imperial College London.
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
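A stripped-down sketch of the adaptive-HRF idea, using a synthetic time series rather than fNIRS data: for each candidate peak delay, build the convolved task regressor, fit the GLM by least squares, and keep the delay with the smallest residual sum of squares.

import numpy as np

dt, n = 0.1, 3000                          # 10 Hz sampling, 300 s
t = np.arange(n) * dt
box = ((t % 60) < 20).astype(float)        # 20 s task blocks every 60 s

def hrf(peak_delay, shape=6.0, length=30.0):
    tt = np.arange(0, length, dt)
    scale = peak_delay / (shape - 1.0)     # gamma kernel peaks at (shape-1)*scale
    h = tt ** (shape - 1.0) * np.exp(-tt / scale)
    return h / h.sum()

def fit_glm(y, peak_delay):
    reg = np.convolve(box, hrf(peak_delay))[:n]
    X = np.column_stack([reg, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid

# Synthetic oxy-Hb-like signal generated with a 7 s peak delay plus noise.
rng = np.random.default_rng(0)
y = 1.5 * np.convolve(box, hrf(7.0))[:n] + 0.3 * rng.normal(size=n)

delays = np.arange(3.0, 11.0, 0.5)
rss = [fit_glm(y, d)[1] for d in delays]
print("optimized peak delay (s):", delays[int(np.argmin(rss))])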
Benefits of probiotics on enteral nutrition in preterm neonates: a systematic review.
Athalye-Jape, Gayatri; Deshpande, Girish; Rao, Shripada; Patole, Sanjay
2014-12-01
The optimization of enteral nutrition is a priority in preterm neonates worldwide. Probiotics are known to improve gut maturity and function in preterm neonates. To our knowledge, previous systematic reviews have not adequately assessed the effects of probiotic supplementation on enteral nutrition in preterm neonates. We assessed the evidence on effects of probiotics on enteral nutrition in preterm neonates. A systematic review of randomized controlled trials (RCTs) of probiotic supplementation in preterm (gestation <37 wk) or low-birth-weight (birth weight <2500 g) neonates was conducted. With the use of the Cochrane Neonatal Review Group strategy, we searched the Cochrane Central Register of Controlled Trials, PubMed, EMBASE, and Cumulative Index of Nursing and Allied Health Literature databases and proceedings of Pediatric Academic Society meetings in April 2014. A total of 25 RCTs (n = 5895) were included in the review. A meta-analysis (random-effects model) of data from 19 of 25 trials (n = 4527) estimated that the time to full enteral feeds was shorter in the probiotic group (mean difference: -1.54 d; 95% CI: -2.75, -0.32 d; P < 0.00001, I² = 93%). Other benefits included fewer episodes of feed intolerance, better weight gain and growth velocity, decreased transition time from orogastric to breast feeds, and increased postprandial mesenteric flow. There were no adverse effects of probiotic supplementation. Probiotics reduced the time to full enteral feeds in preterm neonates. Additional research is necessary to assess the optimal dose, duration, and probiotic strain or strains used specifically for facilitating enteral nutrition in this population. © 2014 American Society for Nutrition.
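The pooling step of such a random-effects meta-analysis can be sketched with the DerSimonian-Laird estimator; the study-level mean differences and standard errors below are invented, not the review's 19 trials.

import numpy as np

md = np.array([-2.1, -0.8, -1.9, -0.4, -2.6])         # mean differences (days)
se = np.array([0.5, 0.4, 0.7, 0.3, 0.9])              # their standard errors

w_fixed = 1.0 / se ** 2
mu_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)
Q = np.sum(w_fixed * (md - mu_fixed) ** 2)            # Cochran's Q
df = len(md) - 1
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)                         # between-study variance
I2 = max(0.0, (Q - df) / Q) * 100.0                   # heterogeneity (percent)

w_rand = 1.0 / (se ** 2 + tau2)
mu = np.sum(w_rand * md) / np.sum(w_rand)
se_mu = np.sqrt(1.0 / np.sum(w_rand))
print(f"pooled MD = {mu:.2f} d, 95% CI ({mu - 1.96 * se_mu:.2f}, {mu + 1.96 * se_mu:.2f}), I2 = {I2:.0f}%")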
An Approach to Remove the Systematic Bias from the Storm Surge forecasts in the Venice Lagoon
NASA Astrophysics Data System (ADS)
Canestrelli, A.
2017-12-01
In this work a novel approach is proposed for removing the systematic bias from the storm surge forecast computed by a two-dimensional shallow-water model. The model covers both the Adriatic and Mediterranean seas and provides the forecast at the entrance of the Venice Lagoon. The wind drag coefficient at the water-air interface is treated as a calibration parameter, with a different value for each range of wind velocities and wind directions. This sums up to a total of 16-64 parameters to be calibrated, depending on the chosen resolution. The best set of parameters is determined by means of an optimization procedure, which minimizes the RMS error between measured and modeled water levels in Venice for the period 2011-2015. It is shown that a bias is present, in that the peak wind velocities provided by the weather forecast are largely underestimated, and that the calibration procedure removes this bias. When the calibrated model is used to reproduce events not included in the calibration dataset, the forecast error is strongly reduced, thus confirming the quality of the procedure. The proposed approach is not site-specific and could be applied to different situations, such as storm surges caused by intense hurricanes.
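The calibration idea (free drag coefficients per wind bin, chosen to minimize the RMS water-level error) can be illustrated with the toy Python sketch below; the quadratic wind set-up relation and all numbers are placeholders, not the paper's shallow-water model.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
wind = rng.uniform(2.0, 25.0, size=2000)              # wind speed time series (m/s)
bins = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])   # wind-speed bin edges
bin_idx = np.digitize(wind, bins) - 1

def surge_model(cd_per_bin):
    return 1e-3 * cd_per_bin[bin_idx] * wind ** 2      # toy set-up ~ Cd * U^2

# Synthetic "measurements" generated with drag coefficients that grow with wind speed.
cd_true = np.array([1.0, 1.2, 1.5, 1.9, 2.4])
level_meas = surge_model(cd_true) + 0.02 * rng.normal(size=wind.size)

def rms_error(cd_per_bin):
    return np.sqrt(np.mean((surge_model(cd_per_bin) - level_meas) ** 2))

res = minimize(rms_error, x0=np.full(5, 1.3), method="Nelder-Mead")
print("calibrated drag coefficients per wind bin:", np.round(res.x, 2))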
NASA Technical Reports Server (NTRS)
Meeks, Ellen; Naik, Chitral V.; Puduppakkam, Karthik V.; Modak, Abhijit; Egolfopoulos, Fokion N.; Tsotsis, Theo; Westbrook, Charles K.
2011-01-01
The objectives of this project have been to develop a comprehensive set of fundamental data regarding the combustion behavior of jet fuels and appropriately associated model fuels. Based on the fundamental study results, an auxiliary objective was to identify differentiating characteristics of molecular fuel components that can be used to explain different fuel behavior and that may ultimately be used in the planning and design of optimal fuel-production processes. The fuels studied in this project were Fischer-Tropsch (F-T) fuels and biomass-derived jet fuels that meet certain specifications of currently used jet propulsion applications. Prior to this project, there were no systematic experimental flame data available for such fuels. One of the key goals has been to generate such data, and to use this data in developing and verifying effective kinetic models. The models have then been reduced through automated means to enable multidimensional simulation of the combustion characteristics of such fuels in real combustors. Such reliable kinetic models, validated against fundamental data derived from laminar flames using idealized flow models, are key to the development and design of optimal combustors and fuels. The models provide direct information about the relative contribution of different molecular constituents to the fuel performance and can be used to assess both combustion and emissions characteristics.
CHEERS: Chemical enrichment of clusters of galaxies measured using a large XMM-Newton sample
NASA Astrophysics Data System (ADS)
de Plaa, J.; Mernier, F.; Kaastra, J.; Pinto, C.
2017-10-01
The Chemical Enrichment RGS Sample (CHEERS) is intended to be a sample of the clusters of galaxies most suitable for observation with the Reflection Grating Spectrometer (RGS) aboard XMM-Newton. It consists of 5 Ms of deep cluster observations of 44 objects obtained through a very large program and archival observations. The main goal is to measure chemical abundances in the hot Intra-Cluster Medium (ICM) of clusters to provide constraints on chemical evolution models. In particular, the origin and evolution of Type Ia supernovae are still poorly known, and X-ray observations could help constrain models of the SNIa explosion mechanism. Due to the high quality of the data, the uncertainties on the abundances are dominated by systematic effects. By carefully treating each systematic effect, we increase the accuracy or estimate the remaining uncertainty on the measurement. The resulting abundances are then compared to supernova models. In addition, radial abundance profiles are also derived. In this talk, we present an overview of the results that the CHEERS collaboration has obtained from the CHEERS data, focusing on the abundance measurements. The other topics range from turbulence measurements through line broadening to cool gas in groups.
NASA Astrophysics Data System (ADS)
Zhu, Hong; Huang, Mai; Sadagopan, Sriram; Yao, Hong
2017-09-01
With increasing vehicle fuel economy standards, automotive OEMs are widely using various AHSS grades, including DP, TRIP, CP and 3rd Gen AHSS, to reduce vehicle weight because of their good combination of strength and formability. As one of the enabling technologies for AHSS application, the requirement for accurate prediction of springback of cold-stamped AHSS parts has stimulated a large number of investigations over the past decade into reversed loading paths at large strains and the associated constitutive modeling. With the spectrum of complex loading histories occurring in production stamping processes, there are many challenges in this field, including issues of test data reliability, loading path representability, constitutive model robustness, and non-unique constitutive parameter identification. In this paper, various testing approaches and constitutive models will be reviewed briefly, and a systematic methodology spanning stress-strain characterization and constitutive model parameter identification for material card generation will be presented in order to support automotive OEMs' needs in virtual stamping. This systematic methodology features a tension-compression test at large strains with a robust anti-buckling device and concurrent friction force correction, properly selected loading paths to represent material behavior during different springback modes, and the 10-parameter Yoshida model with knowledge-based parameter identification through nonlinear optimization. Validation cases for lab AHSS parts will also be discussed to check the applicability of this methodology.
Abidi, Mustufa Haider; Al-Ahmari, Abdulrahman; Ahmad, Ali
2018-01-01
Advanced graphics capabilities have enabled the use of virtual reality as an efficient design technique. The integration of virtual reality in the design phase still faces impediments because of issues linked to the integration of CAD and virtual reality software. A set of empirical tests using the selected conversion parameters was found to yield properly represented virtual reality models. The reduced model yields an R-sq (pred) value of 72.71% and an R-sq (adjusted) value of 86.64%, indicating that 86.64% of the response variability can be explained by the model. The R-sq (pred) is 67.45%, which is not very high, indicating that the model should be further reduced by eliminating insignificant terms. The reduced model yields an R-sq (pred) value of 73.32% and an R-sq (adjusted) value of 79.49%, indicating that 79.49% of the response variability can be explained by the model. Using the optimization software MODE Frontier (Optimization, MOGA-II, 2014), four types of response surfaces for the three considered response variables were tested on the DOE data. The parameter values obtained using the proposed experimental design methodology result in better graphics quality and other necessary design attributes.
Optimizing the noise characteristics of high-power fiber laser systems
NASA Astrophysics Data System (ADS)
Jauregui, Cesar; Müller, Michael; Kienel, Marco; Emaury, Florian; Saraceno, Clara J.; Limpert, Jens; Keller, Ursula; Tünnermann, Andreas
2017-02-01
The noise characteristics of high-power fiber lasers, unlike those of other solid-state lasers such as thin-disk lasers, have not been systematically studied up to now. However, novel applications for high-power fiber laser systems, such as attosecond pulse generation, place stringent limits on the maximum noise level of these sources. Therefore, in order to address these applications, a detailed knowledge and understanding of the characteristics of noise and its behavior in a fiber laser system is required. In this work we have carried out a systematic study of the propagation of the relative intensity noise (RIN) along the amplification chain of a state-of-the-art high-power fiber laser system. The most striking feature of these measurements is that the RIN level is progressively attenuated after each amplification stage. In order to understand this unexpected behavior, we have simulated the transfer function of the RIN in a fiber amplification stage (80 μm core) as a function of the seed power and the frequency. Our simulation model shows that this damping of the amplitude noise is related to saturation. Additionally, we show, for the first time to the best of our knowledge, that the fiber design (e.g. core size, glass composition, doping geometry) can be modified to optimize the noise characteristics of high-power fiber laser systems.
A Systems Approach to Designing Effective Clinical Trials Using Simulations
Fusaro, Vincent A.; Patil, Prasad; Chi, Chih-Lin; Contant, Charles F.; Tonellato, Peter J.
2013-01-01
Background Pharmacogenetics in warfarin clinical trials has failed to show a significant benefit compared to standard clinical therapy. This study demonstrates a computational framework to systematically evaluate pre-clinical trial designs of target population, pharmacogenetic algorithms, and dosing protocols to optimize primary outcomes. Methods and Results We programmatically created an end-to-end framework that systematically evaluates warfarin clinical trial designs. The framework includes options to create a patient population, multiple dosing strategies including genetic-based and non-genetic clinical-based, multiple dose adjustment protocols, pharmacokinetic/pharmacodynamic (PK/PD) modeling and international normalized ratio (INR) prediction, as well as various types of outcome measures. We validated the framework by conducting 1,000 simulations of the CoumaGen clinical trial primary endpoints. The simulation predicted a mean time in therapeutic range (TTR) of 70.6% and 72.2% (P = 0.47) in the standard and pharmacogenetic arms, respectively. Then, we evaluated another dosing protocol under the same original conditions and found a significant difference in TTR between the pharmacogenetic and standard arms (78.8% vs. 73.8%; P = 0.0065). Conclusions We demonstrate that this simulation framework is useful in the pre-clinical assessment phase for studying and evaluating design options and providing evidence to optimize the clinical trial for patient efficacy and reduced risk. PMID:23261867
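One of the outcome measures mentioned above, time in therapeutic range (TTR), reduces to a simple interpolation calculation; the visit days and INR values below are invented rather than produced by the framework's PK/PD model.

import numpy as np

days = np.array([0, 3, 7, 14, 21, 28, 42, 56])
inr = np.array([1.2, 1.8, 2.4, 3.4, 2.8, 2.2, 1.9, 2.5])
lo, hi = 2.0, 3.0

def time_in_range(days, inr, lo, hi, steps_per_day=24):
    # linear interpolation between INR measurements (Rosendaal-style)
    t = np.arange(days[0], days[-1] + 1e-9, 1.0 / steps_per_day)
    inr_interp = np.interp(t, days, inr)
    return np.mean((inr_interp >= lo) & (inr_interp <= hi)) * 100.0

print(f"TTR = {time_in_range(days, inr, lo, hi):.1f}%")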
Consistent Chemical Mechanism from Collaborative Data Processing
Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...
2016-04-01
The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for evaluation of the shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing. The relations between the aeroelastic properties of these new large turbines change, and modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equations using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
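The derive-symbolically-then-emit-Fortran workflow can be reproduced in miniature with SymPy (the paper itself uses Mathematica); the example below derives the Lagrange equation of a plain pendulum, not a wind turbine, and prints the automatically generated Fortran routine.

import sympy as sp
from sympy.utilities.codegen import codegen

t = sp.symbols("t")
m, g, l, th, thd = sp.symbols("m g l th thd", real=True)

# Lagrangian written with placeholder symbols for theta and theta-dot
L = sp.Rational(1, 2) * m * l**2 * thd**2 + m * g * l * sp.cos(th)

# Lagrange equation: d/dt(dL/d(theta-dot)) - dL/d(theta) = 0
theta = sp.Function("theta")(t)
subs_t = {th: theta, thd: sp.diff(theta, t)}
eom = sp.diff(sp.diff(L, thd).subs(subs_t), t) - sp.diff(L, th).subs(subs_t)

# Solve for the angular acceleration and emit Fortran source for it
theta_ddot = sp.solve(sp.Eq(eom, 0), sp.diff(theta, t, 2))[0].subs(theta, th)
files = codegen(("theta_ddot", theta_ddot), language="F95", header=False, empty=False)
print(files[0][1])   # generated Fortran code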
Optimizing Tactics for Use of the U.S. Antiviral Strategic National Stockpile for Pandemic Influenza
Dimitrov, Nedialko B.; Goll, Sebastian; Hupert, Nathaniel; Pourbohloul, Babak; Meyers, Lauren Ancel
2011-01-01
In 2009, public health agencies across the globe worked to mitigate the impact of the swine-origin influenza A (pH1N1) virus. These efforts included intensified surveillance, social distancing, hygiene measures, and the targeted use of antiviral medications to prevent infection (prophylaxis). In addition, aggressive antiviral treatment was recommended for certain patient subgroups to reduce the severity and duration of symptoms. To assist States and other localities in meeting these needs, the U.S. Government distributed a quarter of the antiviral medications in the Strategic National Stockpile within weeks of the pandemic's start. However, there are no quantitative models guiding the geo-temporal distribution of the remainder of the Stockpile in relation to pandemic spread or severity. We present a tactical optimization model for distributing this stockpile for treatment of infected cases during the early stages of a pandemic like 2009 pH1N1, prior to the wide availability of a strain-specific vaccine. Our optimization method efficiently searches large sets of intervention strategies applied to a stochastic network model of pandemic influenza transmission within and among U.S. cities. The resulting optimized strategies depend on the transmissibility of the virus and postulated rates of antiviral uptake and wastage (through misallocation or loss). Our results suggest that an aggressive community-based antiviral treatment strategy involving early, widespread, pro-rata distribution of antivirals to States can contribute to slowing the transmission of mildly transmissible strains, like pH1N1. For more highly transmissible strains, outcomes of antiviral use are more heavily impacted by choice of distribution intervals, quantities per shipment, and timing of shipments in relation to pandemic spread. This study supports previous modeling results suggesting that appropriate antiviral treatment may be an effective mitigation strategy during the early stages of future influenza pandemics, increasing the need for systematic efforts to optimize distribution strategies and provide tactical guidance for public health policy-makers. PMID:21283514
Bouwman, R W; van Engen, R E; Young, K C; Veldkamp, W J H; Dance, D R
2015-01-07
Slabs of polymethyl methacrylate (PMMA) or a combination of PMMA and polyethylene (PE) slabs are used to simulate standard model breasts for the evaluation of the average glandular dose (AGD) in digital mammography (DM) and digital breast tomosynthesis (DBT). These phantoms are optimized for the energy spectra used in DM and DBT, which normally have a lower average energy than those used in contrast enhanced digital mammography (CEDM). In this study we have investigated whether these phantoms can be used for the evaluation of AGD with the high energy x-ray spectra used in CEDM. For this purpose the calculated values of the incident air kerma for dosimetry phantoms and standard model breasts were compared in a zero degree projection with the use of an anti-scatter grid. It was found that the difference in incident air kerma compared to standard model breasts ranges between -10% and +4% for PMMA slabs and between 6% and 15% for PMMA-PE slabs. The estimated systematic error in the measured AGD for both sets of phantoms was considered to be sufficiently small for the evaluation of AGD in quality control procedures for CEDM. However, the systematic error can be substantial if AGD values from different phantoms are compared.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
On the symbolic manipulation and code generation for elasto-plastic material matrices
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Saleeb, A. F.; Wang, P. S.; Tan, H. Q.
1991-01-01
A computerized procedure for symbolic manipulations and FORTRAN code generation of an elasto-plastic material matrix for finite element applications is presented. Special emphasis is placed on expression simplifications during intermediate derivations, optimal code generation, and interfacing with the main program. A systematic procedure is outlined to avoid redundant algebraic manipulations. Symbolic expressions of the derived material stiffness matrix are automatically converted to RATFOR code, which is then translated into FORTRAN statements through a preprocessor. To minimize the interface problem with the main program, a template file is prepared so that the translated FORTRAN statements can be merged into the file to form a subroutine (or a submodule). Three constitutive models, namely von Mises plasticity, the Drucker-Prager model, and a concrete plasticity model, are used as illustrative examples.
Review of dynamic optimization methods in renewable natural resource management
Williams, B.K.
1989-01-01
In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and the precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.
Mazumdar, Maitreyi; Pandharipande, Pari; Poduri, Annapurna
2007-02-01
A recent trial suggested that albendazole reduces seizures in adults with neurocysticercosis. There is still no consensus regarding optimal management of neurocysticercosis in children. The authors conducted a systematic review and meta-analysis to assess the efficacy of albendazole in children with neurocysticercosis, by searching the Cochrane Databases, MEDLINE, EMBASE, and LILACS. Three reviewers extracted data using an intent-to-treat analysis. Random effects models were used to estimate relative risks. Four randomized trials were selected for meta-analysis, and 10 observational studies were selected for qualitative review. The relative risk of seizure remission in treatment versus control was 1.26 (1.09, 1.46). The relative risk of improvement in computed tomography in these trials was 1.15 (0.97, 1.36). Review of observational studies showed conflicting results, likely owing to preferential administration of albendazole to sicker children.
Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance
Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan
2016-01-01
A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. We report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking. PMID:27976739
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of the knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with a white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: 1) an extended Kalman filter (EKF) augmented with Markov states, and 2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
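The first of the two methods, state augmentation with a Markov bias, can be sketched for a 1-D toy problem; the matrices and noise levels below are invented, and the filter is a plain linear KF rather than the flight EKF.

import numpy as np

dt, n = 1.0, 500
tau, sigma_b = 200.0, 1e-3                       # Markov time constant and steady-state sigma
phi = np.exp(-dt / tau)
q_b = sigma_b ** 2 * (1.0 - phi ** 2)

A = np.array([[1.0, 0.0], [0.0, phi]])           # states: [attitude error, measurement bias]
C = np.array([[1.0, 1.0]])                       # the star tracker sees attitude plus bias
Q = np.diag([1e-8, q_b])
R = np.array([[4e-6]])

rng = np.random.default_rng(0)
x = np.array([0.0, 5e-4])                        # true state
xhat, P = np.zeros(2), np.eye(2) * 1e-4
err = []
for _ in range(n):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)        # truth propagation
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]))              # biased measurement
    xhat, P = A @ xhat, A @ P @ A.T + Q                        # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)               # update
    xhat = xhat + K @ (y - C @ xhat)
    P = (np.eye(2) - K @ C) @ P
    err.append(x[0] - xhat[0])
print("RMS attitude estimation error:", np.sqrt(np.mean(np.square(err))))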
A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to a linear form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
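A minimal sketch of a Jaccard-weighted profile scheme of the kind discussed above: a query drug's ADR scores are the similarity-weighted average of the ADR profiles of known drugs. The tiny binary matrices are illustrative, not a real dataset, and the details of the paper's general weighted profile method are not reproduced.

import numpy as np

drug_features = np.array([[1, 1, 0, 1, 0],       # e.g. substructure/target fingerprints
                          [1, 0, 0, 1, 1],
                          [0, 1, 1, 0, 0],
                          [1, 1, 0, 0, 1]], dtype=bool)
drug_adrs = np.array([[1, 0, 1, 0],              # known drug-ADR associations
                      [1, 1, 0, 0],
                      [0, 0, 1, 1],
                      [1, 0, 0, 1]], dtype=float)

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def weighted_profile(query_features):
    w = np.array([jaccard(query_features, f) for f in drug_features])
    return w @ drug_adrs / (w.sum() + 1e-12)      # one score per candidate ADR

query = np.array([1, 1, 0, 1, 1], dtype=bool)
print("predicted ADR scores:", np.round(weighted_profile(query), 3))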
Toward a systematic design theory for silicon solar cells using optimization techniques
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
This work is a first detailed attempt to systematize the design of silicon solar cells. Design principles follow from three theorems. Although the results hold only under low injection conditions in base and emitter regions, they hold for arbitrary doping profiles and include the effects of drift fields, high/low junctions and heavy doping concentrations of donor or acceptor atoms. Several optimal designs are derived from the theorems, one of which involves a three-dimensional morphology in the emitter region. The theorems are derived from a nonlinear differential equation of the Riccati form, the dependent variable of which is a normalized recombination particle current.
A predictive machine learning approach for microstructure optimization and materials design
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; ...
2015-06-23
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. In conclusion, experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods, with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
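A compact stand-in for the random-data-generation, feature-selection, and classification stages, using scikit-learn on synthetic data (the actual microstructure descriptors and property models are not reproduced here):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic "microstructure descriptors" labeled by whether a property constraint is met.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)

model = make_pipeline(SelectKBest(f_classif, k=10),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))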
Halper, Sean M; Cetnar, Daniel P; Salis, Howard M
2018-01-01
Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionary robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific evolutionary robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline on a desired multi-enzyme pathway in a bacterial host.
Li, Mengdi; Fan, Juntao; Zhang, Yuan; Guo, Fen; Liu, Lusan; Xia, Rui; Xu, Zongxue; Wu, Fengchang
2018-05-15
Aiming to protect freshwater ecosystems, river ecological restoration has been brought into the research spotlight. However, it is challenging for decision makers to set appropriate objectives and select a combination of rehabilitation acts from numerous possible solutions to meet ecological, economic, and social demands. In this study, we developed a systematic approach to help make an optimal strategy for watershed restoration, which incorporated ecological security assessment and multi-objectives optimization (MOO) into the planning process to enhance restoration efficiency and effectiveness. The river ecological security status was evaluated by using a pressure-state-function-response (PSFR) assessment framework, and MOO was achieved by searching for the Pareto optimal solutions via Non-dominated Sorting Genetic Algorithm II (NSGA-II) to balance tradeoffs between different objectives. Further, we clustered the searched solutions into three types in terms of different optimized objective function values in order to provide insightful information for decision makers. The proposed method was applied in an example rehabilitation project in the Taizi River Basin in northern China. The MOO result in the Taizi River presented a set of Pareto optimal solutions that were classified into three types: I - high ecological improvement, high cost and high benefits solution; II - medial ecological improvement, medial cost and medial economic benefits solution; III - low ecological improvement, low cost and low economic benefits solution. The proposed systematic approach in our study can enhance the effectiveness of riverine ecological restoration project and could provide valuable reference for other ecological restoration planning. Copyright © 2018 Elsevier B.V. All rights reserved.
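The Pareto-dominance test underlying NSGA-II's non-dominated sorting can be sketched in a few lines. The candidate "restoration portfolios" and the two objectives below are invented for illustration; the study's actual decision variables and objective functions are not reproduced here.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives to be minimized)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # j dominates i if j is <= i in every objective and < in at least one
        dominated = np.all(objectives <= objectives[i], axis=1) & \
                    np.any(objectives < objectives[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# toy restoration portfolios: columns = (cost, residual ecological pressure)
rng = np.random.default_rng(0)
candidates = rng.random((200, 2))
front = pareto_front(candidates)
print(f"{front.size} non-dominated portfolios out of {candidates.shape[0]}")
```

NSGA-II iterates this dominance test (plus crowding-distance selection and genetic variation) to evolve the solution set toward the Pareto front that the abstract then clusters into the three solution types.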
Systematic and deliberate orientation and instruction for dedicated education unit staff.
Smyer, Tish; Tejada, Marianne Bundalian; Tan, Rhigel Alforque
2015-03-01
On the basis of increasing complexity of the health care environment and recommended changes in how nurses are educated to meet these challenges, the University of Nevada Las Vegas, School of Nursing established an academic-practice partnership with Summerlin Hospital Medical Center to develop a dedicated education unit (DEU). When the DEU model was implemented, variables that were not discussed in the literature needed to be addressed. One such challenge was how to impart pedagogy related to clinical teaching to the DEU nursing staff who would be acting as clinical dedicated unit instructors (CDIs). Of chief concern was the evaluation and monitoring of the quality of CDI-student interactions to ensure optimal student learning outcomes. This article addresses the development of a deliberate, systematic approach to the orientation and continued education of CDIs in the DEU. This information will assist other nursing programs as they begin to implement DEUs. Copyright 2015, SLACK Incorporated.
Fuereder, Markus; Majeed, Imthiyas N; Panke, Sven; Bechtold, Matthias
2014-06-13
Teicoplanin aglycone columns allow efficient separation of amino acid enantiomers in aqueous mobile phases and enable robust and predictable simulated moving bed (SMB) separation of racemic methionine despite a dependency of the adsorption behavior on the column history (memory effect). In this work we systematically investigated the influence of the mobile phase (methanol content) and temperature on SMB performance using a model-based optimization approach that accounts for methionine solubility, adsorption behavior and back pressure. Adsorption isotherms became more favorable with increasing methanol content but methionine solubility was decreased and back pressure increased. Numerical optimization suggested a moderate methanol content (25-35%) for most efficient operation. Higher temperature had a positive effect on specific productivity and desorbent requirement due to higher methionine solubility, lower back pressure and virtually invariant selectivity at high loadings of racemic methionine. However, process robustness (defined as a difference in flow rate ratios) decreased strongly with increasing temperature to the extent that any significant increase in temperature over 32°C will likely result in operating points that cannot be realized technically even with the lab-scale piston pump SMB system employed in this study. Copyright © 2014. Published by Elsevier B.V.
The neural basis of financial risk taking.
Kuhnen, Camelia M; Knutson, Brian
2005-09-01
Investors systematically deviate from rationality when making financial decisions, yet the mechanisms responsible for these deviations have not been identified. Using event-related fMRI, we examined whether anticipatory neural activity would predict optimal and suboptimal choices in a financial decision-making task. We characterized two types of deviations from the optimal investment strategy of a rational risk-neutral agent as risk-seeking mistakes and risk-aversion mistakes. Nucleus accumbens activation preceded risky choices as well as risk-seeking mistakes, while anterior insula activation preceded riskless choices as well as risk-aversion mistakes. These findings suggest that distinct neural circuits linked to anticipatory affect promote different types of financial choices and indicate that excessive activation of these circuits may lead to investing mistakes. Thus, consideration of anticipatory neural mechanisms may add predictive power to the rational actor model of economic decision making.
A new systematic calibration method of ring laser gyroscope inertial navigation system
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu
2016-10-01
The inertial navigation system (INS) is the core component of both military and civil navigation systems. Before an INS is put into service, it must be calibrated in the laboratory to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system mounted on shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangement orders are then specified in order to establish linear relationships between the changes in velocity errors and the calibration parameter errors. Experiments were set up to compare the systematic errors obtained with the filtering calibration against those obtained with the discrete calibration. The largest position and velocity errors of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and demonstrate its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.
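Once the rotation schedule makes the velocity-error observations (approximately) linear in the calibration parameter errors, the parameters can be estimated by least squares. The sensitivity matrix, parameter count and noise levels below are illustrative assumptions, not the paper's error model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the rotation schedule yields m velocity-error observations that are
# (approximately) linear in p calibration parameter errors: dv = H @ dx + noise.
m, p = 120, 9                      # e.g. 9 parameters: biases, scale factors, misalignments
H = rng.normal(size=(m, p))        # sensitivity matrix implied by the rotation order (assumed)
dx_true = rng.normal(scale=1e-4, size=p)
dv = H @ dx_true + rng.normal(scale=1e-6, size=m)

# Least-squares estimate of the parameter errors from the observed velocity errors.
dx_hat, *_ = np.linalg.lstsq(H, dv, rcond=None)
print("max estimation error:", np.max(np.abs(dx_hat - dx_true)))
```

In practice the estimate would be produced by a Kalman-filter-style recursion rather than a single batch solve, but the linear observation structure is the same.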
Optimal energy harvesting from vortex-induced vibrations of cables.
Antoine, G O; de Langre, E; Michelin, S
2016-11-01
Vortex-induced vibrations (VIV) of flexible cables are an example of flow-induced vibrations that can act as energy harvesting systems by converting energy associated with the spontaneous cable motion into electricity. This work investigates the optimal positioning of the harvesting devices along the cable, using numerical simulations with a wake oscillator model to describe the unsteady flow forcing. Using classical gradient-based optimization, the optimal harvesting strategy is determined for the generic configuration of a flexible cable fixed at both ends, including the effect of flow forces and gravity on the cable's geometry. The optimal strategy is found to consist systematically in a concentration of the harvesting devices at one of the cable's ends, relying on deformation waves along the cable to carry the energy towards this harvesting site. Furthermore, we show that the performance of systems based on VIV of flexible cables is significantly more robust to flow velocity variations, in comparison with a rigid cylinder device. This results from two passive control mechanisms inherent to the cable geometry: (i) the adaptability to the flow velocity of the fundamental frequencies of cables through the flow-induced tension and (ii) the selection of successive vibration modes by the flow velocity for cables with gravity-induced tension.
Saridakis, Emmanuel; Chayen, Naomi E.
2003-01-01
A systematic approach for improving protein crystals by growing them in the metastable zone using the vapor diffusion technique is described. This is a simple technique for optimization of crystallization conditions. Screening around known conditions is performed to establish a working phase diagram for the crystallization of the protein. Dilutions of the crystallization drops across the supersolubility curve into the metastable zone are then carried out as follows: the coverslips holding the hanging drops are transferred, after being incubated for some time at conditions normally giving many small crystals, over reservoirs at concentrations which normally yield clear drops. Fewer, much larger crystals are obtained when the incubation times are optimized, compared with conventional crystallization at similar conditions. This systematic approach has led to the structure determination of the light-harvesting protein C-phycocyanin to the highest-ever resolution of 1.45 Å. PMID:12547801
NASA Astrophysics Data System (ADS)
Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig
2009-08-01
Treatment plans for intensity-modulated proton therapy may be sensitive to several sources of uncertainty. One source is associated with approximations in the algorithms applied in the treatment planning system, and another depends on how robust the optimization is with regard to intra-fractional tissue movements. The delivered dose distribution may deteriorate substantially relative to the plan when systematic errors occur in the dose algorithm; such errors can affect proton ranges and lead to improper modeling of Bragg peak degradation in heterogeneous structures, of particle scatter, or of the nuclear interaction component. Additionally, systematic errors influence the optimization process, which leads to convergence error. Uncertainties with regard to organ movements are related to the robustness of a chosen beam setup to tissue movements during irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which minimizes the errors described above to a large extent. Additionally, robust planning is introduced through beam angle optimization with an objective function that penalizes paths traversing strong longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning through a robust choice of spots. Since spots can be positioned on different energy grids or on geometric grids with different space-filling factors, a variety of grids were used to investigate their influence on the optimized spot-weight distribution. A tighter distribution of spot weights was assumed to result in a plan more robust to movements. IKO-P is described in detail and demonstrated on a test case as well as a lung cancer case. Different options for spot planning and grid types are evaluated; delivering dose to the spots from all beam directions yields superior plan quality compared with using optimized beam directions only. This option also shows a tighter spot-weight distribution and should therefore be less sensitive to movements than optimized directions. However, at the cost of a slight loss in plan quality, the latter choice could potentially improve robustness even further by accepting only spots from the most suitable direction. The choice of a geometric grid instead of an energy grid for spot positioning has only a minor influence on plan quality, at least for the investigated lung case.
2014-01-01
Background Research has shown that nursing students find it difficult to translate and apply their theoretical knowledge in a clinical context. Virtual patients (VPs) have been proposed as a learning activity that can support nursing students in their learning of scientific knowledge and help them integrate theory and practice. Although VPs are increasingly used in health care education, they still lack a systematic consistency that would allow their reuse outside of their original context. There is therefore a need to develop a model for the development and implementation of VPs in nursing education. Objective The aim of this study was to develop and evaluate a virtual patient model optimized to the learning and assessment needs in nursing education. Methods The process of modeling started by reviewing theoretical frameworks reported in the literature and used by practitioners when designing learning and assessment activities. The Outcome-Present State Test (OPT) model was chosen as the theoretical framework. The model was then, in an iterative manner, developed and optimized to the affordances of virtual patients. Content validation was performed with faculty both in terms of the relevance of the chosen theories but also its applicability in nursing education. The virtual patient nursing model was then instantiated in two VPs. The students’ perceived usefulness of the VPs was investigated using a questionnaire. The result was analyzed using descriptive statistics. Results A virtual patient Nursing Design Model (vpNDM) composed of three layers was developed. Layer 1 contains the patient story and ways of interacting with the data, Layer 2 includes aspects of the iterative process of clinical reasoning, and finally Layer 3 includes measurable outcomes. A virtual patient Nursing Activity Model (vpNAM) was also developed as a guide when creating VP-centric learning activities. The students perceived the global linear VPs as a relevant learning activity for the integration of theory and practice. Conclusions Virtual patients that are adapted to the nursing paradigm can support nursing students’ development of clinical reasoning skills. The proposed virtual patient nursing design and activity models will allow the systematic development of different types of virtual patients from a common model and thereby create opportunities for sharing pedagogical designs across technical solutions. PMID:24727709
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, X; Wang, J; Hu, W
Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process which uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity modulated radiation therapy (IMRT) plans for cervical cancer. Methods: A total of 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. These patients were all previously treated in our institution. Two prescription levels are commonly used in our institution: 45 Gy/25 fractions and 50.4 Gy/28 fractions. Fifty of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After training, the model was validated with 10 plans from the training pool (internal validation) and an additional 20 new plans (external validation). All plans used for validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVH fell within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan increased D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose of the bladder by 1.25 Gy/1.13 Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses of the rectum and bowel were also decreased by an average of 2.64 Gy/0.83 Gy and 0.66 Gy/1.05 Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: RapidPlan model-based cervical cancer plans show the ability to systematically improve IMRT plan quality, suggesting that RapidPlan has great potential to make the treatment planning process more efficient.
Preserving privacy whilst maintaining robust epidemiological predictions.
Werkman, Marleen; Tildesley, Michael J; Brooks-Pollock, Ellen; Keeling, Matt J
2016-12-01
Mathematical models are invaluable tools for quantifying potential epidemics and devising optimal control strategies in case of an outbreak. State-of-the-art models increasingly require detailed individual farm-based and sensitive data, which may not be available due to either lack of capacity for data collection or privacy concerns. However, in many situations, aggregated data are available for use. In this study, we systematically investigate the accuracy of predictions made by mathematical models initialised with varying data aggregations, using the UK 2001 Foot-and-Mouth Disease Epidemic as a case study. We consider the scenario when the only data available are aggregated into spatial grid cells, and develop a metapopulation model where individual farms in a single subpopulation are assumed to behave uniformly and transmit randomly. We also adapt this standard metapopulation model to capture heterogeneity in farm size and composition, using farm census data. Our results show that homogeneous models based on aggregated data overestimate final epidemic size but can perform well for predicting spatial spread. Recognising heterogeneity in farm sizes improves predictions of the final epidemic size, identifying risk areas, determining the likelihood of epidemic take-off and identifying the optimal control strategy. In conclusion, in cases where individual farm-based data are not available, models can still generate meaningful predictions, although care must be taken in their interpretation and use. Copyright © 2016. Published by Elsevier B.V.
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are more often applied for library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms by performing 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene was used to build up a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most of the cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
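A minimal sketch of a binary-encoded genetic algorithm with penalty-function constraint handling, the combination the study highlights, is given below. The surrogate "activity" model and the at-most-four-components constraint are invented for illustration and the algorithm is single-objective for brevity, whereas the study used multiobjective evolutionary algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
N_COMPONENTS, POP, GENS = 16, 60, 40
weights = rng.random(N_COMPONENTS)        # surrogate 'activity' contribution per component

def fitness(pop):
    """Surrogate activity minus a penalty for using more than 4 components."""
    activity = pop @ weights
    violation = np.maximum(pop.sum(axis=1) - 4, 0)
    return activity - 10.0 * violation     # penalty-function constraint handling

pop = rng.integers(0, 2, size=(POP, N_COMPONENTS))   # binary encoding of catalyst composition
for _ in range(GENS):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-POP // 2:]]          # keep the better half
    cuts = rng.integers(1, N_COMPONENTS, size=POP // 2)
    kids = np.array([np.concatenate([a[:c], b[c:]])   # one-point crossover on the bit string
                     for a, b, c in zip(parents, parents[::-1], cuts)])
    flip = rng.random(kids.shape) < 0.02              # bit-flip mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax(fitness(pop))]
print("best encoding:", best, "| components used:", int(best.sum()))
```

Swapping the penalty term for a repair step (e.g., randomly switching off excess components) reproduces the alternative constraint-handling strategy compared in the study.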
Optimized Hypernetted-Chain Solutions for Helium -4 Surfaces and Metal Surfaces
NASA Astrophysics Data System (ADS)
Qian, Guo-Xin
This thesis is a study of inhomogeneous Bose systems such as liquid ('4)He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood -Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two -body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder- and ring-diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis. Euler-Lagrange equations are derived for the two-body correlations which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression of correlation energy in which the state averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function for both ('4)He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments.
Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias
2016-03-01
Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to patient specifically optimize the configuration of the mechanism with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimen. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimen.
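A hedged sketch of the Monte Carlo accuracy-estimation step follows: per-trial tool-tip perturbations are drawn from assumed error sources and the resulting targeting-error distribution is summarized. The error sources and their magnitudes are invented placeholders, not the comprehensive error model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Simplified stand-in error model: the tool-tip position is perturbed by
# anchor registration error, strut length error and drill deflection (mm, 1-sigma, assumed).
sigma_anchor, sigma_strut, sigma_drill = 0.15, 0.10, 0.08

errors = (rng.normal(0, sigma_anchor, (N, 3)) +
          rng.normal(0, sigma_strut,  (N, 3)) +
          rng.normal(0, sigma_drill,  (N, 3)))
targeting_error = np.linalg.norm(errors, axis=1)

print(f"mean targeting error : {targeting_error.mean():.2f} mm")
print(f"95th percentile      : {np.percentile(targeting_error, 95):.2f} mm")
```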
NASA Astrophysics Data System (ADS)
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method is from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to display the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. Besides, the reduced-order models are also used to capture crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.
Schuster, Christina; Elamin, Marwa; Hardiman, Orla; Bede, Peter
2015-10-01
Recent quantitative neuroimaging studies have been successful in capturing phenotype and genotype-specific changes in dementia syndromes, amyotrophic lateral sclerosis, Parkinson's disease and other neurodegenerative conditions. However, the majority of imaging studies are cross-sectional, despite the obvious superiority of longitudinal study designs in characterising disease trajectories, response to therapy, progression rates and evaluating the presymptomatic phase of neurodegenerative conditions. The aim of this work is to perform a systematic review of longitudinal imaging initiatives in neurodegeneration focusing on methodology, optimal statistical models, follow-up intervals, attrition rates, primary study outcomes and presymptomatic studies. Longitudinal imaging studies were identified from 'PubMed' and reviewed from 1990 to 2014. The search terms 'longitudinal', 'MRI', 'presymptomatic' and 'imaging' were utilised in combination with one of the following degenerative conditions; Alzheimer's disease, amyotrophic lateral sclerosis/motor neuron disease, frontotemporal dementia, Huntington's disease, multiple sclerosis, Parkinson's disease, ataxia, HIV, alcohol abuse/dependence. A total of 423 longitudinal imaging papers and 103 genotype-based presymptomatic studies were identified and systematically reviewed. Imaging techniques, follow-up intervals and attrition rates showed significant variation depending on the primary diagnosis. Commonly used statistical models included analysis of annualised percentage change, mixed and random effect models, and non-linear cumulative models with acceleration-deceleration components. Although longitudinal imaging studies have the potential to provide crucial insights into the presymptomatic phase and natural trajectory of neurodegenerative processes a standardised design is required to enable meaningful data interpretation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Voltammetric methods for determination of total sulfide concentrations in anoxic sediments utilizing a previously described [1] gold-based mercury amalgam microelectrode were optimized. Systematic studies in NaCl (supporting electrolyte) and porewater indicate variations in ionic...
Optimization of Multimedia English Teaching in Context Creation
ERIC Educational Resources Information Center
Yang, Weiyan; Fang, Fan
2008-01-01
Using multimedia to create a context to teach English has its unique advantages. This paper explores the characteristics of multimedia and integrates how to use multimedia to optimize the context of English teaching as its purpose. In this paper, eight principles, specifically Systematization, Authenticity, Appropriateness, Interactivity,…
NASA Astrophysics Data System (ADS)
Pasquier, B.; Holzer, M.; Frants, M.
2016-02-01
We construct a data-constrained mechanistic inverse model of the ocean's coupled phosphorus and iron cycles. The nutrient cycling is embedded in a data-assimilated steady global circulation. Biological nutrient uptake is parameterized in terms of nutrient, light, and temperature limitations on growth for two classes of phytoplankton that are not transported explicitly. A matrix formulation of the discretized nutrient tracer equations allows for efficient numerical solutions, which facilitates the objective optimization of the key biogeochemical parameters. The optimization minimizes the misfit between the modelled and observed nutrient fields of the current climate. We systematically assess the nonlinear response of the biological pump to changes in the aeolian iron supply for a variety of scenarios. Specifically, Green-function techniques are employed to quantify in detail the pathways and timescales with which those perturbations are propagated throughout the world oceans, determining the global teleconnections that mediate the response of the global ocean ecosystem. We confirm previous findings from idealized studies that increased iron fertilization decreases biological production in the subtropical gyres and we quantify the counterintuitive and asymmetric response of global productivity to increases and decreases in the aeolian iron supply.
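The matrix formulation reduces each steady-state tracer equation to a large sparse linear system. A toy-sized sketch using SciPy's sparse solver is shown below; the transport operator, linearized uptake term and source are illustrative stand-ins for the data-assimilated circulation and the optimized biogeochemistry.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 500                                   # number of grid boxes (toy size)
rng = np.random.default_rng(0)

# Toy advection-diffusion-like transport operator: sparse and diagonally dominant.
main = 2.0 + rng.random(n)
off = -rng.random(n - 1)
T = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

uptake = sp.diags(0.1 * rng.random(n))    # linearized biological uptake (assumed)
source = rng.random(n)                    # aeolian / riverine nutrient supply (assumed)

A = (T + uptake).tocsc()
c = spsolve(A, source)                    # steady-state nutrient concentration
print("mean concentration:", c.mean())
```

Because the system is linear in each tracer for fixed parameters, repeated solves inside an optimization loop (or Green-function perturbation experiments) remain computationally affordable, which is the efficiency the abstract relies on.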
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinational optimization with a user defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
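S4 is described as combinatorial optimization over a user-defined merit function. The greedy sketch below illustrates that idea with an invented fault-detection matrix and a coverage-minus-cost merit; it is not the actual S4 architecture or merit function.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_FAULTS = 30, 12

# detect[i, j] = 1 if candidate sensor i can observe fault j (illustrative data).
detect = (rng.random((N_SENSORS, N_FAULTS)) < 0.25).astype(float)
cost = rng.uniform(1.0, 5.0, N_SENSORS)

def merit(suite):
    """User-defined figure of merit: fault coverage penalized by total sensor cost."""
    if not suite:
        return -np.inf
    coverage = detect[suite].max(axis=0).sum() / N_FAULTS
    return coverage - 0.02 * cost[suite].sum()

suite, remaining = [], set(range(N_SENSORS))
while remaining:
    best = max(remaining, key=lambda s: merit(suite + [s]))
    if merit(suite + [best]) <= merit(suite):
        break                               # no remaining candidate improves the merit
    suite.append(best)
    remaining.remove(best)

print("selected sensors:", suite, "merit:", round(merit(suite), 3))
```

A greedy pass like this gives a near-optimum baseline; an exhaustive or evolutionary search over the same merit function would be the combinational-optimization analogue described in the guide.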
Thakore, Vaibhav; Molnar, Peter; Hickman, James J.
2014-01-01
Extracellular neuroelectronic interfacing is an emerging field with important applications in the fields of neural prosthetics, biological computation and biosensors. Traditionally, neuron-electrode interfaces have been modeled as linear point or area contact equivalent circuits but it is now being increasingly realized that such models cannot explain the shapes and magnitudes of the observed extracellular signals. Here, results were compared and contrasted from an unprecedented optimization based study of the point contact models for an extracellular ‘on-cell’ neuron-patch electrode and a planar neuron-microelectrode interface. Concurrent electrophysiological recordings from a single neuron simultaneously interfaced to three distinct electrodes (intracellular, ‘on-cell’ patch and planar microelectrode) allowed novel insights into the mechanism of signal transduction at the neuron-electrode interface. After a systematic isolation of the nonlinear neuronal contribution to the extracellular signal, a consistent underestimation of the simulated supra-threshold extracellular signals compared to the experimentally recorded signals was observed. This conclusively demonstrated that the dynamics of the interfacial medium contribute nonlinearly to the process of signal transduction at the neuron-electrode interface. Further, an examination of the optimized model parameters for the experimental extracellular recordings from sub- and supra-threshold stimulations of the neuron-electrode junctions revealed that ionic transport at the ‘on-cell’ neuron-patch electrode is dominated by diffusion whereas at the neuron-microelectrode interface the electric double layer (EDL) effects dominate. Based on this study, the limitations of the equivalent circuit models in their failure to account for the nonlinear EDL and ionic electrodiffusion effects occurring during signal transduction at the neuron-electrode interfaces are discussed. PMID:22695342
Use of constrained optimization in the conceptual design of a medium-range subsonic transport
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Constrained parameter optimization was used to perform the optimal conceptual design of a medium range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. A comparison was made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.
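A minimal constrained-optimization sketch in the spirit of this approach, using SciPy's SLSQP, follows. The design variables, the "required income" surrogate objective and the constraints are all invented placeholders rather than the report's actual design model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy design vector: x = [wing area (m^2), aspect ratio, cruise altitude (km)]
def required_income(x):
    """Illustrative stand-in for the 'income required for 15% ROI' figure of merit."""
    S, AR, h = x
    return 0.04 * S + 1.5 / AR + 0.02 * (h - 10.0) ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] * x[1] - 900.0},   # invented performance constraint
    {"type": "ineq", "fun": lambda x: 12.5 - x[2]},           # invented altitude ceiling
]
bounds = [(80.0, 250.0), (6.0, 12.0), (8.0, 13.0)]

result = minimize(required_income, x0=np.array([150.0, 8.0, 10.0]),
                  method="SLSQP", bounds=bounds, constraints=constraints)
print("optimal design:", np.round(result.x, 2), "merit:", round(result.fun, 3))
```

Re-running such an optimization while perturbing one design constant at a time is the sensitivity study the abstract describes.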
The capital-asset-pricing model and arbitrage pricing theory: a unification.
Ali Khan, M; Sun, Y
1997-04-15
We present a model of a financial market in which naive diversification, based simply on portfolio size and obtained as a consequence of the law of large numbers, is distinguished from efficient diversification, based on mean-variance analysis. This distinction yields a valuation formula involving only the essential risk embodied in an asset's return, where the overall risk can be decomposed into a systematic and an unsystematic part, as in the arbitrage pricing theory; and the systematic component further decomposed into an essential and an inessential part, as in the capital-asset-pricing model. The two theories are thus unified, and their individual asset-pricing formulas shown to be equivalent to the pervasive economic principle of no arbitrage. The factors in the model are endogenously chosen by a procedure analogous to the Karhunen-Loéve expansion of continuous time stochastic processes; it has an optimality property justifying the use of a relatively small number of them to describe the underlying correlational structures. Our idealized limit model is based on a continuum of assets indexed by a hyperfinite Loeb measure space, and it is asymptotically implementable in a setting with a large but finite number of assets. Because the difficulties in the formulation of the law of large numbers with a standard continuum of random variables are well known, the model uncovers some basic phenomena not amenable to classical methods, and whose approximate counterparts are not already, or even readily, apparent in the asymptotic setting.
Multiscale modelling in immunology: a review.
Cappuccio, Antonio; Tieri, Paolo; Castiglione, Filippo
2016-05-01
One of the greatest challenges in biomedicine is to get a unified view of observations made from the molecular up to the organism scale. Towards this goal, multiscale models have been highly instrumental in contexts such as the cardiovascular field, angiogenesis, neurosciences and tumour biology. More recently, such models are becoming an increasingly important resource to address immunological questions as well. Systematic mining of the literature in multiscale modelling led us to identify three main fields of immunological applications: host-virus interactions, inflammatory diseases and their treatment and development of multiscale simulation platforms for immunological research and for educational purposes. Here, we review the current developments in these directions, which illustrate that multiscale models can consistently integrate immunological data generated at several scales, and can be used to describe and optimize therapeutic treatments of complex immune diseases. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Schwingshackl, Lukas; Hoffmann, Georg; Lampousi, Anna-Maria; Knüppel, Sven; Iqbal, Khalid; Schwedhelm, Carolina; Bechthold, Angela; Schlesinger, Sabrina; Boeing, Heiner
2017-05-01
The aim of this systematic review and meta-analysis was to synthesize the knowledge about the relation between intake of 12 major food groups and risk of type 2 diabetes (T2D). We conducted a systematic search in PubMed, Embase, Medline (Ovid), Cochrane Central, and Google Scholar for prospective studies investigating the association between whole grains, refined grains, vegetables, fruits, nuts, legumes, eggs, dairy, fish, red meat, processed meat, and sugar-sweetened beverages (SSB) on risk of T2D. Summary relative risks were estimated using a random effects model by contrasting categories, and for linear and non-linear dose-response relationships. Six out of the 12 food-groups showed a significant relation with risk of T2D, three of them a decrease of risk with increasing consumption (whole grains, fruits, and dairy), and three an increase of risk with increasing consumption (red meat, processed meat, and SSB) in the linear dose-response meta-analysis. There was evidence of a non-linear relationship between fruits, vegetables, processed meat, whole grains, and SSB and T2D risk. Optimal consumption of risk-decreasing foods resulted in a 42% reduction, and consumption of risk-increasing foods was associated with a threefold T2D risk, compared to non-consumption. The meta-evidence was graded "low" for legumes and nuts; "moderate" for refined grains, vegetables, fruit, eggs, dairy, and fish; and "high" for processed meat, red meat, whole grains, and SSB. Among the investigated food groups, selecting specific optimal intakes can lead to a considerable change in risk of T2D.
Zou, Meng; Liu, Zhaoqi; Zhang, Xiang-Sun; Wang, Yong
2015-10-15
In prognosis and survival studies, an important goal is to identify multi-biomarker panels with predictive power using molecular characteristics or clinical observations. Such analysis is often challenged by censored, small-sample-size, but high-dimensional genomic profiles or clinical data. Therefore, sophisticated models and algorithms are in pressing need. In this study, we propose a novel Area Under Curve (AUC) optimization method for multi-biomarker panel identification named Nearest Centroid Classifier for AUC optimization (NCC-AUC). Our method is motivated by the connection between the AUC score for classification accuracy evaluation and Harrell's concordance index in survival analysis. This connection allows us to convert the survival time regression problem into a binary classification problem. An optimization model is then formulated to directly maximize AUC while minimizing the number of selected features used to construct a predictor in the nearest centroid classifier framework. NCC-AUC shows its strong performance in validations on both genomic data of breast cancer and clinical data of stage IB Non-Small-Cell Lung Cancer (NSCLC). For the genomic data, NCC-AUC outperforms Support Vector Machine (SVM) and Support Vector Machine-based Recursive Feature Elimination (SVM-RFE) in classification accuracy. It tends to select a multi-biomarker panel with low average redundancy and enriched biological meaning. NCC-AUC also separates low- and high-risk cohorts more significantly than the widely used Cox model (Cox proportional-hazards regression model) and the L1-Cox model (L1-penalized Cox model). These performance gains of NCC-AUC are quite robust across 5 subtypes of breast cancer. Further, in an independent clinical dataset, NCC-AUC outperforms SVM and SVM-RFE in predictive accuracy and is consistently better than the Cox model and L1-Cox model in grouping patients into high- and low-risk categories. In summary, NCC-AUC provides a rigorous optimization framework to systematically reveal multi-biomarker panels from genomic and clinical data. It can serve as a useful tool to identify prognostic biomarkers for survival analysis. NCC-AUC is available at http://doc.aporc.org/wiki/NCC-AUC. ywang@amss.ac.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
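The ranking objective that NCC-AUC maximizes can be illustrated by scoring test samples with a plain nearest-centroid decision value and evaluating it with AUC. This scikit-learn sketch on synthetic data illustrates that objective only; it is not the NCC-AUC optimization model itself, and the crude feature-selection step is an invented stand-in for its sparsity term.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Nearest-centroid decision value: signed difference of distances to class centroids.
mu0, mu1 = X_tr[y_tr == 0].mean(axis=0), X_tr[y_tr == 1].mean(axis=0)
score = np.linalg.norm(X_te - mu0, axis=1) - np.linalg.norm(X_te - mu1, axis=1)

# AUC evaluates ranking quality, mirroring the criterion NCC-AUC maximizes directly.
print("centroid-score AUC:", round(roc_auc_score(y_te, score), 3))

# A crude sparsity step: keep only the features with the largest centroid separation.
keep = np.argsort(np.abs(mu1 - mu0))[-10:]
score_k = (np.linalg.norm(X_te[:, keep] - mu0[keep], axis=1) -
           np.linalg.norm(X_te[:, keep] - mu1[keep], axis=1))
print("10-feature panel AUC:", round(roc_auc_score(y_te, score_k), 3))
```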
Effects of Zn on magnetic properties and pseudogap of optimally doped La 2-xSr xCuO 4
NASA Astrophysics Data System (ADS)
Islam, R. S.; Naqib, S. H.
2010-01-01
The effects of Zn substitution on the uniform ( q = 0) magnetic susceptibility, χ( T), of optimally doped ( x = 0.15) La 2-xSr xCu 1-yZn yO 4 sintered samples were investigated over a wide range of Zn contents ( y). Non-magnetic Zn was found to enhance χ( T) systematically and depress T c very effectively. We have extracted the characteristic pseudogap energy scale, ε g, from the analysis of χ( T) data. Unlike T c, ε g was found to be fairly insensitive to the level of Zn substitution. This supports the scenario where the pseudogap phenomenon has non-superconducting origin. We have also analyzed the Zn-induced Curie-like enhancement of the χ( T) data using different models and discussed the various possible implications.
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
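The four scalarization options listed above map naturally onto small aggregation functions. The sketch below assumes all objective scores are already oriented so that larger values are better; the toy objective matrix is illustrative.

```python
import numpy as np

def equally_weighted_sum(obj):
    return obj.mean(axis=1)

def normalized_sum(obj):
    span = obj.max(axis=0) - obj.min(axis=0)
    span[span == 0] = 1.0
    return ((obj - obj.min(axis=0)) / span).mean(axis=1)

def randomly_weighted_sum(obj, rng):
    w = rng.random(obj.shape[1])
    return obj @ (w / w.sum())

def equally_weighted_product(obj):
    return np.prod(obj, axis=1)

# rows = candidate cell models, columns = objective scores (larger is better)
objectives = np.array([[0.9, 0.2, 0.5],
                       [0.6, 0.6, 0.6],
                       [0.3, 0.9, 0.7]])
rng = np.random.default_rng(0)
for f in (equally_weighted_sum, normalized_sum, equally_weighted_product):
    print(f.__name__, np.round(f(objectives), 3))
print("randomly_weighted_sum", np.round(randomly_weighted_sum(objectives, rng), 3))
```

The choice between these aggregations changes which candidate model the evolutionary algorithm favours, which is exactly the comparison the study carries out across its four case studies.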
Symons, Jennifer E; Fyhrie, David P; Hawkins, David A; Upadhyaya, Shrinivasa K; Stover, Susan M
2015-02-26
Race surfaces have been associated with the incidence of racehorse musculoskeletal injury, the leading cause of racehorse attrition. Optimal race surface mechanical behaviors that minimize injury risk are unknown. Computational models are an economical method to determine optimal mechanical behaviors. Previously developed equine musculoskeletal models utilized ground reaction floor models designed to simulate a stiff, smooth floor appropriate for a human gait laboratory. Our objective was to develop a computational race surface model (two force-displacement functions, one linear and one nonlinear) that reproduced experimental race surface mechanical behaviors for incorporation in equine musculoskeletal models. Soil impact tests were simulated in a musculoskeletal modeling environment and compared to experimental force and displacement data collected during initial and repeat impacts at two racetracks with differing race surfaces - (i) dirt and (ii) synthetic. Best-fit model coefficients (7 total) were compared between surface types and initial and repeat impacts using a mixed model ANCOVA. Model simulation results closely matched empirical force, displacement and velocity data (mean R² = 0.930-0.997). Many model coefficients were statistically different between surface types and impacts. Principal component analysis of model coefficients showed systematic differences based on surface type and impact. In the future, the race surface model may be used in conjunction with the previously developed equine musculoskeletal models to understand the effects of race surface mechanical behaviors on limb dynamics, and determine race surface mechanical behaviors that reduce the incidence of racehorse musculoskeletal injury through modulation of limb dynamics. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang
2018-04-01
A novel fuzzy dynamical system approach to the control design of flexible joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator modelling and control scheme are deterministic and not IF-THEN heuristic rules-based. Next, a fuzzy-based performance index is proposed. An optimal design problem for a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained from solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.
Optimized Beam Sculpting with Generalized Fringe-rate Filters
NASA Astrophysics Data System (ADS)
Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina
2016-03-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
Pakvasa, Mitali Atul; Saroha, Vivek; Patel, Ravi Mangal
2018-06-01
Caffeine reduces the risk of bronchopulmonary dysplasia (BPD). Optimizing caffeine use could increase therapeutic benefit. We performed a systematic-review and random-effects meta-analysis of studies comparing different timing of initiation and dose of caffeine on the risk of BPD. Earlier initiation, compared to later, was associated with a decreased risk of BPD (5 observational studies; n = 63,049, adjusted OR 0.69; 95% CI 0.64-0.75, GRADE: low quality). High-dose caffeine, compared to standard-dose, was associated with a decreased risk of BPD (3 randomized trials, n = 432, OR 0.65; 95% CI 0.43-0.97; GRADE: low quality). Higher quality evidence is needed to guide optimal caffeine use. Copyright © 2018 Elsevier Inc. All rights reserved.
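The random-effects pooling used in such meta-analyses (DerSimonian-Laird) can be sketched as follows. The study-level odds ratios and confidence intervals are placeholders, not the reviewed studies' data.

```python
import numpy as np

# Placeholder study-level odds ratios and 95% CIs (NOT the reviewed studies' data).
or_point = np.array([0.70, 0.55, 0.80])
ci_low, ci_high = np.array([0.45, 0.30, 0.55]), np.array([1.05, 0.95, 1.15])

y = np.log(or_point)                               # log-odds ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2                                      # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1 / (se**2 + tau2)                          # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
se_pooled = np.sqrt(1 / w_re.sum())
print("pooled OR %.2f (95%% CI %.2f-%.2f)" %
      (np.exp(pooled), np.exp(pooled - 1.96 * se_pooled),
       np.exp(pooled + 1.96 * se_pooled)))
```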
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
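For reference, in the notation commonly used for this framework (with AA the reference all-atom ensemble and CG the coarse-grained model, related by a mapping M), the functional being minimized can be written schematically as

```latex
S_{\mathrm{rel}} \;=\; \sum_{i} p_{\mathrm{AA}}(i)\,
  \ln\!\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\!\big(M(i)\big)}
  \;+\; \big\langle S_{\mathrm{map}} \big\rangle_{\mathrm{AA}} \;\ge\; 0 .
```

Here the sum runs over atomistic configurations, and the mapping-entropy term accounts for the degeneracy of M; since that term does not depend on the coarse-grained interaction parameters, minimizing S_rel with respect to them drives the coarse-grained configurational distribution toward the mapped all-atom one, which is the sense in which the relative entropy measures information lost upon coarse-graining.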
Application of zonal model on indoor air sensor network design
NASA Astrophysics Data System (ADS)
Chen, Y. Lisa; Wen, Jin
2007-04-01
Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A genetic algorithm is then applied to optimize the sensor location and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.
Eliciting naturalistic cortical responses with a sensory prosthesis via optimized microstimulation
NASA Astrophysics Data System (ADS)
Choi, John S.; Brockmeier, Austin J.; McNiel, David B.; von Kraus, Lee M.; Príncipe, José C.; Francis, Joseph T.
2016-10-01
Objective. Lost sensations, such as touch, could one day be restored by electrical stimulation along the sensory neural pathways. Such stimulation, when informed by electronic sensors, could provide naturalistic cutaneous and proprioceptive feedback to the user. Perceptually, microstimulation of somatosensory brain regions produces localized, modality-specific sensations, and several spatiotemporal parameters have been studied for their discernibility. However, systematic methods for encoding a wide array of naturally occurring stimuli into biomimetic percepts via multi-channel microstimulation are lacking. More specifically, generating spatiotemporal patterns for explicitly evoking naturalistic neural activation has not yet been explored. Approach. We address this problem by first modeling the dynamical input-output relationship between multichannel microstimulation and downstream neural responses, and then optimizing the input pattern to reproduce naturally occurring touch responses as closely as possible. Main results. Here we show that such optimization produces responses in the S1 cortex of the anesthetized rat that are highly similar to natural, tactile-stimulus-evoked counterparts. Furthermore, information on both pressure and location of the touch stimulus was found to be highly preserved. Significance. Our results suggest that the currently presented stimulus optimization approach holds great promise for restoring naturalistic levels of sensation.
Wu, Xin; Koslowski, Axel; Thiel, Walter
2012-07-10
In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
Optimizing information flow in small genetic networks. IV. Spatial coupling
NASA Astrophysics Data System (ADS)
Sokolowski, Thomas R.; Tkačik, Gašper
2015-06-01
We typically think of cells as responding to external signals independently by regulating their gene expression levels, yet they often locally exchange information and coordinate. Can such spatial coupling be of benefit for conveying signals subject to gene regulatory noise? Here we extend our information-theoretic framework for gene regulation to spatially extended systems. As an example, we consider a lattice of nuclei responding to a concentration field of a transcriptional regulator (the input) by expressing a single diffusible target gene. When input concentrations are low, diffusive coupling markedly improves information transmission; optimal gene activation functions also systematically change. A qualitatively different regulatory strategy emerges where individual cells respond to the input in a nearly steplike fashion that is subsequently averaged out by strong diffusion. While motivated by early patterning events in the Drosophila embryo, our framework is generically applicable to spatially coupled stochastic gene expression models.
Design optimization of condenser microphone: a design of experiment perspective.
Tan, Chee Wee; Miao, Jianmin
2009-06-01
A well-designed condenser microphone backplate is very important in the attainment of good frequency response characteristics--high sensitivity and wide bandwidth with flat response--and low mechanical-thermal noise. To study the design optimization of the backplate, a 2(6) factorial design with a single replicate, which consists of six backplate parameters and four responses, has been undertaken on a comprehensive condenser microphone model developed by Zuckerwar. Through the elimination of insignificant parameters via normal probability plots of the effect estimates, the projection of an unreplicated factorial design into a replicated one can be performed to carry out an analysis of variance on the factorial design. The air gap and slot have significant effects on the sensitivity, mechanical-thermal noise, and bandwidth while the slot/hole location interaction has major influence over the latter two responses. An organized and systematic approach of designing the backplate is summarized.
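A brief, hypothetical illustration of the underlying factorial arithmetic: the sketch below builds a full 2^6 design, simulates a response in which air gap, slot, and their interaction dominate (an assumption, not Zuckerwar's microphone model), and estimates the main effects that a normal probability plot would then screen.

```python
# Hedged sketch of effect estimation in an unreplicated 2^6 factorial design.
# Factor names and the response function are placeholders; the point is only
# the contrast arithmetic behind a normal-probability screen of effects.
import itertools
import numpy as np

factors = ["air_gap", "slot", "hole_loc", "hole_radius", "backplate_radius", "tension"]
levels = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 64 runs

rng = np.random.default_rng(1)
# Hypothetical response: sensitivity driven mainly by air_gap, slot, and their interaction
y = (5.0 - 1.2 * levels[:, 0] + 0.8 * levels[:, 1]
     + 0.5 * levels[:, 0] * levels[:, 1] + rng.normal(0, 0.2, len(levels)))

# Main-effect estimate = (mean at +1) - (mean at -1) = 2 * mean(level * response)
effects = {name: 2.0 * np.mean(levels[:, j] * y) for j, name in enumerate(factors)}
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:18s} {eff:+.3f}")
```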
Assessment of combating-desertification strategies using the linear assignment method
NASA Astrophysics Data System (ADS)
Hassan Sadeghravesh, Mohammad; Khosravi, Hassan; Ghasemian, Soudeh
2016-04-01
Nowadays desertification, as a global problem, affects many countries in the world, especially developing countries like Iran. With respect to increasing importance of desertification and its complexity, the necessity of attention to the optimal combating-desertification alternatives is essential. Selecting appropriate strategies according to all effective criteria to combat the desertification process can be useful in rehabilitating degraded lands and avoiding degradation in vulnerable fields. This study provides systematic and optimal strategies of combating desertification by use of a group decision-making model. To this end, the preferences of indexes were obtained through using the Delphi model, within the framework of multi-attribute decision making (MADM). Then, priorities of strategies were evaluated by using linear assignment (LA) method. According to the results, the strategies to prevent improper change of land use (A18), development and reclamation of plant cover (A23), and control overcharging of groundwater resources (A31) were identified as the most important strategies for combating desertification in this study area. Therefore, it is suggested that the aforementioned ranking results be considered in projects which control and reduce the effects of desertification and rehabilitate degraded lands.
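The linear assignment step can be sketched with SciPy's assignment solver. The scores, weights, and alternative labels below are placeholders rather than the study's Delphi-derived data; the sketch only shows how criterion-wise rankings are aggregated into a weighted rank-frequency matrix and resolved into a single priority order.

```python
# Minimal sketch of the linear assignment (LA) ranking step with made-up data.
# gamma[i, k] accumulates the weight of criteria that rank alternative i at
# rank k; the optimal one-to-one alternative-to-rank assignment gives the
# final priority order.
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.array([[0.7, 0.4, 0.9],      # rows: alternatives A1..A4
                   [0.6, 0.8, 0.5],      # cols: criteria C1..C3
                   [0.9, 0.6, 0.4],
                   [0.3, 0.7, 0.8]])
weights = np.array([0.5, 0.3, 0.2])      # criterion weights (sum to 1)

n_alt = scores.shape[0]
gamma = np.zeros((n_alt, n_alt))
for j, w in enumerate(weights):
    order = np.argsort(-scores[:, j])    # best score first under criterion j
    for rank, alt in enumerate(order):
        gamma[alt, rank] += w

rows, cols = linear_sum_assignment(gamma, maximize=True)
ranking = [f"A{int(i) + 1}" for i in rows[np.argsort(cols)]]
print("priority order:", ranking)
```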
Statistical and engineering methods for model enhancement
NASA Astrophysics Data System (ADS)
Chang, Chia-Jung
Models that describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be used to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we proposed a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model with a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Different from enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are demonstrated through two applications. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we developed an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymer, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through a Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes.
In Chapter 4, the force prediction interval has been derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impacts on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and further applied to various applications. These research activities developed engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
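As a hedged illustration of the "Minimal Adjustment" idea in Chapter 2, the sketch below approximates the data-model discrepancy with a linear regression in candidate adjustment terms plus experimental-bias indicators and lets an L1 penalty perform the simultaneous variable selection. The data, the batch-bias structure, and the use of the lasso are assumptions standing in for the dissertation's actual procedure.

```python
# Rough sketch, under stated assumptions, of adjusting a physics-based model
# minimally: regress the discrepancy (observation minus physics prediction) on
# candidate adjustment terms and experimental-bias indicators, with an L1
# penalty selecting which terms are actually needed. Data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 60
x = rng.uniform(0, 1, size=(n, 3))                 # process settings
physics = 2.0 + 1.5 * x[:, 0] - 0.7 * x[:, 1]      # physics-based prediction
batch = rng.integers(0, 3, n)                      # three experimental batches
bias = np.array([0.0, 0.6, 0.0])[batch]            # batch 1 has a systematic offset
y = physics + bias + rng.normal(0, 0.05, n)        # observations

# Candidate adjustment terms: the settings themselves plus batch indicators
batch_dummies = np.eye(3)[batch]
X = np.hstack([x, batch_dummies])

lasso = Lasso(alpha=0.02).fit(X, y - physics)      # model the discrepancy only
print("selected adjustments:", np.round(lasso.coef_, 3))
```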
Parametric study of a canard-configured transport using conceptual design optimization
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.
1985-01-01
Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.
Fundamental Limits of Delay and Security in Device-to-Device Communication
2013-01-01
systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. ... A coding scheme based on erasure compression and Slepian-Wolf binning is presented and shown to provide a Pareto optimal delay-reconstruction tradeoff. The erasure MD setup is then used to propose a ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egbendewe-Mondzozo, Aklesso; Swinton, S.; Izaurralde, Roberto C.
2013-03-01
This paper evaluates environmental policy effects on ligno-cellulosic biomass production and environmental outcomes using an integrated bioeconomic optimization model. The environmental policy integrated climate (EPIC) model is used to simulate crop yields and environmental indicators in current and future potential bioenergy cropping systems based on weather, topographic and soil data. The crop yield and environmental outcome parameters from EPIC are combined with biomass transport costs and economic parameters in a representative farmer profit-maximizing mathematical optimization model. The model is used to predict the impact of alternative policies on biomass production and environmental outcomes. We find that without environmental policy, rising biomass prices initially trigger production of annual crop residues, resulting in increased greenhouse gas emissions, soil erosion, and nutrient losses to surface and ground water. At higher biomass prices, perennial bioenergy crops replace annual crop residues as biomass sources, resulting in lower environmental impacts. Simulations of three environmental policies, namely a carbon price, a no-till area subsidy, and a fertilizer tax, reveal that only the carbon price policy systematically mitigates environmental impacts. The fertilizer tax is ineffectual and too costly to farmers. The no-till subsidy is effective only at low biomass prices and is too costly to government.
NASA Astrophysics Data System (ADS)
Štolc, Svorad; Bajla, Ivan
2010-01-01
In this paper we describe basic functions of the Hierarchical Temporal Memory (HTM) network based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of this paper is a systematic exploration of how to optimize important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using the permutation test, which employs a randomization distribution of the training and testing data. Based on a notion of the homogeneous usage of input image pixels, a methodology for HTM parameter optimization is proposed. In order to study the effects of two substantial parameters of the architecture: the
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from the systems of monitoring, telemetry, control, etc. They are required to be highly reliable so as to correctly perform data aggregation, processing and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure high reliability of signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task performance and eliminates common-cause failure. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized in a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
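A miniature, assumption-based version of the optimal-redundancy problem described above: choose a mix of two detector types (with invented costs and failure probabilities) to maximize the probability that at least one detector works, within a cost budget. The real formulation is a large two-level discrete optimization; this sketch only conveys the trade-off.

```python
# Toy illustration (assumed numbers, not from the paper) of choosing a mix of
# two detector types under a cost budget, a miniature version of the two-level
# discrete optimization problem described above.
from itertools import product

types = {"A": {"cost": 4.0, "fail_prob": 0.10},   # cheap, less reliable
         "B": {"cost": 9.0, "fail_prob": 0.02}}   # expensive, more reliable
budget = 30.0

best = None
for n_a, n_b in product(range(8), range(4)):
    cost = n_a * types["A"]["cost"] + n_b * types["B"]["cost"]
    if cost > budget or (n_a + n_b) == 0:
        continue
    # Mixing types also hedges against a common-cause failure of one type.
    p_all_fail = types["A"]["fail_prob"] ** n_a * types["B"]["fail_prob"] ** n_b
    reliability = 1.0 - p_all_fail
    if best is None or reliability > best[0]:
        best = (reliability, n_a, n_b, cost)

rel, n_a, n_b, cost = best
print(f"best mix: {n_a}x A + {n_b}x B, cost {cost:.0f}, reliability {rel:.6f}")
```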
Todd, Thomas; Dunn, Natalie; Xiang, Zuoshuang; He, Yongqun
2016-01-01
Animal models are indispensable for vaccine research and development. However, choosing which species to use and designing a vaccine study that is optimized for that species is often challenging. Vaxar (http://www.violinet.org/vaxar/) is a web-based database and analysis system that stores manually curated data regarding vaccine-induced responses in animals. To date, Vaxar encompasses models from 35 animal species including rodents, rabbits, ferrets, primates, and birds. These 35 species have been used to study more than 1300 experimentally tested vaccines for 164 pathogens and diseases significant to humans and domestic animals. The responses to vaccines by animals in more than 1500 experimental studies are recorded in Vaxar; these data can be used for systematic meta-analysis of various animal responses to a particular vaccine. For example, several variables, including animal strain, animal age, and the dose or route of either vaccination or challenge, might affect host response outcomes. Vaxar can also be used to identify variables that affect responses to different vaccines in a specific animal model. All data stored in Vaxar are publicly available for web-based queries and analyses. Overall, Vaxar provides a unique systematic approach for understanding vaccine-induced host immunity. PMID:27053566
Automation effects in a multiloop manual control system
NASA Technical Reports Server (NTRS)
Hess, R. A.; Mcnally, B. D.
1986-01-01
An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation systems stabilized vehicle responses from attitude to velocity to position and also provided for display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time-optimal manual control behavior was established for these tasks and the role which internal models may play in establishing human-machine performance was discussed.
SHARPEN-systematic hierarchical algorithms for rotamers and proteins on an extended network.
Loksha, Ilya V; Maiolo, James R; Hong, Cheng W; Ng, Albert; Snow, Christopher D
2009-04-30
Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. (c) 2009 Wiley Periodicals, Inc.
Baseline estimation in flame's spectra by using neural networks and robust statistics
NASA Astrophysics Data System (ADS)
Garces, Hugo; Arias, Luis; Rojas, Alejandro
2014-09-01
This work presents a baseline estimation method for flame spectra based on an artificial intelligence structure, a neural network, that combines robust statistics with multivariate analysis to automatically discriminate the measured wavelengths belonging to the continuous feature for model adaptation, removing the restriction of having to measure a target baseline for training. The main contributions of this paper are: to analyze a flame spectra database by computing Jolliffe statistics from principal component analysis, detecting wavelengths not correlated with most of the measured data and therefore corresponding to the baseline; to systematically determine the optimal number of neurons in hidden layers based on Akaike's final prediction error; to estimate the baseline over the full wavelength range of the sampled spectra; and to train a neural network that generalizes the relation between measured and baseline spectra. The main application of our research is to compute total radiation with baseline information, allowing diagnosis of the combustion process state for optimization in early stages.
Adeyanju, Oyinlolu O.; Al-Angari, Haitham M.; Sahakian, Alan V.
2012-01-01
Background Irreversible electroporation (IRE) is a novel ablation tool that uses brief high-voltage pulses to treat cancer. The efficacy of the therapy depends upon the distribution of the electric field, which in turn depends upon the configuration of electrodes used. Methods We sought to optimize the electrode configuration in terms of the distance between electrodes, the depth of electrode insertion, and the number of electrodes. We employed a 3D Finite Element Model and systematically varied the distance between the electrodes and the depth of electrode insertion, monitoring the lowest voltage sufficient to ablate the tumor, VIRE. We also measured the amount of normal (non-cancerous) tissue ablated. Measurements were performed for two electrodes, three electrodes, and four electrodes. The optimal electrode configuration was determined to be the one with the lowest VIRE, as that minimized damage to normal tissue. Results The optimal electrode configuration to ablate a 2.5 cm spheroidal tumor used two electrodes with a distance of 2 cm between the electrodes and a depth of insertion of 1 cm below the halfway point in the spherical tumor, as measured from the bottom of the electrode. This produced a VIRE of 3700 V. We found that it was generally best to have a small distance between the electrodes and for the center of the electrodes to be inserted at a depth equal to or deeper than the center of the tumor. We also found the distance between electrodes was far more important in influencing the outcome measures when compared with the depth of electrode insertion. Conclusions Overall, the distribution of electric field is highly dependent upon the electrode configuration, but the optimal configuration can be determined using numerical modeling. Our findings can help guide the clinical application of IRE as well as the selection of the best optimization algorithm to use in finding the optimal electrode configuration. PMID:23077449
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, due to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations but leads to an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
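The metamodel idea can be sketched as follows: fit a quadratic response surface to a small number of "simulations" of a model-error score and then minimize the surrogate instead of the expensive model. The error function, the two parameters, and all numbers below are synthetic stand-ins for the reanalysis-driven regional climate model runs.

```python
# Schematic example (synthetic data) of a quadratic metamodel: fit a degree-2
# response surface to ~30 evaluations of an "expensive" error score, then
# minimize the cheap surrogate over the parameter box.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

def expensive_model_error(theta):
    # Placeholder for a reanalysis-driven model run scored against observations.
    a, b = theta
    return (a - 0.3) ** 2 + 1.5 * (b + 0.2) ** 2 + 0.4 * a * b + 1.0

rng = np.random.default_rng(3)
thetas = rng.uniform(-1, 1, size=(30, 2))                 # 20-50 design points
errors = np.array([expensive_model_error(t) for t in thetas])

poly = PolynomialFeatures(degree=2, include_bias=True)
surrogate = LinearRegression().fit(poly.fit_transform(thetas), errors)

def surrogate_error(theta):
    return float(surrogate.predict(poly.transform(theta.reshape(1, -1)))[0])

res = minimize(surrogate_error, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("estimated optimal parameters:", np.round(res.x, 3))
```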
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
NASA Astrophysics Data System (ADS)
Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes
2015-01-01
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
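For orientation, here is a minimal scikit-learn LDA example on toy documents. It shows the standard variational fitting whose accuracy the paper questions, not the authors' network-based alternative; the documents and topic count are invented.

```python
# Small, self-contained LDA example on toy documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cell nucleus contains dna and chromosomes",
        "galaxies and stars emit light across the spectrum",
        "dna replication occurs before cell division",
        "telescopes observe distant stars and galaxies"]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]  # four highest-weight words
    print(f"topic {k}: {top}")
```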
Psychodynamic Emotional Regulation in View of Wolpe's Desensitization Model.
Rabinovich, Merav
2016-01-01
The current research belongs to the stream of theoretical integration and establishes a theoretical platform for integrative psychotherapy in anxiety disorders. Qualitative metasynthesis procedures were applied to 40 peer-reviewed psychoanalytic articles involving emotional regulation. The concept of psychodynamic emotional regulation was found to be connected with the categories of desensitization, gradual exposure, containment, and transference. This article presents a model according to which psychoanalytic psychotherapy allows anxiety to be tolerated while following the core principles of systematic desensitization. It is shown that despite the antiresearch image of psychoanalytic psychotherapy, its foundations obey evidence-based principles. The findings imply that anxiety tolerance might be a key goal in which the cumulative wisdom of the different therapies can be used to optimize psychotherapy outcomes.
Optimization of structures on the basis of fracture mechanics and reliability criteria
NASA Technical Reports Server (NTRS)
Heer, E.; Yang, J. N.
1973-01-01
A systematic summary of the factors involved in the optimization of a given structural configuration is part of a report resulting from a study of the analysis of the objective function. The predicted reliability of the performance of the finished structure is sharply dependent upon the results of coupon tests. The optimization analysis developed by the study also involves the expected cost of proof testing.
Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.
2009-01-01
Within this paper, control-relevant vehicle design concepts are examined using a widely used 3 DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet-powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g. level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g. instability and right-half-plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined. Both static and dynamic considerations are addressed. The gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.
Operations Optimization of Hybrid Energy Systems under Variable Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jun; Garcia, Humberto E.
Hybrid energy systems (HES) have been proposed to be an important element to enable increasing penetration of clean energy. This paper investigates the operations flexibility of HES, and develops a methodology for operations optimization to maximize its economic value based on predicted renewable generation and market information. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value, and is illustrated by numerical results.
Guerrero-Torrelles, Mariona; Monforte-Royo, Cristina; Rodríguez-Prat, Andrea; Porta-Sales, Josep; Balaguer, Albert
2017-10-01
Among patients with advanced disease, meaning in life is thought to enhance well-being, promote coping and improve the tolerance of physical symptoms. It may also act as a buffer against depression and hopelessness. As yet, there has been no synthesis of meaning in life interventions in which contextual factors, procedures and outcomes are described and evaluated. To identify meaning in life interventions implemented in patients with advanced disease and to describe their context, mechanisms and outcomes. Systematic review according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines and realist synthesis of meaning in life interventions using criteria from the Realist And Meta-narrative Evidence Syntheses: Evolving Standards project. The CINAHL, PsycINFO, PubMed and Web of Science databases were searched. A total of 12 articles were included in the systematic review, corresponding to nine different interventions. Five articles described randomized controlled trials, two were qualitative studies, two were commentaries or reflections, and there was one pre-post evaluation, one exploratory study and one description of a model of care. Analysis of context, mechanisms and outcomes configurations showed that a core component of all the interventions was the interpersonal encounter between patient and therapist, in which sources of meaning were explored and a sense of connectedness was re-established. Meaning in life interventions were associated with clinical benefits on measures of purpose-in-life, quality of life, spiritual well-being, self-efficacy, optimism, distress, hopelessness, anxiety, depression and wish to hasten death. This review provides an explanatory model of the contextual factors and mechanisms that may be involved in promoting meaning in life. These approaches could provide useful tools for relieving existential suffering at the end of life.
Coarse-graining using the relative entropy and simplex-based optimization methods in VOTCA
NASA Astrophysics Data System (ADS)
Rühle, Victor; Jochum, Mara; Koschke, Konstantin; Aluru, N. R.; Kremer, Kurt; Mashayak, S. Y.; Junghans, Christoph
2014-03-01
Coarse-grained (CG) simulations are an important tool to investigate systems on larger time and length scales. Several methods for systematic coarse-graining were developed, varying in complexity and the property of interest. Thus, the question arises which method best suits a specific class of system and desired application. The Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) provides a uniform platform for coarse-graining methods and allows for their direct comparison. We present recent advances of VOTCA, namely the implementation of the relative entropy method and downhill simplex optimization for coarse-graining. The methods are illustrated by coarse-graining SPC/E bulk water and a water-methanol mixture. Both CG models reproduce the pair distributions accurately. SYM is supported by AFOSR under grant 11157642 and by NSF under grant 1264282. CJ was supported in part by the NSF PHY11-25915 at KITP. K. Koschke acknowledges funding by the Nestle Research Center.
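A conceptual sketch of the simplex-based route: tune coarse-grained potential parameters with Nelder-Mead so that a pair distribution matches a reference one. In VOTCA the inner evaluation would be a CG simulation; here the "radial distribution function" is an analytic stand-in, so every numeric detail is an assumption.

```python
# Conceptual sketch of downhill simplex (Nelder-Mead) coarse-graining: adjust
# CG potential parameters so a (here, analytically faked) pair distribution
# matches a reference one.
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0.25, 1.5, 120)

def fake_rdf(epsilon, sigma):
    # Stand-in for g(r) from a CG run with a Lennard-Jones-like potential.
    u = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return np.exp(-np.clip(u, -10, 10))

g_target = fake_rdf(1.0, 0.35)                    # "atomistic" reference

def objective(params):
    eps, sig = params
    return np.sum((fake_rdf(eps, sig) - g_target) ** 2)

res = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
print("fitted (epsilon, sigma):", np.round(res.x, 3))
```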
Capsule performance optimization in the National Ignition Campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Kyrala, G. A.; Michel, P.; Milovich, J.; Munro, D. H.; Nikroo, A.; Olson, R. E.; Robey, H. F.; Spears, B. K.; Thomas, C. A.; Weber, S. V.; Wilson, D. C.; Marinak, M. M.; Suter, L. J.; Hammel, B. A.; Meyerhofer, D. D.; Atherton, J.; Edwards, J.; Haan, S. W.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.
2010-05-01
A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
The Problem with Big Data: Operating on Smaller Datasets to Bridge the Implementation Gap.
Mann, Richard P; Mushtaq, Faisal; White, Alan D; Mata-Cervantes, Gabriel; Pike, Tom; Coker, Dalton; Murdoch, Stuart; Hiles, Tim; Smith, Clare; Berridge, David; Hinchliffe, Suzanne; Hall, Geoff; Smye, Stephen; Wilkie, Richard M; Lodge, J Peter A; Mon-Williams, Mark
2016-01-01
Big datasets have the potential to revolutionize public health. However, there is a mismatch between the political and scientific optimism surrounding big data and the public's perception of its benefit. We suggest a systematic and concerted emphasis on developing models derived from smaller datasets to illustrate to the public how big data can produce tangible benefits in the long term. In order to highlight the immediate value of a small data approach, we produced a proof-of-concept model predicting hospital length of stay. The results demonstrate that existing small datasets can be used to create models that generate a reasonable prediction, facilitating health-care delivery. We propose that greater attention (and funding) needs to be directed toward the utilization of existing information resources in parallel with current efforts to create and exploit "big data."
Improved model of the retardance in citric acid coated ferrofluids using stepwise regression
NASA Astrophysics Data System (ADS)
Lin, J. F.; Qiu, X. R.
2017-06-01
Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been investigated for biomedical applications. The magneto-optical retardance of CA-coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of retardance in FFs were previously executed by the Taguchi method and Microsoft Excel, and the F value of the regression model was large enough. However, the model executed by Excel was not systematic. Instead, we adopted stepwise regression to model the retardance of CA-coated FFs. From the results of stepwise regression in MATLAB, the developed model had high predictive ability, owing to an F value of 2.55897e+7 and a correlation coefficient of one. The average absolute error of predicted retardances relative to measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parametric combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. Above all, the stepwise regression method was successfully used to model the retardance of CA-coated FFs, and the maximum global retardance was determined by the use of the GA.
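The stepwise fit can be imitated with a forward sequential selection of regression terms, sketched below on synthetic data in place of the measured retardances. The candidate terms, coefficients, and scikit-learn selector are illustrative assumptions, not the MATLAB stepwise procedure used in the paper.

```python
# Hedged sketch: forward sequential selection of regression terms for a
# retardance-like response, using synthetic data and made-up interactions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(4)
n = 40
pH, ratio, vol, temp = (rng.uniform(3, 6, n), rng.uniform(0.05, 0.2, n),
                        rng.uniform(20, 60, n), rng.uniform(30, 80, n))
retardance = 5 + 2.5 * pH + 40 * ratio + 0.1 * vol * ratio + rng.normal(0, 0.3, n)

features = np.column_stack([pH, ratio, vol, temp, pH * ratio, vol * ratio, temp * pH])
names = ["pH", "ratio", "vol", "temp", "pH*ratio", "vol*ratio", "temp*pH"]

selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                     direction="forward").fit(features, retardance)
keep = selector.get_support()
chosen = [name for name, flag in zip(names, keep) if flag]
model = LinearRegression().fit(features[:, keep], retardance)
print("selected terms:", chosen,
      "R^2 =", round(model.score(features[:, keep], retardance), 4))
```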
Amasya, Gulin; Badilli, Ulya; Aksu, Buket; Tarimci, Nilufer
2016-03-10
With Quality by Design (QbD), a systematic approach involving the design and development of all production processes to achieve the final product with a predetermined quality, one works within a design space that determines the critical formulation and process parameters. Verification of the quality of the final product is no longer necessary. In the current study, the QbD approach was used in the preparation of lipid nanoparticle formulations to improve skin penetration of 5-Fluorouracil, a widely used compound for treating non-melanoma skin cancer. 5-Fluorouracil-loaded lipid nanoparticles were prepared by the W/O/W double emulsion - solvent evaporation method. Artificial neural network software was used to evaluate the data obtained from the lipid nanoparticle formulations, to establish the design space, and to optimize the formulations. Two different artificial neural network models were developed. The limit values of the design space of the inputs and outputs obtained by both models were found to be within the knowledge space. The optimal formulations recommended by the models were prepared and the critical quality attributes belonging to those formulations were assigned. The experimental results remained within the design space limit values. Consequently, optimal formulations with the critical quality attributes determined to achieve the Quality Target Product Profile were successfully obtained within the design space by following the QbD steps. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimization Techniques for Analysis of Biological and Social Networks
2012-03-28
analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: implement the proposed algorithms, test and fine... alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters... systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational...
Malavera, Alejandra; Vasquez, Alejandra; Fregni, Felipe
2015-01-01
Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that has been extensively studied. While there have been initial positive results in some clinical trials, there is still variability in tDCS results. The aim of this article is to review and discuss patents assessing novel methods to optimize the use of tDCS. A systematic review was performed using the Google patents database with tDCS as the main technique, with patent filing dates between 2010 and 2015. Twenty-two patents met our inclusion criteria. These patents attempt to address current tDCS limitations. Only a few of them have been investigated in clinical trials (i.e., high-definition tDCS), and indeed most of them have not been tested before in human trials. Further clinical testing is required to assess which patents are more likely to optimize the effects of tDCS. We discuss the potential optimization of tDCS based on these patents and the current experience with standard tDCS.
Sponge-supported cultures of primary head and neck tumors for an optimized preclinical model.
Dohmen, Amy J C; Sanders, Joyce; Canisius, Sander; Jordanova, Ekaterina S; Aalbersberg, Else A; van den Brekel, Michiel W M; Neefjes, Jacques; Zuur, Charlotte L
2018-05-18
Treatment of advanced head and neck cancer is associated with low survival, high toxicity and a widely divergent individual response. The sponge-gel-supported histoculture model was previously developed to serve as a preclinical model for predicting individual treatment responses. We aimed to optimize the sponge-gel-supported histoculture model and provide more insight in cell specific behaviour by evaluating the tumor and its microenvironment using immunohistochemistry. We collected fresh tumor biopsies from 72 untreated patients and cultured them for 7 days. Biopsies from 57 patients (79%) were successfully cultured and 1451 tumor fragments (95.4%) were evaluated. Fragments were scored for percentage of tumor, tumor viability and proliferation, EGF-receptor expression and presence of T-cells and macrophages. Median tumor percentage increased from 53% at day 0 to 80% at day 7. Viability and proliferation decreased after 7 days, from 90% to 30% and from 30% to 10%, respectively. Addition of EGF, folic acid and hydrocortisone can lead to improved viability and proliferation, however this was not systematically observed. No patient subgroup could be identified with higher culture success rates. Immune cells were still present at day 7, illustrating that the tumor microenvironment is sustained. EGF supplementation did not increase viability and proliferation in patients overexpressing EGF-Receptor.
Silva, Aleidy; Lee, Bai-Yu; Clemens, Daniel L; Kee, Theodore; Ding, Xianting; Ho, Chih-Ming; Horwitz, Marcus A
2016-04-12
Tuberculosis (TB) remains a major global public health problem, and improved treatments are needed to shorten duration of therapy, decrease disease burden, improve compliance, and combat emergence of drug resistance. Ideally, the most effective regimen would be identified by a systematic and comprehensive combinatorial search of large numbers of TB drugs. However, optimization of regimens by standard methods is challenging, especially as the number of drugs increases, because of the extremely large number of drug-dose combinations requiring testing. Herein, we used an optimization platform, feedback system control (FSC) methodology, to identify improved drug-dose combinations for TB treatment using a fluorescence-based human macrophage cell culture model of TB, in which macrophages are infected with isopropyl β-D-1-thiogalactopyranoside (IPTG)-inducible green fluorescent protein (GFP)-expressing Mycobacterium tuberculosis (Mtb). On the basis of only a single screening test and three iterations, we identified highly efficacious three- and four-drug combinations. To verify the efficacy of these combinations, we further evaluated them using a methodologically independent assay for intramacrophage killing of Mtb; the optimized combinations showed greater efficacy than the current standard TB drug regimen. Surprisingly, all top three- and four-drug optimized regimens included the third-line drug clofazimine, and none included the first-line drugs isoniazid and rifampin, which had insignificant or antagonistic impacts on efficacy. Because top regimens also did not include a fluoroquinolone or aminoglycoside, they are potentially of use for treating many cases of multidrug- and extensively drug-resistant TB. Our study shows the power of an FSC platform to identify promising previously unidentified drug-dose combinations for treatment of TB.
Osendarp, Saskia J M; Broersen, Britt; van Liere, Marti J; De-Regil, Luz M; Bahirathan, Lavannya; Klassen, Eva; Neufeld, Lynnette M
2016-12-01
The question of whether diets composed of local foods can meet recommended nutrient intakes in children aged 6 to 23 months living in low- and middle-income countries is contested. To review evidence from studies evaluating whether (1) macro- and micronutrient requirements of children aged 6 to 23 months from low- and middle-income countries are met by the consumption of locally available foods ("observed intake") and (2) nutrient requirements can be met when the use of local foods is optimized, using modeling techniques ("modeled intake"). Twenty-three articles were included after conducting a systematic literature search. To allow for comparisons between studies, findings of 15 observed intake studies were compared against their contribution to a standardized recommended nutrient intake from complementary foods. For studies with data on intake distribution, the percentage of intakes below the estimated average requirement (% < EAR) was calculated. Data from the observed intake studies indicate that children aged 6 to 23 months meet protein requirements, while diets are inadequate in calcium, iron, and zinc. Also for energy, vitamin A, thiamin, riboflavin, niacin, folate, and vitamin C, children did not always fulfill their requirements. Very few studies reported on vitamin B6, B12, and magnesium, and no conclusions can be drawn for these nutrients. When diets are optimized using modeling techniques, most of these nutrient requirements can be met, with the exception of iron and zinc and, in some settings, calcium, folate, and B vitamins. Our findings suggest that optimizing the use of local foods in diets of children aged 6 to 23 months can improve nutrient intakes; however, additional cost-effective strategies are needed to ensure adequate intakes of iron and zinc. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.
2017-10-01
We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p +Pb and Pb + Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
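A simplified illustration of the constrained least squares step, under the assumption that the systematic coordinate error reduces to a constant small-angle misalignment observed through noisy linear residuals: SciPy's lsq_linear estimates the per-axis errors while bounding them to a plausible range. The observation model and numbers are invented, not the authors' formulation.

```python
# Simplified illustration (assumed observation model): estimate a constant
# small-angle frame misalignment by constrained least squares, bounding each
# axis error with scipy's lsq_linear.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)
true_misalign = np.array([0.002, -0.001, 0.0015])      # radians, per axis

# Each row maps the misalignment vector to an observed attitude residual.
A = rng.normal(size=(300, 3))
y = A @ true_misalign + rng.normal(0, 1e-4, 300)       # noisy residuals

res = lsq_linear(A, y, bounds=(-0.005, 0.005))         # |error| <= 5 mrad
print("estimated misalignment (rad):", np.round(res.x, 5))
```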
A systematic approach for the location of hand sanitizer dispensers in hospitals.
Cure, Laila; Van Enk, Richard; Tiong, Ewing
2014-09-01
Compliance with hand hygiene practices is directly affected by the accessibility and availability of cleaning agents. Nevertheless, the decision of where to locate these dispensers is often not explicitly or fully addressed in the literature. In this paper, we study the problem of selecting the locations to install alcohol-based hand sanitizer dispensers throughout a hospital unit as an indirect approach to maximize compliance with hand hygiene practices. We investigate the relevant criteria in selecting dispenser locations that promote hand hygiene compliance, propose metrics for the evaluation of various location configurations, and formulate a dispenser location optimization model that systematically incorporates such criteria. A complete methodology to collect data and obtain the model parameters is described. We illustrate the proposed approach using data from a general care unit at a collaborating hospital. A cost analysis was performed to study the trade-offs between usability and cost. The proposed methodology can help in evaluating the current location configuration, determining the need for change, and establishing the best possible configuration. It can be adapted to incorporate alternative metrics, tailored to different institutions and updated as needed with new internal policies or safety regulation.
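A toy sketch of the location decision: choose dispenser sites so that every care-activity point has a dispenser within a walking-distance threshold, using a greedy set-cover heuristic. The coordinates, the threshold, and the greedy rule are illustrative assumptions rather than the paper's optimization model and metrics.

```python
# Toy greedy set-cover heuristic for dispenser placement on made-up geometry.
import numpy as np

rng = np.random.default_rng(6)
activity_pts = rng.uniform(0, 50, size=(40, 2))     # metres, care-activity locations
candidate_sites = rng.uniform(0, 50, size=(15, 2))  # feasible wall/doorway positions
max_dist = 12.0

dist = np.linalg.norm(activity_pts[:, None, :] - candidate_sites[None, :, :], axis=2)
covers = dist <= max_dist                            # covers[i, j]: site j serves point i

chosen, uncovered = [], set(range(len(activity_pts)))
while uncovered:
    gains = [len(uncovered & set(np.where(covers[:, j])[0]))
             for j in range(len(candidate_sites))]
    j_best = int(np.argmax(gains))
    if gains[j_best] == 0:                           # remaining points unreachable at this threshold
        break
    chosen.append(j_best)
    uncovered -= set(np.where(covers[:, j_best])[0])

print(f"{len(chosen)} dispensers chosen at sites {chosen}; {len(uncovered)} points uncovered")
```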
Son, Jino; Vavra, Janna; Li, Yusong; Seymour, Megan; Forbes, Valery
2015-04-01
The preparation of a stable nanoparticle stock suspension is the first step in nanotoxicological studies, but how different preparation methods influence the physicochemical properties of nanoparticles in a solution, even in Milli-Q water, is often under-appreciated. In this study, a systematic approach using a central composite design (CCD) was employed to investigate the effects of sonication time and suspension concentration on the physicochemical properties (i.e. hydrodynamic diameter, zeta potential and ion dissolution) of silver (Ag) and copper oxide (CuO) nanoparticles (NPs) and to identify optimal conditions for suspension preparation in Milli-Q water; defined as giving the smallest particle sizes, highest suspension stability and lowest ion dissolution. Indeed, all the physicochemical properties of AgNPs and CuONPs varied dramatically depending on how the stock suspensions were prepared and differed profoundly between nanoparticle types, indicating the importance of suspension preparation. Moreover, the physicochemical properties of AgNPs and CuONPs, at least in simple media (Milli-Q water), behaved in predictable ways as a function of sonication time and suspension concentration, confirming the validity of our models. Overall, the approach allows systematic assessment of the influence of various factors on key properties of nanoparticle suspensions, which will facilitate optimization of the preparation of nanoparticle stock suspensions and improve the reproducibility of nanotoxicological results. We recommend that further attention be given to details of stock suspension preparation before conducting nanotoxicological studies as these can have an important influence on the behavior and subsequent toxicity of nanoparticles. Copyright © 2014 Elsevier Ltd. All rights reserved.
Salem, A; Salem, A F; Al-Ibraheem, A; Lataifeh, I; Almousa, A; Jaradat, I
2011-01-01
In recent years, the role of positron emission tomography (PET) in the staging and management of gynecological cancers has been increasing. The aim of this study was to systematically review the role of PET in radiotherapy planning and brachytherapy treatment optimization in patients with cervical cancer. Systematic literature review. Systematic review of relevant literature addressing the utilization of PET and/or PET-computed tomography (CT) in external-beam radiotherapy planning and brachytherapy treatment optimization. We performed an extensive PubMed database search on 20 April 2011. Nineteen studies, including 759 patients, formed the basis of this systematic review. PET/PET-CT is the most sensitive imaging modality for detecting nodal metastases in patients with cervical cancer and has been shown to impact external-beam radiotherapy planning by modifying the treatment field and customizing the radiation dose. This particularly applies to detection of previously uncovered para-aortic and inguinal nodal metastases. Furthermore, PET/PET-CT guided intensity-modulated radiation therapy (IMRT) allows delivery of higher doses of radiation to the primary tumor, if brachytherapy is unsuitable, and to grossly involved nodal disease while minimizing treatment-related toxicity. PET/PET-CT based brachytherapy optimization allows improved tumor-volume dose distribution and detailed 3D dosimetric evaluation of risk organs. Sequential PET/PET-CT imaging performed during the course of brachytherapy forms the basis of "adaptive" brachytherapy in cervical cancer. This review demonstrates the effectiveness of pretreatment PET/PET-CT in cervical cancer patients treated by radiotherapy. Further prospective studies are required to define the group of patients who would benefit the most from this procedure.
A systematic review on current status of health technology reassessment: insights for South Korea.
Seo, Hyun-Ju; Park, Ji Jeong; Lee, Seon Heui
2016-11-11
To systematically investigate the current status and methodology of health technology reassessment (HTR) in various countries to draw insights for the healthcare system in South Korea. A systematic literature search was conducted on articles published between January 2000 and February 2015 in Medline, EMBASE, the Cochrane Library, CINAHL, and PubMed. The titles and abstracts of retrieved records were screened and selected by two independent reviewers. Data related to HTR were extracted using a pre-standardised form. The review was conducted using narrative synthesis to understand and summarise the HTR process and policies. Forty-five studies, conducted in seven countries, including the United Kingdom, Australia, Canada, Spain, Sweden, Denmark, and the United States of America, fulfilled the inclusion criteria. Informed by the literature review, and complemented by informant interviews, we focused on HTR activities in four jurisdictions: the United Kingdom, Canada, Australia, and Spain. There were similarities in the HTR processes, namely the use of existing health technology assessment agencies, reassessment candidate technology identification and priority setting, stakeholder involvement, support for reimbursement coverage, and implementation strategies. Considering the findings of the systematic review in the context of the domestic healthcare environment in Korea, an appropriate HTR model was developed. This model includes four stages: identification, prioritisation, reassessment and decision. Disinvestment and reinvestment through HTR were used to increase the efficiency and quality of care to help patients receive optimal treatment. Based on the lessons learnt from other countries' experiences, Korea should make efforts to establish an HTR process that optimises the National Healthcare Insurance system through revision of the existing Medical Service Act.
A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. Principal Findings In the current study, we compare and analyze the performance of existing computational methods for predicting ADRs, and additionally implement and evaluate algorithms that were previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas can all be written in a linear form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
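A hedged sketch of one plausible reading of a Jaccard-weighted profile predictor (not the paper's exact formulation); the drug-ADR matrix and query profile below are synthetic placeholders:

```python
# Illustrative sketch: score drug-ADR associations for a query drug as a
# Jaccard-weighted average of the ADR profiles of known drugs.
import numpy as np

rng = np.random.default_rng(0)
Y = (rng.random((50, 200)) < 0.05).astype(float)   # 50 drugs x 200 ADRs (toy data)
x = (rng.random(200) < 0.05).astype(float)         # query drug's known ADRs

def jaccard(a, b):
    inter = np.sum((a > 0) & (b > 0))
    union = np.sum((a > 0) | (b > 0))
    return inter / union if union else 0.0

w = np.array([jaccard(x, Y[i]) for i in range(Y.shape[0])])  # drug-drug weights
scores = w @ Y / (w.sum() + 1e-12)     # weighted profile: linear in the weights
top = np.argsort(scores)[::-1][:10]    # candidate ADRs ranked for the query drug
print("Top-ranked ADR indices:", top)
```

The scoring step is a linear combination of known profiles, which is the structural property the abstract highlights.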
Optimal Multi-Type Sensor Placement for Structural Identification by Static-Load Testing
Papadopoulou, Maria; Vernay, Didier; Smith, Ian F. C.
2017-01-01
Assessing ageing infrastructure is a critical challenge for civil engineers due to the difficulty in the estimation and integration of uncertainties in structural models. Field measurements are increasingly used to improve knowledge of the real behavior of a structure; this activity is called structural identification. Error-domain model falsification (EDMF) is an easy-to-use model-based structural-identification methodology which robustly accommodates systematic uncertainties originating from sources such as boundary conditions, numerical modelling and model fidelity, as well as aleatory uncertainties from sources such as measurement error and material parameter-value estimations. In most practical applications of structural identification, sensors are placed using engineering judgment and experience. However, since sensor placement is fundamental to the success of structural identification, a more rational and systematic method is justified. This study presents a measurement system design methodology to identify the best sensor locations and sensor types using information from static-load tests. More specifically, three static-load tests were studied for the sensor system design using three types of sensors for a performance evaluation of a full-scale bridge in Singapore. Several sensor placement strategies are compared using joint entropy as an information-gain metric. A modified version of the hierarchical algorithm for sensor placement is proposed to take into account mutual information between load tests. It is shown that a carefully-configured measurement strategy that includes multiple sensor types and several load tests maximizes information gain. PMID:29240684
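A minimal sketch of greedy sensor selection by joint entropy, assuming a Gaussian approximation over predicted responses; this is not the paper's modified hierarchical algorithm, and the candidate predictions are synthetic:

```python
# Candidate sensor locations are described by predicted responses across model
# instances (columns = scenarios); the joint entropy of a Gaussian
# approximation is 0.5 * log det(2*pi*e*Cov).
import numpy as np

rng = np.random.default_rng(1)
n_locations, n_scenarios = 30, 500
predictions = rng.normal(size=(n_locations, n_scenarios))   # synthetic model set

def joint_entropy(idx):
    cov = np.cov(predictions[idx, :]) + 1e-9 * np.eye(len(idx))
    cov = np.atleast_2d(cov)
    sign, logdet = np.linalg.slogdet(2 * np.pi * np.e * cov)
    return 0.5 * logdet

selected, remaining = [], list(range(n_locations))
for _ in range(5):                                   # pick 5 sensor locations
    gains = [(joint_entropy(selected + [j]), j) for j in remaining]
    _, best = max(gains)
    selected.append(best)
    remaining.remove(best)
print("Greedy sensor locations:", selected)
```

Extending the idea to multiple sensor types and load tests amounts to enlarging the candidate set and accounting for mutual information between tests, as the study describes.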
Davy, Carol; Bleasel, Jonathan; Liu, Hueiming; Tchan, Maria; Ponniah, Sharon; Brown, Alex
2015-05-10
The increasing prevalence of chronic disease, and even multiple chronic diseases, faced by both developed and developing countries is of considerable concern. Many of the interventions to address this within primary healthcare settings are based on a chronic care model first developed by the MacColl Institute for Healthcare Innovation at Group Health Cooperative. This systematic literature review aimed to identify and synthesise international evidence on the effectiveness of elements that have been included in a chronic care model for improving healthcare practices and health outcomes within primary healthcare settings. The review broadens the work of other similar reviews by focusing on the effectiveness of healthcare practice as well as health outcomes associated with implementing a chronic care model. In addition, relevant case series and case studies were also included. Of the 77 papers that met the inclusion criteria, all but two reported improvements to healthcare practice or health outcomes for people living with chronic disease. While the most commonly used elements of a chronic care model were self-management support and delivery system design, there were considerable variations between studies regarding which combination of elements was included, as well as the way in which chronic care model elements were implemented. This meant that it was impossible to clearly identify any optimal combination of chronic care model elements that led to the reported improvements. While the main argument for excluding papers reporting case studies and case series in systematic literature reviews is that they are not of sufficient quality or generalizability, we found that they provided a more detailed account of how various chronic care models were developed and implemented. In particular, these papers suggested that several factors, including supporting reflective healthcare practice, sending clear messages about the importance of chronic disease care, and ensuring that leaders support the implementation and sustainability of interventions, may have been just as important as the chronic care model's elements in contributing to the improvements in healthcare practice or health outcomes for people living with chronic disease.
Robophysical study of jumping dynamics on granular media
NASA Astrophysics Data System (ADS)
Aguilar, Jeffrey; Goldman, Daniel I.
2016-03-01
Characterizing forces on deformable objects intruding into sand and soil requires understanding the solid- and fluid-like responses of such substrates and their effect on the state of the object. The most detailed studies of intrusion in dry granular media have revealed that interactions of fixed-shape objects during free impact (for example, cannonballs) and forced slow penetration can be described by hydrostatic- and hydrodynamic-like forces. Here we investigate a new class of granular interactions: rapid intrusions by objects that change shape (self-deform) through passive and active means. Systematic studies of a simple spring-mass robot jumping on dry granular media reveal that jumping performance is explained by an interplay of nonlinear frictional and hydrodynamic drag as well as induced added mass (unaccounted for in traditional intrusion models) characterized by a rapidly solidified region of grains accelerated by the foot. A model incorporating these dynamics reveals that added mass degrades the performance of certain self-deformations owing to a shift in optimal timing during push-off. Our systematic robophysical experiment reveals both new soft-matter physics and principles for robotic self-deformation and control, which together provide insight into movement in deformable terrestrial environments.
Gagliardi, Anna R; Abdallah, Flavia; Faulkner, Guy; Ciliska, Donna; Hicks, Audrey
2015-04-01
Physical activity (PA) counselling in primary care increases PA but is not consistently practiced. This study examined factors that optimise the delivery and impact of PA counselling. A realist systematic review based on the PRECEDE-PROCEED model and RAMESES principles was conducted to identify essential components of PA counselling. MEDLINE, EMBASE, Cochrane Library, PsycINFO, and Physical Education Index were searched from 2000 to 2013 for studies that evaluated family practice PA counselling. Of 1546 articles identified, 10 were eligible for review (3 systematic reviews, 5 randomised controlled trials, 2 observational studies). Counselling provided by clinicians or counsellors alone that explored motivation increased self-reported PA at least 12 months following intervention. Multiple sessions may sustain increased PA beyond 12 months. Given the paucity of eligible studies and limited detail reported about interventions, further research is needed to establish the optimal design and delivery of PA counselling. Research and planning should consider predisposing, reinforcing and enabling design features identified in these studies. Since research shows that PA counselling promotes PA but is not widely practiced, primary care providers will require training and tools to operationalize PA counselling. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Petersen, Inge; Fairall, Lara; Egbe, Catherine O; Bhana, Arvin
2014-05-01
To conduct a qualitative systematic review on the use of lay counsellors in South Africa to provide lessons on optimizing their use for psychological and behavioural change counselling for chronic long-term care in scarce-resource contexts. A qualitative systematic review of the literature on lay counsellor services in South Africa. Twenty-nine studies met the inclusion criteria. Five randomized control trials and two cohort studies reported that lay counsellors can provide behaviour change counselling with good outcomes. One multi-centre cohort study provided promising evidence of improved anti-retroviral treatment adherence and one non-randomized controlled study provided promising results for counselling for depression. Six studies found low fidelity of lay counsellor-delivered interventions in routine care. Reasons for low fidelity include poor role definition, inconsistent remuneration, lack of standardized training, and poor supervision and logistical support. Within resource-constrained settings, adjunct behaviour change and psychological services provided by lay counsellors can be harnessed to promote chronic care at the primary health care level. Optimizing lay counsellor services requires interventions at an organizational level that provide a clear role definition and scope of practice; in-service training and formal supervision; and sensitization of health managers to the importance and logistical requirements of counselling. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Koutinas, Michalis; Kiparissides, Alexandros; Pistikopoulos, Efstratios N; Mantalaris, Athanasios
2012-01-01
The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance of developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals.
Biomass to Liquid Fuels and Electrical Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Steven; McDonald, Timothy; Gallagher, Thomas
This research program provided data on immediate applicability of forest biomass production and logistics models. Also, the research further developed and optimized fractionation techniques that can be used to separate biomass feedstocks into their basic chemical constituents. Finally, additional research established systematic techniques to determine economically feasible technologies for production of biomass-derived synthesis gases that will be used for clean, renewable power generation and for production of liquid transportation fuels. Moreover, this research program continued our efforts to educate the next generation of engineers and scientists needed to implement these technologies.
Decorrelated jet substructure tagging using adversarial neural networks
NASA Astrophysics Data System (ADS)
Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas
2017-10-01
We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.
Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj
2015-01-01
Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. Responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the software Design-Expert® to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). A desirability function was used to optimize the response variables, and the predicted responses were in agreement with experimental values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
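An illustrative sketch of a 3-factor face-centered CCD with a quadratic response-surface fit, assuming coded factor levels and a synthetic placeholder response (the study's data and Design-Expert® output are not reproduced):

```python
# Build a face-centered CCD (coded levels -1, 0, +1), fit a full quadratic
# model for particle size, and pick the settings minimizing the predicted d90.
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1, 1], repeat=3)), float)   # 8 runs
axial = np.vstack([v * np.eye(3)[i] for i in range(3) for v in (-1, 1)])  # 6 runs
center = np.zeros((3, 3))                                                 # 3 runs
X = np.vstack([factorial, axial, center])   # coded motor speed, pump speed, bead volume

def quad_terms(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
y = 400 - 40 * X[:, 0] - 25 * X[:, 2] + 15 * X[:, 0] ** 2 \
    + rng.normal(0, 5, len(X))                         # fake d90 response (nm)

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)   # fit quadratic model
grid = np.array(list(itertools.product(np.linspace(-1, 1, 21), repeat=3)))
pred = quad_terms(grid) @ beta
best = grid[np.argmin(pred)]
print("Coded settings minimizing predicted d90:", np.round(best, 2))
```

A desirability function combining several responses (milling time, particle size, yield) would replace the single-objective grid search used here.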
Chopp-Hurley, Jaclyn N; Brookham, Rebecca L; Dickerson, Clark R
2016-12-01
Biomechanical models are often used to estimate the muscular demands of various activities. However, specific muscle dysfunctions typical of unique clinical populations are rarely considered. Due to iatrogenic tissue damage, pectoralis major capability is markedly reduced in breast cancer survivors, which could influence arm internal and external rotation muscular strategies. Accordingly, an optimization-based muscle force prediction model was systematically modified to emulate breast cancer survivors through adjusting pectoralis capability and enforcing an empirical muscular co-activation relationship. Model permutations were evaluated through comparisons between predicted muscle forces and empirically measured muscle activations in survivors. Similarities between empirical data and model outputs were influenced by muscle type, hand force, pectoralis major capability and co-activation constraints. Differences in magnitude were lower when the co-activation constraint was enforced (-18.4% [31.9]) than unenforced (-23.5% [27.6]) (p<0.0001). This research demonstrates that muscle dysfunction in breast cancer survivors can be reflected through including a capability constraint for pectoralis major. Further refinement of the co-activation constraint for survivors could improve its generalizability across this population and activities. Improving biomechanical models to more accurately represent clinical populations can provide novel information that can help in the development of optimal treatment programs for breast cancer survivors. Copyright © 2016 Elsevier Ltd. All rights reserved.
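A minimal sketch of an optimization-based muscle force prediction with a reduced pectoralis capability bound; the formulation, moment arms and strength values are assumptions for illustration, not the authors' model:

```python
# Predict muscle forces by minimizing summed squared activations subject to a
# joint-moment balance, with pectoralis major capability scaled down.
import numpy as np
from scipy.optimize import minimize

names   = ["pec_major", "lat_dorsi", "subscap", "infra"]
r       = np.array([0.04, 0.03, 0.025, -0.03])   # internal-rotation moment arms (m), hypothetical
f_max   = np.array([800.0, 600.0, 500.0, 450.0]) # maximal forces (N), hypothetical
cap     = np.array([0.3, 1.0, 1.0, 1.0])         # pectoralis capability reduced to 30%
M_req   = 12.0                                   # required internal-rotation moment (N*m)

def cost(f):
    a = f / f_max
    return np.sum(a ** 2)                        # effort criterion

cons = ({"type": "eq", "fun": lambda f: r @ f - M_req},)
bounds = [(0.0, c * fm) for c, fm in zip(cap, f_max)]
res = minimize(cost, x0=np.full(4, 50.0), bounds=bounds,
               constraints=cons, method="SLSQP")
for n, f in zip(names, res.x):
    print(f"{n:>10s}: {f:6.1f} N")
```

An empirical co-activation relationship could be added as an additional equality or inequality constraint linking agonist and antagonist forces.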
Schiuma, D; Brianza, S; Tami, A E
2011-03-01
A method was developed to improve the design of locking implants by finding the optimal paths for the anchoring elements, based on a high resolution pQCT assessment of local bone mineral density (BMD) distribution and bone micro-architecture (BMA). The method consists of three steps: (1) partial fixation of the implant to the bone and creation of a reference system, (2) implant removal and pQCT scan of the bone, and (3) determination of BMD and BMA of all implant-anchoring locations along the actual and alternative directions. Using a PHILOS plate, the method uncertainty was tested on an artificial humerus bone model. A cadaveric humerus was used to quantify how the uncertainty of the method affects the assessment of bone parameters. BMD and BMA were determined along four possible alternative screw paths as possible criteria for implant optimization. The method has a systematic uncertainty of 0.87 ± 0.12 mm and a random uncertainty of 0.44 ± 0.09 mm in locating the virtual screw position. This study shows that this method can be used to find alternative directions for the anchoring elements, which may possess better bone properties. This modification will thus produce an optimized implant design. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
LIFESPAN: A tool for the computer-aided design of longitudinal studies
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Hertzog, Christopher; Lindenberger, Ulman
2015-01-01
Researchers planning a longitudinal study typically search, more or less informally, a multivariate space of possible study designs that include dimensions such as the hypothesized true variance in change, indicator reliability, the number and spacing of measurement occasions, total study time, and sample size. The main search goal is to select a research design that best addresses the guiding questions and hypotheses of the planned study while heeding applicable external conditions and constraints, including time, money, feasibility, and ethical considerations. Because longitudinal study selection ultimately requires optimization under constraints, it is amenable to the general operating principles of optimization in computer-aided design. Based on power equivalence theory (MacCallum et al., 2010; von Oertzen, 2010), we propose a computational framework to promote more systematic searches within the study design space. Starting with an initial design, the proposed framework generates a set of alternative models with equal statistical power to detect hypothesized effects, and delineates trade-off relations among relevant parameters, such as total study time and the number of measurement occasions. We present LIFESPAN (Longitudinal Interactive Front End Study Planner), which implements this framework. LIFESPAN boosts the efficiency, breadth, and precision of the search for optimal longitudinal designs. Its initial version, which is freely available at http://www.brandmaier.de/lifespan, is geared toward the power to detect variance in change as specified in a linear latent growth curve model. PMID:25852596
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2010-01-01
This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.
Effectiveness of Reablement: A Systematic Review.
Tessier, Annie; Beaulieu, Marie-Dominique; Mcginn, Carrie Anna; Latulippe, Renée
2016-05-01
The ageing of the population and the increasing need for long-term care services are global issues. Some countries have adapted homecare programs by introducing an intervention called reablement, which is aimed at optimizing independence. The effectiveness of reablement, as well as its different service models, was examined. A systematic literature review was conducted using MEDLINE, CINAHL, PsycINFO and EBM Reviews to search from 2001 to 2014. Core characteristics and facilitators of reablement implementation were identified from international experiences. Ten studies comprising a total of 14,742 participants (including four randomized trials, most of excellent or good quality) showed a positive impact of reablement, especially on health-related quality of life and service utilization. The implementation of reablement was studied in three regions, and all observed a reduction in healthcare service utilization. Considering its effectiveness and positive impact observed in several countries, the implementation of reablement is a promising avenue to be pursued by policy makers. Copyright © 2016 Longwoods Publishing.
Verstraelen, Toon; Van Speybroeck, Veronique; Waroquier, Michel
2009-07-28
An extensive benchmark of the electronegativity equalization method (EEM) and the split charge equilibration (SQE) model on a very diverse set of organic molecules is presented. These models efficiently compute atomic partial charges and are used in the development of polarizable force fields. The predicted partial charges depend on empirical parameters, which are calibrated to reproduce results from quantum mechanical calculations. Recently, SQE was presented as an extension of the EEM to obtain the correct size dependence of the molecular polarizability. In this work, 12 parametrization protocols are applied to each model and the optimal parameters are benchmarked systematically. The training data for the empirical parameters comprise MP2/Aug-CC-pVDZ calculations on 500 organic molecules containing the elements H, C, N, O, F, S, Cl, and Br. These molecules have been selected by an ingenious and autonomous protocol from an initial set of almost 500,000 small organic molecules. It is clear that the SQE model outperforms the EEM in all benchmark assessments. When using Hirshfeld-I charges for the calibration, the SQE model optimally reproduces the molecular electrostatic potential from the ab initio calculations. Applications on chain molecules, i.e., alkanes, alkenes, and alpha alanine helices, confirm that the EEM gives rise to a divergent behavior for the polarizability, while the SQE model shows the correct trends. We conclude that the SQE model is an essential component of a polarizable force field, showing several advantages over the original EEM.
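A sketch of the generic electronegativity equalization idea in textbook form, assuming placeholder parameters rather than any calibrated set from the paper: the charges minimize a quadratic energy subject to a total-charge constraint, which reduces to a linear system.

```python
# Minimize E(q) = sum_i chi_i q_i + 0.5 sum_i eta_i q_i^2
#               + 0.5 sum_{i!=j} J_ij q_i q_j  subject to sum_i q_i = Q_total,
# by solving the corresponding KKT linear system.
import numpy as np

# Hypothetical parameters for a 3-atom fragment (arbitrary units).
chi = np.array([5.0, 7.5, 7.5])          # electronegativities
eta = np.array([9.0, 12.0, 12.0])        # hardnesses
R = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.6],
              [1.0, 1.6, 0.0]])          # interatomic distances (angstrom)
with np.errstate(divide="ignore"):
    J = np.where(R > 0, 14.4 / R, 0.0)   # simple 1/R Coulomb interaction (eV*angstrom)

n = len(chi)
H = J.copy()
H[np.diag_indices(n)] = eta              # Hessian of the charge-dependent energy
A = np.block([[H, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
b = np.concatenate([-chi, [0.0]])        # total molecular charge Q = 0
q = np.linalg.solve(A, b)[:n]
print("Partial charges:", np.round(q, 3))
```

The SQE variant redistributes charge along bonds (split charges) instead of over atoms, which changes the structure of the linear system but not the overall workflow.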
NASA Astrophysics Data System (ADS)
Tang, Xiao-Dan
2017-09-01
The charge transport properties of phosphapentacene (P-PEN) derivatives were systematically explored by theoretical calculation. The dehydrogenated P-PENs have reasonable frontier molecular orbital energy levels to facilitate both electron and hole injection. The reduced reorganization energies of dehydrogenated P-PENs could be intimately connected to the bonding nature of phosphorus atoms. From the idea of homology modeling, the crystal structure of TIPSE-4P-2p is constructed and fully optimized. Fascinatingly, TIPSE-4P-2p shows the intrinsic property of ambipolar transport in both hopping and band models. Thus, introducing dehydrogenated phosphorus atoms into pentacene core could be an efficient strategy for designing ambipolar material.
Namboodiri, Vijay Mohan K; Levy, Joshua M; Mihalas, Stefan; Sims, David W; Hussain Shuler, Marshall G
2016-08-02
Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that "Lévy random walks"-which can produce power law path length distributions-are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent's goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers.
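A hedged illustration, not the authors' model: path lengths drawn from a hyperbolic-type (Lomax) distribution have a survival function that looks like a power law in the tail but flattens at short lengths.

```python
# Sample p(l) ~ (l + l0)^(-2) by inverse-CDF sampling and compare the empirical
# survival function to a pure power-law reference on a logarithmic grid.
import numpy as np

rng = np.random.default_rng(3)
l0 = 1.0
u = rng.random(100_000)
lengths = l0 * (1.0 / u - 1.0)           # Lomax(shape=1, scale=l0) samples

x = np.logspace(-1, 3, 50)
survival = np.array([(lengths > xi).mean() for xi in x])
power_law = (x / x[0]) ** -1.0           # reference slope -1 for the survival

for xi, s, p in list(zip(x, survival, power_law))[::10]:
    print(f"l={xi:8.2f}  P(L>l)={s:.4f}  power-law ref={p:.4f}")
```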
How Conjunctive Use of Surface and Ground Water Could Increase Resiliency in the US?
NASA Astrophysics Data System (ADS)
Josset, L.; Rising, J. A.; Russo, T. A.; Troy, T. J.; Lall, U.; Allaire, M.
2016-12-01
Optimized management practices are crucial to ensuring water availability in the future. However, this presents a tremendous challenge due to the many functions of water: water is not only central for our survival as drinking water or for irrigation, but it is also valued for industrial and recreational use. Sources of water meeting these needs range from rain water harvesting to reservoirs, water reuse, groundwater abstraction and desalination. A global conjunctive management approach is thus necessary to develop sustainable practices, as all sectors are strongly coupled. Policy-makers and researchers have identified pluralism in water sources as a key solution to reach water security. We propose a novel approach to sustainable water management that accounts for multiple sources of water in an integrated manner. We formulate this challenge as an optimization problem where the choice of water sources is driven both by the availability of the sources and their relative cost. The results determine the optimal operational decisions for each source (e.g. reservoir releases, surface water withdrawals, groundwater abstraction and/or desalination water use) at each time step for a given time horizon. The physical surface and ground water systems are simulated inside the optimization by setting state equations as constraints. Additional constraints may be added to the model to represent the influence of policy decisions. To account for uncertainty in weather conditions and its impact on availability, the optimization is performed for an ensemble of climate scenarios. While many sectors and their interactions are represented, the computational cost is limited as the problem remains linear, and this enables large-scale applications and the propagation of uncertainty. The formulation is implemented within the model "America's Water Analysis, Synthesis and Heuristic", an integrated model for the conterminous US discretized at the county scale. This enables a systematic evaluation of stresses on water resources. In particular, we explore geographic and temporal trends as a function of user type to develop a better understanding of the dynamics at play. We conclude with a comparison between the optimization results and current water use to identify potential solutions to increase resiliency.
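A toy sketch of the conjunctive-use allocation as a linear program (the county-scale model is not reproduced here; costs, limits and demand are hypothetical numbers):

```python
# Choose withdrawals from several sources to meet demand at minimum cost,
# subject to per-source availability.
import numpy as np
from scipy.optimize import linprog

sources = ["reservoir", "surface", "groundwater", "desalination"]
cost    = np.array([0.05, 0.08, 0.12, 0.60])    # $ per m^3 (illustrative)
limit   = np.array([40.0, 30.0, 50.0, 1e3])     # available volume (Mm^3)
demand  = 90.0                                  # total demand (Mm^3)

# minimize cost @ x  s.t.  sum(x) = demand,  0 <= x <= limit
res = linprog(c=cost,
              A_eq=np.ones((1, len(sources))), b_eq=[demand],
              bounds=list(zip(np.zeros(len(sources)), limit)),
              method="highs")
for s, x in zip(sources, res.x):
    print(f"{s:>12s}: {x:6.1f} Mm^3")
```

In the full formulation the same structure is repeated over time steps and climate scenarios, with reservoir and aquifer state equations linking the decisions.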
Preparatory studies for the WFIRST supernova cosmology measurements
NASA Astrophysics Data System (ADS)
Perlmutter, Saul
In the context of the WFIRST-AFTA Science Definition Team we developed a first version of a supernova program, described in the WFIRST-AFTA SDT report. This program uses the imager to discover supernova candidates and an Integral Field Spectrograph (IFS) to obtain spectrophotometric light curves and higher signal to noise spectra of the supernovae near peak to better characterize the supernovae and thus minimize systematic errors. While this program was judged a robust one, and the estimates of the sensitivity to the cosmological parameters were felt to be reliable, due to limitation of time the analysis was clearly limited in depth on a number of issues. The goal of this proposal is to further develop this program and refine the estimates of the sensitivities to the cosmological parameters using more sophisticated systematic uncertainty models and covariance error matrices that fold in more realistic data concerning observed populations of SNe Ia as well as more realistic instrument models. We propose to develop analysis algorithms and approaches that are needed to build, optimize, and refine the WFIRST instrument and program requirements to accomplish the best supernova cosmology measurements possible. We plan to address the following: a) Use realistic Supernova populations, subclasses and population drift. One bothersome uncertainty with the supernova technique is the possibility of population drift with redshift. We are in a unique position to characterize and mitigate such effects using the spectrophotometric time series of real Type Ia supernovae from the Nearby Supernova Factory (SNfactory). Each supernova in this sample has global galaxy measurements as well as additional local environment information derived from the IFS spectroscopy. We plan to develop methods of coping with this issue, e.g., by selecting similar subsamples of supernovae and allowing additional model flexibility, in order to reduce systematic uncertainties. These studies will allow us to tune details, like the wavelength coverage and S/N requirements, of the WFIRST IFS to capitalize on these systematic error reduction methods. b) Supernova extraction and host galaxy subtractions. The underlying light of the host galaxy must be subtracted from the supernova images making up the lightcurves. Using the IFS to provide the lightcurve points via spectrophotometry requires the subtraction of a reference spectrum of the galaxy taken after the supernova light has faded to a negligible level. We plan to apply the expertise obtained from the SNfactory to develop galaxy background procedures that minimize the systematic errors introduced by this step in the analysis. c) Instrument calibration and ground to space cross calibration. Calibrating the entire supernova sample will be a challenge as no standard stars exist that span the range of magnitudes and wavelengths relevant to the WFIRST survey. Linking the supernova measurements to the relatively brighter standards will require several links. WFIRST will produce the high redshift sample, but the nearby supernova to anchor the Hubble diagram will have to come from ground based observations. Developing algorithms to carry out the cross calibration of these two samples to the required one percent level will be an important goal of our proposal. An integral part of this calibration will be to remove all instrumental signatures and to develop unbiased measurement techniques starting at the pixel level. 
We then plan to pull the above studies together in a synthesis to produce a correlated error matrix. We plan to develop a Fisher Matrix based model to evaluate the correlated error matrix due to the various systematic errors discussed above. A realistic error model will allow us to carry out more reliable estimates of the eventual errors on the measurement of the cosmological parameters, as well as serve as a means of optimizing and fine tuning the requirements for the instruments and survey strategies.
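A simplified Fisher-matrix sketch for a supernova-like Hubble diagram, assuming a flat two-parameter dark-energy model and a diagonal error matrix; the redshift binning and error budget are placeholders, not the WFIRST configuration:

```python
# Forecast the covariance of (Omega_m, w) from distance-modulus measurements
# via F = J^T C^-1 J with a finite-difference Jacobian.
import numpy as np

z = np.linspace(0.1, 1.7, 30)                 # hypothetical redshift bins
sigma_mu = np.full_like(z, 0.02)              # per-bin distance-modulus error (mag)
C = np.diag(sigma_mu ** 2)

def mu(params, z, n=2000):
    om, w = params
    zz = np.linspace(0.0, z.max(), n)
    Ez = np.sqrt(om * (1 + zz) ** 3 + (1 - om) * (1 + zz) ** (3 * (1 + w)))
    chi = np.concatenate([[0.0],
                          np.cumsum(0.5 * (1 / Ez[1:] + 1 / Ez[:-1]) * np.diff(zz))])
    dc = np.interp(z, zz, chi)                # comoving distance in units of c/H0
    dl = (1 + z) * dc
    return 5 * np.log10(dl) + 25.0            # additive constant drops out of the Fisher analysis

def jacobian(params, z, eps=1e-4):
    J = np.zeros((len(z), len(params)))
    for k in range(len(params)):
        p_hi, p_lo = np.array(params, float), np.array(params, float)
        p_hi[k] += eps; p_lo[k] -= eps
        J[:, k] = (mu(p_hi, z) - mu(p_lo, z)) / (2 * eps)
    return J

J = jacobian([0.3, -1.0], z)
F = J.T @ np.linalg.inv(C) @ J                # Fisher matrix
cov = np.linalg.inv(F)
print("sigma(Omega_m) =", np.sqrt(cov[0, 0]), " sigma(w) =", np.sqrt(cov[1, 1]))
```

Correlated systematic terms would enter by replacing the diagonal C with the full covariance matrix produced by the synthesis described above.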
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew
2005-04-01
Contrast enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before/after the administration of contrast agents, may provide physicians critical physiologic and morphologic information of breast lesions to determine the malignancy of lesions. This paper proposes to combine the kinetic analysis (KA) of the contrast agent uptake/washout process and the dual-energy (DE) contrast enhancement together to formulate a hybrid contrast enhanced breast-imaging framework. The quantitative characteristics of materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-to-noise ratio (CNR) of iodinated lesions and the mean absorbed glandular dose were estimated mathematically. The x-ray technique optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast densities, and detectable contrast agent concentration levels in terms of detection efficiency (CNR²/dose). A phantom study was performed on a modified Selenia full field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.
Risk prediction models for graft failure in kidney transplantation: a systematic review.
Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen
2017-04-01
Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
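A minimal sketch of developing and internally checking a Cox model, in the spirit of the reviewed studies; it uses the Rossi recidivism dataset shipped with the lifelines package purely as a stand-in for transplant data:

```python
# Fit a Cox proportional hazards model and report discrimination (concordance).
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                       # columns: week (time), arrest (event), covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                     # hazard ratios and confidence intervals
print("Concordance (discrimination):", cph.concordance_index_)
```

As the review notes, discrimination alone is not enough: calibration and, where relevant, competing-risk handling and post-transplant predictor updates should be assessed as well.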
NASA Astrophysics Data System (ADS)
Ju, Yaping; Zhang, Chuhua
2016-03-01
Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The analysis shows that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proved to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
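A hedged sketch of the surrogate-plus-sampling idea: train an SVR metamodel on a few expensive evaluations (replaced here by an analytic placeholder for a CFD run), then propagate random roughness samples through it with Monte Carlo.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_cfd(roughness):
    # Placeholder for a CFD evaluation: efficiency drops with roughness level.
    return 0.90 - 0.8 * roughness - 2.0 * roughness ** 2

rng = np.random.default_rng(4)
train_x = np.linspace(0.0, 0.05, 15).reshape(-1, 1)      # roughness level (coded)
train_y = expensive_cfd(train_x).ravel()

surrogate = SVR(kernel="rbf", C=100.0, epsilon=1e-4).fit(train_x, train_y)

samples = rng.lognormal(mean=np.log(0.02), sigma=0.4, size=20_000).reshape(-1, 1)
samples = np.clip(samples, 0.0, 0.05)                    # stay inside training range
eff = surrogate.predict(samples)
print(f"Mean efficiency {eff.mean():.4f}, std {eff.std():.4f}")
```

In the robust optimization, the mean and standard deviation estimated this way become the two objectives evaluated for each candidate impeller geometry.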
NASA Astrophysics Data System (ADS)
Thiboult, A.; Anctil, F.
2015-10-01
Forecast reliability and accuracy are prerequisites for successful hydrological applications. This aim may be attained by using data assimilation techniques such as the popular Ensemble Kalman filter (EnKF). Despite its recognized capacity to enhance forecasting by creating a new set of initial conditions, implementation tests have mostly been carried out with a single model and few catchments, leading to case-specific conclusions. This paper performs extensive testing to assess ensemble bias and reliability on 20 conceptual lumped models and 38 catchments in the Province of Québec with perfect meteorological forecast forcing. The study confirms that the EnKF is a powerful tool for short-range forecasting but also that it requires a more subtle setting than is frequently recommended. The success of the updating procedure depends to a great extent on the specification of the hyper-parameters. In the implementation of the EnKF, the identification of the hyper-parameters is very unintuitive if the model error is not explicitly accounted for, and best estimates of forcing and observation error lead to overconfident forecasts. It is shown that performance is also related to the choice of updated state variables and that not all state variables should be systematically updated. Additionally, the improvement over the open-loop scheme depends on the watershed and hydrological model structure, as some models exhibit poor compatibility with EnKF updating. Thus, it is not possible to recommend a single ideal way to identify an optimal implementation; conclusions drawn from a single event, catchment, or model are likely to be misleading, since transferring hyper-parameters from one case to another may be hazardous. Finally, achieving reliability and low bias jointly is a daunting challenge, as optimizing one score comes at the cost of the other.
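A bare-bones sketch of the EnKF analysis step (stochastic, perturbed-observation form) as it would be applied to a handful of lumped-model states; the state size, observation operator and error levels are illustrative, and inflation is shown as one of the hyper-parameters the study discusses:

```python
import numpy as np

rng = np.random.default_rng(5)
n_state, n_ens = 4, 50
Xf = rng.normal(loc=[10, 5, 2, 1], scale=1.0, size=(n_ens, n_state)).T  # forecast ensemble
H = np.array([[0.0, 0.0, 1.0, 0.0]])        # observe the third state (e.g. routing store)
R = np.array([[0.25]])                      # observation error variance (hyper-parameter)
y = np.array([2.6])                         # the observation

inflation = 1.05                            # multiplicative inflation (hyper-parameter)
Xf = Xf.mean(axis=1, keepdims=True) + inflation * (Xf - Xf.mean(axis=1, keepdims=True))

Pf = np.cov(Xf)                             # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
Y = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, n_ens))   # perturbed obs
Xa = Xf + K @ (Y - H @ Xf)                  # analysis ensemble
print("Forecast mean:", np.round(Xf.mean(axis=1), 2))
print("Analysis mean:", np.round(Xa.mean(axis=1), 2))
```

Choosing which rows of the state enter the update, and how large the forcing, observation and inflation errors are set, is exactly the hyper-parameter specification the paper identifies as decisive.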
All-optical nanomechanical heat engine.
Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric
2015-05-08
We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.
Trân, Kien; Murza, Alexandre; Sainsily, Xavier; Coquerel, David; Côté, Jérôme; Belleville, Karine; Haroune, Lounès; Longpré, Jean-Michel; Dumaine, Robert; Salvail, Dany; Lesur, Olivier; Auger-Messier, Mannix; Sarret, Philippe; Marsault, Éric
2018-03-22
The apelin receptor generates increasing interest as a potential target across several cardiovascular indications. However, the short half-life of its cognate ligands, the apelin peptides, is a limiting factor for pharmacological use. In this study, we systematically explored each position of apelin-13 to find the best position to cyclize the peptide, with the goal to improve its stability while optimizing its binding affinity and signaling profile. Macrocyclic analogues showed a remarkably higher stability in rat plasma (half-life >3 h versus 24 min for Pyr-apelin-13), accompanied by improved affinity (analogue 15, Ki 0.15 nM and t1/2 6.8 h). Several compounds displayed higher inotropic effects ex vivo in the Langendorff isolated heart model in rats (analogues 13 and 15, maximum response at 0.003 nM versus 0.03 nM of apelin-13). In conclusion, this study provides stable and active compounds to better characterize the pharmacology of the apelinergic system.
Modeling the frequency response of microwave radiometers with QUCS
NASA Astrophysics Data System (ADS)
Zonca, A.; Roucaries, B.; Williams, B.; Rubin, I.; D'Arcangelo, O.; Meinhold, P.; Lubin, P.; Franceschet, C.; Jahn, S.; Mennella, A.; Bersanelli, M.
2010-12-01
Characterization of the frequency response of coherent radiometric receivers is a key element in estimating the flux of astrophysical emissions, since the measured signal depends on the convolution of the source spectral emission with the instrument band shape. Laboratory Radio Frequency (RF) measurements of the instrument bandpass often require complex test setups and are subject to a number of systematic effects driven by thermal issues and impedance matching, particularly if cryogenic operation is involved. In this paper we present an approach to modeling radiometer bandpasses by integrating simulations and RF measurements of individual components. This method is based on QUCS (Quasi Universal Circuit Simulator), an open-source circuit simulator, which gives the flexibility of choosing among the available devices, implementing new analytical software models or using measured S-parameters. Therefore an independent estimate of the instrument bandpass is achieved using standard individual component measurements and validated analytical simulations. In order to automate the process of preparing input data, running simulations and exporting results, we developed the Python package python-qucs and released it under the GNU Public License. We discuss, as working cases, bandpass response modeling of the COFE and Planck Low Frequency Instrument (LFI) radiometers and compare results obtained with QUCS and with a commercial circuit simulator software. The main purpose of bandpass modeling in COFE is to optimize component matching, while in LFI it provides the best estimate of the frequency response, since end-to-end measurements were strongly affected by systematic effects.
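A simple numerical sketch of the opening statement: the detected signal is the source spectrum weighted by the estimated band shape. The band here is the product of made-up component transmissions standing in for simulated or measured S-parameters (this does not use the python-qucs API).

```python
import numpy as np

nu = np.linspace(25e9, 35e9, 2000)                       # frequency grid (Hz), e.g. a 30 GHz channel
filt = np.exp(-0.5 * ((nu - 30e9) / 2e9) ** 4)           # front-end filter (placeholder)
amp  = 1.0 / (1.0 + ((nu - 30e9) / 4e9) ** 2)            # amplifier gain shape (placeholder)
band = filt * amp                                        # composite bandpass

source = (nu / 30e9) ** -0.7                             # power-law source spectrum (placeholder index)
effective = np.trapz(source * band, nu) / np.trapz(band, nu)
centroid = np.trapz(nu * band, nu) / np.trapz(band, nu)
print(f"Band centroid {centroid/1e9:.2f} GHz, band-averaged source level {effective:.3f}")
```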
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
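A generic parameter-estimation sketch in the spirit of these benchmarks, using a toy two-state kinetic model rather than any of the BioPreDyn-bench problems: simulate data, then recover the rate constants by nonlinear least squares.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, x, k1, k2):
    s, p = x
    return [-k1 * s, k1 * s - k2 * p]        # S -> P -> degradation

t_obs = np.linspace(0, 10, 25)
true_k = (0.8, 0.3)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true_k)
rng = np.random.default_rng(6)
data = sol.y + rng.normal(0, 0.02, sol.y.shape)          # noisy synthetic observations

def residuals(k):
    sim = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(k))
    return (sim.y - data).ravel()

fit = least_squares(residuals, x0=[0.2, 0.2], bounds=(0, 5))
print("Estimated rate constants:", np.round(fit.x, 3), "(true:", true_k, ")")
```

The benchmark problems scale this same calibration task up to hundreds of states and parameters, which is where global optimization and identifiability analysis become essential.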
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.
2004-01-01
Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to the plasticity in the weighting or selection of sensory units coding task relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to that of an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance of a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as a target for a block of four trials. The results suggest that the human perceptual learning can occur within a lapse of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st-to-4th learning trials). The greatest improvement in human performance, occurring from the 1st-to-2nd learning trial, was also present in the optimal observer, and, thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.
Optimization Of PVDF-TrFE Processing Conditions For The Fabrication Of Organic MEMS Resonators
Ducrot, Pierre-Henri; Dufour, Isabelle; Ayela, Cédric
2016-01-01
This paper reports a systematic optimization of processing conditions of PVDF-TrFE piezoelectric thin films, used as integrated transducers in organic MEMS resonators. Indeed, despite data on electromechanical properties of PVDF found in the literature, optimized processing conditions that lead to these properties remain only partially described. In this work, a rigorous optimization of parameters enabling state-of-the-art piezoelectric properties of PVDF-TrFE thin films has been performed via the evaluation of the actuation performance of MEMS resonators. Conditions such as annealing duration, poling field and poling duration have been optimized and repeatability of the process has been demonstrated. PMID:26792224
Numerical study of entrainment of the human circadian system and recovery by light treatment.
Kim, Soon Ho; Goh, Segun; Han, Kyungreem; Kim, Jong Won; Choi, MooYoung
2018-05-09
While the effects of light as a zeitgeber are well known, the way the effects are modulated by features of the sleep-wake system still remains to be studied in detail. A mathematical model for disturbance and recovery of the human circadian system is presented. The model combines a circadian oscillator and a sleep-wake switch that includes the effects of orexin. By means of simulations, we characterize the period-locking zone of the model, where a stable 24-hour circadian rhythm exists, and the occurrence of circadian disruption due to both insufficient light and imbalance in orexin. We also investigate how daily bright light treatments of short duration can recover the normal circadian rhythm. It is found that the system exhibits continuous phase advance/delay at lower/higher orexin levels. Bright light treatment simulations disclose two optimal time windows, corresponding to morning and evening light treatments. Among the two, the morning light treatment is found effective in a wider range of parameter values, with shorter recovery time. This approach offers a systematic way to determine the conditions under which circadian disruption occurs, and to evaluate the effects of light treatment. In particular, it could potentially offer a way to optimize light treatments for patients with circadian disruption, e.g., sleep and mood disorders, in clinical settings.
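A schematic phase-oscillator sketch, far simpler than the paper's combined circadian/sleep-wake model with orexin: an intrinsic period longer than 24 h is entrained by a 24-hour light schedule through a sinusoidal phase-response term, and a morning bright-light pulse is layered on top. All parameter values are assumptions for illustration.

```python
import numpy as np

dt, days = 0.01, 30                      # hours, simulated days
tau_i, k = 24.4, 0.08                    # intrinsic period (h), light sensitivity

def light(t):
    h = t % 24.0
    base = 1.0 if 8.0 <= h < 20.0 else 0.0            # ordinary daytime light
    pulse = 5.0 if 8.0 <= h < 9.0 else 0.0            # morning bright-light treatment
    return base + pulse

t, phi = 0.0, 0.0                        # phi in radians; phi = 0 taken as subjective midnight
phase_at_8am = []
while t < days * 24:
    target = 2 * np.pi * ((t % 24.0) / 24.0)          # phase of the external day
    dphi = 2 * np.pi / tau_i + k * light(t) * np.sin(target - phi)
    phi = (phi + dphi * dt) % (2 * np.pi)
    if abs((t % 24.0) - 8.0) < dt / 2:
        phase_at_8am.append(phi)
    t += dt
print("Circadian phase sampled at 08:00 over the last 5 days:",
      np.round(phase_at_8am[-5:], 2))
```

Sweeping the pulse timing in such a toy model mimics, in a very rough way, the search for the optimal morning and evening treatment windows reported in the study.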
Dynamic emulation modelling for the optimal operation of water systems: an overview
NASA Astrophysics Data System (ADS)
Castelletti, A.; Galelli, S.; Giuliani, M.
2014-12-01
Despite the sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve highly resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs, through erosion/sedimentation rebalancing in the operation of run-of-river power plants, to salinity control in lakes and reservoirs.
Optimal iodine staining of cardiac tissue for X-ray computed tomography.
Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui
2014-01-01
X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high density contrast agent is often required to obtain optimal contrast. The contrast agent, iodine potassium iodide (I2KI), has been used in several biological studies to augment the use of XCT scanning. Recently I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the staining times required, which has been shown to impact significantly upon the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.
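The following sketch shows how an exponential staining-time relationship of the form t = a·exp(b·d) could be fitted and used for prediction. The data points and fitted coefficients are hypothetical placeholders, not the values derived in the study above.

```python
"""Illustrative fit of an exponential staining-time relationship of the form
t_stain = a * exp(b * thickness). The coefficients and the example data below
are hypothetical placeholders, not the values derived in the study above."""
import numpy as np
from scipy.optimize import curve_fit

thickness_mm = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # sample wall thickness
stain_time_h = np.array([8.0, 13.0, 21.0, 34.0, 55.0])    # hypothetical times

def model(d, a, b):
    return a * np.exp(b * d)

(a, b), _ = curve_fit(model, thickness_mm, stain_time_h, p0=(5.0, 0.4))
print(f"fitted: t = {a:.2f} * exp({b:.2f} * d)")
print(f"predicted staining time for a 3.5 mm sample: {model(3.5, a, b):.1f} h")
```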
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
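As a rough illustration of parameterizing a quantization matrix by an amplitude and a width, the sketch below builds an 8x8 matrix with a radial-Gaussian profile and applies it to a DCT block. The functional form and parameter values are assumptions for illustration only, not DCTune's actual model or the matrices reported above.

```python
"""Sketch of quantizing an 8x8 image block with a two-parameter (amplitude,
width) quantization matrix. The radial-Gaussian form used here is only an
illustrative stand-in for an amplitude/width parameterization, not DCTune's
actual model."""
import numpy as np
from scipy.fft import dctn, idctn

def quant_matrix(amplitude=16.0, width=3.0):
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    r2 = u**2 + v**2
    # coarser quantization (larger steps) at higher spatial frequencies
    return amplitude * np.exp(r2 / (2.0 * width**2))

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0

Q = quant_matrix(amplitude=12.0, width=4.0)
coeffs = dctn(block, norm="ortho")
quantised = np.round(coeffs / Q)                  # lossy step
reconstructed = idctn(quantised * Q, norm="ortho")

print("nonzero coefficients kept:", int(np.count_nonzero(quantised)), "of 64")
rms = float(np.sqrt(np.mean((block - reconstructed) ** 2)))
print("RMS reconstruction error:", round(rms, 2))
```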
Influence of ultrasound speckle tracking strategies for motion and strain estimation.
Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Aja-Fernández, Santiago
2016-08-01
Speckle Tracking is one of the most prominent techniques used to estimate the regional movement of the heart based on ultrasound acquisitions. Many different approaches have been proposed, proving their suitability to obtain quantitative and qualitative information regarding myocardial deformation, motion and function assessment. New proposals to improve the basic algorithm usually focus on one of these three steps: (1) the similarity measure between images and the speckle model; (2) the transformation model, i.e. the type of motion considered between images; (3) the optimization strategies, such as the use of different optimization techniques in the transformation step or the inclusion of structural information. While many contributions have shown their good performance independently, it is not always clear how they perform when integrated into a whole pipeline. Every step will have a degree of influence over the following ones and hence over the final result. Thus, a Speckle Tracking pipeline must be analyzed as a whole when developing novel methods, since improvements in a particular step might be undermined by the choices taken in further steps. This work presents two main contributions: (1) We provide a complete analysis of the influence of the different steps in a Speckle Tracking pipeline on the motion and strain estimation accuracy. (2) The study proposes a methodology for the analysis of Speckle Tracking systems specifically designed to provide an easy and systematic way to include other strategies. We close the analysis with some conclusions and recommendations that can be used as an indication of the degree of influence of the speckle models, the transformation models, interpolation schemes and optimization strategies on the estimation of motion features. They can further be used to evaluate and design new strategies for a Speckle Tracking system. Copyright © 2016 Elsevier B.V. All rights reserved.
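One simple instantiation of the similarity-measure and optimization steps discussed above is exhaustive block matching with normalized cross-correlation; the sketch below recovers a known integer shift between two synthetic speckle-like frames. It is illustrative only and not the pipeline evaluated in the study.

```python
"""Minimal block-matching motion estimator using normalized cross-correlation
(NCC), one simple instantiation of the similarity/optimization steps discussed
above. Synthetic frames and a pure exhaustive search are used for clarity."""
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def estimate_shift(frame0, frame1, top, left, size=16, search=5):
    """Return the integer (dy, dx) maximizing NCC for one block."""
    block = frame0[top:top + size, left:left + size]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[top + dy:top + dy + size, left + dx:left + dx + size]
            score = ncc(block, cand)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(2)
frame0 = rng.normal(size=(64, 64))                      # speckle-like texture
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))    # known motion
print("estimated (dy, dx):", estimate_shift(frame0, frame1, top=24, left=24))
```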
NASA Astrophysics Data System (ADS)
Pitts, James Daniel
Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates for the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for the prediction of multiple machined surface roughness parameters for helically pocketed rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next, empirically determined equations for the seven common surface quality metrics were developed via Box-Behnken surface response experimental trials. Validation trials were conducted, resulting in predicted and experimental surface roughness values in varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, were experimentally verified to also extend to helical pocketing of Zerodur glass-ceramic.
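A minimal sketch of the response-surface step is shown below: a second-order polynomial in two coded factors is fitted to roughness data by least squares and used for prediction. The factors, data, and coefficients are hypothetical and do not correspond to the Zerodur experiments above.

```python
"""Sketch of fitting a second-order (Box-Behnken-style) response-surface model
for mean surface roughness Ra as a function of two process factors. The data
points below are hypothetical placeholders, not the Zerodur measurements."""
import numpy as np

# hypothetical coded factors: spindle speed x1, feed rate x2, and measured Ra
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1])
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0])
ra = np.array([0.82, 0.95, 0.61, 0.78, 0.55, 0.60, 0.70, 0.75, 0.58])

# design matrix for Ra ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2]).astype(float)
coef, *_ = np.linalg.lstsq(X, ra, rcond=None)
print("fitted coefficients:", np.round(coef, 3))

# predict roughness at an untried setting (x1=0.5, x2=-0.5)
x = np.array([1, 0.5, -0.5, -0.25, 0.25, 0.25])
print("predicted Ra:", round(float(x @ coef), 3))
```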
Super-cool paints: optimizing composition with a modified four-flux model
NASA Astrophysics Data System (ADS)
Gali, Marc A.; Arnold, Matthew D.; Gentle, Angus R.; Smith, Geoffrey B.
2017-09-01
The scope for maximizing the albedo of a painted surface to produce low-cost new and retro-fitted super-cool roofing is explored systematically. The aim is easy-to-apply, low-cost paint formulations yielding albedos in the range 0.90 to 0.95. This requires raising the near-infrared (NIR) spectral reflectance into this range, while not reducing the more easily obtained high visible reflectance values. Our modified version of the four-flux method has enabled results on more complex composites. Key parameters to be optimized include fill factors, particle size and material (including the use of more than one mean size), thickness, and substrate and binder materials. The model used is a variation of the classical four-flux method that solves the energy transfer problem through four balance differential equations. We use a different approach to the characteristic parameters to define the absorptance and scattering of the complete composite. This generalization allows the inclusion of size dispersion of the pigment particles and of various binder resins, including the acrylic-based resins most commonly in use. Thus, the pigment scattering model has to take account of the matrix having loss in the NIR. A paint ranking index aimed specifically at separating paints with albedo above 0.80 is introduced, representing the fraction of time spent at a sub-ambient temperature.
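For orientation, the sketch below evaluates the two-flux Kubelka-Munk reflectance of an optically thick coating, a simpler relative of the four-flux model used above. The absorption and scattering coefficients are assumed values, not fitted paint data.

```python
"""Two-flux Kubelka-Munk estimate of the diffuse reflectance of an opaque
coating, a simpler relative of the four-flux model discussed above. K and S
values are hypothetical per-wavelength absorption/scattering coefficients."""
import numpy as np

def r_infinity(K, S):
    """Reflectance of an optically thick layer: R = 1 + K/S - sqrt((K/S)^2 + 2K/S)."""
    a = K / S
    return 1.0 + a - np.sqrt(a * a + 2.0 * a)

# hypothetical NIR bands: weakly absorbing binder, strongly scattering pigment
K = np.array([0.02, 0.05, 0.10])   # absorption coefficient (1/um), assumed
S = np.array([5.0, 4.0, 3.0])      # scattering coefficient (1/um), assumed
print("R_inf per band:", np.round(r_infinity(K, S), 3))
```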
The neural optimal control hierarchy for motor control
NASA Astrophysics Data System (ADS)
DeWolf, T.; Eliasmith, C.
2011-10-01
Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.
Modeling the trade-off between diet costs and methane emissions: A goal programming approach.
Moraes, L E; Fadel, J G; Castillo, A R; Casper, D P; Tricarico, J M; Kebreab, E
2015-08-01
Enteric methane emission is a major greenhouse gas from livestock production systems worldwide. Dietary manipulation may be an effective emission-reduction tool; however, the associated costs may preclude its use as a mitigation strategy. Several studies have identified dietary manipulation strategies for the mitigation of emissions, but studies examining the costs of reducing methane by manipulating diets are scarce. Furthermore, the trade-off between increase in dietary costs and reduction in methane emissions has only been determined for a limited number of production scenarios. The objective of this study was to develop an optimization framework for the joint minimization of dietary costs and methane emissions based on the identification of a set of feasible solutions for various levels of trade-off between emissions and costs. Such a set of solutions was created by the specification of a systematic grid of goal programming weights, enabling the decision maker to choose the solution that achieves the desired trade-off level. Moreover, the model enables the calculation of emission-mitigation costs imputing a trading value for methane emissions. Emission imputed costs can be used in emission-unit trading schemes, such as cap-and-trade policy designs. An application of the model using data from lactating cows from dairies in the California Central Valley is presented to illustrate the use of model-generated results in the identification of optimal diets when reducing emissions. The optimization framework is flexible and can be adapted to jointly minimize diet costs and other potential environmental impacts (e.g., nitrogen excretion). It is also flexible so that dietary costs, feed nutrient composition, and animal nutrient requirements can be altered to accommodate various production systems. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
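A minimal sketch of the weighted trade-off idea is given below: a grid of weights is swept over a scalarized linear programme that balances diet cost against methane emissions subject to simple intake and energy constraints. All feed prices, emission factors, and requirements are hypothetical placeholders, not the California dairy data.

```python
"""Sketch of tracing the cost-vs-methane trade-off by sweeping a grid of
weights in a scalarized linear programme. Feed ingredients, prices, emission
factors and nutrient requirements are hypothetical placeholders."""
import numpy as np
from scipy.optimize import linprog

cost = np.array([0.25, 0.18, 0.30, 0.10])      # $/kg DM for four feeds (assumed)
methane = np.array([18.0, 25.0, 12.0, 30.0])   # g CH4 per kg DM (assumed)
energy = np.array([2.8, 2.4, 3.0, 2.0])        # Mcal per kg DM (assumed)
dm_req, energy_req = 22.0, 50.0                # daily DM (kg) and energy (Mcal)

for w in np.linspace(0.0, 1.0, 5):
    # weighted objective: w on (scaled) cost, (1-w) on (scaled) methane
    c = w * cost / cost.max() + (1 - w) * methane / methane.max()
    res = linprog(c,
                  A_ub=[-energy], b_ub=[-energy_req],   # energy >= requirement
                  A_eq=[np.ones(4)], b_eq=[dm_req],     # fixed dry-matter intake
                  bounds=[(0, None)] * 4, method="highs")
    x = res.x
    print(f"w={w:.2f}  cost=${cost @ x:.2f}  CH4={methane @ x:.0f} g/d")
```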
Zhou, Lu; Yang, Lei; Yu, Mengjie; Jiang, Yi; Liu, Cheng-Fang; Lai, Wen-Yong; Huang, Wei
2017-11-22
Manufacturing small-molecule organic light-emitting diodes (OLEDs) via inkjet printing is rather attractive for realizing high-efficiency and long-life-span devices, yet it is challenging. In this paper, we present our efforts on the systematic investigation and optimization of the ink properties and the printing process to enable facile inkjet printing of conjugated light-emitting small molecules. Various factors influencing the inkjet-printed film quality during droplet generation, ink spreading on the substrates, and ink solidification have been systematically investigated and optimized. Consequently, halogen-free inks have been developed and large-area patterning inkjet printing on flexible substrates with efficient blue emission has been successfully demonstrated. Moreover, OLEDs manufactured by inkjet printing the light-emitting small molecules manifested superior performance as compared with their corresponding spin-cast counterparts.
Optimizing hydraulic fracture design in the diatomite formation, Lost Hills Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, D.G.; Klins, M.A.; Manrique, J.F.
1996-12-31
Since 1988, over 1.3 billion pounds of proppant have been placed in the Lost Hills Field of Kern County, California in over 2700 hydraulic fracture treatments involving investments of about $150 million. In 1995, systematic reevaluation of the standard, field trial-based fracture design began. Reservoir, geomechanical, and hydraulic fracture characterization; production and fracture modeling; sensitivity analysis; and field test results were integrated to optimize designs with regard to proppant volume, proppant ramps, and perforating strategy. The results support a reduction in proppant volume from 2500 to 1700 lb/ft which will save about $50,000 per well, totalling over $3 million per year. Vertical coverage was found to be a key component of fracture quality which could be optimized by eliminating perforations from lower stress intervals, reducing the total number of perforations, and reducing peak slurry loading from 16 to 12 ppa. A relationship between variations in lithology, pore pressure, and stress was observed. Point-source, perforating strategies were investigated and variable multiple fracture behavior was observed. The discussed approach has application in areas where stresses are variable; pay zones are thick; hydraulic fracture design is based primarily on empirical, trial-and-error field test results; and effective, robust predictive models involving real-data feedback have not been incorporated into the design improvement process.
Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
In, Y.; Park, J. -K.; Jeon, Y. M.
Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L–H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4×10⁻⁵ even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x = 1.44 ± 0.02 m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95 = 5 ± 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.
Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks
NASA Astrophysics Data System (ADS)
In, Y.; Park, J.-K.; Jeon, Y. M.; Kim, J.; Park, G. Y.; Ahn, J.-W.; Loarte, A.; Ko, W. H.; Lee, H. H.; Yoo, J. W.; Juhn, J. W.; Yoon, S. W.; Park, H.; Physics Task Force in KSTAR, 3D
2017-11-01
An extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L-H transition. The n = 1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4 × 10-5 even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n = 1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x = 1.44+/- 0.02 m) proved to be quite critical to reach full n = 1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95 = 5 +/- 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n = 1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the ‘wet’ areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.
Han, Bing; Mao, Jialin; Chien, Jenny Y; Hall, Stephen D
2013-07-01
Ketoconazole is a potent CYP3A inhibitor used to assess the contribution of CYP3A to drug clearance and quantify the increase in drug exposure due to a strong inhibitor. Physiologically based pharmacokinetic (PBPK) models have been used to evaluate treatment regimens resulting in maximal CYP3A inhibition by ketoconazole but have reached different conclusions. We compare two PBPK models of the ketoconazole-midazolam interaction, model 1 (Chien et al., 2006) and model 2 implemented in Simcyp (version 11), to predict 16 published treatment regimens. With use of model 2, 41% of the study point estimates of area under the curve (AUC) ratio and 71% of the 90% confidence intervals were predicted within 1.5-fold of the observed, but these increased to 82 and 100%, respectively, with model 1. For midazolam, model 2 predicted a maximal midazolam AUC ratio of 8 and a hepatic fraction metabolized by CYP3A (f(m)) of 0.97, whereas model 1 predicted 17 and 0.90, respectively, which are more consistent with observed data. On the basis of model 1, ketoconazole (400 mg QD) for at least 3 days and substrate administration within 2 hours is required for maximal CYP3A inhibition. Ketoconazole treatment regimens that use 200 mg BID underestimate the systemic fraction metabolized by CYP3A (0.86 versus 0.90) for midazolam. The systematic underprediction also applies to CYP3A substrates with high bioavailability and long half-lives. The superior predictive performance of model 1 reflects the need for accumulation of ketoconazole at enzyme site and protracted inhibition. Model 2 is not recommended for inferring optimal study design and estimation of fraction metabolized by CYP3A.
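As background for why the fraction metabolized matters so much, the snippet below evaluates the simplified static ceiling on the AUC ratio under complete hepatic CYP3A inhibition, AUCR_max ≈ 1/(1 − fm). This ignores gut-wall extraction, partial inhibition, and time-varying inhibitor concentrations that the PBPK models above capture; it is shown only to illustrate the sensitivity to fm.

```python
"""Back-of-the-envelope static relationship between the CYP3A fraction
metabolized (fm) and the maximum possible AUC ratio under complete hepatic
CYP3A inhibition: AUCR_max ~ 1 / (1 - fm). A simplified ceiling, not the PBPK
predictions discussed above."""
def max_auc_ratio(fm_cyp3a: float) -> float:
    return 1.0 / (1.0 - fm_cyp3a)

for fm in (0.86, 0.90, 0.97):
    print(f"fm = {fm:.2f}  ->  theoretical ceiling on AUC ratio ~ {max_auc_ratio(fm):.0f}")
```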
Fang, Xingang; Bagui, Sikha; Bagui, Subhash
2017-08-01
The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structural-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need the understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK 5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK 5) target suggested a feasible descriptor/model selection strategy on similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
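The sketch below shows the generic shape of the descriptor/model evaluation step: cross-validating a logistic regression classifier on a binary descriptor matrix. The random data stand in for Signature descriptors and assay labels; they are placeholders, not the PubChem hK5 dataset.

```python
"""Sketch of the descriptor/model evaluation step: cross-validating a logistic
regression classifier on a binary, fingerprint-style descriptor matrix. The
random data below are placeholders, not the PubChem hK5 assay data."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_mols, n_bits = 500, 128
X = rng.integers(0, 2, size=(n_mols, n_bits)).astype(float)    # descriptor matrix
w_true = rng.normal(size=n_bits)                                 # synthetic signal
score = X @ w_true + rng.normal(scale=2.0, size=n_mols)
y = (score > np.median(X @ w_true)).astype(int)                  # binary activity

clf = LogisticRegression(max_iter=1000, C=1.0, class_weight="balanced")
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("5-fold ROC AUC:", np.round(scores, 3), "mean:", round(scores.mean(), 3))
```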
Reliable inference of light curve parameters in the presence of systematics
NASA Astrophysics Data System (ADS)
Gibson, Neale P.
2016-10-01
Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation. This results in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light-curve parameters conditioned on the subjective choice of systematics models and model-selection criteria. Here, I briefly review the use of systematics models commonly used for transmission and emission spectroscopy, including model selection, marginalisation over models, and stochastic processes. These form a hierarchy of models with increasing degree of objectivity. I argue that marginalisation over many systematics models is a minimal requirement for robust inference. Stochastic models provide even more flexibility and objectivity, and therefore produce the most reliable results. However, no systematics models are perfect, and the best strategy is to compare multiple methods and repeat observations where possible.
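A minimal sketch of marginalising over systematics models is shown below, using BIC weights over a family of polynomial baselines fitted jointly with a transit depth. The synthetic light curve and the use of BIC as an evidence approximation are illustrative assumptions, not the datasets or methods reviewed above.

```python
"""Sketch of marginalising a light-curve parameter over several systematics
models using BIC weights (a crude approximation to marginalising over the
model space). The polynomial baselines and synthetic data are illustrative."""
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(-0.1, 0.1, 200)
flux = 1.0 - 0.01 * (np.abs(t) < 0.04) + 0.002 * t / 0.1 + rng.normal(0, 0.001, t.size)

depth_hat, bic = [], []
for deg in range(0, 4):                     # candidate systematics: polynomials in t
    transit = (np.abs(t) < 0.04).astype(float)
    X = np.column_stack([transit] + [t**k for k in range(deg + 1)])
    beta, res, *_ = np.linalg.lstsq(X, flux, rcond=None)
    resid = flux - X @ beta
    n, k = t.size, X.shape[1]
    bic.append(n * np.log(np.mean(resid**2)) + k * np.log(n))
    depth_hat.append(-beta[0])              # transit depth estimate under this model

bic = np.array(bic)
depth_hat = np.array(depth_hat)
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()
print("per-model depths:", np.round(depth_hat, 4))
print("BIC-weighted depth:", round(float(np.sum(w * depth_hat)), 4))
```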
Mission Activity Planning for Humans and Robots on the Moon
NASA Technical Reports Server (NTRS)
Weisbin, C.; Shelton, K.; Lincoln, W.; Elfes, A.; Smith, J.H.; Mrozinski, J.; Hua, H.; Adumitroaie, V.; Silberg, R.
2008-01-01
A series of studies is conducted to develop a systematic approach to optimizing, both in terms of the distribution and scheduling of tasks, scenarios in which astronauts and robots accomplish a group of activities on the Moon, given an objective function (OF) and specific resources and constraints. An automated planning tool is developed as a key element of this optimization system.
Evaluation of subset matching methods and forms of covariate balance.
de Los Angeles Resa, María; Zubizarreta, José R
2016-11-30
This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance, specifically, fine balance, or strength-k balance, plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.
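The snippet below implements two of the balance measures mentioned above, the absolute standardised mean difference and the variance ratio, together with the conventional 0.1 rule of thumb. The covariate samples are hypothetical matched groups.

```python
"""Small helpers for two of the balance measures discussed above: absolute
standardised mean differences and variance ratios across treatment groups.
The 0.1 threshold is the conventional rule of thumb cited in the abstract."""
import numpy as np

def standardized_mean_diff(x_t, x_c):
    pooled_sd = np.sqrt(0.5 * (np.var(x_t, ddof=1) + np.var(x_c, ddof=1)))
    return abs(x_t.mean() - x_c.mean()) / pooled_sd

def variance_ratio(x_t, x_c):
    return np.var(x_t, ddof=1) / np.var(x_c, ddof=1)

rng = np.random.default_rng(4)
age_t = rng.normal(52, 10, 300)          # hypothetical matched treated group
age_c = rng.normal(51, 11, 300)          # hypothetical matched control group
smd = standardized_mean_diff(age_t, age_c)
print(f"SMD={smd:.3f} (balanced by the 0.1 rule: {smd < 0.1}), "
      f"variance ratio={variance_ratio(age_t, age_c):.2f}")
```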
Conformational Phase Diagram for Polymers Adsorbed on Ultrathin Nanowires
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Bachmann, Michael
2010-05-01
We study the conformational behavior of a polymer adsorbed at an attractive stringlike nanowire and construct the complete structural phase diagram in dependence of the binding strength and effective thickness of the nanowire. For this purpose, Monte Carlo optimization techniques are employed to identify lowest-energy structures for a coarse-grained model of a polymer in contact with the nanowire. Among the representative conformations in the different phases are, for example, compact droplets attached to the wire and also nanotubelike monolayer films wrapping it in a very ordered way. We here systematically analyze low-energy shapes and structural order parameters to elucidate the transitions between the structural phases.
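To give a flavour of energy minimisation for a chain near a wire, the sketch below runs a plain Metropolis/simulated-annealing search for a coarse-grained bead chain attracted to the z-axis. The potentials, parameters, and move set are generic assumptions and not the model or the optimisation methods of the study above.

```python
"""Toy Metropolis/simulated-annealing search for low-energy conformations of a
coarse-grained bead chain attracted to a straight wire along the z-axis. The
potentials and parameters are generic illustrations, not the model or methods
used in the study above."""
import numpy as np

rng = np.random.default_rng(5)
N = 13                                        # beads in the chain
pos = np.column_stack([np.zeros(N), np.zeros(N), np.arange(N, dtype=float)])
pos[:, 0] += 0.5                              # start slightly off the wire

def energy(p, eps_wire=1.5):
    e = 0.0
    # stiff harmonic bonds with rest length 1
    bond = np.linalg.norm(np.diff(p, axis=0), axis=1)
    e += 50.0 * np.sum((bond - 1.0) ** 2)
    # Lennard-Jones between non-bonded bead pairs
    for i in range(N):
        for j in range(i + 2, N):
            r = np.linalg.norm(p[i] - p[j])
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    # attraction to the wire (z-axis), decaying with perpendicular distance
    r_perp2 = p[:, 0] ** 2 + p[:, 1] ** 2
    e += -eps_wire * np.sum(np.exp(-r_perp2))
    return e

T, cool = 1.0, 0.9995
current = energy(pos)
best, best_pos = current, pos.copy()
for step in range(10000):
    trial = pos.copy()
    trial[rng.integers(N)] += rng.normal(scale=0.1, size=3)
    e_trial = energy(trial)
    if e_trial < current or rng.random() < np.exp(-(e_trial - current) / T):
        pos, current = trial, e_trial
        if current < best:
            best, best_pos = current, pos.copy()
    T *= cool

print(f"best energy found: {best:.2f}")
dist = np.sqrt(best_pos[:, 0] ** 2 + best_pos[:, 1] ** 2).mean()
print("mean distance of beads from wire:", round(float(dist), 2))
```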
NASA Astrophysics Data System (ADS)
Cheng, Yung-Chang; Lee, Cheng-Kang
2017-10-01
This paper proposes a systematic method, integrating the uniform design (UD) of experiments and quantum-behaved particle swarm optimization (QPSO), to solve the problem of a robust design for a railway vehicle suspension system. Based on the new nonlinear creep model derived from combining Hertz contact theory, Kalker's linear theory and a heuristic nonlinear creep model, the modeling and dynamic analysis of a 24 degree-of-freedom railway vehicle system were investigated. The Lyapunov indirect method was used to examine the effects of suspension parameters, wheel conicities and wheel rolling radii on critical hunting speeds. Generally, the critical hunting speeds of a vehicle system resulting from worn wheels with different wheel rolling radii are lower than those of a vehicle system having original wheels without different wheel rolling radii. Because of worn wheels, the critical hunting speed of a running railway vehicle substantially declines over the long term. For safety reasons, it is necessary to design the suspension system parameters to increase the robustness of the system and decrease its sensitivity to wheel noise factors. By applying UD and QPSO, the nominal-the-best signal-to-noise ratio of the system was increased from -48.17 to -34.05 dB. The rate of improvement was 29.31%. This study has demonstrated that the integration of UD and QPSO can successfully reveal the optimal solution of suspension parameters for solving the robust design problem of a railway vehicle suspension system.
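The short snippet below states the nominal-the-best signal-to-noise ratio used above and reproduces the quoted rate of improvement from the reported dB values; the example response samples are made up.

```python
"""Worked check of the nominal-the-best signal-to-noise ratio and the quoted
rate of improvement. The response samples are hypothetical; the final line
reproduces the 29.31% figure from the reported -48.17 dB and -34.05 dB values."""
import numpy as np

def sn_nominal_the_best(y):
    """Taguchi nominal-the-best S/N ratio: 10*log10(mean^2 / variance)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

print("example S/N:", round(sn_nominal_the_best([72.0, 75.0, 70.0, 74.0]), 2), "dB")

sn_before, sn_after = -48.17, -34.05
print("rate of improvement:", round((sn_after - sn_before) / abs(sn_before) * 100, 2), "%")
```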
Forsthoefel, David J; Waters, Forrest A; Newmark, Phillip A
2014-12-21
Efforts to elucidate the cellular and molecular mechanisms of regeneration have required the application of methods to detect specific cell types and tissues in a growing cohort of experimental animal models. For example, in the planarian Schmidtea mediterranea, substantial improvements to nucleic acid hybridization and electron microscopy protocols have facilitated the visualization of regenerative events at the cellular level. By contrast, immunological resources have been slower to emerge. Specifically, the repertoire of antibodies recognizing planarian antigens remains limited, and a more systematic approach is needed to evaluate the effects of processing steps required during sample preparation for immunolabeling. To address these issues and to facilitate studies of planarian digestive system regeneration, we conducted a monoclonal antibody (mAb) screen using phagocytic intestinal cells purified from the digestive tracts of living planarians as immunogens. This approach yielded ten antibodies that recognized intestinal epitopes, as well as markers for the central nervous system, musculature, secretory cells, and epidermis. In order to improve signal intensity and reduce non-specific background for a subset of mAbs, we evaluated the effects of fixation and other steps during sample processing. We found that fixative choice, treatments to remove mucus and bleach pigment, as well as methods for tissue permeabilization and antigen retrieval profoundly influenced labeling by individual antibodies. These experiments led to the development of a step-by-step workflow for determining optimal specimen preparation for labeling whole planarians as well as unbleached histological sections. We generated a collection of monoclonal antibodies recognizing the planarian intestine and other tissues; these antibodies will facilitate studies of planarian tissue morphogenesis. We also developed a protocol for optimizing specimen processing that will accelerate future efforts to generate planarian-specific antibodies, and to extend functional genetic studies of regeneration to post-transcriptional aspects of gene expression, such as protein localization or modification. Our efforts demonstrate the importance of systematically testing multiple approaches to species-specific idiosyncrasies, such as mucus removal and pigment bleaching, and may serve as a template for the development of immunological resources in other emerging model organisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interests exist in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. Design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source, with optimal process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD™ with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL™ software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper will discuss this simulation process that leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
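For context on the quoted subsystem efficiency, the snippet below evaluates the standard maximum-efficiency expression for a thermoelectric generator as a function of hot/cold-side temperatures and figure of merit ZT. The temperatures and ZT values are assumed operating points, not the design values of the JP-8 system.

```python
"""Standard expression for the maximum efficiency of a thermoelectric generator
leg as a function of the hot/cold-side temperatures and the figure of merit ZT.
The temperatures and ZT values below are hypothetical operating points, not the
JP-8 system's design values."""
import numpy as np

def te_max_efficiency(t_hot, t_cold, zt):
    carnot = (t_hot - t_cold) / t_hot
    m = np.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

for zt in (0.8, 1.2, 1.8):
    eff = te_max_efficiency(t_hot=773.0, t_cold=373.0, zt=zt)   # 500 C / 100 C
    print(f"ZT={zt}: max conversion efficiency ~ {eff * 100:.1f}%")
```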
Personalized Guideline-Based Treatment Recommendations Using Natural Language Processing Techniques.
Becker, Matthias; Böckmann, Britta
2017-01-01
Clinical guidelines and clinical pathways are accepted and proven instruments for quality assurance and process optimization. Today, electronic representation of clinical guidelines exists as unstructured text, but is not well-integrated with patient-specific information from electronic health records. Consequently, generic content of the clinical guidelines is accessible, but it is not possible to visualize the position of the patient on the clinical pathway, and decision support cannot be provided by personalized guidelines for the next treatment step. The Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) provides a common reference terminology as well as the semantic link for combining the pathways and the patient-specific information. This paper proposes a model-based approach to support the development of guideline-compliant pathways combined with patient-specific structured and unstructured information using SNOMED CT. To identify SNOMED CT concepts, software was developed to extract SNOMED CT codes from structured and unstructured German data and to map these to clinical pathways annotated in accordance with the systematized nomenclature.
Prospects for detecting a net photon circular polarization produced by decaying dark matter
NASA Astrophysics Data System (ADS)
Elagin, Andrey; Kumar, Jason; Sandick, Pearl; Teng, Fei
2017-11-01
If dark matter interactions with Standard Model particles are CP violating, then dark matter annihilation/decay can produce photons with a net circular polarization. We consider the prospects for experimentally detecting evidence for such a circular polarization. We identify optimal models for dark matter interactions with the Standard Model, from the point of view of detectability of the net polarization, for the case of either symmetric or asymmetric dark matter. We find that, for symmetric dark matter, evidence for net polarization could be found by a search of the Galactic center by an instrument sensitive to circular polarization with an efficiency-weighted exposure of at least 50,000 cm² yr, provided the systematic detector uncertainties are constrained at the 1% level. Better sensitivity can be obtained in the case of asymmetric dark matter. We discuss the prospects for achieving the needed level of performance using possible detector technologies.
NASA Astrophysics Data System (ADS)
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues including land use and climate change impacts analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into the two categories of simulation and optimization models, whose calibration has been addressed in the literature; substantial efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms which benefit from the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values with regard to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is actually developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst has to select the point and interval estimation of parameters which are actually non-dominated regarding both of the uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, the answering of which is our research motivation: Given that the final selection in SUFI-2 is based on the two measures or objectives, and knowing that there is no multi-objective optimization mechanism in SUFI-2, are the final estimations Pareto-optimal? Can systematic methods be applied to select the final estimations? Dealing with these questions, a new auto-calibration algorithm was proposed where the uncertainty measures were considered as two objectives to find non-dominated interval estimations of parameters by means of coupling Monte Carlo simulation and Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate parameters of the water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software in order to analyze the impacts of different water resources management strategies including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop pattern, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results, it was revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation on the Pareto frontier.
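A small helper for the non-dominance question raised above is sketched below: it filters candidate calibrations that are Pareto-optimal with respect to maximising the p-factor and minimising the r-factor. The candidate values are invented for illustration.

```python
"""Helper for the final selection step discussed above: filtering interval
estimations that are non-dominated with respect to the two uncertainty
measures (maximise p-factor, minimise r-factor). Candidate values are made up."""
import numpy as np

def pareto_front(p_factor, r_factor):
    """Return indices of candidates not dominated by any other candidate."""
    idx = []
    for i in range(len(p_factor)):
        dominated = any(
            (p_factor[j] >= p_factor[i]) and (r_factor[j] <= r_factor[i])
            and ((p_factor[j] > p_factor[i]) or (r_factor[j] < r_factor[i]))
            for j in range(len(p_factor))
        )
        if not dominated:
            idx.append(i)
    return idx

p = np.array([0.55, 0.70, 0.72, 0.80, 0.65])   # fraction of observations in 95PPU
r = np.array([0.60, 0.85, 0.80, 1.40, 1.10])   # width of the uncertainty band
for i in pareto_front(p, r):
    print(f"candidate {i}: p-factor={p[i]:.2f}, r-factor={r[i]:.2f}")
```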
NASA Astrophysics Data System (ADS)
Giannaros, Christos; Nenes, Athanasios; Giannaros, Theodore M.; Kourtidis, Konstantinos; Melas, Dimitrios
2018-03-01
This study presents a comprehensive modeling approach for simulating the spatiotemporal distribution of urban air temperatures with a modeling system that includes the Weather Research and Forecasting (WRF) model and the Single-Layer Urban Canopy Model (SLUCM) with a modified treatment of the impervious surface temperature. The model was applied to simulate a 3-day summer heat wave event over the city of Athens, Greece. The simulation, using default SLUCM parameters, is capable of capturing the observed diurnal variation of urban temperatures and the Urban Heat Island (UHI) in the greater Athens Area (GAA), albeit with systematic biases that are prominent during nighttime hours. These biases are particularly evident over low-intensity residential areas, and they are associated with the surface and urban canopy properties representing the urban environment. A series of sensitivity simulations unravels the importance of the sub-grid urban fraction parameter, surface albedo, and street canyon geometry in the overall causation and development of the UHI effect. The sensitivities are then used to determine optimal values of the street canyon geometry, which reproduces the observed temperatures throughout the simulation domain. The optimal parameters, apart from considerably improving model performance (reductions in mean temperature bias ranging from 0.30 °C to 1.58 °C), are also consistent with actual city building characteristics - which gives confidence that the model set-up is robust, and can be used to study the UHI in the GAA in the anticipated warmer conditions in the future.
Henriques, David; Alonso-Del-Real, Javier; Querol, Amparo; Balsa-Canto, Eva
2018-01-01
Wineries face unprecedented challenges due to new market demands and climate change effects on wine quality. New yeast starters including non-conventional Saccharomyces species, such as S. kudriavzevii, may contribute to deal with some of these challenges. The design of new fermentations using non-conventional yeasts requires an improved understanding of the physiology and metabolism of these cells. Dynamic modeling brings the potential of exploring the most relevant mechanisms and designing optimal processes more systematically. In this work we explore mechanisms by means of a model selection, reduction and cross-validation pipeline which enables us to dissect the most relevant fermentation features for the species under consideration, Saccharomyces cerevisiae T73 and Saccharomyces kudriavzevii CR85. The pipeline involved the comparison of a collection of models which incorporate several alternative mechanisms with emphasis on the inhibitory effects due to temperature and ethanol. We focused on defining a minimal model with the minimum number of parameters, to maximize the identifiability and the quality of cross-validation. The selected model was then used to highlight differences in behavior between species. The analysis of model parameters indicates that the specific growth rate and the transport of hexoses at initial times are higher for S. cerevisiae T73, while S. kudriavzevii CR85 diverts more flux to glycerol production and cellular maintenance. As a result, the fermentations with S. kudriavzevii CR85 are typically slower and produce less ethanol but more glycerol. Finally, we also explored optimal initial inoculation and process temperature to find the best compromise between final product characteristics and fermentation duration. Results reveal that the production of glycerol is distinctive in S. kudriavzevii CR85; it was not possible to achieve the same production of glycerol with S. cerevisiae T73 in any of the conditions tested. This result supports the idea that the optimal design of mixed cultures may have an enormous potential for the improvement of final wine quality.
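As a generic illustration of the kind of dynamic model being selected and reduced above, the sketch below integrates a minimal Monod-type batch fermentation with assumed ethanol and glycerol yields. It is a deliberately simple stand-in, not the selected model of the study, and the kinetic constants are placeholders.

```python
"""Generic batch-fermentation sketch (Monod growth with ethanol and glycerol
yields) integrated with SciPy. A deliberately minimal stand-in, not the
selected model of the study above; yields and kinetic constants are assumed."""
import numpy as np
from scipy.integrate import solve_ivp

mu_max, ks = 0.25, 2.0                 # 1/h, g/L (assumed kinetics)
y_xs, y_es, y_gs = 0.05, 0.45, 0.04    # biomass/ethanol/glycerol yields per g sugar

def rhs(t, y):
    x, s, e, g = y                     # biomass, sugar, ethanol, glycerol (g/L)
    s = max(s, 0.0)                    # guard against tiny negative overshoot
    mu = mu_max * s / (ks + s)
    dx = mu * x
    ds = -dx / y_xs
    return [dx, ds, -ds * y_es, -ds * y_gs]

sol = solve_ivp(rhs, (0.0, 120.0), [0.2, 200.0, 0.0, 0.0], max_step=0.5)
x, s, e, g = sol.y[:, -1]
print(f"after {sol.t[-1]:.0f} h: sugar={s:.1f} g/L, ethanol={e:.1f} g/L, glycerol={g:.1f} g/L")
```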
A computational fluid dynamics simulation framework for ventricular catheter design optimization.
Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A
2017-11-10
OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using the standard catheter hole configuration as a baseline. While the standard ventricular catheter design featuring uniform inlet hole diameters and hole spacing has a standard deviation of 14.27% for the inlet flow rates, the optimized design has a standard deviation of 0.30%. CONCLUSIONS This customizable framework, paired with high-performance computing, provides a rapid method of design testing to solve complex flow problems. While a relatively simplified ventricular catheter model was used to demonstrate the framework, the computational approach is applicable to any baseline catheter model, and it is easily adapted to optimize catheters for the unique needs of different patients as well as for other fluid-based medical devices.
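One plausible reading of the uniformity figures quoted above is the standard deviation of the per-hole inlet flow rates expressed as a percentage of their mean; the snippet below computes it for two made-up designs. The flow vectors are illustrative, not the simulated catheter results.

```python
"""Sketch of a flow-uniformity objective for comparing catheter designs: the
standard deviation of the per-hole inlet flow rates as a percentage of their
mean. The two flow-rate vectors are hypothetical, not the CFD results above."""
import numpy as np

def flow_uniformity_pct(flows):
    flows = np.asarray(flows, dtype=float)
    return 100.0 * flows.std(ddof=1) / flows.mean()

standard_design = [1.00, 1.05, 1.12, 1.25, 1.38, 1.52, 1.70, 1.98]   # made-up
optimized_design = [1.37, 1.38, 1.37, 1.38, 1.38, 1.37, 1.38, 1.37]  # made-up

for name, flows in [("standard", standard_design), ("optimized", optimized_design)]:
    print(f"{name}: relative SD of inlet flows = {flow_uniformity_pct(flows):.2f}%")
```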
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
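The sketch below illustrates one elementary way to check a hard constraint over a hyper-rectangular uncertainty set: evaluate the constraint at the box vertices, which suffices when the constraint is affine or monotone in each parameter (a general constraint would require optimization, as in the paper). The constraint function and bounds are invented for illustration.

```python
"""Sketch of checking a hard inequality constraint g(p) <= 0 over a
hyper-rectangular uncertainty set by evaluating the box vertices (adequate when
g is affine or monotone in each parameter). The constraint and bounds are
illustrative, not taken from the paper above."""
import itertools
import numpy as np

def worst_case_over_box(g, lower, upper):
    """Return the maximum of g over all vertices of the box [lower, upper]."""
    worst = -np.inf
    for vertex in itertools.product(*zip(lower, upper)):
        worst = max(worst, g(np.asarray(vertex)))
    return worst

# example hard constraint: 2*p1 + 3*p2 - p3 - 10 <= 0
g = lambda p: 2.0 * p[0] + 3.0 * p[1] - p[2] - 10.0
lower = [0.8, 1.5, 0.5]     # nominal minus assumed uncertainty
upper = [1.2, 2.5, 1.5]     # nominal plus assumed uncertainty

wc = worst_case_over_box(g, lower, upper)
print(f"worst-case g = {wc:.2f} -> hard constraint feasible: {wc <= 0.0}")
```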
Howe, D S; Dunning, J; Zorman, C; Garverick, S L; Bogie, K M
2015-02-01
Ideally, all chronic wounds would be prevented, as they can become life-threatening complications. The concept that a wound produces a 'current of injury' due to the discontinuity in the electrical field of intact skin provides the basis for the hypothesis that electrical stimulation (ES) may be an effective treatment for chronic wounds. The optimal stimulation waveform parameters are unknown, limiting the reliability of achieving a successful clinical therapeutic outcome. In order to gain a more thorough understanding of ES for chronic wound therapy, systematic evaluation using a valid in vivo model is required. The focus of the current paper is development of the flexible modular surface stimulation (MSS) device by our group. This device can be programmed to deliver a variety of clinically relevant stimulation paradigms and is essential to facilitate systematic in vivo studies. The MSS version 2.0 for small animal use provides all components of a single-channel, programmable, current-controlled ES system within a lightweight, flexible, independently powered portable device. Benchtop testing and validation indicate that custom electronics and control algorithms support the generation of high-voltage, low duty-cycle current pulses in a power-efficient manner, extending battery life and allowing ES therapy to be delivered for up to 7 days without needing to replace or disturb the wound dressing.
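For orientation, the sketch below synthesizes the kind of low duty-cycle, current-controlled pulse train referred to above and reports its duty cycle and time-averaged current; the amplitude, pulse width, and repetition rate are illustrative assumptions, not the MSS device's actual settings or firmware.

```python
# Hypothetical stimulation waveform parameterization: a current-controlled
# pulse train with a low duty cycle. Values are illustrative only.
import numpy as np

amplitude_mA = 10.0        # pulse amplitude (current-controlled)
pulse_width_us = 250.0     # pulse width
frequency_Hz = 100.0       # pulse repetition rate
fs_Hz = 1_000_000          # sampling rate for the synthesized waveform

period_s = 1.0 / frequency_Hz
t = np.arange(0.0, period_s, 1.0 / fs_Hz)
waveform_mA = np.where(t < pulse_width_us * 1e-6, amplitude_mA, 0.0)

duty_cycle = float(np.mean(waveform_mA > 0.0))
avg_current_mA = float(waveform_mA.mean())
print(f"duty cycle: {duty_cycle:.2%}, average current: {avg_current_mA:.3f} mA")
```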
ERIC Educational Resources Information Center
Burns, Nicholas R.; Lee, Michael D.; Vickers, Douglas
2006-01-01
Studies of human problem solving have traditionally used deterministic tasks that require the execution of a systematic series of steps to reach a rational and optimal solution. Most real-world problems, however, are characterized by uncertainty, the need to consider an enormous number of variables and possible courses of action at each stage in…
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time-consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information-based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information-based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
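The following is a minimal sketch of the D-optimality idea behind such validation sampling, not the authors' DSCVR algorithm itself: records are greedily added to the validation set so as to maximize the log-determinant of an information matrix built only from predictor values (evaluated here at a flat logistic prior, where the Fisher information is proportional to the predictor cross-product matrix). The data, review budget, and ridge term are illustrative.

```python
# Hypothetical sketch of D-optimal selection of cases to validate: greedily add
# the record whose predictor vector most increases det(X_S^T X_S).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))            # predictor values for all EMR records
X = np.hstack([np.ones((500, 1)), X])    # intercept column
budget = 30                              # number of charts we can afford to review

selected = []
info = 1e-6 * np.eye(X.shape[1])         # small ridge keeps det() nonsingular
for _ in range(budget):
    gains = []
    for i in range(len(X)):
        if i in selected:
            gains.append(-np.inf)
            continue
        xi = X[i:i + 1]
        gains.append(np.linalg.slogdet(info + xi.T @ xi)[1])
    best = int(np.argmax(gains))
    selected.append(best)
    info += X[best:best + 1].T @ X[best:best + 1]

print("records chosen for chart review:", sorted(selected))
```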
Lumb, A.M.; McCammon, R.B.; Kittle, J.L.
1994-01-01
Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to provide the kind of interaction between the modeler and the modeling process that mathematical optimization alone does not offer. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified, and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the user's manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted, and the adjustments led to improved calibration.
NASA Astrophysics Data System (ADS)
Mena-Carrasco, M.; Carmichael, G. R.; Campbell, J. E.; Tang, Y.; Chai, T.
2007-05-01
During the MILAGRO campaign in March 2006, the University of Iowa provided regional air quality forecasting for scientific flight planning for the C-130 and DC-8. Model performance showed a positive bias in ozone prediction (~15 ppbv), associated with overpredictions of precursor concentrations (~2.15 ppbv NOy and ~1 ppmv ARO1). Model bias showed a distinct geographical pattern in which the highest values were in and near Mexico City. Newer runs with decreased NOx and VOC emissions improved ozone prediction, reducing bias and increasing model correlation while also lowering regional bias over Mexico. This work will evaluate model performance using the newly published Mexico National Emissions Inventory and will introduce data assimilation to recover emission scaling factors that optimize model performance. Finally, the results of sensitivity runs showing the regional impact of Mexico City emissions on ozone concentrations will be shown, along with the influence of Mexico City aerosol concentrations on regional photochemistry.
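As a toy illustration of recovering an emissions scaling factor from model-observation mismatch (the study itself uses a full chemical transport model and formal data assimilation), the sketch below fits a single scaling factor through least squares against an assumed linear sensitivity; all numbers are invented for the example.

```python
# Hypothetical sketch: recover an emissions scaling factor by fitting model
# output to observations, assuming a purely illustrative linear forward model.
import numpy as np
from scipy.optimize import least_squares

observed_o3 = np.array([52.0, 61.0, 58.0, 65.0])      # ppbv, illustrative
baseline_o3 = np.array([68.0, 75.0, 71.0, 80.0])      # ppbv, biased-high model
sensitivity = np.array([30.0, 32.0, 28.0, 35.0])      # d[O3]/d(scaling), ppbv

def residuals(alpha):
    # alpha = 1.0 reproduces the baseline (unscaled) emissions
    return baseline_o3 + sensitivity * (alpha[0] - 1.0) - observed_o3

fit = least_squares(residuals, x0=[1.0], bounds=(0.0, 2.0))
print("recovered emissions scaling factor:", round(fit.x[0], 3))
```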
Model-based Optimization and Feedback Control of the Current Density Profile Evolution in NSTX-U
NASA Astrophysics Data System (ADS)
Ilhan, Zeki Okan
Nuclear fusion research is a highly challenging, multidisciplinary field seeking contributions from both plasma physics and multiple engineering areas. As an application of plasma control engineering, this dissertation mainly explores methods to control the current density profile evolution within the National Spherical Torus eXperiment-Upgrade (NSTX-U), a substantial upgrade of the NSTX device located at the Princeton Plasma Physics Laboratory (PPPL) in Princeton, NJ. Active control of the toroidal current density profile is among those plasma control milestones that the NSTX-U program must achieve to realize its next-step operational goals, which are characterized by high-performance, long-pulse, MHD-stable plasma operation with neutral beam heating. Therefore, the aim of this work is to develop model-based feedforward and feedback controllers that regulate the time evolution of the current density profile in NSTX-U by actuating the total plasma current, electron density, and the powers of the individual neutral beam injectors. Motivated by the coupled, nonlinear, multivariable, distributed-parameter plasma dynamics, the first step towards control design is the development of a physics-based, control-oriented model for the current profile evolution in NSTX-U in response to non-inductive current drives and heating systems. Numerical simulations of the proposed control-oriented model show qualitative agreement with the high-fidelity physics code TRANSP. The next step is to utilize the proposed control-oriented model to design an open-loop actuator trajectory optimizer. Given a desired operating state, the optimizer produces the actuator trajectories that can steer the plasma to that state. The objective of the feedforward control design is to provide a more systematic approach to advanced scenario planning in NSTX-U, since the development of such scenarios is conventionally carried out experimentally by modifying the tokamak's actuator trajectories and analyzing the resulting plasma evolution. Finally, the proposed control-oriented model is embedded in feedback control schemes based on optimal control and Model Predictive Control (MPC) approaches. Integrators are added to the standard Linear Quadratic Gaussian (LQG) and MPC formulations to provide robustness against various modeling uncertainties and external disturbances. The effectiveness of the proposed feedback controllers in regulating the current density profile in NSTX-U is demonstrated in closed-loop nonlinear simulations. Moreover, the optimal feedback control algorithm has been implemented successfully in closed-loop control simulations within TRANSP through the recently developed Expert routine. (Abstract shortened by ProQuest.)
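The integral augmentation mentioned above can be sketched on a toy linear model as follows: the state is extended with integrators of the tracking error, and a discrete-time Riccati equation yields a state-feedback gain with integral action. The system matrices and weights here are placeholders, not the NSTX-U control-oriented model.

```python
# Hypothetical sketch of an integral-augmented LQR on a toy linear model.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.95, 0.05],
              [0.00, 0.90]])          # toy discrete-time state dynamics
B = np.array([[0.10, 0.00],
              [0.02, 0.08]])          # toy actuator influence matrix
C = np.eye(2)                         # regulated outputs = states

n, m = A.shape[0], B.shape[1]
# Augmented state: [x_k, accumulated tracking error]
A_aug = np.block([[A, np.zeros((n, n))],
                  [C, np.eye(n)]])
B_aug = np.vstack([B, np.zeros((n, m))])

Q = np.diag([1.0, 1.0, 0.5, 0.5])     # penalize state and integrated error
R = 0.1 * np.eye(m)                   # penalize actuator effort

P = solve_discrete_are(A_aug, B_aug, Q, R)
K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
print("state-feedback gain with integral action:\n", K.round(3))
```

In a real implementation the resulting gain would act on the error between the reconstructed current profile and its target, typically with anti-windup logic on the integrator states.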
Improved distorted wave theory with the localized virial conditions
NASA Astrophysics Data System (ADS)
Hahn, Y. K.; Zerrad, E.
2009-12-01
The distorted wave theory is operationally improved to treat the full collision amplitude, such that the corrections to the distorted wave Born amplitude can be systematically calculated. The localized virial conditions provide the tools necessary to test the quality of successive approximations at each stage and to optimize the solution. The details of the theoretical procedure are explained in concrete terms using a collisional ionization model and variational trial functions. For the first time, adjustable parameters associated with an approximate scattering solution can be fully determined by the theory. A small number of linear parameters are introduced to examine the convergence property and the effectiveness of the new approach.
Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth
2012-01-01
The increasing availability of large-scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium-ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
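Stripped of the DAKOTA and IPS specifics, the underlying parameter-sweep pattern can be sketched with the Python standard library alone: many loosely coupled simulation instances are dispatched concurrently and an objective value is collected from each. The run_coupled_simulation function and the battery-design parameters are stand-ins, not the actual DAKOTA or IPS interfaces.

```python
# Generic sketch of the parameter-sweep pattern described above, using only
# the Python standard library; it does not use the DAKOTA or IPS APIs.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_coupled_simulation(params):
    """Placeholder for launching one loosely coupled simulation run; returns
    an objective value (e.g., cell performance metric) for the given params."""
    electrode_thickness, porosity = params
    return -(electrode_thickness - 80.0) ** 2 - 1000.0 * (porosity - 0.35) ** 2

if __name__ == "__main__":
    grid = list(product(range(50, 121, 10), [0.25, 0.30, 0.35, 0.40]))
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = dict(zip(grid, pool.map(run_coupled_simulation, grid)))
    best = max(results, key=results.get)
    print("best (thickness_um, porosity):", best, "objective:", results[best])
```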
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law that has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) such time cannot be expressed in analytical form as a function of parameters and initial conditions; 2) design parameters may range over very wide intervals; 3) convergence time depends also on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not require an analytical expression of the objective function: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize convergence time under the worst initial conditions. Results are very promising.
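A minimal sketch of the min-max tuning idea, under the assumption of a derivative-free simplex search and a toy surrogate for the closed-loop settling time (the real objective requires simulating the magnetically actuated attitude dynamics), might look like this; the surrogate, gains, and sampled initial conditions are invented for illustration.

```python
# Hypothetical sketch: tune four feedback gains to minimize the worst
# convergence time over a set of initial conditions, using a derivative-free
# search. settling_time() is a placeholder for a closed-loop simulation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
initial_conditions = rng.uniform(-1.0, 1.0, size=(10, 3))   # sampled attitudes

def settling_time(gains, x0):
    """Stand-in for simulating the closed loop and measuring how long the
    attitude error takes to fall below a threshold."""
    k1, k2, k3, k4 = gains
    rate = max(1e-3, k1 + k3 - 0.2 * (k1**2 + k2**2 + k3**2 + k4**2))  # toy surrogate
    return np.linalg.norm(x0) / rate

def worst_case_time(gains):
    return max(settling_time(gains, x0) for x0 in initial_conditions)

res = minimize(worst_case_time, x0=[1.0, 1.0, 1.0, 1.0],
               method="Nelder-Mead", options={"xatol": 1e-3, "fatol": 1e-3})
print("robust design parameters:", np.round(res.x, 3), "worst-case time:", res.fun)
```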
Papadakis, G; Friedt, J M; Eck, M; Rabus, D; Jobst, G; Gizeli, E
2017-09-01
The development of integrated platforms incorporating an acoustic device as the detection element requires simultaneously addressing several challenges of a technological and scientific nature. The present work was focused on the design of a microfluidic module which, combined with a dual- or array-type Love wave acoustic chip, could be applied to biomedical applications and molecular diagnostics. Based on a systematic study, we optimized the mechanics of the flow cell attachment and the sealing material so that fluidic interfacing/encapsulation would impose minimal losses to the acoustic wave. We have also investigated combinations of operating frequencies with waveguide materials and thicknesses for maximum sensitivity during the detection of protein and DNA biomarkers. Within our investigations, neutravidin was used as a model protein biomarker and unpurified PCR-amplified Salmonella DNA as the model genetic target. Our results clearly indicate the need for experimental verification of the optimum engineering and analytical parameters in order to develop commercially viable systems for integrated analysis. The good reproducibility of the signal together with the ability of the array biochip to detect multiple samples hold promise for the future use of the integrated system in a Lab-on-a-Chip platform for application to molecular diagnostics.
Coarse-graining errors and numerical optimization using a relative entropy framework.
Chaimovich, Aviel; Shell, M Scott
2011-03-07
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
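Numerically, the core quantity is easy to state: for discrete (mapped) configuration distributions, the relative entropy measures how much information is lost when the coarse-grained model's distribution replaces the reference all-atom one. The toy sketch below computes it for invented distributions; the full functional in the paper also carries a mapping-degeneracy term that is omitted here.

```python
# Minimal numerical sketch of the relative entropy used for coarse-graining,
# evaluated for illustrative toy distributions (in units of k_B).
import numpy as np

def relative_entropy(p_aa, p_cg):
    """S_rel = sum_i p_AA(i) * ln(p_AA(i) / p_CG(i))."""
    p_aa = np.asarray(p_aa, dtype=float)
    p_cg = np.asarray(p_cg, dtype=float)
    mask = p_aa > 0.0
    return float(np.sum(p_aa[mask] * np.log(p_aa[mask] / p_cg[mask])))

p_aa = np.array([0.40, 0.30, 0.20, 0.10])      # reference (mapped) distribution
p_cg_poor = np.array([0.25, 0.25, 0.25, 0.25]) # uninformed CG model
p_cg_good = np.array([0.38, 0.31, 0.21, 0.10]) # tuned CG model

print("S_rel, uniform CG model :", round(relative_entropy(p_aa, p_cg_poor), 4))
print("S_rel, tuned CG model   :", round(relative_entropy(p_aa, p_cg_good), 4))
```

Minimizing this quantity over the coarse-grained model's parameters is the variational step the paper develops numerical strategies for.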
B-type natriuretic peptides help in cardioembolic stroke diagnosis: pooled data meta-analysis.
Llombart, Víctor; Antolin-Fontes, Albert; Bustamante, Alejandro; Giralt, Dolors; Rost, Natalia S; Furie, Karen; Shibazaki, Kensaku; Biteker, Murat; Castillo, José; Rodríguez-Yáñez, Manuel; Fonseca, Ana Catarina; Watanabe, Tetsu; Purroy, Francisco; Zhixin, Wu; Etgen, Thorleif; Hosomi, Naohisa; Jafarian Kerman, Scott Reza; Sharma, Jagdish C; Knauer, Carolin; Santamarina, Estevo; Giannakoulas, George; García-Berrocoso, Teresa; Montaner, Joan
2015-05-01
Determining the underlying cause of stroke is important to optimize secondary prevention treatment. Increased blood levels of natriuretic peptides (B-type natriuretic peptide/N-terminal pro-BNP [BNP/NT-proBNP]) have been repeatedly associated with cardioembolic stroke. Here, we evaluate their clinical value as pathogenic biomarkers for stroke through a systematic literature review and individual participant data meta-analysis. We searched publications in the PubMed database until November 2013 that compared BNP and NT-proBNP circulating levels among stroke causes. Standardized individual participant data were collected to estimate predictive values of BNP/NT-proBNP for cardioembolic stroke. Dichotomized BNP/NT-proBNP levels were included in logistic regression models together with clinical variables to assess the sensitivity and specificity for identifying cardioembolic strokes and the additional value of biomarkers using the area under the curve and the integrated discrimination improvement index. From 23 selected articles, we collected information on 2834 patients with a defined cause. BNP/NT-proBNP levels were significantly elevated in cardioembolic stroke until 72 hours from symptom onset. Predictive models showed a sensitivity >90% and specificity >80% when BNP/NT-proBNP were added, considering the lowest and the highest quartile, respectively. Both peptides also significantly increased the area under the curve and the integrated discrimination improvement index compared with clinical models. Sensitivity, specificity, and precision of the models were validated in 197 patients with initially undetermined stroke with a final pathogenic diagnosis after ancillary follow-up. Natriuretic peptides are strongly increased in cardioembolic strokes. Future multicentre prospective studies comparing BNP and NT-proBNP might aid in finding the optimal biomarker, the best time point, and the optimal cutoff points for cardioembolic stroke identification. © 2015 American Heart Association, Inc.
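For readers unfamiliar with the model-comparison step, the sketch below reproduces its general shape on simulated data: a clinical-variables-only logistic model is compared with the same model plus a dichotomized (upper-quartile) natriuretic peptide level, using the area under the ROC curve. The variables, coefficients, and cutoff are illustrative assumptions, not the meta-analysis data.

```python
# Hypothetical sketch: clinical-only logistic model vs. clinical + dichotomized
# BNP level, compared by ROC AUC on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
age = rng.normal(70, 10, n)
afib = rng.binomial(1, 0.2, n)
bnp = rng.lognormal(4.5, 1.0, n)
logit = -6 + 0.04 * age + 1.2 * afib + 0.004 * bnp       # invented relationship
cardioembolic = rng.binomial(1, 1 / (1 + np.exp(-logit)))

bnp_high = (bnp > np.quantile(bnp, 0.75)).astype(int)    # upper-quartile cutoff
X_clin = np.column_stack([age, afib])
X_full = np.column_stack([age, afib, bnp_high])

Xc_tr, Xc_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_clin, X_full, cardioembolic, test_size=0.3, random_state=0)

auc_clin = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(Xc_tr, y_tr).predict_proba(Xc_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"AUC clinical only: {auc_clin:.3f}, with BNP cutoff: {auc_full:.3f}")
```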