Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
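The report's two-group example maps naturally onto a small linear program. The sketch below is purely illustrative: the benefit coefficients, clinician-hour requirements, and budget figures are invented for demonstration, not taken from the report, and scipy's linprog stands in for whatever solver one prefers.

```python
# Minimal constrained-optimization sketch of the "regular" vs. "severe"
# patient example. All coefficients are hypothetical illustrations, not
# values from the report.
from scipy.optimize import linprog

# Decision variables: x = [n_regular, n_severe] patients treated.
benefit = [2.0, 5.0]           # health benefit (e.g., QALYs) per patient
c = [-b for b in benefit]      # linprog minimizes, so negate to maximize

A_ub = [[1.0, 3.0],            # clinician hours per patient
        [100.0, 400.0]]        # treatment cost per patient
b_ub = [40.0, 8000.0]          # weekly hours and budget available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal mix (regular, severe):", res.x)
print("maximum total benefit:", -res.fun)
```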
Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui
2017-08-17
Early-stage diagnosis of colorectal cancer is an urgent need. Some feature genes that are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we conducted a feature extraction method named the Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. First, based on the optimal mean and the L2,1-norm, a novel feature extraction method called the Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces a block strategy into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alterations, and gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.
The application of artificial intelligence in the optimal design of mechanical systems
NASA Astrophysics Data System (ADS)
Poteralski, A.; Szczepanik, M.
2016-11-01
The paper is devoted to new computational techniques in mechanical optimization that aim to study, model, analyze and optimize very complex phenomena for which the more precise scientific tools of the past could not give a complete, low-cost solution. Soft computing methods differ from conventional (hard) computing in that, unlike hard computing, they are tolerant of imprecision, uncertainty, partial truth and approximation. The paper deals with the application of bio-inspired methods, such as evolutionary algorithms (EA), artificial immune systems (AIS) and particle swarm optimizers (PSO), to optimization problems. Structures considered in this work are analyzed by the finite element method (FEM), the boundary element method (BEM) and the method of fundamental solutions (MFS). The bio-inspired methods are applied to optimize the shape, topology and material properties of 2D, 3D and coupled 2D/3D structures, to optimize thermomechanical structures, to optimize parameters of composite structures modeled by the FEM, to optimize elastic vibrating systems, to identify the material constants of piezoelectric materials modeled by the BEM, and to identify parameters in acoustics problems modeled by the MFS.
Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method
NASA Astrophysics Data System (ADS)
Fitrianingsih, E.; Armellin, R.
2018-04-01
One of the objectives of mission design is selecting an optimum orbital transfer, often translated as a transfer that requires minimum propellant consumption. To ensure that the selected trajectory meets this requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives the minimum value, or by evaluating each trajectory against certain optimality criteria. The second method analyzes the profile of the modulus of the thrust direction vector, known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by applying a correction maneuver to the reference trajectory. In addition to its capability to evaluate transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimal transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing its results to those from the numerical method. An investigation of the optimality of direct transfers is used as an example application of the method. The case under study is prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
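For reference, the optimality test the abstract describes is Lawden's classical set of necessary conditions on the primer vector p(t); the formulation below is the standard textbook one, not notation taken from this paper.

```latex
% Lawden's necessary conditions for a fuel-optimal impulsive transfer.
% Along coasting arcs the primer vector obeys the variational dynamics
%   \ddot{\mathbf{p}} = \big(\partial\mathbf{g}/\partial\mathbf{r}\big)\,\mathbf{p},
% where g is the gravitational acceleration. Optimality requires:
\begin{itemize}
  \item $\|\mathbf{p}(t)\| \le 1$ throughout the transfer;
  \item $\|\mathbf{p}(t_k)\| = 1$ at every impulse time $t_k$, each impulse
        being parallel to $\mathbf{p}(t_k)$;
  \item $\mathbf{p}$ and $\dot{\mathbf{p}}$ are continuous everywhere.
\end{itemize}
% Any arc with $\|\mathbf{p}\| > 1$ signals that the total $\Delta V$ can be
% reduced, e.g. by adding or moving an impulse -- which is how the time and
% position of a correction maneuver are identified.
```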
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA
2006-03-21
A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of: identifying a set of optimal operating conditions for the process; identifying and measuring the parameters necessary to characterize the actual operating condition of the process; validating the data generated by measuring those parameters; characterizing the actual condition of the process; identifying the optimal condition corresponding to the actual condition; comparing said optimal condition with the actual condition and identifying variances between the two; drawing, from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source of, and at least one recommended remedial action for, selected variances; and providing said explanation as an output to at least one user.
Engineering applications of heuristic multilevel optimization methods
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.
1988-01-01
Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model had changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases were being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) at which to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
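The core of EGO is the Expected Improvement acquisition. A minimal sketch of the standard EI formula for minimization follows, assuming a Gaussian surrogate prediction (mean mu, standard deviation sigma) at a candidate point; it is the textbook criterion, not code from the SAGE III study.

```python
# Expected Improvement (EI) for a minimization problem -- the acquisition
# function at the core of EGO.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI of sampling a point given the surrogate prediction (mu, sigma)
    and the best objective value observed so far, f_best."""
    sigma = np.maximum(sigma, 1e-12)        # guard against zero variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# The multi-start modification described above amounts to running a local
# optimizer on EI from several starting points and keeping the distinct
# maxima, so several new model runs can be dispatched in parallel.
```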
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
Optimel: Software for selecting the optimal method
NASA Astrophysics Data System (ADS)
Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina
Optimel automates the process of selecting a solution method from the optimization methods domain. Optimel is practically novel: it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. It is also theoretically novel, because a new method of knowledge structuring was used to build the domain. The Optimel domain covers an extended set of methods and their properties, which makes it possible to identify the level of a scientific study, raise the user's level of expertise, broaden the prospects open to the user, and open up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.
Singularities in Optimal Structural Design
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Guptill, J. D.; Berke, L.
1992-01-01
Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One proactive approach to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: it provides the cut-off point for the posterior probability that separates potential crash warnings from normal traffic conditions, once a crash risk evaluation model has produced the probability of a crash occurring given specific traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold; the few studies that address the issue chose thresholds subjectively, and only when discussing the predictive performance of the models. Subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is necessary to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance and other criteria were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance; and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to automatically identifying thresholds in crash prediction: it minimizes the cross entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
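One standard construction matching this description is Li's minimum cross-entropy thresholding, adapted here to predicted crash probabilities. The sketch below is a plausible reading of the abstract, not the authors' exact implementation.

```python
# Minimum cross-entropy threshold search over predicted crash probabilities
# (Li's criterion): pick the cutoff that loses the least information when
# each probability is replaced by its class mean.
import numpy as np

def min_cross_entropy_threshold(p, n_grid=199):
    """p: predicted crash probabilities from the risk model (0 < p < 1)."""
    best_t, best_d = None, np.inf
    for t in np.linspace(p.min(), p.max(), n_grid)[1:-1]:
        lo, hi = p[p < t], p[p >= t]
        if lo.size == 0 or hi.size == 0:
            continue
        # cross-entropy cost of summarizing each class by its mean
        d = -(lo.sum() * np.log(lo.mean()) + hi.sum() * np.log(hi.mean()))
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```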
Optimization of a Tube Hydroforming Process
NASA Astrophysics Data System (ADS)
Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny
2004-06-01
An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.
Optimal Frequency-Domain System Realization with Weighting
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Maghami, Peiman G.
1999-01-01
Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
NASA Astrophysics Data System (ADS)
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator, which produces thin, faint, grey edge lines. To overcome these deficiencies, we propose improving the edge detection method with a graph-based approach using the Ant Colony Optimization algorithm. The possible repairs are thickening the edges and reconnecting broken edges. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and determine the extent to which Ant Colony Optimization can improve non-optimized edge detection results and the accuracy of Robinson edge detection. The parameters used to measure edge detection performance are the morphology of the resulting edge lines, MSE and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with thicker, more distinct edges. Ant Colony Optimization can thus serve as a method for optimizing the Robinson operator, improving the Robinson detection result by an average of 16.77% over the classic Robinson result.
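For concreteness, a minimal sketch of the classic Robinson compass-mask stage follows (the ACO refinement stage is not reproduced). The starting mask and its 45-degree ring rotations follow one common convention; conventions vary across texts.

```python
# Robinson edge detection: convolve the image with eight compass masks and
# keep the maximum response at each pixel.
import numpy as np
from scipy.ndimage import convolve

NORTH = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])

def robinson_masks():
    m, masks = NORTH, []
    for _ in range(8):
        masks.append(m)
        # rotate the mask 45 degrees by shifting the outer ring one step
        m = np.array([[m[1, 0], m[0, 0], m[0, 1]],
                      [m[2, 0], 0,       m[0, 2]],
                      [m[2, 1], m[2, 2], m[1, 2]]])
    return masks

def robinson_edges(img):
    responses = [np.abs(convolve(img.astype(float), k))
                 for k in robinson_masks()]
    return np.max(responses, axis=0)   # strongest compass response per pixel
```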
Barriers to Quality Care for Dying Patients in Rural Communities
ERIC Educational Resources Information Center
Van Vorst, Rebecca F.; Crane, Lori A.; Barton, Phoebe Lindsey; Kutner, Jean S.; Kallail, K. James; Westfall, John M.
2006-01-01
Context: Barriers to providing optimal palliative care in rural communities are not well understood. Purpose: To identify health care personnel's perceptions of the care provided to dying patients in rural Kansas and Colorado and to identify barriers to providing optimal care. Methods: An anonymous self-administered survey was sent to health care…
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are given first, followed by detailed results of an extensive stability and control derivative identification simulation for a C-8 aircraft.
Chen, Xianglong; Zhang, Bingzhi; Feng, Fuzhou; Jiang, Pengcheng
2017-01-01
The kurtosis-based indexes are usually used to identify the optimal resonant frequency band. However, kurtosis can only describe the strength of transient impulses; it cannot differentiate impulse noise from the repetitive transient impulses cyclically generated in bearing vibration signals. As a result, it may give inaccurate results when identifying resonant frequency bands, demodulating fault features and hence diagnosing faults. In view of those drawbacks, this manuscript redefines the correlated kurtosis based on kurtosis and the auto-correlation function, and puts forward an improved correlated kurtosis based on the squared envelope spectrum of bearing vibration signals. Meanwhile, this manuscript proposes an optimal resonant band demodulation method that can adaptively determine the optimal resonant frequency band and accurately demodulate transient fault features of rolling bearings by combining the complex Morlet wavelet filter and the Particle Swarm Optimization algorithm. Analysis of both simulation data and experimental data reveals that the improved correlated kurtosis can effectively remedy the drawbacks of kurtosis-based indexes and that the proposed optimal resonant band demodulation is more accurate in identifying the optimal central frequencies and bandwidths of resonant bands. Improved fault diagnosis results in experiments verified the validity and advantage of the proposed method over the traditional kurtosis-based indexes. PMID:28208820
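The baseline quantity involved is the correlated kurtosis of McDonald et al.; a minimal sketch follows, where T is the fault period in samples. The paper's improved index replaces the raw signal with its squared envelope spectrum, which is not reproduced here.

```python
# Correlated kurtosis CK_M(T): rewards impulses that repeat with period T,
# unlike plain kurtosis, which rewards any strong impulse.
import numpy as np

def correlated_kurtosis(y, T, M=1):
    """y: zero-mean vibration (or envelope) sequence; T: period in samples;
    M: number of period shifts included in the product."""
    prod = y[M * T:].astype(float).copy()
    for m in range(1, M + 1):
        prod *= y[M * T - m * T : len(y) - m * T]   # y[n - m*T] terms
    return np.sum(prod ** 2) / np.sum(y ** 2) ** (M + 1)
```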
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determines the next treatment based on each individual's available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem, and we propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point the optimization is equivalent to minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first. C-learning is a direct optimization method that targets the decision rules themselves by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment histories to improve performance, hence enjoying the advantages of both the traditional outcome-regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After being launched into space to perform tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, these parameters must be identified on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center-of-mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: an equivalent single-body system and an equivalent two-body system. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering; the objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode; the objective function is defined based on the linear and angular momentum equations. The parameter identification problems are then transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or unlocking the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method needs only simple dynamics equations for identification, and the excitation motion (orbital maneuvering and joint motion) is easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on orbit.
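A generic particle swarm optimizer of the kind applied above can be sketched in a few lines; the inertia and acceleration coefficients are common defaults, and the momentum-fitting objective shown in the trailing comment is a hypothetical placeholder for the paper's equivalent-system objective functions.

```python
# Minimal particle swarm optimizer for a generic objective function.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-10, 10),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)]                # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Hypothetical usage, fitting parameters theta to measured momentum:
# theta_hat, err = pso(lambda th: np.sum((h_meas - h_model(th))**2), dim=10)
```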
Obtaining the Optimal Dose in Alcohol Dependence Studies
Wages, Nolan A.; Liu, Lei; O’Quigley, John; Johnson, Bankole A.
2012-01-01
In alcohol dependence studies, the treatment effect at different dose levels remains to be ascertained. Establishing this effect would aid us in identifying the best dose that has satisfactory efficacy while minimizing the rate of adverse events. We advocate the use of dose-finding methodology that has been successfully implemented in the cancer and HIV settings to identify the optimal dose in a cost-effective way. Specifically, we describe the continual reassessment method (CRM), an adaptive design proposed for cancer trials to reconcile the needs of dose-finding experiments with the ethical demands of established medical practice. We are applying adaptive designs for identifying the optimal dose of medications for the first time in the context of pharmacotherapy research in alcoholism. We provide an example of a topiramate trial as an illustration of how adaptive designs can be used to locate the optimal dose in alcohol treatment trials. It is believed that the introduction of adaptive design methods will enable the development of medications for the treatment of alcohol dependence to be accelerated. PMID:23189064
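A minimal CRM sketch using the common one-parameter power model follows; the skeleton, target toxicity rate, and standard-normal prior are illustrative defaults, not the design of the topiramate trial.

```python
# Continual reassessment method (CRM) with the one-parameter power model
# p_i = skeleton_i ** exp(a); the posterior over a is updated after each
# cohort and the next dose is the one closest to the target rate.
import numpy as np
from scipy.integrate import quad

skeleton = np.array([0.05, 0.12, 0.25, 0.40])   # prior toxicity guesses
target = 0.25                                    # acceptable toxicity rate

def next_dose(doses_given, tox_seen):
    """doses_given: dose indices tried; tox_seen: 0/1 adverse outcomes."""
    def lik(a):                                  # likelihood x N(0,1) prior
        p = skeleton[doses_given] ** np.exp(a)
        return np.prod(np.where(tox_seen, p, 1 - p)) * np.exp(-a**2 / 2)
    norm_const = quad(lik, -5, 5)[0]
    a_mean = quad(lambda a: a * lik(a), -5, 5)[0] / norm_const
    p_hat = skeleton ** np.exp(a_mean)           # updated toxicity estimates
    return int(np.argmin(np.abs(p_hat - target)))

dose = next_dose(np.array([0, 0, 1]), np.array([0, 0, 1]))
```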
Attitude determination using vector observations: A fast optimal matrix algorithm
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
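The loss function referred to is Wahba's. As a reference point, here is the well-known SVD solution (also due to Markley); the paper's fast matrix algorithm is a different, faster estimator of the same optimum.

```python
# Wahba's problem: find the rotation A minimizing
#   L(A) = 0.5 * sum_i w_i * || b_i - A @ r_i ||^2 .
# SVD solution via the weighted attitude profile matrix B.
import numpy as np

def wahba_svd(b, r, w):
    """b, r: (n, 3) observed/reference unit vectors; w: (n,) weights."""
    B = (w[:, None] * b).T @ r                 # attitude profile matrix
    U, s, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt     # proper orthogonal optimum
```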
Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang
2018-01-05
DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html.
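A rough sketch of the two-stage scheme described (random-forest feature ranking followed by an SVM on the retained features) is given below; the feature matrix, hyperparameters, and use of the k=174 cutoff are illustrative, and featurization of the DNA sequences is assumed to have been done elsewhere.

```python
# Two-stage predictor sketch: rank features with a random forest, keep the
# top-k, and train an SVM on them. Not the DHSpred pipeline verbatim.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_and_train(X, y, k=174):
    rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:k]   # top-k features
    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    score = cross_val_score(svm, X[:, top], y, cv=10).mean()
    return svm.fit(X[:, top], y), top, score
```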
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
Zhu, Li-Wen; Wang, Cheng-Cheng; Liu, Rui-Sang; Li, Hong-Mei; Wan, Duan-Ji; Tang, Ya-Jie
2012-01-01
As a potential intermediary feedstock, succinic acid occupies an important place in bulk chemical production. For the first time, a method combining Plackett-Burman design (PBD), the steepest ascent method (SA), and Box-Behnken design (BBD) was developed to optimize the Actinobacillus succinogenes ATCC 55618 fermentation medium. First, glucose, yeast extract, and MgCO3 were identified as the key medium components by PBD. Second, a preliminary optimization was run by the SA method to reach the region of the optimum for the key medium components. Finally, the response, that is, the production of succinic acid, was optimized using BBD, and the optimum was located at 84.6 g L−1 of glucose, 14.5 g L−1 of yeast extract, and 64.7 g L−1 of MgCO3. A verification experiment indicated that a maximal succinic acid production of 52.7 ± 0.8 g L−1 was obtained under the identified optimal conditions, in good agreement with the predicted value. Compared with the basic medium, the production of succinic acid and the yield of succinic acid against glucose were enhanced by 67.3% and 111.1%, respectively. The results obtained in this study may be useful for the industrial commercial production of succinic acid. PMID:23093852
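The Box-Behnken stage amounts to fitting a full quadratic response surface and solving for its stationary point. A minimal sketch under that standard formulation follows; X and y are placeholders for the coded design matrix and the measured titers.

```python
# Fit a full quadratic response surface to Box-Behnken data for three
# factors (e.g., glucose, yeast extract, MgCO3) and locate its optimum.
import numpy as np

def quadratic_rsm_optimum(X, y):
    """X: (n, 3) coded factor settings; y: (n,) measured responses."""
    x1, x2, x3 = X.T
    # design matrix: intercept, linear, interaction, and squared terms
    D = np.column_stack([np.ones(len(y)), x1, x2, x3,
                         x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])
    beta = np.linalg.lstsq(D, y, rcond=None)[0]
    b = beta[1:4]                                 # linear coefficients
    H = np.array([[2*beta[7], beta[4],  beta[5]],  # Hessian of the surface
                  [beta[4],  2*beta[8], beta[6]],
                  [beta[5],  beta[6],  2*beta[9]]])
    return np.linalg.solve(H, -b)                 # stationary (coded) point
```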
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
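The Fisher metric at the center of the dissertation can be sketched directly: for additive Gaussian measurement noise, the information matrix is assembled from output sensitivities. The finite-difference construction below is a generic illustration, with the battery model left as a placeholder.

```python
# Fisher information matrix (FIM) for parameters theta of a simulated
# output y(t; theta) with additive Gaussian noise of variance sigma^2.
import numpy as np

def fisher_information(simulate, theta, u, sigma=0.01, h=1e-6):
    """simulate(theta, u) -> (n_samples,) model output for input profile u."""
    y0 = simulate(theta, u)
    S = np.empty((y0.size, theta.size))          # sensitivities dy/dtheta
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += h
        S[:, j] = (simulate(tp, u) - y0) / h     # finite-difference column
    return S.T @ S / sigma**2                    # FIM for Gaussian noise

# Input shaping then searches over u to maximize a scalarization of the
# FIM, e.g. its determinant (D-optimality) or smallest eigenvalue
# (E-optimality), which is what "maximizing identifiability" means here.
```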
Globally optimal trial design for local decision making.
Eckermann, Simon; Willan, Andrew R
2009-02-01
Value of information methods allow decision makers to identify efficient trial designs by maximizing the expected value to decision makers of the information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is expected value of sample information only from research commissioned within the jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated by identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; (iii) avoiding the market failures associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information across multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.
Pojić, Milica; Rakić, Dušan; Lazić, Zivorad
2015-01-01
A chemometric approach was applied for the optimization of the robustness of the NIRS method for wheat quality control. Due to the high number of experimental (n=6) and response variables to be studied (n=7) the optimization experiment was divided into two stages: screening stage in order to evaluate which of the considered variables were significant, and optimization stage to optimize the identified factors in the previously selected experimental domain. The significant variables were identified by using fractional factorial experimental design, whilst Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included: moisture, protein and wet gluten content, Zeleny sedimentation value and deformation energy. In order to achieve the minimal variation in responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted by desirability function. The highest desirability of 87.63% was accomplished by setting up experimental conditions as follows: 19.9°C for sample temperature, 19.3°C for ambient temperature and 240V for instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.
Optimal sensor placement for spatial lattice structure based on genetic algorithms
NASA Astrophysics Data System (ADS)
Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian
2008-10-01
Optimal sensor placement techniques play a key role in the structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that the structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal testing, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) are taken as the fitness functions, respectively, so that three placement designs are produced. A decimal two-dimensional array coding method is proposed to code the solutions instead of binary coding, and a forced mutation operator is introduced for when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model is implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those gained by the existing genetic algorithm using the binary coding method. Furthermore, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan-expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the effect of the different fitness functions. The results show that the innovations in the genetic algorithm proposed in this paper enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and accurately identify the vibration characteristics of the 12-bay plain truss model.
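As an illustration of the MAC-based fitness, the sketch below scores a candidate sensor set by the largest off-diagonal MAC term of the retained mode-shape rows; smaller off-diagonal terms mean more distinguishable modes. The paper's exact fitness normalization may differ.

```python
# MAC-based fitness for sensor placement: keep only the candidate sensor
# rows of the mode-shape matrix and penalize modes that become similar.
import numpy as np

def mac_fitness(Phi, sensors):
    """Phi: (n_dof, n_modes) FEM mode shapes; sensors: candidate DOF indices.
    Returns a fitness to maximize (negative of the worst off-diagonal MAC)."""
    P = Phi[sensors, :]                          # mode shapes at sensor DOFs
    G = P.T @ P
    d = np.sqrt(np.diag(G))
    mac = (G / np.outer(d, d)) ** 2              # MAC matrix
    off = mac - np.diag(np.diag(mac))            # zero out the diagonal
    return -off.max()
```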
NASA Astrophysics Data System (ADS)
Zheng, Y.; Chen, J.
2017-09-01
A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Unlike traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated with the traditional approach. Kriging meta-models are built to match expensive or black-box functions; by applying them, the number of function evaluations is decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the areas of the trapezoids formed by the Pareto-optimal solutions and one objective axis. It serves as a measure of whether the Pareto-optimal solutions have converged to the Pareto front. Illustrative examples indicate that the proposed method needs fewer function evaluations to obtain Pareto-optimal solutions than either the traditional multi-objective particle swarm optimization method or the non-dominated sorting genetic algorithm II, improving both accuracy and computational efficiency. The proposed method is also applied to the design of a deepwater composite riser, in which the structural performance is calculated by numerical analysis. The design aim was to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off between tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively handle multi-objective optimizations with black-box functions.
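One plausible reading of the trapezoid index for the bi-objective case is sketched below: the summed area of the trapezoids that the sorted non-dominated points form with the first-objective axis, whose stagnation across iterations signals convergence. This is an interpretation of the abstract's definition, not the authors' code.

```python
# Trapezoid index for a bi-objective non-dominated front.
import numpy as np

def trapezoid_index(front):
    """front: (n, 2) array of non-dominated objective vectors."""
    pts = front[np.argsort(front[:, 0])]          # sort by first objective
    widths = np.diff(pts[:, 0])
    heights = (pts[:-1, 1] + pts[1:, 1]) / 2.0
    return float(np.sum(widths * heights))        # area against the f1 axis
```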
Optimal ranking regime analysis of U.S. climate variability. Part II: Precipitation and streamflow
USDA-ARS?s Scientific Manuscript database
In a preceding companion paper the Optimal Ranking Regime (ORR) method was used to identify intra- to multi-decadal (IMD) regimes in U.S. climate division temperature data during 1896-2012. Here, the method is used to test for annual and seasonal precipitation regimes during that same period. In add...
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong
2017-01-01
Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues at the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation on identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated from non-myristoylated sites. Feature selection methods, including maximum relevance and minimum redundancy (mRMR) and incremental feature selection (IFS), together with a machine learning algorithm (the extreme learning machine method), were then adopted to extract optimal features for identifying myristoylation sites in protein sequences. As a result, 41 key features were extracted and used to build an optimal prediction model, whose effectiveness was further validated by its performance on a test dataset. Detailed analyses were also performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provides a new computational method for identifying myristoylation sites in protein sequences, which we believe can be a useful tool for predicting myristoylation sites. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
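The IFS loop described can be sketched generically: grow the mRMR-ranked feature list one feature at a time and keep the prefix with the best cross-validated score. An SVM stands in here for the paper's extreme learning machine; all names are illustrative.

```python
# Incremental feature selection (IFS) over an mRMR-ranked feature list.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ifs(X, y, ranked, cv=5):
    """ranked: feature indices ordered by mRMR relevance (best first).
    Returns the best prefix of features and its cross-validated score."""
    scores = []
    for k in range(1, len(ranked) + 1):
        clf = SVC(kernel="rbf", gamma="scale")
        scores.append(cross_val_score(clf, X[:, ranked[:k]], y, cv=cv).mean())
    best_k = int(np.argmax(scores)) + 1
    return ranked[:best_k], scores[best_k - 1]
```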
Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao
2018-01-09
River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, and the Hun River was taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network correctly represented the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. The optimized method was therefore identified as feasible, efficient, and economic.
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability associated with contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed to timely/effectively detect (un)intentional intrusion events. However, these available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While being simple in implementation, this assumption may do not conform to the fact that the nodal contamination probability may be significantly regionally varied owing to variations in population density and user properties. Furthermore, the low computational efficiency is another important factor that has seriously hampered the practical applications of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method to explicitly account for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimization Strategies for Sensor and Actuator Placement
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Kincaid, Rex K.
1999-01-01
This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
DEVELOPMENT OF A MOLECULAR METHOD TO IDENTIFY HEPATITIS E VIRUS IN WATER
Hepatitis E virus (HEV) causes an infectious form of hepatitis associated with contaminated water. By analyzing the sequence of several HEV isolates, a reverse transciption-polymerase chain reaction method was developed and optimized that should be able to identify all of the kn...
Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling
2016-01-01
Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, where some sulfate groups are added to the tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method was presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal features subset, the proposed method achieved mean MCC of 94.41% on the benchmark dataset, and a MCC of 90.09% on the independent dataset. The experimental performance indicated that our new proposed method could be effective in identifying the important protein posttranslational modifications and the feature selection scheme would be powerful in protein functional residues prediction research fields.
SU-F-J-06: Optimized Patient Inclusion for NaF PET Response-Based Biopsies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, A; Harmon, S; Perk, T
Purpose: A method to guide mid-treatment biopsies using quantitative [F-18]NaF PET/CT response is being investigated in a clinical trial. This study aims to develop methodology to identify patients amenable to mid-treatment biopsy based on pre-treatment imaging characteristics. Methods: 35 metastatic prostate cancer patients had NaF PET/CT scans taken prior to the start of treatment and 9–12 weeks into treatment. For mid-treatment biopsy targeting, lesions must be at least 1.5 cm³ and located in a clinically feasible region (lumbar/sacral spine, pelvis, humerus, or femur). Three methods were developed based on the number of lesions present prior to treatment: a feasibility-restricted method, a location-restricted method, and an unrestricted method. The feasibility-restricted method only utilizes information from lesions meeting biopsy requirements in the pre-treatment scan. The unrestricted method accounts for all lesions present in the pre-treatment scan. For each method, optimized classification cutoffs for candidate patients were determined. Results: 13 of the 35 patients had enough lesions at mid-treatment for biopsy candidacy. Of 1749 lesions identified in all 35 patients at mid-treatment, only 9.8% were amenable to biopsy. Optimizing the feasibility-restricted method required 4 lesions at pre-treatment meeting volume and region requirements for biopsy, resulting in a patient identification sensitivity of 0.8 and specificity of 0.7. Of 6 false positive patients, only one patient lacked lesions for biopsy. Restricting for location alone showed poor results (sensitivity 0.2 and specificity 0.3). The optimized unrestricted method required patients to have at least 37 lesions in the pre-treatment scan, resulting in a sensitivity of 0.8 and specificity of 0.8. There were 5 false positives, of which only one lacked lesions for biopsy. Conclusion: Incorporating the overall pre-treatment number of NaF PET/CT identified lesions provided the best prediction for identifying candidate patients for mid-treatment biopsy. This study provides validity for prediction-based inclusion criteria that can be extended to various clinical trial scenarios. Funded by Prostate Cancer Foundation.
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
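The following minimal sketch illustrates the surrogate-based workflow under stated assumptions: a cheap analytical function stands in for the expensive groundwater flow simulator, Latin Hypercube Sampling generates training points, a Gaussian-process (kriging) surrogate is fitted with scikit-learn, and the optimizer then queries only the surrogate. The two objectives are folded into one weighted scalar for brevity, unlike the paper's true multi-objective formulation.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def groundwater_model(x):
    """Stand-in for the expensive numerical flow model: returns average
    drawdown for pumping rates x at four wells (hypothetical response)."""
    return 0.002 * x.sum() + 1e-5 * (x[0] * x[1] + x[2] * x[3])

# Latin Hypercube Sampling of the feasible pumping-rate region.
sampler = qmc.LatinHypercube(d=4, seed=1)
X = qmc.scale(sampler.random(40), [100] * 4, [1000] * 4)
y = np.array([groundwater_model(x) for x in X])

# Kriging (Gaussian-process) surrogate of the simulation model.
gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=500.0),
                              normalize_y=True).fit(X, y)

def objective(x):
    # The optimizer invokes the cheap surrogate, not the simulator.
    drawdown = gp.predict(x.reshape(1, -1))[0]
    cost = 0.001 * x.sum()          # assumed exploitation-cost term
    return drawdown + cost

res = minimize(objective, x0=np.full(4, 500.0), bounds=[(100, 1000)] * 4)
print(res.x, res.fun)
```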
Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran
2014-11-01
One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to "optimize" the atrio-ventricular (A-V) interval are performed at rest, which may limit their efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure to guide determination of the optimal A-V interval. We assessed the relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) optimization during low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Cardiopulmonary stability and its potential influence on the CPX-based method were also assessed. CPX and determination of a physiological optimal A-V interval were successfully completed in 94.1% of patients, slightly higher than the resting echo-based approach (88.2%). There was a wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability or impact of the implant procedure that affected determination of the CPX-based optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. Proposed mechanisms explaining this finding and long-term impact require further study. ©2014 Wiley Periodicals, Inc.
Stone, David B.; Tamburro, Gabriella; Fiedler, Patrique; Haueisen, Jens; Comani, Silvia
2018-01-01
Data contamination due to physiological artifacts such as those generated by eyeblinks, eye movements, and muscle activity continues to be a central concern in the acquisition and analysis of electroencephalographic (EEG) data. This issue is further compounded in EEG sports science applications, where the presence of artifacts is notoriously difficult to control because the behaviors that generate these interferences are often the behaviors under investigation. Therefore, there is a need to develop effective and efficient methods to identify physiological artifacts in EEG recordings during sports applications so that they can be isolated from cerebral activity related to the activities of interest. We have developed an EEG artifact detection model, the Fingerprint Method, which identifies different spatial, temporal, spectral, and statistical features indicative of physiological artifacts and uses these features to automatically classify artifactual independent components in EEG based on a machine learning approach. Here, we optimized our method using artifact-rich training data and a procedure to determine which features were best suited to identify eyeblinks, eye movements, and muscle artifacts. We then applied our model to an experimental dataset collected during endurance cycling. Results reveal that unique sets of features are suitable for the detection of distinct types of artifacts and that the Optimized Fingerprint Method was able to correctly identify over 90% of the artifactual components with physiological origin present in the experimental data. These results represent a significant advancement in the search for effective means to address artifact contamination in EEG sports science applications. PMID:29618975
Comparison of four methods to assess colostral IgG concentration in dairy cows.
Chigerwe, Munashe; Tyler, Jeff W; Middleton, John R; Spain, James N; Dill, Jeffrey S; Steevens, Barry J
2008-09-01
Objective: To determine the sensitivity and specificity of 4 methods to assess colostral IgG concentration in dairy cows and determine the optimal cutpoint for each method. Design: Cross-sectional study. Animals: 160 Holstein dairy cows. Procedures: 171 composite colostrum samples collected within 2 hours after parturition were used in the study. Test methods used to estimate colostral IgG concentration consisted of weight of the first milking, 2 hydrometers, and an electronic refractometer. Results of the test methods were compared with colostral IgG concentration determined by means of radial immunodiffusion. For each method, sensitivity and specificity for detecting colostral IgG concentration < 50 g/L were calculated across a range of potential cutpoints, and the optimal cutpoint for each test was selected to maximize sensitivity and specificity. Results: At the optimal cutpoint for each method, sensitivity for weight of the first milking (0.42) was significantly lower than sensitivity for each of the other 3 methods (hydrometer 1, 0.75; hydrometer 2, 0.76; refractometer, 0.75), but no significant differences were identified among the other 3 methods with regard to sensitivity. Specificities at the optimal cutpoint were similar for all 4 methods. Conclusions and Clinical Relevance: Results suggested that use of either hydrometer or the electronic refractometer was an acceptable method of screening colostrum for low IgG concentration; however, the manufacturer-defined scale for both hydrometers overestimated colostral IgG concentration. Use of weight of the first milking as a screening test to identify bovine colostrum with inadequate IgG concentration could not be justified because of the low sensitivity.
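A minimal sketch of the cutpoint-selection step, assuming simulated data: candidate cutpoints are scanned and the one maximizing sensitivity plus specificity (Youden's J) for detecting colostral IgG concentration < 50 g/L is kept. The gamma-distributed IgG values and the noisy instrument readings are purely illustrative.

```python
import numpy as np

def optimal_cutpoint(measured, igg, threshold=50.0):
    """Return the cutpoint maximizing sensitivity + specificity, where a
    reading below the cutpoint flags inadequate colostrum."""
    inadequate = igg < threshold            # condition: IgG < 50 g/L
    best = (None, -np.inf, None, None)
    for c in np.unique(measured):
        positive = measured < c
        tp = np.sum(positive & inadequate)
        tn = np.sum(~positive & ~inadequate)
        sens = tp / inadequate.sum()
        spec = tn / (~inadequate).sum()
        if sens + spec > best[1]:
            best = (c, sens + spec, sens, spec)
    return best

rng = np.random.default_rng(0)
igg = rng.gamma(4.0, 20.0, 171)                 # simulated RID values, g/L
reading = igg + rng.normal(0, 15, igg.size)     # noisy hydrometer-like proxy
cut, j, sens, spec = optimal_cutpoint(reading, igg)
print(round(cut, 1), round(sens, 2), round(spec, 2))
```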
Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.; Martin, P.
2008-12-01
Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water-manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was validated by enumerating the entire feasible solution space and comparing the optimal solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond layouts, and water-supply constraints, indicate that the number of new wells is insensitive to water-supply constraints; however, pumping rates and patterns of the existing wells are sensitive. The locations of new wells are mildly sensitive to the pond layout.
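The two-stage idea can be sketched as follows, with a smooth multimodal test function standing in for the real MINLP (which couples a flow model and integer well-location decisions): a bare-bones genetic algorithm finds a near-optimal region, and a gradient search polishes that point. All parameters here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def pumping_cost(x):
    """Hypothetical multimodal stand-in for water-delivery cost."""
    return np.sum(x**2) + 10 * np.sum(np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
bounds = [(-5.0, 5.0)] * 4

# Stage 1: a minimal genetic algorithm locates a near-optimal region.
pop = rng.uniform(-5, 5, (40, 4))
for _ in range(100):
    fit = np.array([pumping_cost(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]          # truncation selection
    i, j = rng.integers(0, 20, (2, 20))
    kids = 0.5 * (parents[i] + parents[j])       # blend crossover
    kids += rng.normal(0, 0.3, kids.shape)       # Gaussian mutation
    pop = np.vstack([parents, np.clip(kids, -5, 5)])

best = pop[np.argmin([pumping_cost(p) for p in pop])]

# Stage 2: gradient search refines the GA result to the nearby optimum.
res = minimize(pumping_cost, best, method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)
```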
Pan, Xiaoyong; Hu, Xiaohua; Zhang, Yu Hang; Feng, Kaiyan; Wang, Shao Peng; Chen, Lei; Huang, Tao; Cai, Yu Dong
2018-04-12
Atrioventricular septal defect (AVSD) is a clinically significant subtype of congenital heart disease (CHD) that severely influences the health of babies at birth and is associated with Down syndrome (DS). Thus, exploring the differences in functional genes in DS samples with and without AVSD is a critical way to investigate the complex association between AVSD and DS. In this study, we present a computational method to distinguish DS patients with AVSD from those without AVSD using the newly proposed self-normalizing neural network (SNN). First, each patient was encoded using the copy number of probes on chromosome 21. The encoded features were ranked by the reliable Monte Carlo feature selection (MCFS) method to obtain a ranked feature list. Based on this feature list, we used a two-stage incremental feature selection to construct two series of feature subsets and applied SNNs to build classifiers to identify optimal features. Results show that 2737 optimal features were obtained, and the corresponding optimal SNN classifier constructed on these features yielded a Matthews correlation coefficient (MCC) value of 0.748. For comparison, random forest was also used to build classifiers and uncover optimal features; this method achieved an optimal MCC value of 0.582 when the top 132 features were utilized. Finally, we analyzed some key features derived from the optimal features in the SNN and found literature support that further reveals their essential roles.
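A compressed sketch of the ranking-plus-incremental-feature-selection loop, under loud assumptions: synthetic data replace the chromosome-21 copy-number probes, and a random forest provides both the feature ranking (in place of MCFS) and the classifier (in place of the SNN).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))        # stand-in copy-number features
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 120)) > 0   # synthetic labels

# Feature ranking (MCFS in the paper; forest importances here).
rank = np.argsort(RandomForestClassifier(random_state=0)
                  .fit(X, y).feature_importances_)[::-1]

# Incremental feature selection: grow subsets along the ranking and keep
# the size that maximizes cross-validated MCC.
best_k, best_mcc = 0, -1.0
for k in range(5, 105, 5):
    pred = cross_val_predict(RandomForestClassifier(random_state=0),
                             X[:, rank[:k]], y, cv=5)
    mcc = matthews_corrcoef(y, pred)
    if mcc > best_mcc:
        best_k, best_mcc = k, mcc
print(best_k, round(best_mcc, 3))
```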
Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.
Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad
2013-09-01
We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to largely populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab and the estimated optimal locations are analyzed along with their respective 95% credible regions.
Muscat Galea, Charlene; Didion, David; Clicq, David; Mangelings, Debby; Vander Heyden, Yvan
2017-12-01
A supercritical fluid chromatographic method for the separation of a drug and its impurities has been developed and optimized applying an experimental design approach and chromatogram simulations. Stationary phase screening was followed by optimization of the modifier and injection solvent composition. A design-of-experiments (DoE) approach was then used to optimize column temperature, back-pressure and the gradient slope simultaneously. Regression models for the retention times and peak widths of all mixture components were built. The factor levels for different grid points were then used to predict the retention times and peak widths of the mixture components using the regression models, and the best separation for the worst separated peak pair in the experimental domain was identified. A plot of the minimal resolutions was used to help identify the factor levels leading to the highest resolution between consecutive peaks. The effects of the DoE factors were visualized in a way that is familiar to the analytical chemist, i.e. by simulating the resulting chromatogram. The mixture of an active ingredient and seven impurities was separated in less than eight minutes. The approach discussed in this paper demonstrates how SFC methods can be developed and optimized efficiently using simple concepts and tools. Copyright © 2017 Elsevier B.V. All rights reserved.
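The grid-search core of the DoE step can be sketched as below, assuming already-fitted regression models (random placeholder coefficients here) that map the coded factors (temperature, back-pressure, gradient slope) to retention times and peak widths; the scan keeps the factor combination maximizing the minimal resolution between consecutive peaks.

```python
import numpy as np

rng = np.random.default_rng(2)
n_peaks = 8   # active ingredient plus seven impurities

# Placeholder regression coefficients: column 0 is the intercept, the rest
# multiply the three coded factors. Real values come from the DoE fit.
coef_t = rng.uniform(-0.5, 0.5, (n_peaks, 4)); coef_t[:, 0] = np.arange(2, 10)
coef_w = rng.uniform(0.0, 0.02, (n_peaks, 4)); coef_w[:, 0] = 0.15

def predict(factors):
    z = np.concatenate(([1.0], factors))     # intercept + coded factors
    return coef_t @ z, coef_w @ z            # retention times, peak widths

def worst_pair_resolution(factors):
    t, w = predict(np.asarray(factors))
    order = np.argsort(t)
    t, w = t[order], w[order]
    rs = 2 * (t[1:] - t[:-1]) / (w[1:] + w[:-1])   # resolution, Rs
    return rs.min()

# Scan the coded experimental domain and keep the best worst-case point.
grid = np.linspace(-1, 1, 11)
best = max(((a, b, c) for a in grid for b in grid for c in grid),
           key=worst_pair_resolution)
print(best, round(worst_pair_resolution(best), 2))
```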
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Welch, Greg
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
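One way to realize this idea, sketched under assumptions (a linear measurement model H, measurement noise covariance R, and prior state covariance P, all toy-sized): solve the generalized eigenproblem (H P Hᵀ + R) v = λ R v and keep the eigenvectors with the largest λ, which span the measurement directions carrying the most information relative to noise.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_state, n_meas = 8, 40

H = rng.normal(size=(n_meas, n_state))      # toy measurement model
R = np.diag(rng.uniform(0.5, 2.0, n_meas))  # measurement noise covariance
P = np.eye(n_state)                         # prior (ensemble) covariance

# Generalized eigenvalue decomposition A v = lambda B v with
# A = H P H^T + R and B = R; eigh returns ascending eigenvalues.
A = H @ P @ H.T + R
w, V = eigh(A, R)

k = 6                                       # size of the retained subspace
T = V[:, -k:]                               # projection onto the most
print(np.round(w[-k:], 2))                  # informative directions
```

Projecting the raw measurements through T before the ensemble Kalman update trades a controllable amount of estimation accuracy for a large reduction in computational load, which is the tradeoff described above.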
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.
Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric
2010-07-20
Accurate protein loop structure models are important to understand functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length using a functional space composed of several carefully-selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD < 0.5 Å from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms the other popularly-used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.
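Step 1 of the procedure, identifying the Pareto-optimal front of a decoy set, reduces to a non-domination test over the matrix of scores. A minimal sketch with random stand-in scores (lower is better for every scoring function):

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows; each column is a scoring function
    and lower values are better."""
    keep = []
    for i in range(scores.shape[0]):
        dominated = np.any(np.all(scores <= scores[i], axis=1) &
                           np.any(scores < scores[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy decoy set scored by five functions (random values standing in for
# Rosetta, DOPE, DDFIRE, OPLS-AA, and the dihedral potential).
rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 5))
front = pareto_front(scores)
print(len(front), "of", len(scores), "decoys on the Pareto front")
```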
Method for nonlinear optimization for gas tagging and other systems
Chen, Ting; Gross, Kenny C.; Wegerich, Stephan
1998-01-01
A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, generating a set of random gene pools with L nodes, applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, and using selected constraints to establish minimum energy states to identify optimal gas tag nodes, with each energy compared to a convergence threshold; upon identifying a gas tag node, the procedure continues to establish the next gas tag node until all remaining n nodes have been established.
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2011-07-01
In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.
Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates
NASA Astrophysics Data System (ADS)
Ashton, G.; Prix, R.
2018-05-01
Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
NASA Astrophysics Data System (ADS)
Deufel, Christopher L.; Furutani, Keith M.
2014-02-01
As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
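A minimal sketch of such an exact, variance-based algebraic optimization, with assumed toy dosimetry: an inverse-square kernel stands in for the real source model, and non-negative least squares solves for dwell times that best match a uniform prescription at the target points.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_points, n_dwells = 60, 12

# A[i, j]: dose to calculation point i per unit dwell time at position j
# (inverse-square toy kernel in place of the clinical source model).
dist = rng.uniform(0.5, 5.0, (n_points, n_dwells))
A = 1.0 / dist**2

prescription = np.full(n_points, 100.0)    # conformal target dose

# Exact least-squares solve with non-negative dwell times; the residual
# is the dose-variance quantity such a simple optimizer minimizes.
t, residual = nnls(A, prescription)
print("dwell times:", np.round(t, 1))
print("rms dose deviation:", round(residual / np.sqrt(n_points), 2))
```

Running the same prescription through the commercial optimizer and comparing dose metrics against this independent solve is the kind of QA comparison the paper describes.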
Optimizing complex phenotypes through model-guided multiplex genome engineering
Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.; ...
2017-05-25
Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.
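The regression step can be sketched as follows, on fabricated data: clones carry random combinations of binary alleles, a few alleles truly shift doubling time, and ridge regression (one form of regularized multivariate linear regression) recovers per-allele effects despite co-occurring passenger edits.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_clones, n_alleles = 300, 40

# Binary genotype matrix: X[c, a] = 1 if clone c carries allele a.
X = (rng.random((n_clones, n_alleles)) < 0.3).astype(float)

# Synthetic ground truth: six alleles improve fitness, the rest are inert.
true_effect = np.zeros(n_alleles)
true_effect[:6] = rng.normal(-3.0, 1.0, 6)
doubling_time = 60 + X @ true_effect + rng.normal(0, 1.5, n_clones)

# Joint regularized fit separates causal alleles from hitchhikers that
# would confound one-at-a-time comparisons.
model = Ridge(alpha=1.0).fit(X, doubling_time)
print("top fitness-improving alleles:", np.argsort(model.coef_)[:6])
```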
Fonslow, Bryan R.; Niessen, Sherry M.; Singh, Meha; Wong, Catherine C.; Xu, Tao; Carvalho, Paulo C.; Choi, Jeong; Park, Sung Kyu; Yates, John R.
2012-01-01
Herein we report the characterization and optimization of single-step inline enrichment of phosphopeptides directly from small amounts of whole cell and tissue lysates (100 – 500 μg) using a hydroxyapatite (HAP) microcolumn and Multidimensional Protein Identification Technology (MudPIT). In comparison to a triplicate HILIC-IMAC phosphopeptide enrichment study, ~80% of the phosphopeptides identified using HAP-MudPIT were unique. Similarly, analysis of the consensus phosphorylation motifs between the two enrichment methods illustrates the complementarity of calcium-and iron-based enrichment methods and the higher sensitivity and selectivity of HAP-MudPIT for acidic motifs. We demonstrate how the identification of more multiply phosphorylated peptides from HAP-MudPIT can be used to quantify phosphorylation cooperativity. Through optimization of HAP-MudPIT on a whole cell lysate we routinely achieved identification and quantification of ca. 1000 phosphopeptides from a ~1 hr enrichment and 12 hr MudPIT analysis on small quantities of material. Finally, we applied this optimized method to identify phosphorylation sites from a mass-limited mouse brain region, the amygdala (200 – 500 μg), identifying up to 4000 phosphopeptides per run. PMID:22509746
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a non-linear programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it is superior to the conventional controllers considered.
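The constrained formulation, treating constraints as actual constraints rather than penalty terms, can be illustrated with a toy linear plant model and SciPy's SLSQP solver; the plant matrix, bounds, and response cap below are all assumed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_ctrl, n_vib = 4, 6

T = rng.normal(size=(n_vib, n_ctrl))   # toy linear systems matrix
z0 = rng.normal(3.0, 1.0, n_vib)       # uncontrolled vibration response

def vibration_metric(u):
    z = z0 + T @ u                     # linear plant: response to control u
    return z @ z                       # quadratic performance index

# Actual constraints: control amplitude bounds plus a cap of |z_1| <= 2.5
# on the first response component (two one-sided inequalities).
cons = [{"type": "ineq", "fun": lambda u: 2.5 - (z0[0] + T[0] @ u)},
        {"type": "ineq", "fun": lambda u: 2.5 + (z0[0] + T[0] @ u)}]
res = minimize(vibration_metric, np.zeros(n_ctrl), method="SLSQP",
               bounds=[(-1.0, 1.0)] * n_ctrl, constraints=cons)
print(res.x, round(res.fun, 3))
```

This mirrors the first of the three solution methods (direct non-linear programming); the other two would instead solve the optimality conditions themselves.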
Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight
NASA Technical Reports Server (NTRS)
Hanson, Curtis E.; Ryan, Jack
2012-01-01
A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
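A Newton-Raphson peak seeker can be sketched as below with an assumed quadratic drag response in place of flight measurements: gradient and Hessian are estimated by finite differences of the measured drag, and the trim command steps toward the stationary point of the local quadratic fit.

```python
import numpy as np

def total_drag(u):
    """Hypothetical smooth drag response to trim u = (aileron, flap)."""
    return 1.0 + (u[0] - 0.3)**2 + 0.5 * (u[1] + 0.2)**2 + 0.3 * u[0] * u[1]

def fd_grad_hess(f, u, h=1e-3):
    """Central finite-difference gradient and Hessian from drag samples."""
    n = len(u)
    g, H, e = np.zeros(n), np.zeros((n, n)), np.eye(n) * h
    for i in range(n):
        g[i] = (f(u + e[i]) - f(u - e[i])) / (2 * h)
        for j in range(n):
            H[i, j] = (f(u + e[i] + e[j]) - f(u + e[i] - e[j])
                       - f(u - e[i] + e[j]) + f(u - e[i] - e[j])) / (4 * h**2)
    return g, H

u = np.zeros(2)                 # initial anti-symmetric trim setting
for _ in range(10):             # Newton-Raphson peak-seeking iterations
    g, H = fd_grad_hess(total_drag, u)
    u = u - np.linalg.solve(H, g)
print(u, round(total_drag(u), 4))
```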
Janiga, Gábor; Daróczy, László; Berg, Philipp; Thévenin, Dominique; Skalej, Martin; Beuing, Oliver
2015-11-05
The optimal treatment of intracranial aneurysms using flow diverting devices is a fundamental issue for neuroradiologists as well as neurosurgeons. Due to highly irregular manifold aneurysm shapes and locations, the choice of the stent and the patient-specific deployment strategy can be a very difficult decision. To support the therapy planning, a new method is introduced that combines a three-dimensional CFD-based optimization with a realistic deployment of a virtual flow diverting stent for a given aneurysm. To demonstrate the feasibility of this method, it was applied to a patient-specific intracranial giant aneurysm that was successfully treated using a commercial flow diverter. Eight treatment scenarios with different local compressions were considered in a fully automated simulation loop. The impact on the corresponding blood flow behavior was evaluated qualitatively as well as quantitatively, and the optimal configuration for this specific case was identified. The virtual deployment of an uncompressed flow diverter reduced the inflow into the aneurysm by 24.4% compared to the untreated case. Depending on the positioning of the local stent compression below the ostium, blood flow reduction could vary between 27.3% and 33.4%. Therefore, a broad range of potential treatment outcomes was identified, illustrating the variability of a given flow diverter deployment in general. This method represents a proof of concept to automatically identify the optimal treatment for a patient in a virtual study under certain assumptions. Hence, it contributes to the improvement of virtual stenting for intracranial aneurysms and can support physicians during therapy planning in the future. Copyright © 2015 Elsevier Ltd. All rights reserved.
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Comparison of stochastic optimization methods for all-atom folding of the Trp-Cage protein.
Schug, Alexander; Herges, Thomas; Verma, Abhinav; Lee, Kyu Hwan; Wenzel, Wolfgang
2005-12-09
The performances of three different stochastic optimization methods for all-atom protein structure prediction are investigated and compared. We use the recently developed all-atom free-energy force field (PFF01), which was demonstrated to correctly predict the native conformation of several proteins as the global optimum of the free energy surface. The trp-cage protein (PDB-code 1L2Y) is folded with the stochastic tunneling method, a modified parallel tempering method, and the basin-hopping technique. All the methods correctly identify the native conformation, and their relative efficiency is discussed.
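Of the three methods, basin hopping is the most direct to sketch: random perturbations alternate with local minimizations and a Metropolis acceptance test. The rugged two-dimensional toy landscape below merely stands in for the PFF01 all-atom free-energy surface.

```python
import numpy as np
from scipy.optimize import basinhopping

def free_energy(x):
    """Toy rugged landscape standing in for an all-atom free-energy
    force field (two dihedral-like coordinates)."""
    return (x[0]**2 + x[1]**2) / 10 + np.sin(3 * x[0]) * np.cos(3 * x[1])

# Basin hopping: perturb, minimize locally, accept via Metropolis rule.
res = basinhopping(free_energy, x0=np.array([2.0, -2.0]),
                   niter=200, stepsize=1.0, seed=1)
print(res.x, round(res.fun, 4))
```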
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of using direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
Searching for transcription factor binding sites in vector spaces
2012-01-01
Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
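The vector-space idea can be sketched with a simple k-mer embedding, under loud assumptions: the paper's exact sequence representation and NPV construction may differ, and the sequences below are toys. Sites and background are embedded as normalized k-mer frequency vectors, the query vector points from the negative centroid to the positive centroid, and candidates are scored by cosine similarity.

```python
import numpy as np
from itertools import product

def kmer_vector(seq, k=3):
    """Normalized k-mer frequency vector of a DNA sequence."""
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    return v / max(v.sum(), 1)

def npv(positives, negatives, k=3):
    """Negative-to-positive vector: positive centroid minus negative."""
    pos = np.mean([kmer_vector(s, k) for s in positives], axis=0)
    neg = np.mean([kmer_vector(s, k) for s in negatives], axis=0)
    return pos - neg

def score(seq, q, k=3):
    v = kmer_vector(seq, k)
    return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12))

positives = ["TATAAT", "TATACT", "TATGAT"]   # toy binding sites
negatives = ["GCGCGC", "CCGGAA", "ACGTAC"]   # toy background sequences
q = npv(positives, negatives)
print(round(score("TTTATAATGG", q), 3), round(score("GGGCGCGCCC", q), 3))
```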
Optimized emission in nanorod arrays through quasi-aperiodic inverse design.
Anderson, P Duke; Povinelli, Michelle L
2015-06-01
We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time-invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all parameters of the lower-order model to be identified and, by definition, provides the model with the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results showed that the proposed approach outperforms other approaches and that the reduced-order model achieves a high level of accuracy.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
Optimal ranking regime analysis of TreeFlow dendrohydrological reconstructions
USDA-ARS?s Scientific Manuscript database
The Optimal Ranking Regime (ORR) method was used to identify 6-100 year time windows containing significant ranking sequences in 55 western U.S. streamflow reconstructions, and reconstructions of the level of the Great Salt Lake and San Francisco Bay salinity during 1500-2007. The method’s ability t...
Tire-road friction estimation and traction control strategy for motorized electric vehicle.
Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang
2017-01-01
In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motored wheels is proposed based on the adhesion between tire and road surface. First and foremost, the optimal longitudinal slip ratio for torque control can be identified in real time by calculating the derivative of the adhesion coefficient with respect to the slip ratio. Secondly, a vehicle speed estimation method is also presented. Thirdly, an ideal vehicle simulation model is proposed to verify the algorithm, and the simulations show that the identified slip ratio corresponds to the adhesion limit detected in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; in the meantime, it can improve the acceleration stability of an electric vehicle with a traction control system (TCS).
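The real-time identification step rests on the shape of the adhesion-slip curve: the optimal slip ratio sits where the derivative of the adhesion coefficient with respect to slip crosses zero. A minimal sketch with an assumed Burckhardt-style dry-road curve:

```python
import numpy as np

def adhesion_coefficient(slip):
    """Burckhardt-style mu-slip curve with assumed dry-asphalt parameters."""
    return 1.28 * (1 - np.exp(-24 * slip)) - 0.52 * slip

slip = np.linspace(1e-3, 0.3, 300)
mu = adhesion_coefficient(slip)

# The adhesion limit is where d(mu)/d(slip) crosses zero; in the vehicle
# this derivative is estimated online from motor torque and wheel dynamics.
dmu = np.gradient(mu, slip)
optimal = slip[np.argmin(np.abs(dmu))]
print("optimal slip ratio ~", round(float(optimal), 3),
      "peak adhesion ~", round(float(mu.max()), 3))
```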
NASA Astrophysics Data System (ADS)
Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal
2013-07-01
The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the defects are reasonably minimized by the Taguchi method, in order to achieve zero defects during the processes, a genetic algorithm technique is applied to the optimized parameters obtained by the Taguchi method.
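The Taguchi step works on signal-to-noise ratios computed from an orthogonal-array experiment; for defect counts the smaller-the-better form S/N = -10 log10(mean(y²)) applies, and the best level of each factor is the one with the highest mean S/N. The L9-style array and defect numbers below are invented for illustration.

```python
import numpy as np

# Toy L9-style design over three painting factors at three levels each.
runs = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
defects = np.array([8.0, 6.0, 9.0, 5.0, 4.0, 7.0, 6.0, 3.0, 5.0])

# Smaller-the-better S/N ratio (one observation per run).
sn = -10 * np.log10(defects**2)

for f, name in enumerate(["flowability", "coating thickness", "temperature"]):
    means = [sn[runs[:, f] == lvl].mean() for lvl in range(3)]
    print(name, "-> best level", int(np.argmax(means)), np.round(means, 2))
```

The genetic-algorithm stage would then search around these level settings for combinations that drive the defect count further toward zero.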
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell Henry
2014-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling Engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
Wang, Quanfu; Hou, Yanhua; Yan, Peisheng
2012-06-01
Statistical experimental designs were employed to optimize culture conditions for cold-adapted lysozyme production by the psychrophilic yeast Debaryomyces hansenii. In the first step of optimization using a Plackett-Burman design (PBD), peptone, glucose, temperature, and NaCl were identified as significant variables that affected lysozyme production; the formula was further optimized using a four-factor central composite design (CCD) to understand their interactions and to determine their optimal levels. A quadratic model was developed and validated. Compared to the initial level (18.8 U/mL), the maximum lysozyme production (65.8 U/mL) observed was increased approximately 3.5-fold under the optimized conditions. Cold-adapted lysozyme production was thus first optimized using statistical experimental methods, and a 3.5-fold enhancement of microbial lysozyme was gained after optimization. Such improved production will facilitate the application of microbial lysozyme, and D. hansenii lysozyme may be a good new resource for the industrial production of cold-adapted lysozymes. © 2012 Institute of Food Technologists®
Discrete particle swarm optimization for identifying community structures in signed social networks.
Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng
2014-10-01
Modern network science has provided enormous convenience for the understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm that originated from social behaviors such as bird flocking and fish schooling. PSO has been proved to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which confounds its application to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, particle status has been redesigned in discrete form so as to make PSO suitable for discrete scenarios, and particle updating rules have been reformulated to make use of the topology of the signed network. Extensive experiments compared with three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
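For the Granger-Ramanathan family, the weights are obtained by regressing the observed hydrograph on the member simulations. A minimal sketch of an unconstrained least-squares variant, together with the Nash-Sutcliffe Efficiency used for validation (matrix shapes are assumptions):

```python
import numpy as np

def gr_weights(sim, obs):
    """Unconstrained least-squares averaging weights in the spirit of the
    Granger-Ramanathan variants: sim is a (T, M) matrix of member hydrographs,
    obs a length-T observed series."""
    w, *_ = np.linalg.lstsq(sim, obs, rcond=None)
    return w

def nse(pred, obs):
    """Nash-Sutcliffe Efficiency used to score the averaged hydrograph."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# w = gr_weights(sim_cal, obs_cal)      # weights from the calibration period
# score = nse(sim_val @ w, obs_val)     # same weights applied in validation
```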
An efficient graph theory based method to identify every minimal reaction set in a metabolic network
2014-01-01
Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell compared to the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in the solution time when compared to the existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
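The paper's contribution is a recursive graph-theoretic optimization; as a concept illustration only, the brute-force sketch below enumerates every minimal reaction subset of a four-reaction toy network, using a linear program to test whether a subset sustains the demand flux at steady state.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# Toy network, columns R0: -> A, R1: A -> B, R2: -> B, R3: B -> demand;
# rows are metabolites A and B. The paper's networks are far larger.
S = np.array([[1.0, -1.0, 0.0,  0.0],
              [0.0,  1.0, 1.0, -1.0]])
DEMAND, VMIN, UB = 3, 1.0, 10.0

def supports_demand(active):
    bounds = [(0.0, UB) if j in active else (0.0, 0.0) for j in range(4)]
    bounds[DEMAND] = (VMIN, UB)                 # demand flux must be sustained
    res = linprog(np.zeros(4), A_eq=S, b_eq=np.zeros(2),
                  bounds=bounds, method="highs")
    return res.status == 0                      # steady state S v = 0 feasible

minimal = []
for k in range(1, 5):                           # enumerate by increasing size
    for combo in combinations(range(4), k):
        s = set(combo)
        if DEMAND in s and not any(m <= s for m in minimal) and supports_demand(s):
            minimal.append(s)
# minimal -> [{2, 3}, {0, 1, 3}]: the two minimal reaction sets of the toy network
```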
Structural optimization for joined-wing synthesis
NASA Technical Reports Server (NTRS)
Gallman, John W.; Kroo, Ilan M.
1992-01-01
The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
From properties to materials: An efficient and simple approach.
Huwig, Kai; Fan, Chencheng; Springborg, Michael
2017-12-21
We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as is briefly discussed in the paper.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
AIC identifies optimal representation of longitudinal dietary variables.
VanBuren, John; Cavanaugh, Joseph; Marshall, Teresa; Warren, John; Levy, Steven M
2017-09-01
The Akaike Information Criterion (AIC) is a well-known tool for variable selection in multivariable modeling as well as a tool to help identify the optimal representation of explanatory variables. However, it has been discussed infrequently in the dental literature. The purpose of this paper is to demonstrate the use of AIC in determining the optimal representation of dietary variables in a longitudinal dental study. The Iowa Fluoride Study enrolled children at birth, and dental examinations were conducted at ages 5, 9, 13, and 17. Decayed or filled surfaces (DFS) trend clusters were created based on age 13 DFS counts and age 13-17 DFS increments. Dietary intake data (water, milk, 100 percent juice, and sugar-sweetened beverages) were collected semiannually using a food frequency questionnaire. Multinomial logistic regression models were fit to predict DFS cluster membership (n=344). Multiple approaches could be used to represent the dietary data, including averaging across all collected surveys, averaging over shorter time periods to capture age-specific trends, or using the individual time points of dietary data. AIC helped identify the optimal representation. Averaging data for all four dietary variables for the whole period from age 9.0 to 17.0 provided a better representation in the multivariable full model (AIC=745.0) compared to other methods assessed in full models (AICs=750.6 for age 9 and 9-13 increment dietary measurements and AIC=762.3 for age 9, 13, and 17 individual measurements). The results illustrate that AIC can help researchers identify the optimal way to summarize information for inclusion in a statistical model. The method presented here can be used by researchers performing statistical modeling in dental research. This method provides an alternative approach for assessing the propriety of variable representation to significance-based procedures, which could potentially lead to improved research in the dental community. © 2017 American Association of Public Health Dentistry.
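The study fit multinomial logistic models; purely to illustrate how AIC arbitrates between variable representations, this sketch compares an averaged predictor against individual time points using the Gaussian OLS form of AIC on synthetic data (all numbers hypothetical).

```python
import numpy as np

def aic_gaussian_ols(X, y):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

rng = np.random.default_rng(5)
n = 344
diet = rng.random((n, 3))                      # hypothetical intake at three ages
y = diet.mean(axis=1) + 0.1 * rng.normal(size=n)

X_avg = np.column_stack([np.ones(n), diet.mean(axis=1)])  # averaged representation
X_all = np.column_stack([np.ones(n), diet])               # individual time points
print(aic_gaussian_ols(X_avg, y), aic_gaussian_ols(X_all, y))  # lower AIC wins
```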
Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A
2015-12-01
With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
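A toy illustration of the coding model (generalization of attribute values followed by suppression of records, then a k-anonymity check); the dataset, quasi-identifiers, and generalization rules are invented, and ARX's optimal search is not reproduced.

```python
import pandas as pd

df = pd.DataFrame({"age": [34, 37, 52, 55, 58, 81],
                   "zip": ["47677", "47602", "47678", "47905", "47909", "47906"],
                   "dx":  ["A", "B", "A", "C", "B", "C"]})
QI, K = ["age", "zip"], 2

def generalize(d):
    out = d.copy()
    out["age"] = (out["age"] // 20 * 20).astype(str)   # 20-year age bands
    out["zip"] = out["zip"].str[:3] + "**"             # truncated ZIP codes
    return out

def suppress(d, qi, k):
    sizes = d.groupby(qi)[qi[0]].transform("size")
    return d[sizes >= k]                               # drop small classes

anon = suppress(generalize(df), QI, K)
assert anon.groupby(QI).size().min() >= K              # k-anonymity holds
```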
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
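The maximin fitness mentioned above can be computed directly. A sketch of Balling's maximin function for leader selection (objective values assumed to be minimized; the random matrix stands in for the misfits of two tomographic data sets):

```python
import numpy as np

def maximin_fitness(F):
    """Maximin fitness for a swarm: rows of F are particles, columns are
    objectives (to be minimized). Values < 0 mark non-dominated particles;
    the particle with the smallest value is taken as the swarm leader."""
    n = F.shape[0]
    fit = np.empty(n)
    for i in range(n):
        diffs = F[i] - np.delete(F, i, axis=0)   # pairwise objective gaps
        fit[i] = np.max(np.min(diffs, axis=1))
    return fit

rng = np.random.default_rng(6)
F = rng.random((30, 2))                          # two competing data misfits
leader = int(np.argmin(maximin_fitness(F)))      # cheap leader identification
```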
Singularity in structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Guptill, J. D.; Berke, L.
1993-01-01
The conditions under which global and local singularities may arise in structural optimization are examined. Examples of these singularities are presented, and a framework is given within which the singularities can be recognized. It is shown, in particular, that singularities can be identified through the analysis of stress-displacement relations together with compatibility conditions or the displacement-stress relations derived by the integrated force method of structural analysis. Methods of eliminating the effects of singularities are suggested and illustrated numerically.
Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher
2013-10-01
This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
USDA-ARS?s Scientific Manuscript database
The Optimal Ranking Regime (ORR) method was used to identify intra- to multi-decadal (IMD) time windows containing significant ranking sequences in U.S. climate division temperature data. The simplicity of the ORR procedure’s output – a time series’ most significant non-overlapping periods of high o...
Manoharan, Prabu; Ghoshal, Nanda
2018-05-01
Traditional structure-based virtual screening methods to identify drug-like small molecules for BACE1 have so far been unsuccessful. The location of BACE1, the poor blood-brain barrier permeability of the inhibitors, and their P-glycoprotein (Pgp) susceptibility make the task even more difficult. Fragment-based drug design is suitable for the efficient optimization of initial hit molecules for a target like BACE1. We have developed a fragment-based virtual screening approach to identify and optimize fragment molecules as a starting point. This method combines the shape, electrostatic, and pharmacophoric features of known fragment molecules bound in protein conjugate crystal structures, and aims to identify chemically and energetically feasible small fragment ligands that bind to the BACE1 active site. The two top-ranked fragment hits were subjected to a 53 ns MD simulation. Principal component analysis and free energy landscape analysis reveal that the new ligands show the characteristic features of established BACE1 inhibitors. The method employed in this study may serve for the development of potential lead molecules for BACE1-directed Alzheimer's disease therapeutics.
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short-takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Sang, Jun; Sang, Jie; Ma, Qun; Hou, Xiao-Fang; Li, Cui-Qin
2017-03-01
This study aimed to extract and identify anthocyanins from Nitraria tangutorun Bobr. seed meal and to establish a green analytical method for anthocyanins. Ultrasound-assisted extraction of anthocyanins from N. tangutorun seed meal was optimized using response surface methodology. Extraction at 70°C for 32.73 min using 51.15% ethanol rendered an extract with 65.04mg/100g of anthocyanins and 947.39mg/100g of polyphenols. An in vitro antioxidant assay showed that the extract exhibited a potent DPPH radical-scavenging capacity. Eight anthocyanins in N. tangutorun seed meal were identified by HPLC-MS, and the main anthocyanin was cyanidin-3-O-(trans-p-coumaroyl)-diglucoside (18.17mg/100g). A green HPLC-DAD method was developed to analyse anthocyanins. A mixture of ethanol and a 5% (v/v) formic acid aqueous solution at a 20:80 (v/v) ratio was used as the optimized mobile phase. The method was accurate, stable and reliable and could be used to investigate anthocyanins from N. tangutorun seed meal. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yan, Gang; Zhou, Li
2018-02-21
This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
An approach to optimal semi-active control of vibration energy harvesting based on MEMS
NASA Astrophysics Data System (ADS)
Rojas, Rafael A.; Carcaterra, Antonio
2018-07-01
In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective function is the absorption of the maximum amount of energy in a given time interval from a vibrating environment. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is an optimal control technique known as Krotov's method, which has not yet been applied to engineering problems except in quantum dynamics. This approach leads to the identification of new maximum bounds on energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic and capacitive circuits.
Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek
2016-03-01
Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) in providing a more uniform coverage for a much larger network in fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them.
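A bare-bones sketch of the two computational ingredients, Latin-Hypercube sampling and PCA via SVD; the sampled matrix here is a uniform stand-in, whereas the paper's modified LHS samples within the FBA-optimal polytope.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, d):
    """One stratified sample per row: each column is a jittered permutation."""
    u = np.empty((n, d))
    for j in range(d):
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

# Stand-in for flux vectors sampled from the optimal space of a small model
V = latin_hypercube(500, 8)
Vc = V - V.mean(axis=0)
U, s, Wt = np.linalg.svd(Vc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # variance captured by each component
scores = Vc @ Wt.T                    # sample coordinates on the principal axes
```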
Study of Flapping Flight Using Discrete Vortex Method Based Simulations
NASA Astrophysics Data System (ADS)
Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.
2013-12-01
In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For a sustained and high-endurance flight with larger payload-carrying capacity, we need to identify simple and efficient flapping kinematics. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations for the study of flapping flight. Our results highlight that simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) would produce sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi
2018-06-05
Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
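As a hedged sketch of the kind of network described (the paper's exact depth, input size, and training protocol are not reproduced here), a minimal Keras binary classifier over phase-contrast image patches might look like this:

```python
import tensorflow as tf

# Hypothetical 128x128 grayscale patches; output is P(endothelial),
# trained against CD31 immunofluorescence labels.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_patches, cd31_labels, epochs=10, validation_split=0.2)
```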
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
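The paper uses proper Bayesian mixture priors; as a crude stand-in, the sketch below weights all subsets of a few toy storage factors by the common BIC approximation to posterior model probabilities and computes posterior inclusion probabilities (data and factors invented).

```python
import numpy as np
from itertools import combinations

def bic(X, y):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(11)
n = 60
Z = rng.random((n, 4))                              # toy storage factors
y = 2 * Z[:, 0] - Z[:, 2] + 0.1 * rng.normal(size=n)

models, scores = [], []
for k in range(5):                                  # all subsets of 4 factors
    for idx in combinations(range(4), k):
        X = np.column_stack([np.ones(n)] + [Z[:, j] for j in idx])
        models.append(idx)
        scores.append(bic(X, y))
w = np.exp(-0.5 * (np.array(scores) - min(scores)))
w /= w.sum()                                        # approx. model probabilities
incl = [sum(w[m] for m, mod in enumerate(models) if j in mod) for j in range(4)]
```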
Trescher, Saskia; Münchmeyer, Jannes; Leser, Ulf
2017-03-27
Gene regulation is one of the most important cellular processes, indispensable for the adaptability of organisms and closely interlinked with several classes of pathogenesis and their progression. Elucidation of regulatory mechanisms can be approached by a multitude of experimental methods, yet integration of the resulting heterogeneous, large, and noisy data sets into comprehensive and tissue or disease-specific cellular models requires rigorous computational methods. Recently, several algorithms have been proposed which model genome-wide gene regulation as sets of (linear) equations over the activity and relationships of transcription factors, genes and other factors. Subsequent optimization finds those parameters that minimize the divergence of predicted and measured expression intensities. In various settings, these methods produced promising results in terms of estimating transcription factor activity and identifying key biomarkers for specific phenotypes. However, despite their common root in mathematical optimization, they vastly differ in the types of experimental data being integrated, the background knowledge necessary for their application, the granularity of their regulatory model, the concrete paradigm used for solving the optimization problem and the data sets used for evaluation. Here, we review five recent methods of this class in detail and compare them with respect to several key properties. Furthermore, we quantitatively compare the results of four of the presented methods based on publicly available data sets. The results show that all methods seem to find biologically relevant information. However, we also observe that the mutual result overlaps are very low, which contradicts biological intuition. Our aim is to raise further awareness of the power of these methods, yet also to identify common shortcomings and necessary extensions enabling focused research on the critical points.
Optimization of multi-stage dynamic treatment regimes utilizing accumulated data.
Huang, Xuelin; Choi, Sangbum; Wang, Lu; Thall, Peter F
2015-11-20
In medical therapies involving multiple stages, a physician's choice of a subject's treatment at each stage depends on the subject's history of previous treatments and outcomes. The sequence of decisions is known as a dynamic treatment regime or treatment policy. We consider dynamic treatment regimes in settings where each subject's final outcome can be defined as the sum of longitudinally observed values, each corresponding to a stage of the regime. Q-learning, which is a backward induction method, is used to first optimize the last stage treatment then sequentially optimize each previous stage treatment until the first stage treatment is optimized. During this process, model-based expectations of outcomes of late stages are used in the optimization of earlier stages. When the outcome models are misspecified, bias can accumulate from stage to stage and become severe, especially when the number of treatment stages is large. We demonstrate that a modification of standard Q-learning can help reduce the accumulated bias. We provide a computational algorithm, estimators, and closed-form variance formulas. Simulation studies show that the modified Q-learning method has a higher probability of identifying the optimal treatment regime even in settings with misspecified models for outcomes. It is applied to identify optimal treatment regimes in a study for advanced prostate cancer and to estimate and compare the final mean rewards of all the possible discrete two-stage treatment sequences. Copyright © 2015 John Wiley & Sons, Ltd.
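A minimal sketch of standard two-stage Q-learning with linear Q-models on synthetic data (the paper proposes a modification of this baseline to reduce accumulated bias; the variable layout and data-generating model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
h1 = rng.normal(size=n)                          # stage-1 history
a1 = rng.integers(0, 2, n)                       # stage-1 treatment
h2 = h1 + 0.5 * a1 + rng.normal(0, 0.5, n)       # stage-2 history
a2 = rng.integers(0, 2, n)                       # stage-2 treatment
y1 = h1 + a1 * (h1 > 0) + rng.normal(0, 0.3, n)  # stage-wise rewards
y2 = h2 + a2 * (h2 < 0) + rng.normal(0, 0.3, n)

def design(h, a):
    return np.column_stack([np.ones_like(h), h, a, h * a])

# Backward induction: fit stage 2 first, then propagate its optimal value
b2, *_ = np.linalg.lstsq(design(h2, a2), y2, rcond=None)
q2 = lambda h, a: design(h, np.full_like(h, a)) @ b2
pseudo = y1 + np.maximum(q2(h2, 0), q2(h2, 1))   # reward + optimal future value
b1, *_ = np.linalg.lstsq(design(h1, a1), pseudo, rcond=None)

stage1_rule = lambda h: (b1[2] + b1[3] * h > 0).astype(int)  # treat iff benefit > 0
```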
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations
Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon
2016-01-01
Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364
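The least-squares step can be sketched as follows: for each candidate placement, fit an affine map from the two sensor outputs to the flexion/abduction angles and score the placement by its RMS residual; the crosstalk matrices and noise level below are invented to show why a low-crosstalk placement calibrates best.

```python
import numpy as np

rng = np.random.default_rng(8)

def placement_error(sensors, angles):
    """RMS residual of an affine least-squares map from two bending-sensor
    outputs to CMC flexion/abduction angles at one candidate placement."""
    S = np.column_stack([sensors, np.ones(len(sensors))])
    W, *_ = np.linalg.lstsq(S, angles, rcond=None)
    return float(np.sqrt(np.mean((S @ W - angles) ** 2)))

angles = rng.uniform(0, 60, size=(25, 2))        # scanned postures (degrees)
noise = lambda: rng.normal(scale=1.0, size=(25, 2))
low_xtalk = angles @ np.array([[1.0, 0.1], [0.1, 1.0]]) + noise()
high_xtalk = angles @ np.array([[1.0, 0.9], [0.9, 1.0]]) + noise()

# Crosstalk amplifies sensor noise when inverted, so the first placement wins
print(placement_error(low_xtalk, angles), placement_error(high_xtalk, angles))
```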
IDENTIFICATION OF SEDIMENT SOURCE AREAS WITHIN A WATERSHED
Two methods, one using a travel time approach and the other based on optimization techniques, were developed to identify sediment generating areas within a watershed. Both methods rely on hydrograph and sedimentograph data collected at the mouth of the watershed. Data from severa...
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. Holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
Joint optimization of regional water-power systems
NASA Astrophysics Data System (ADS)
Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter
2016-06-01
Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers into an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.
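The study solves the full problem with stochastic dual dynamic programming; as a much-reduced illustration of the underlying recursion, the sketch below runs a plain stochastic dynamic program for one reservoir that trades hydropower value against irrigation benefit (grid, prices, capacities, and inflow scenarios are all invented).

```python
import numpy as np

S = np.linspace(0, 100, 51)              # storage grid (hm^3)
inflows = np.array([10.0, 20.0, 30.0])   # equiprobable inflow scenarios
T, p_power, p_irr, turb_cap = 12, 1.0, 0.8, 25.0

V = np.zeros(len(S))                     # terminal value function
for t in range(T):                       # backward recursion over stages
    V_new = np.empty(len(S))
    for i, s in enumerate(S):
        best = -np.inf
        for r in np.linspace(0, 40, 21):           # total release decision
            vals = []
            for q in inflows:
                s_next = np.clip(s + q - r, 0, 100)
                j = np.argmin(np.abs(S - s_next))  # nearest grid point
                # split release between turbines and irrigation
                benefit = p_power * min(r, turb_cap) + p_irr * max(r - turb_cap, 0)
                vals.append(benefit + V[j])
            best = max(best, float(np.mean(vals)))
        V_new[i] = best
    V = V_new
# V[i] now approximates the expected value of starting with storage S[i]
```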
An ant colony optimization based algorithm for identifying gene regulatory elements.
Liu, Wei; Chen, Hanwu; Chen, Ling
2013-08-01
Identifying regulatory elements in gene sequences is one of the most important tasks in bioinformatics. Most existing algorithms for identifying regulatory elements are inclined to converge to a local optimum and have high time complexity. Ant Colony Optimization (ACO) is a meta-heuristic method based on swarm intelligence, derived from a model inspired by the collective foraging behavior of real ants. Taking advantage of ACO traits such as self-organization and robustness, this paper designs and implements an ACO-based algorithm named ACRI (ant-colony-regulatory-identification) for identifying all possible transcription factor binding sites in the upstream regions of co-expressed genes. To accelerate the ants' searching process, a local optimization strategy is presented to adjust the ants' start positions on the searched sequences. By exploiting the powerful optimization ability of ACO, the algorithm ACRI can not only improve the precision of the results but also achieve very high speed. Experimental results on real-world datasets show that ACRI can outperform traditional algorithms in both speed and quality of solutions. Copyright © 2013 Elsevier Ltd. All rights reserved.
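ACRI itself is not reproduced here; as a generic illustration of ACO applied to motif discovery, the toy sketch below lets ants pick one start site per sequence, guided by pheromone, and reinforces the best consensus-scoring choice (sequences, scoring, and parameters are all hypothetical).

```python
import numpy as np

rng = np.random.default_rng(9)

seqs = ["ATGCTTACGTGATC", "CCTACGTGATTACA", "GGATTACGTGAACC"]  # toy promoters
W = 6                                                          # motif width

def consensus_score(starts):
    cols = zip(*(s[p:p + W] for s, p in zip(seqs, starts)))
    return sum(max(col.count(b) for b in "ACGT") for col in cols)

n_pos = [len(s) - W + 1 for s in seqs]
tau = [np.ones(m) for m in n_pos]          # pheromone on candidate start sites
best, best_score = None, -1
for _ in range(200):
    for _ant in range(10):
        starts = [int(rng.choice(m, p=t / t.sum())) for m, t in zip(n_pos, tau)]
        sc = consensus_score(starts)
        if sc > best_score:
            best, best_score = starts, sc
    for t in tau:
        t *= 0.9                           # pheromone evaporation
    for t, p in zip(tau, best):
        t[p] += 1.0                        # reinforce the best-so-far sites

motif = [s[p:p + W] for s, p in zip(seqs, best)]   # recovered binding sites
```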
Large scale nonlinear programming for the optimization of spacecraft trajectories
NASA Astrophysics Data System (ADS)
Arrieta-Camacho, Juan Jose
Despite the availability of high-fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates on energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can handle equality and inequality constraints explicitly throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretizing the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solving the resulting Nonlinear Programming (NLP) problem using an interior point method, which does not suffer from the performance bottleneck associated with identifying the active set, as required by sequential quadratic programming methods; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile; it can be applied to a variety of aerospace problems such as homing, guidance, and aircraft collision avoidance, and it is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with lunar and solar attraction. Another example considers the optimization of a multiple asteroid rendezvous problem. In both cases, the ability of our proposed methodology to consider non-standard objective functions and constraints is illustrated. Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large-scale, nonlinear dynamic models.
Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao
This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter battery storage systems (BSS). In the proposed method, a linear program is first formulated using only typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of its power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for several commercial buildings and utility rate structures that are representative of those found in the various regions of the continental United States. The key factors that affect the economic benefits and optimal size have been identified. The proposed methods and case study results can not only help commercial and industrial customers or battery vendors evaluate and size storage systems for behind-the-meter applications, but can also assist utilities and policy makers in designing electricity rates or subsidies to promote the development of energy storage.
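A minimal sketch of the first step described above: a linear program that schedules a fixed-size behind-the-meter battery against an hourly energy price. The load profile, prices, and battery limits are invented placeholders; demand charges, export restrictions, and the sizing trade-off are omitted for brevity.

    import numpy as np
    from scipy.optimize import linprog

    T = 24
    price = np.full(T, 0.08); price[9:18] = 0.25        # $/kWh, peak 9:00-18:00
    E_cap, P_cap, eff = 200.0, 50.0, 0.95               # kWh, kW, one-way efficiency

    # decision variables: charge c_t >= 0 and discharge d_t >= 0 for each hour
    cost = np.concatenate([price, -price])              # buying costs, discharging saves
    A_ub, b_ub = [], []
    for t in range(T):                                  # keep state of charge in [0, E_cap]
        row = np.zeros(2 * T)
        row[:t + 1] = eff                               # energy stored by charging
        row[T:T + t + 1] = -1.0 / eff                   # energy drained by discharging
        A_ub.append(row);  b_ub.append(E_cap)           # SOC <= E_cap
        A_ub.append(-row); b_ub.append(0.0)             # SOC >= 0
    res = linprog(cost, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, P_cap)] * (2 * T))
    print("daily arbitrage saving: $%.2f" % -res.fun)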
Usha, Rajamanickam; Mala, Krishnaswami Kanjana; Venil, Chidambaram Kulandaisamy; Palaniswamy, Muthusamy
2011-01-01
Marine actinomycetes were isolated from sediment samples collected from the Pitchavaram mangrove ecosystem situated along the southeast coast of India. The maximum actinomycete population was noted in the rhizosphere region. About 38% of the isolates produced L-asparaginase. One potential strain, KUA106, produced a higher level of enzyme using tryptone glucose yeast extract medium. Based on the studied phenotypic characteristics, strain KUA106 was identified as Streptomyces parvulus KUA106. An optimization method combining the Plackett-Burman design, a factorial design, and the response surface method was used to optimize the medium for the production of L-asparaginase by Streptomyces parvulus. Four medium factors were screened from eleven by Plackett-Burman design experiments, and a subsequent optimization using a central composite design was performed to find the optimum values of the selected parameters. A medium based on asparagine, tryptone, dextrose, and NaCl was found to be the best for L-asparaginase production. The combined optimization method described here is an effective method for screening medium factors as well as determining their optimum levels for the production of L-asparaginase by Streptomyces parvulus KUA106.
Ahlawat, Sonika; Sharma, Rekha; Maitra, A.; Roy, Manoranjan; Tantia, M.S.
2014-01-01
New, quick, and inexpensive methods for genotyping novel caprine Fec gene polymorphisms through tetra-primer ARMS PCR were developed in the present investigation. Single nucleotide polymorphism (SNP) genotyping needs to be attempted to establish associations between the identified mutations and traits of economic importance. In the current study, we successfully genotyped three new SNPs identified in caprine fecundity genes, viz. T(-242)C (BMPR1B), G1189A (GDF9), and G735A (BMP15). A tetra-primer ARMS PCR protocol was optimized and validated for these SNPs with a short turnaround time and low cost. The optimized techniques were tested on 158 random samples of the Black Bengal goat breed. Samples with known genotypes for the described genes, previously tested in duplicate using sequencing methods, were employed for validation of the assay. Upon validation, complete concordance was observed between the tetra-primer ARMS PCR assays and the sequencing results. These results highlight the ability of tetra-primer ARMS PCR for genotyping mutations in Fec genes. Any associated SNP could be used to accelerate the improvement of goat reproductive traits by identifying highly prolific animals at an early stage of life. Our results provide direct evidence that tetra-primer ARMS-PCR is a rapid, reliable, and cost-effective method for SNP genotyping of mutations in caprine Fec genes. PMID:25606428
Sustaining Enthusiasm in the Classroom: Reinvestment Strategies that Work
ERIC Educational Resources Information Center
Poczwardowski, Artur; Grosshans, Onie; Trunnell, Eric
2003-01-01
Objective: To identify reinvestment strategies of 11 senior health-education faculty from 3 degree programs. Methods: Data from individual, in-depth interviews were inductively analyzed for content. Results: The identified strategies grouped around 6 themes: growth and success in work, realization of an optimal fit into profession, investment into…
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
Method of optimization onboard communication network
NASA Astrophysics Data System (ADS)
Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.
2018-02-01
In this article, optimization levels for the onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for modeling the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure, based on the principles and ideas of binary programming. It is shown that the binary programming technique yields an inherently optimal solution for avionics tasks. An example applying the proposed approach to the problem of device assignment in an OCN is considered.
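A hedged illustration of the binary-programming idea on a toy device-assignment instance: each device is assigned to exactly one switch (a set of 0/1 choices), enumerated exhaustively here in place of a binary-programming solver. Device loads and switch capacities are invented placeholders.

    from itertools import product

    devices = {"radar": 40, "nav": 25, "comms": 35}      # bandwidth units (placeholders)
    capacity = {"sw1": 70, "sw2": 70}

    best, best_cost = None, float("inf")
    for assign in product(capacity, repeat=len(devices)):   # all 0/1 assignment patterns
        load = {s: 0 for s in capacity}
        for dev, sw in zip(devices, assign):
            load[sw] += devices[dev]
        if any(load[s] > capacity[s] for s in capacity):
            continue                                         # infeasible assignment
        cost = max(load.values())                            # balance the network load
        if cost < best_cost:
            best, best_cost = dict(zip(devices, assign)), cost
    print(best, best_cost)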
Kandadai, Venk; Yang, Haodong; Jiang, Ling; Yang, Christopher C; Fleisher, Linda; Winston, Flaura Koplin
2016-05-05
Little is known about the ability of individual stakeholder groups to achieve health information dissemination goals through Twitter. This study aimed to develop and apply methods for the systematic evaluation and optimization of health information dissemination by stakeholders through Twitter. Tweet content from 1790 followers of @SafetyMD (July-November 2012) was examined. User emphasis, a new indicator of Twitter information dissemination, was defined and applied to retweets across two levels of retweeters originating from @SafetyMD. User interest clusters were identified based on principal component analysis (PCA) and hierarchical cluster analysis (HCA) of a random sample of 170 followers. User emphasis of keywords remained across levels but decreased by 9.5 percentage points. PCA and HCA identified 12 statistically unique clusters of followers within the @SafetyMD Twitter network. This study is one of the first to develop methods for use by stakeholders to evaluate and optimize their use of Twitter to disseminate health information. Our new methods provide preliminary evidence that individual stakeholders can evaluate the effectiveness of health information dissemination and create content-specific clusters for more specific targeted messaging.
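The PCA-plus-HCA clustering step can be sketched as follows on a synthetic follower-by-keyword count matrix (random placeholder data, not the @SafetyMD corpus): five principal components feed a Ward-linkage hierarchical clustering cut into 12 clusters, mirroring the cluster count reported above.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = rng.poisson(2.0, size=(170, 40)).astype(float)   # followers x keyword counts

    Xc = X - X.mean(axis=0)                              # center before PCA
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:5].T                               # first 5 principal components

    Z = linkage(scores, method="ward")                   # hierarchical clustering on PC scores
    labels = fcluster(Z, t=12, criterion="maxclust")
    print("cluster sizes:", np.bincount(labels)[1:])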
Improving the Dynamic Characteristics of Body-in-White Structure Using Structural Optimization
Yahaya Rashid, Aizzat S.; Mohamed Haris, Sallehuddin; Alias, Anuar
2014-01-01
The dynamic behavior of a body-in-white (BIW) structure has significant influence on the noise, vibration, and harshness (NVH) and crashworthiness of a car. Therefore, by improving the dynamic characteristics of the BIW, problems and failures associated with resonance and fatigue can be prevented. The design objective is to improve the existing torsion and bending modes using structural optimization subject to dynamic load, without compromising other factors such as the mass and stiffness of the structure. The natural frequency of the design was modified by identifying and reinforcing the structure at critical locations. These crucial points are first identified by topology optimization using mass and natural frequencies as the design variables. The individual components obtained from the analysis then go through a size optimization step to find the target thickness of each component, and the thickness of the affected regions is modified according to the analysis. The results of both optimization steps suggest several design modifications that achieve the target vibration specifications without compromising the stiffness of the structure. A method of combining both optimization approaches is proposed to improve the design modification process. PMID:25101312
Learning optimal embedded cascades.
Saberian, Mohammad Javad; Vasconcelos, Nuno
2012-10-01
The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
Tolrà, R P; Alonso, R; Poschenrieder, C; Barceló, D; Barceló, J
2000-08-11
Liquid chromatography-atmospheric pressure chemical ionization mass spectrometry was used to identify glucosinolates in plant extracts. Optimization of the analytical conditions and determination of the method detection limit were performed using commercial 2-propenylglucosinolate (sinigrin). Optimal values for the following parameters were determined: nebulization pressure, gas temperature, drying gas flow, capillary voltage, corona current, and fragmentor conditions. The method detection limit for sinigrin was 2.85 ng. For validation of the method, the glucosinolates in reference material (rapeseed) from the Community Bureau of Reference Materials (BCR) were analyzed. The method was applied to the determination of glucosinolates in Thlaspi caerulescens plants.
Method of determining the optimal dilution ratio for fluorescence fingerprint of food constituents.
Trivittayasil, Vipavee; Tsuta, Mizuki; Kokawa, Mito; Yoshimura, Masatoshi; Sugiyama, Junichi; Fujita, Kaori; Shibata, Mario
2015-01-01
Quantitative determination by fluorescence spectroscopy is possible because of the linear relationship between the intensity of emitted fluorescence and the fluorophore concentration. However, concentration quenching may cause the relationship to become nonlinear, and thus, the optimal dilution ratio has to be determined. In the case of fluorescence fingerprint (FF) measurement, fluorescence is measured under multiple wavelength conditions and a method of determining the optimal dilution ratio for multivariate data such as FFs has not been reported. In this study, the FFs of mixed solutions of tryptophan and epicatechin of different concentrations and composition ratios were measured. Principal component analysis was applied, and the resulting loading plots were found to contain useful information about each constituent. The optimal concentration ranges could be determined by identifying the linear region of the PC score plotted against total concentration.
Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.
NASA Astrophysics Data System (ADS)
Dobson, B.; Pianosi, F.; Wagener, T.
2016-12-01
Extensive scientific literature exists on the study of how operation decisions in water resource systems can be made more effectively through the use of optimization methods. However, to the best of the authors' knowledge, there is little in the literature on the implementation of these optimization methods by practitioners. We have performed a survey among UK reservoir operators to assess the current state of method implementation in practice. We also ask questions to assess the potential for implementation of operation optimization. This will help academics to target industry in their current research, identify any misconceptions in industry about the area and open new branches of research for which there is an unsatisfied demand. The UK is a good case study because the regulatory framework is changing to impose "no build" solutions for supply issues, as well as planning across entire water resource systems rather than individual components. Additionally there is a high appetite for efficiency due to the water industry's privatization and most operators are part of companies that control multiple water resources, increasing the potential for cooperation and coordination.
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
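A minimal constant-velocity Kalman filter of the kind the patent describes for smoothing streak detections into a trajectory estimate; the dynamics, noise levels, and synthetic measurements below are invented placeholders, not the patented processing chain.

    import numpy as np

    dt = 0.01
    F = np.array([[1, dt], [0, 1]])            # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = 1e-4 * np.eye(2)                       # process noise
    R = np.array([[0.05]])                     # measurement noise

    x = np.array([[0.0], [0.0]]); P = np.eye(2)
    rng = np.random.default_rng(1)
    for k in range(100):
        z = np.array([[900.0 * k * dt + rng.normal(0, 0.2)]])   # noisy position fix
        x, P = F @ x, F @ P @ F.T + Q                           # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain
        x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P     # update
    print("estimated velocity: %.1f m/s" % x[1, 0])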
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
Reverse engineering time discrete finite dynamical systems: a feasible undertaking?
Delgado-Eckert, Edgar
2009-01-01
With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order, a technicality imposed by the use of Gröbner-bases calculations. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this formula converges to zero rapidly as n grows. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.
Optimal Ranking Regime Analysis of TreeFlow Dendrohydrological Reconstructions
NASA Astrophysics Data System (ADS)
Mauget, S. A.
2017-12-01
The Optimal Ranking Regime (ORR) method was used to identify 6-100 year time windows containing significant ranking sequences in 55 western U.S. streamflow reconstructions, and in reconstructions of the level of the Great Salt Lake and San Francisco Bay salinity, during 1500-2007. The method's ability to identify optimally significant and non-overlapping runs of low and high rankings allows it to re-express a reconstruction time series as a simplified sequence of regime segments marking intra- to multi-decadal (IMD) periods of low or high streamflow, lake level, or salinity. Those ORR sequences, referred to here as Z-lines, can be plotted to identify consistent regime patterns in the analysis of numerous reconstructions. The Z-lines for the 57 reconstructions evaluated here show a common pattern of IMD cycles of drought and pluvial periods during the late 16th and 17th centuries, a relatively dormant period during the 18th century, and the reappearance of alternating dry and wet IMD periods during the 19th and early 20th centuries. Although this pattern suggests the possibility of similarly active and inactive oceanic modes in the North Pacific and North Atlantic, such centennial-scale patterns are not evident in the ORR analyses of reconstructed Pacific Decadal Oscillation (PDO), El Niño-Southern Oscillation, and North Atlantic sea-surface temperature variation. But given the inconsistency in the analyses of four PDO reconstructions, the possible role of centennial-scale oceanic mechanisms is uncertain. In future research the ORR method might be applied to climate reconstructions around the Pacific Basin to try to resolve this uncertainty. Given its ability to compare regime patterns in climate reconstructions derived using different methods and proxies, the method may also be used in future research to evaluate long-term regional temperature reconstructions.
Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.
Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting
2017-06-01
Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method presents improvements in terms of efficiency and convergence over the original DE algorithm and constant stepwise population reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting for a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. In combining with self-adaptive schemes for mutation and crossover constants, this adaptive population reduction method can help shed light on the future direction of a completely parameter tune-free self-adaptive DE algorithm.
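The following Python toy shows the flavor of the approach: a standard DE/rand/1/bin loop whose population shrinks only once the search has entered its exploitation stage. The shrink trigger used here (population spread) is an invented placeholder for the authors' continuous reduction schedule, and the benchmark is a simple sphere function.

    import numpy as np

    def de(f, dim=5, np0=60, np_min=10, iters=200, F=0.7, CR=0.9, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-5, 5, (np0, dim))
        fit = np.array([f(p) for p in pop])
        for _ in range(iters):
            n = len(pop)
            for i in range(n):
                a, b, c = pop[rng.choice(n, 3, replace=False)]
                trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
                ft = f(trial)
                if ft < fit[i]:                     # greedy selection
                    pop[i], fit[i] = trial, ft
            # shrink only once the population has started to exploit (small spread)
            if len(pop) > np_min and pop.std() < 1.0:
                keep = np.argsort(fit)[: max(np_min, int(0.9 * n))]
                pop, fit = pop[keep], fit[keep]
        return pop[np.argmin(fit)], fit.min()

    sphere = lambda x: float(np.sum(x ** 2))
    print(de(sphere))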
Motion prediction of a non-cooperative space target
NASA Astrophysics Data System (ADS)
Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan
2018-01-01
Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the target's motion information is the premise of realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, dynamic parameters of the target, such as its inertia, angular momentum, and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be acquired by substituting these identified parameters into the Euler's equations of the target. Accurate prediction needs precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method is based on two steps: (1) a rough estimate of the parameters is computed using motion observations of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm, which starts at the rough estimate obtained in the first step and finds a global minimum of the objective function with the guidance of the objective function's gradient. The search for the global minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
Study on feed forward neural network convex optimization for LiFePO4 battery parameters
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Based on the LiFePO4 battery used in modern facility-agriculture automatic walking equipment, the parameter identification of the LiFePO4 battery is analyzed. An improved method for the process model of the lithium battery is proposed, and an on-line estimation algorithm is presented. The parameters of the battery are identified using a feed-forward neural network convex optimization algorithm.
NASA Astrophysics Data System (ADS)
Xie, Fengle; Jiang, Zhansi; Jiang, Hui
2018-05-01
This paper presents a multi-damage identification method for cantilever beams. First, the damage locations are identified using the mode shape curvatures. Second, samples of varying damage severities at the damage locations and their corresponding natural frequencies are used to construct the initial Kriging surrogate model. Then a particle swarm optimization (PSO) algorithm is employed to identify the damage severities based on the Kriging surrogate model. A simulation study of a double-damaged cantilever beam demonstrated that the proposed method is effective.
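A sketch of the surrogate-plus-PSO loop in Python: an RBF interpolator stands in for the Kriging surrogate (mapping damage severity to natural frequency), and a bare-bones PSO searches it for the severity matching a "measured" frequency. All training samples and constants are invented placeholders.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    sev = np.linspace(0.05, 0.5, 10)[:, None]            # candidate damage severities
    freq = 100.0 * (1.0 - 0.8 * sev.ravel())             # toy natural-frequency samples
    surrogate = RBFInterpolator(sev, freq)               # stand-in for the Kriging model

    measured = 68.0                                      # "observed" frequency
    obj = lambda s: (surrogate(np.atleast_2d(s))[0] - measured) ** 2

    rng = np.random.default_rng(2)
    x = rng.uniform(0.05, 0.5, 30); v = np.zeros(30)     # 30 particles
    pb, pbf = x.copy(), np.array([obj(xi) for xi in x])
    for _ in range(60):
        g = pb[pbf.argmin()]                             # global best
        v = 0.7 * v + 1.5 * rng.random(30) * (pb - x) + 1.5 * rng.random(30) * (g - x)
        x = np.clip(x + v, 0.05, 0.5)
        f = np.array([obj(xi) for xi in x])
        better = f < pbf
        pb[better], pbf[better] = x[better], f[better]
    print("identified severity: %.3f" % pb[pbf.argmin()])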
Full space device optimization for solar cells.
Baloch, Ahmer A B; Aly, Shahzada P; Hossain, Mohammad I; El-Mellouhi, Fedwa; Tabet, Nouar; Alharbi, Fahhad H
2017-09-20
Advances in computational materials science have paved the way to design efficient solar cells by identifying the optimal properties of the device layers. Conventionally, device optimization has been governed by single or double descriptors for an individual layer, mostly the absorbing layer. However, the performance of the device depends collectively on all the properties of the material and the geometry of each layer in the cell. To address this issue of multi-property optimization and to avoid the paradigm of reoccurring materials in the solar cell field, a full space material-independent optimization approach is developed and presented in this paper. The method is employed to obtain an optimized material data set for maximum efficiency and for targeted functionality for each layer. To ensure the robustness of the method, two cases are studied, namely perovskite solar cell device optimization and a cadmium-free CIGS solar cell. The implementation determines the desirable optoelectronic properties of transport mediums and contacts that can maximize the efficiency for both cases. The resulting data sets of material properties can be matched with those in materials databases or by further microscopic material design. Moreover, the presented multi-property optimization framework can be extended to design any solid-state device.
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and the existing methods ignore the impact of parameter uncertainty on the system's instantaneous performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer, and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio, and stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
Ray, Chad A; Patel, Vimal; Shih, Judy; Macaraeg, Chris; Wu, Yuling; Thway, Theingi; Ma, Mark; Lee, Jean W; Desilva, Binodh
2009-02-20
Developing a process that generates robust immunoassays that can be used to support studies with tight timelines is a common challenge for bioanalytical laboratories. Design of experiments (DOE) is a tool that has been used by many industries for the purpose of optimizing processes. The approach is capable of identifying critical factors and their interactions with a minimal number of experiments. The challenge for implementing this tool in the bioanalytical laboratory is to develop a user-friendly approach that scientists can understand and apply. We have successfully addressed these challenges by eliminating the screening design, introducing automation, and applying a simple mathematical approach for the output parameter. A modified central composite design (CCD) was applied to three ligand binding assays. The intra-plate factors selected were coating concentration, detection antibody concentration, and streptavidin-HRP concentration. The inter-plate factors included incubation times for each step. The objective was to maximize log(S/B), the log of the signal-to-blank ratio of the low standard. The maximum desirable conditions were determined using JMP 7.0. To verify the validity of the predictions, the log(S/B) prediction was compared against the observed log(S/B) during pre-study validation experiments. The three assays were optimized using the multi-factorial DOE. The total error for all three methods was less than 20%, which indicated method robustness. DOE identified interactions in one of the methods. The model predictions for log(S/B) were within 25% of the observed pre-study validation values for all methods tested. The comparison between the CCD and a hybrid screening design yielded comparable parameter estimates. The user-friendly design enables effective application of multi-factorial DOE to optimize ligand binding assays for therapeutic proteins. The approach allows for identification of interactions between factors, consistency in optimal parameter determination, and reduced method development time.
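For readers unfamiliar with CCDs, the sketch below builds a three-factor central composite design in coded units and fits a quadratic response surface to placeholder responses; it illustrates the design geometry (factorial, axial, and center points), not the authors' JMP analysis.

    import numpy as np
    from itertools import product

    factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # 8 corner runs
    axial = np.vstack([v * row for v in (-1.682, 1.682)
                       for row in np.eye(3)])                              # 6 star runs
    center = np.zeros((3, 3))                                              # 3 center runs
    X = np.vstack([factorial, axial, center])                              # 17 runs total

    y = 1.0 - 0.1 * (X ** 2).sum(axis=1) + 0.05 * X[:, 0]   # fake log(S/B) surface

    # quadratic model: intercept, linear, and squared terms (interactions omitted)
    M = np.column_stack([np.ones(len(X)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    print("stationary point (coded units):", -coef[1:4] / (2 * coef[4:7]))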
Edge detection and mathematic fitting for corneal surface with Matlab software.
Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na
2017-01-01
To select the optimal edge detection methods to identify the corneal surface and to compare three fitting curve equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identifying methods (ginput and getpts) were applied to identify the edge coordinates respectively. The differences among these methods were compared. A binomial curve (y = Ax^2 + Bx + C), a polynomial curve [p(x) = p1*x^n + p2*x^(n-1) + ... + pn*x + p(n+1)], and a conic section (Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0) were used for curve fitting the corneal surface respectively. The relative merits of the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could calculate the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values from corneal topography and the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve with Matlab software. Edge detection has better repeatability and higher efficiency. The manual identifying approach is an indispensable complement for detection. Polynomial and conic section are both alternative methods for corneal curve fitting. The conic curve was the optimal choice based on its specific geometrical properties.
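An equivalent pipeline can be sketched in Python (rather than Matlab) with scikit-image: Canny edge detection on a synthetic image with a parabolic boundary, followed by a binomial (quadratic) fit to the detected edge points. The image and parameters are invented placeholders, not OCT data.

    import numpy as np
    from skimage import feature

    yy, xx = np.mgrid[0:200, 0:200]
    img = (yy > 0.002 * (xx - 100) ** 2 + 40).astype(float)   # parabolic boundary

    edges = feature.canny(img, sigma=2.0)                     # Canny edge map
    ys, xs = np.nonzero(edges)
    A, B, C = np.polyfit(xs, ys, 2)                           # y = Ax^2 + Bx + C
    print("fitted curvature coefficient A = %.4f" % A)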
NASA Astrophysics Data System (ADS)
Rangarajan, Ramsharan; Gao, Huajian
2015-09-01
We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.
Identifiability and identification of trace continuous pollutant source.
Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao
2014-01-01
Accidental pollution events often threaten people's health and lives, and identifying the pollutant source promptly is necessary so that remedial actions can be taken. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous emission pollutant source in an enclosed space. The location probability model is set up first, and then identification is realized by searching for the global optimal objective value of the location probability. In order to discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced in order to quantitatively analyze the impact of the velocity field on the identification performance. Based on this concept, several simulation cases were conducted, and the application conditions of the method were obtained from the simulation studies. In order to verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The result showed that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions.
Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.
Wong, Christopher Yee; Mills, James K
2017-03-01
Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective is to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD procedure. Automation of LZD removes human error to increase the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.
Miri, Raz; Graf, Iulia M; Dössel, Olaf
2009-11-01
Electrode positions and timing delays influence the efficacy of biventricular pacing (BVP). Accordingly, this study focuses on BVP optimization using a detailed 3-D electrophysiological model of the human heart, adapted to patient-specific anatomy and pathophysiology. The research is carried out on ten heart models with left bundle branch block and myocardial infarction derived from magnetic resonance and computed tomography data. Cardiac electrical activity is simulated with the ten Tusscher cell model and an adaptive cellular automaton at physiological and pathological conduction levels. The optimization methods are based on a comparison between the electrical response of the healthy and diseased heart models, measured in terms of the root mean square error (E(RMS)) of the excitation front and the QRS duration error (E(QRS)). Intra- and intermethod associations of the pacing electrode and timing delay variables were analyzed with statistical methods, i.e., the t-test for dependent data, one-way analysis of variance for electrode pairs, and the Pearson model for equivalent parameters from the two optimization methods. The results indicate that the lateral left ventricle and the upper or middle septal area are frequently (60% of cases) the optimal positions of the left and right electrodes, respectively. Statistical analysis proves that the two optimization methods are in good agreement. In conclusion, a noninvasive preoperative BVP optimization strategy based on computer simulations can be used to identify the most beneficial patient-specific electrode configuration and timing delays.
NASA Astrophysics Data System (ADS)
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-02-01
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations when the log transformation is insufficient.
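The described λ search is straightforward to reproduce. The Python sketch below sweeps the Box-Cox parameter and keeps the value maximizing the Shapiro-Wilk P-value; the SUVs are simulated log-normal placeholders (right-skewed, like real SUVmax samples) rather than the study's patient data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    suv = rng.lognormal(mean=1.0, sigma=0.6, size=57)    # simulated SUVmax values

    def boxcox(x, lam):
        # lam = 0 reduces to the log transformation
        return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

    lams = np.linspace(-2, 2, 401)
    pvals = [stats.shapiro(boxcox(suv, l)).pvalue for l in lams]
    best = lams[int(np.argmax(pvals))]
    print("optimal lambda = %.2f, Shapiro-Wilk P = %.3f" % (best, max(pvals)))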
Stahmer, Aubyn C.; Suhrheinrich, Jessica; Reed, Sarah; Schreibman, Laura
2012-01-01
Several evidence-based practices (EBPs) have been identified as efficacious for the education of students with autism spectrum disorders (ASD). However, effectiveness research has rarely been conducted in schools and teachers express skepticism about the clinical utility of EBPs for the classroom. Innovative methods are needed to optimally adapt EBPs for community use. This study utilizes qualitative methods to identify perceived benefits and barriers of classroom implementation of a specific EBP for ASD, Pivotal Response Training (PRT). Teachers' perspectives on the components of PRT, use of PRT as a classroom intervention strategy, and barriers to the use of PRT were identified through guided discussion. Teachers found PRT valuable; however, they also found some components challenging. Specific teacher recommendations for adaptation and resource development are discussed. This process of obtaining qualitative feedback from frontline practitioners provides a generalizable model for researchers to collaborate with teachers to optimally promote EBPs for classroom use. PMID:23209896
Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.
2013-01-01
Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients), by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398
Co-Optimization of Blunt Body Shapes for Moving Vehicles
NASA Technical Reports Server (NTRS)
Kinney, David J. (Inventor); Mansour, Nagi N (Inventor); Brown, James L. (Inventor); Garcia, Joseph A (Inventor); Bowles, Jeffrey V (Inventor)
2014-01-01
A method and associated system for multi-disciplinary optimization of various parameters associated with a space vehicle that experiences aerocapture and atmospheric entry in a specified atmosphere. In one embodiment, simultaneous maximization of a ratio of landed payload to vehicle atmospheric entry mass, maximization of fluid flow distance before flow separation from vehicle, and minimization of heat transfer to the vehicle are performed with respect to vehicle surface geometric parameters, and aerostructure and aerothermal vehicle response for the vehicle moving along a specified trajectory. A Pareto Optimal set of superior performance parameters is identified.
Optimizing prescribed fire allocation for managing fire risk in central Catalonia.
Alcasena, Fermín J; Ager, Alan A; Salis, Michele; Day, Michelle A; Vega-Garcia, Cristina
2018-04-15
We used spatial optimization to allocate and prioritize prescribed fire treatments in the fire-prone Bages County, central Catalonia (northeastern Spain). The goal of this study was to identify suitable strategic locations on forest lands for fuel treatments in order to: 1) disrupt major fire movements, 2) reduce ember emissions, and 3) reduce the likelihood of large fires burning into residential communities. We first modeled fire spread, hazard and exposure metrics under historical extreme fire weather conditions, including node influence grid for surface fire pathways, crown fraction burned and fire transmission to residential structures. Then, we performed an optimization analysis on individual planning areas to identify production possibility frontiers for addressing fire exposure and explore alternative prescribed fire treatment configurations. The results revealed strong trade-offs among different fire exposure metrics, showed treatment mosaics that optimize the allocation of prescribed fire, and identified specific opportunities to achieve multiple objectives. Our methods can contribute to improving the efficiency of prescribed fire treatment investments and wildfire management programs aimed at creating fire resilient ecosystems, facilitating safe and efficient fire suppression, and safeguarding rural communities from catastrophic wildfires. The analysis framework can be used to optimally allocate prescribed fire in other fire-prone areas within the Mediterranean region and elsewhere. Copyright © 2017 Elsevier B.V. All rights reserved.
Selection of Sustainable Processes using Sustainability ...
Chemical products can be obtained by process pathways involving varying amounts and types of resources, utilities, and byproduct formation. When competing process options are considered, such as the six processes for making methanol in this study, it is necessary to identify the most sustainable option. Sustainability of a chemical process is generally evaluated with indicators that require process and chemical property data. These indicators individually reflect the impacts of the process on areas of sustainability, such as the environment or society. In order to choose among several alternative processes, an overall comparative analysis is essential. Generally, net profit will identify the most economic process. A mixed integer optimization problem can also be solved to identify the most economic among competing processes. This method uses economic optimization and leaves aside the environmental and societal impacts. To make a decision on the most sustainable process, the method presented here rationally aggregates the sustainability indicators into a single index called the sustainability footprint (De). Process flow and economic data were used to compute the indicator values. Results from the sustainability footprint (De) are compared with those from solving a mixed integer optimization problem. In order to identify the rank order of importance of the indicators, a multivariate analysis is performed using partial least squares variable importance in projection (PLS-VIP)
Cho, Ming-Yuan; Hoang, Thi Thom
2017-01-01
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
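A hedged sketch of the PSO-tuned SVM idea in Python: particles encode (log10 C, log10 gamma), and cross-validated accuracy serves as the fitness. The fault dataset is a synthetic stand-in generated by scikit-learn, not the Simulink-derived TDR features used in the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=12, n_classes=4,
                               n_informative=6, random_state=0)
    fit = lambda p: cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]),
                                    X, y, cv=3).mean()

    rng = np.random.default_rng(4)
    pos = rng.uniform([-1, -4], [3, 0], (15, 2)); vel = np.zeros((15, 2))
    pbest, pval = pos.copy(), np.array([fit(p) for p in pos])
    for _ in range(20):
        g = pbest[pval.argmax()]                          # swarm's best parameters
        vel = 0.6 * vel + rng.random((15, 2)) * (pbest - pos) \
                        + rng.random((15, 2)) * (g - pos)
        pos = np.clip(pos + vel, [-1, -4], [3, 0])
        val = np.array([fit(p) for p in pos])
        better = val > pval
        pbest[better], pval[better] = pos[better], val[better]
    print("best CV accuracy: %.3f" % pval.max())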
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed
2018-05-01
Efficacious operation of dam and reservoir systems can guarantee not only a defense against natural hazards but also rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources is unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust modeling of such stochastic hydrological parameters, and they have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and prediction of evaporation from a reservoir, the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of integrating AI simulation methods with optimization methods is reported. Future research on the potential of utilizing new innovative methods based on AI techniques for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure to accomplish a realistic evaluation of overall optimization model performance (reliability, resilience, and vulnerability indices) is recommended.
Mixture experiment methods in the development and optimization of microemulsion formulations.
Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing for improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly-water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formations was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfacant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
Extraction of Polysaccharide from Spirulina and Evaluation of Its Activities.
Wang, Bingyue; Liu, Qian; Huang, Yinghong; Yuan, Yueling; Ma, Qianqian; Du, Manling; Cai, Tiange; Cai, Yu
2018-01-01
Polysaccharide of Spirulina platensis (PSP) is a water-soluble polysaccharide extracted from Spirulina platensis. It has been proved to have antitumor, antioxidation, antiaging, and antivirus properties, and it holds promise for wide application. This study aims to identify an extraction process for high-purity polysaccharide in Spirulina (PSP) through a series of optimization methods and then evaluates its initial antiaging activities. Four extraction methods (hot-water extraction, alkali extraction, ultrasonic-assisted extraction, and freeze-thaw extraction) were compared to find the optimal one, which was further optimized by response surface methodology. PSP was obtained after the crude PSP was deproteinized and depigmented. The antiaging effects of PSP were preliminarily evaluated through in vitro cell experiments. The alkali extraction method was determined to be the optimal method, with the optimized extraction process consisting of a solid-liquid ratio of 1:50, a pH value of 10.25, a temperature of 89.24°C, and a time of 9.99 h. The final PSP contained 71.65% polysaccharide and 8.54% protein. At a concentration of 50 μg/mL, PSP exerted a significant promoting effect on the proliferation and traumatic fusion of human immortalized epidermal HaCaT cells. An extraction method for high-purity PSP with a high extraction rate was established, and in vitro results suggest antioxidation and antiaging activities.
Gao, JianZhao; Tao, Xue-Wen; Zhao, Jia; Feng, Yuan-Ming; Cai, Yu-Dong; Zhang, Ning
2017-01-01
Lysine acetylation, as one type of post-translational modification (PTM), plays key roles in cellular regulation and can be involved in a variety of human diseases. However, it is often costly and time-consuming to identify lysine acetylation sites with traditional experimental approaches. Therefore, effective computational methods should be developed to predict the acetylation sites. In this study, we developed a position-specific method for epsilon lysine acetylation site prediction. Sequences of acetylated proteins were retrieved from the UniProt database. Various kinds of features, such as position-specific scoring matrix (PSSM), amino acid factors (AAF), and disorder, were incorporated. A feature selection method based on mRMR (Maximum Relevance Minimum Redundancy) and IFS (Incremental Feature Selection) was employed. Finally, 319 optimal features were selected from a total of 541 features. Using the 319 optimal features to encode peptides, a predictor was constructed based on dagging. As a result, an accuracy of 69.56% with an MCC of 0.2792 was achieved. We analyzed the optimal features, which suggested some important factors determining lysine acetylation sites. In summary, we developed a position-specific method for epsilon lysine acetylation site prediction and selected a set of optimal features. Analysis of the optimal features provided insights into the mechanism of lysine acetylation sites and guidance for experimental validation. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
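The incremental feature selection (IFS) loop described above is simple to prototype. Below is a hedged sketch that scans growing prefixes of an mRMR-ranked feature list and scores each prefix by cross-validation; scikit-learn's BaggingClassifier stands in for dagging, which has no standard scikit-learn implementation, and `ranked_idx` is assumed to come from an external mRMR ranking:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

def incremental_feature_selection(X, y, ranked_idx, step=10):
    """Scan growing prefixes of a ranked feature list and return the
    prefix size with the best cross-validated accuracy."""
    best_k, best_acc = 0, -np.inf
    for k in range(step, len(ranked_idx) + 1, step):
        clf = BaggingClassifier(n_estimators=10, random_state=0)
        acc = cross_val_score(clf, X[:, ranked_idx[:k]], y, cv=5).mean()
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k, best_acc

# ranked_idx would come from an mRMR ranking of the 541 candidate features.
```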
Power line identification of millimeter wave radar based on PCA-GS-SVM
NASA Astrophysics Data System (ADS)
Fang, Fang; Zhang, Guifeng; Cheng, Yansheng
2017-12-01
To address the problem that existing detection methods cannot effectively ensure the safety of ultra-low-altitude UAV flight near power lines, a power line recognition method based on grid search (GS) with principal component analysis and support vector machine (PCA-SVM) is proposed. Firstly, the candidate lines from the Hough transform are reduced by PCA, and the main features of the candidate lines are extracted. Then, the support vector machine (SVM) is optimized by the grid search (GS) method. Finally, the SVM classifier with optimized parameters is used to classify the candidate lines. MATLAB simulation results show that this method can effectively distinguish power lines from noise, with high recognition accuracy and algorithmic efficiency.
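A minimal scikit-learn sketch of the PCA-GS-SVM classification stage described above; the synthetic feature matrix, labels, and grid ranges are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Synthetic stand-in for Hough-line features (power line vs. noise).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("pca", PCA(n_components=0.95)),   # keep 95% of variance
                 ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe,
                    {"svm__C": 2.0 ** np.arange(-3, 6, 2),
                     "svm__gamma": 2.0 ** np.arange(-7, 0, 2)},
                    cv=5)
grid.fit(X_tr, y_tr)                                # grid search over (C, gamma)
print(grid.best_params_, round(grid.score(X_te, y_te), 3))
```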
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Ondrej Linda; Milos Manic
Building Energy Management Systems (BEMSs) are essential components of modern buildings that utilize digital control technologies to minimize energy consumption while maintaining high levels of occupant comfort. However, BEMSs can only achieve these energy savings when properly tuned and controlled. Since the indoor environment depends on uncertain criteria such as weather, occupancy, and thermal state, the performance of a BEMS can be sub-optimal at times. Unfortunately, the complexity of the BEMS control mechanism, the large amount of data available and the inter-relations between the data can make identifying these sub-optimal behaviors difficult. This paper proposes a novel Fuzzy Anomaly Detection and Linguistic Description (Fuzzy-ADLD) based method for improving the understandability of BEMS behavior for improved state-awareness. The presented method is composed of two main parts: 1) detection of anomalous BEMS behavior and 2) linguistic representation of BEMS behavior. The first part utilizes a modified nearest neighbor clustering algorithm and a fuzzy logic rule extraction technique to build a model of normal BEMS behavior. The second part computes the most relevant linguistic description of the identified anomalies. The presented Fuzzy-ADLD method was applied to a real-world BEMS and compared against a traditional alarm-based BEMS. In six different scenarios, the Fuzzy-ADLD method identified anomalous behavior either as fast as or faster (by an hour or more) than the alarm-based BEMS. In addition, the Fuzzy-ADLD method identified cases that were missed by the alarm-based system, demonstrating potential for increased state-awareness of abnormal building behavior.
A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals.
Li, Suyi; Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji; Diao, Shu
2017-01-01
The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peaks of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. We then designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, hand-raising, and gentle walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared. The results showed that the hybrid method not only corrected the morphology of the signal well but also improved the quality of peak identification, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method substantially improved the evaluation of respiratory function and heart rate variability analysis.
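Both wavelet steps are easy to sketch with PyWavelets: baseline drift lives in the coarsest approximation coefficients of a multiresolution decomposition, so zeroing them detrends the signal. scipy's find_peaks is used here as a simplified stand-in for the quadratic spline wavelet modulus-maximum detector; the sampling rate, wavelet choice, and synthetic signal are illustrative:

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detrend_ppg(signal, wavelet="db4"):
    """Remove baseline drift by zeroing the coarsest approximation
    coefficients of a full multiresolution decomposition."""
    level = pywt.dwt_max_level(len(signal), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # drop the drift component
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def ppg_peaks(signal, fs=100.0):
    """Locate systolic peaks on the corrected signal; a 0.4 s minimum
    spacing suppresses physiologically implausible double detections."""
    clean = detrend_ppg(signal)
    idx, _ = find_peaks(clean, distance=int(0.4 * fs))
    return idx, clean

# Synthetic drifting PPG-like signal sampled at 100 Hz:
t = np.arange(0, 10, 0.01)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * t / 10   # 72 bpm pulse + drift
print(len(ppg_peaks(ppg)[0]), "peaks found")       # expect about 12
```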
2007-04-01
optimization methodology we introduce. State-of-the-art protein-protein docking approaches start by identifying conformations with good surface/chemical com... side-chains on the interface). The protein-protein docking literature (e.g., [8] and the references therein) is predominantly treating the docking... ...mations by various measures of surface complementarity which can be efficiently computed using fast Fourier correlation techniques (FFTs). However, when
Marchiori, G; Lopomo, N; Boi, M; Berni, M; Bianchi, M; Gambardella, A; Visani, A; Russo, A; Marcacci, M
2016-01-01
Realizing hard ceramic coatings on the plastic component of a joint prosthesis can be strategic for the mechanical preservation of the whole implant and for extending its lifetime. Recently, thanks to the Pulsed Plasma Deposition (PPD) method, zirconia coatings on ultra-high molecular weight polyethylene (UHMWPE) substrates have proved feasible. Focusing on both the highly specific requirements of the biomedical application and the practical possibilities offered by the deposition method, with a view to technological transfer, it is mandatory to optimize the coating in terms of load-bearing capacity. The main goal of this study was to identify, through Finite Element Analysis (FEA), the optimal coating thickness able to minimize UHMWPE strain, the possible onset of cracks within the coating, and stresses at the coating-substrate interface. Simulations of nanoindentation and microindentation tests were specifically carried out. FEA findings demonstrated that, in general, thickening the zirconia coating strongly reduced the strains in the UHMWPE substrate, although the 1 μm thickness was identified as critical for the presence of high stresses within the coating and at the interface with the substrate. Therefore, the optimal thickness turned out to be highly dependent on the specific loading condition and final application. Copyright © 2015 Elsevier B.V. All rights reserved.
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze the coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective function values for the walking scenario were significantly lower than for the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity
NASA Technical Reports Server (NTRS)
Finckenor, J.; Bevill, M.
1995-01-01
Cylinder Optimization of Rings, Skin, and Stringers with Tolerance (CORSSTOL) sensitivity is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need. This is a decisive way to choose the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL initially optimizes a stringer-stiffened cylinder for weight without tolerances. The skin and stringer geometry are varied, subject to stress and buckling constraints. Then the same analysis and optimization routines are used to minimize the maximum material condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided with the weight and constraint sensitivities of each design variable. The designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. The development of this program and methodology provides designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.
Hao, Ge-Fei; Yang, Sheng-Gang; Huang, Wei; Wang, Le; Shen, Yan-Qing; Tu, Wen-Long; Li, Hui; Huang, Li-Shar; Wu, Jia-Wei; Berry, Edward A.; Yang, Guang-Fu
2015-01-01
Hit-to-lead (H2L) optimization is a key step in drug and agrochemical discovery. A critical challenge for H2L optimization is low efficiency due to the lack of predictive methods with high accuracy. We describe a new computational method called Computational Substitution Optimization (CSO) that has allowed us to rapidly identify compounds with cytochrome bc1 complex inhibitory activity in the nanomolar and subnanomolar range. The comprehensively optimized candidate proved to be a slow-binding inhibitor of the bc1 complex, ~73-fold more potent (Ki = 4.1 nM) than the best commercial fungicide azoxystrobin (AZ; Ki = 297.6 nM), and shows excellent in vivo fungicidal activity against downy mildew and powdery mildew disease. The excellent correlation between experimental and calculated binding free-energy shifts, together with further crystallographic analysis, confirmed the prediction accuracy of the CSO method. To the best of our knowledge, CSO is a new computational approach to substitution-scanning mutagenesis of ligands and could be used as a general strategy for H2L optimization in drug and agrochemical design.
Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne
2005-04-15
The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse engineering strategy aimed at reconstructing this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series). The number of clusters was optimized using various evaluation criteria. For each cluster, a representative gene with high fuzzy membership was chosen in accordance with available physiological knowledge. Hypothetical network structures were then identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD) based methods and a newly introduced heuristic network generation method were compared. It turned out that the proposed novel method could find sparser networks and gave better fits to the experimental data. Reinhard.Guthke@hki-jena.de.
NASA Astrophysics Data System (ADS)
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is introduced, which helps balance local search ability and global exploitation capability, and the formula by which the scout bees search for food sources is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, showing that the convergence performance of the IABC algorithm is better than that of the standard PSO method.
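As a rough illustration of the identification task, the sketch below simulates a Bouc-Wen hysteresis state (n = 1 form) and fits its parameters by minimizing the response error; scipy's differential_evolution stands in for the IABC algorithm, which has no standard library implementation, and the model form, parameter ranges, and driving signal are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def bouc_wen_response(params, u, dt=1e-3):
    """Hysteretic output y = k*u - h with Bouc-Wen internal state h
    (n = 1 form), integrated with explicit Euler."""
    k, alpha, beta, gamma = params
    du = np.gradient(u, dt)
    h, y = 0.0, np.empty_like(u)
    for i in range(len(u)):
        h += dt * (alpha * du[i] - beta * abs(du[i]) * h - gamma * du[i] * abs(h))
        y[i] = k * u[i] - h
    return y

# Synthetic "measured" data from known parameters, then identification:
t = np.arange(0, 2, 1e-3)
u = 50 * np.sin(2 * np.pi * t)                      # driving voltage
y_meas = bouc_wen_response((1.2, 0.8, 0.5, 0.3), u)
cost = lambda p: np.mean((bouc_wen_response(p, u) - y_meas) ** 2)
res = differential_evolution(cost, bounds=[(0, 5)] * 4, seed=1, maxiter=50)
print(np.round(res.x, 2))                           # should approach the truth
```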
An optimization method for defects reduction in fiber laser keyhole welding
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Jiang, Ping; Shao, Xinyu; Wang, Chunming; Li, Peigen; Mi, Gaoyang; Liu, Yang; Liu, Wei
2016-01-01
Laser welding has been widely used in the automotive, power, chemical, nuclear and aerospace industries. The quality of welded joints is closely related to the defects present, which are primarily determined by the welding process parameters. This paper proposes a defect optimization method that takes the formation mechanism of welding defects and weld geometric features into consideration. The analysis of the welding defect formation mechanism investigates the relationship between welding defects and process parameters, and weld features are considered in order to identify the optimal process parameters for the desired welded joints with minimum defects. An improved back-propagation neural network, well suited to modeling nonlinear problems, is adopted to establish the mathematical model, and the obtained model is solved by a genetic algorithm. The proposed method is validated by macro weld profile, microstructure and microhardness in confirmation tests. The results show that the proposed method is effective at reducing welding defects and obtaining high-quality joints for fiber laser keyhole welding in practical production.
An effective model for ergonomic optimization applied to a new automotive assembly line
NASA Astrophysics Data System (ADS)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-01
Efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work performed under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all the ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
Optimizing Aesthetic Outcomes in Delayed Breast Reconstruction
2017-01-01
Background: The need to restore both the missing breast volume and the breast surface area makes achieving excellent aesthetic outcomes in delayed breast reconstruction especially challenging. Autologous breast reconstruction can be used to achieve both goals. The aim of this study was to identify surgical maneuvers that can optimize aesthetic outcomes in delayed breast reconstruction. Methods: This is a retrospective review of operative and clinical records of all patients who underwent unilateral or bilateral delayed breast reconstruction with autologous tissue between April 2014 and January 2017. Three groups of delayed breast reconstruction patients were identified based on patient characteristics. Results: A total of 26 flaps were successfully performed in 17 patients. Key surgical maneuvers for achieving aesthetically optimal results were identified. A statistically significant difference in volume requirements was identified in cases where a delayed breast reconstruction and a contralateral immediate breast reconstruction were performed simultaneously. Conclusions: Optimal aesthetic results can be achieved with: (1) restoration of the breast skin envelope with tissue expansion when possible; (2) optimal positioning of a small skin paddle to be later incorporated entirely into a nipple-areola reconstruction when adequate breast skin surface area is present; (3) limiting the reconstructed breast mound to 2 skin tones when large-area skin resurfacing is required; (4) increasing breast volume by deepithelializing, not discarding, the inferior mastectomy flap skin; (5) eccentric division of abdominal flaps when immediate and delayed bilateral breast reconstructions are performed simultaneously; and (6) performing second-stage breast reconstruction revisions and fat grafting.
Approximation of Nash equilibria and the network community structure detection problem
2017-01-01
Game theory based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better than or just as good as other state-of-the-art methods.
Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours
NASA Astrophysics Data System (ADS)
Persico, Giacomo
2017-03-01
This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by a non-intrusive, gradient-free technique specifically developed for the shape optimization of turbomachinery profiles. The method combines a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated Computational Fluid Dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour characterizing the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is applied rigorously throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. The use of a systematic and automatic design method on such a non-conventional configuration highlights the character of centrifugal cascades; the blades require a specific and non-trivial definition of shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that both provides a relevant increase in cascade performance and a reduction of downstream gradients.
Chhaya, Urvish; Gupte, Akshaya
2010-02-01
Laccase production by solid state fermentation (SSF) using an indigenously isolated litter-dwelling fungus, Fusarium incarnatum LD-3, was optimized. Fourteen medium components were screened by the initial screening method of Plackett-Burman; each component was screened on the basis of its p-value at the 95% confidence level. Ortho-dianisidine, thiamine HCl and CuSO4·5H2O were identified as significant components for laccase production. Central Composite Design response surface methodology was then applied to further optimize laccase production. The optimal concentrations of these three medium components for higher laccase production were (g/l): CuSO4·5H2O, 0.01; thiamine HCl, 0.0136; and ortho-dianisidine (0.388 mM), which served as an inducer. Wheat straw (5.0 g) was used as the solid substrate. Using this statistical optimization method, laccase production increased from 40 U/g to 650 U/g of wheat straw, sixteen times higher than in the non-optimized medium. This is the first report on the statistical optimization of laccase production from Fusarium incarnatum LD-3.
Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.
Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane
2017-07-01
This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematics data only. The proposed method also identifies the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. This problem is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted and a sensitivity analysis is performed in order to reproduce the human external wrench distribution robustly. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS errors lower than 20 N and 6 N·m). The simplicity and generalization abilities of the proposed method pave the way for future diagnostic solutions and rehabilitation applications, including in-home use.
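The under-determined multi-contact step (one measured total wrench, several candidate contacts) can be prototyped as a small constrained quadratic program. The sketch below uses scipy's SLSQP with a plain weighted-effort cost as a stand-in for the authors' hybrid human-like criterion; the contact count and weights are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def distribute_wrench(w_total, n_contacts=3, weights=None):
    """Split a 6-D total external wrench among contacts (e.g., buttocks,
    feet, hands) by minimizing a weighted effort, subject to the
    equilibrium constraint that the contact wrenches sum to the total."""
    weights = np.ones(n_contacts) if weights is None else np.asarray(weights)
    x0 = np.tile(w_total / n_contacts, n_contacts)      # even initial split

    def effort(x):
        w = x.reshape(n_contacts, 6)
        return np.sum(weights * np.sum(w ** 2, axis=1))

    cons = {"type": "eq",
            "fun": lambda x: x.reshape(n_contacts, 6).sum(axis=0) - w_total}
    res = minimize(effort, x0, constraints=[cons], method="SLSQP")
    return res.x.reshape(n_contacts, 6)

# e.g., a mostly vertical 600 N load around seat-off, penalizing hand loading:
print(distribute_wrench(np.array([0, 0, 600.0, 0, 0, 0]),
                        weights=[1.0, 0.5, 2.0]))
```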
Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).
Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T
2016-03-08
Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogeneous biospecimens offer a promising solution; however, the performance of such methods depends entirely on the library of methylation markers used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. Compared with existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038) and improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R^2 > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R^2 > 0.90 and RMSE < 4.00%. In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified herein delivered outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
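Reference-based mixture deconvolution of this kind reduces, per sample, to a constrained regression of the sample's methylation profile on the library's cell-type signatures. A minimal sketch using non-negative least squares, with a synthetic signature matrix standing in for real HM450 data:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve(beta_sample, signature):
    """Estimate cell-type fractions from one methylation profile.

    beta_sample: (n_cpgs,) beta values at the library CpGs.
    signature:   (n_cpgs, n_cell_types) mean beta per cell type.
    Fractions are constrained non-negative and renormalized to sum to 1.
    """
    w, _ = nnls(signature, beta_sample)
    return w / w.sum() if w.sum() > 0 else w

# e.g., 300 library CpGs by 6 leukocyte subtypes:
rng = np.random.default_rng(0)
S = rng.uniform(0, 1, size=(300, 6))
truth = np.array([0.55, 0.05, 0.15, 0.10, 0.10, 0.05])
sample = S @ truth + 0.01 * rng.standard_normal(300)   # noisy mixed profile
print(np.round(deconvolve(sample, S), 2))              # recovers ~truth
```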
NASA Astrophysics Data System (ADS)
Haneda, Kiyofumi; Kajima, Toshio; Koyama, Tadashi; Muranaka, Hiroyuki; Dojo, Hirofumi; Aratani, Yasuhiko
2002-05-01
The target of our study is to analyze the level of necessary security requirements, to search for suitable security measures and to optimize the distribution of security across every portion of medical practice. Quantitative expression should be introduced where possible to enable simplified follow-up security procedures and easy evaluation of security outcomes or results. Using fault tree analysis (FTA), system analysis showed that subdividing system elements into detailed groups results in a much more accurate analysis. Such subdivided composition factors depend greatly on the behavior of staff, interactive terminal devices, the kinds of services provided, and network routes. Security measures were then implemented based on the analysis results. In conclusion, we identified the methods needed to determine the required level of security and proposed security measures for each medical information system, along with the basic events and combinations of events that comprise the threat composition factors. Methods for identifying suitable security measures were found and implemented. Risk factors for each basic event, a number of elements for each composition factor, and potential security measures were identified. Methods to optimize the security measures for each medical information system were proposed, developing the most efficient distribution of security measures over the risk factors for basic events.
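As a toy illustration of the fault-tree calculus underlying FTA, the top-event probability can be rolled up from independent basic events through AND/OR gates; the tree and probabilities below are hypothetical, not the paper's actual threat model:

```python
def gate_and(probs):
    """AND gate: all independent basic events must occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def gate_or(probs):
    """OR gate: at least one independent basic event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: a data breach occurs if (weak password AND no audit
# trail) OR an unencrypted network route is used.
p_top = gate_or([gate_and([0.05, 0.20]), 0.02])
print(f"top event probability: {p_top:.4f}")
```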
OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL.
Cheung, Kei Long; Hiligsmann, Mickaël; Präger, Maximilian; Jones, Teresa; Józwiak-Hagymásy, Judit; Muñoz, Celia; Lester-George, Adam; Pokhrel, Subhash; López-Nicolás, Ángel; Trapero-Bertran, Marta; Evers, Silvia M A A; de Vries, Hein
2018-01-01
Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may affect the adoption of such tools. This study illustrates a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. A cross-sectional mixed-methods design was used, including a heuristic evaluation, a thinking-aloud approach, and a questionnaire testing and exploring the usability of the Return on Investment (ROI) tool. A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire; for the heuristic evaluation, four experts evaluated the interface. In total, twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, of which twenty-six were categorized as most severe, indicating a high priority to fix them before implementation. Combining user-based and expert-based evaluation methods is recommended, as each was shown to identify unique usability problems. The evaluation provides input to optimize the usability of a decision-support tool, and may serve as a vantage point for other developers conducting usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.
Recent advances in stellarator optimization
Gates, D. A.; Boozer, A. H.; Brown, T.; ...
2017-10-27
Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. Here, we outline a select set of new concepts for stellarator optimization that, taken as a group, present a significant step forward in the stellarator concept. One of the criticisms leveled at existing design methods is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, which uses a spline instead of a Fourier representation of the coils, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real-space constraints on the locations of the coils. The code has been tested by generating coil designs for optimized quasi-axisymmetric stellarator plasma configurations of different aspect ratios. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were imposed to guarantee that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise are presented. New ideas on methods for the optimization of turbulent transport have garnered much attention, since these methods have led to design concepts that are calculated to have reduced turbulent heat loss. We have explored possibilities for generating an experimental database to test whether the predicted reduction in transport is consistent with experimental observations. To this end, a series of equilibria that can be made in the now latent QUASAR experiment have been identified that will test the predicted transport scalings. Fast-particle confinement studies aimed at developing a generalized optimization algorithm are also discussed. A new algorithm developed for the design of the scraper element on W7-X is presented, along with ideas for automating the optimization approach.
A novel method for energy harvesting simulation based on scenario generation
NASA Astrophysics Data System (ADS)
Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min
2018-06-01
Energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies that energy as a primary or secondary power source to communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual situation. We propose an EHN simulation method based on scenario generation. Firstly, instead of assuming a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios from historical data on the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy-harvesting scenario sequences, giving a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an instance of optimizing network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.
Locally adaptive methods for KDE-based random walk models of reactive transport in porous media
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2017-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has recently been proposed to simulate reactive transport in porous media. KDE provides an optimal estimate of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at every time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error of RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to split and merge efficiently when necessary and to adapt their local kernel shape optimally without having to recalculate the kernel size. We illustrate the advantages of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
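As context for the KDE step, a toy sketch that estimates plume concentration from particle positions with scipy's gaussian_kde; its single global bandwidth rule is a crude stand-in for the locally adaptive kernels proposed above, and the particle cloud is synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Positions of random-walk particles at one time step (2-D plume).
particles = np.random.default_rng(2).normal(0.0, 1.0, size=(2, 5000))

# gaussian_kde applies one global bandwidth (Silverman's rule here);
# the paper's locally adaptive kernels refine this per particle.
kde = gaussian_kde(particles, bw_method="silverman")
grid = np.mgrid[-3:3:100j, -3:3:100j].reshape(2, -1)
concentration = kde(grid).reshape(100, 100)
print(concentration.max())        # peak of the estimated plume concentration
```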
MMASS: an optimized array-based method for assessing CpG island methylation.
Ibrahim, Ashraf E K; Thorne, Natalie P; Baird, Katie; Barbosa-Morais, Nuno L; Tavaré, Simon; Collins, V Peter; Wyllie, Andrew H; Arends, Mark J; Brenton, James D
2006-01-01
We describe an optimized microarray method for identifying genome-wide CpG island methylation called microarray-based methylation assessment of single samples (MMASS) which directly compares methylated to unmethylated sequences within a single sample. To improve previous methods we used bioinformatic analysis to predict an optimized combination of methylation-sensitive enzymes that had the highest utility for CpG-island probes and different methods to produce unmethylated representations of test DNA for more sensitive detection of differential methylation by hybridization. Subtraction or methylation-dependent digestion with McrBC was used with optimized (MMASS-v2) or previously described (MMASS-v1, MMASS-sub) methylation-sensitive enzyme combinations and compared with a published McrBC method. Comparison was performed using DNA from the cell line HCT116. We show that the distribution of methylation microarray data is inherently skewed and requires exogenous spiked controls for normalization and that analysis of digestion of methylated and unmethylated control sequences together with linear fit models of replicate data showed superior statistical power for the MMASS-v2 method. Comparison with previous methylation data for HCT116 and validation of CpG islands from PXMP4, SFRP2, DCC, RARB and TSEN2 confirmed the accuracy of MMASS-v2 results. The MMASS-v2 method offers improved sensitivity and statistical power for high-throughput microarray identification of differential methylation.
Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.
Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae
2017-06-26
Platelet-rich plasma (PRP) holds promise for regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and in the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which condition produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared using that condition was subjected to a second centrifugation to pellet the platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time producing maximal pellet recovery and concentration increase. Growth factor levels were estimated by using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl2-activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased platelet concentration six-fold and produced the highest concentration of PDGF-BB in activated fractions. We have described an optimised double-centrifugation tube method for the preparation of PRP from canine blood. This optimised method does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.
Welded joints integrity analysis and optimization for fiber laser welding of dissimilar materials
NASA Astrophysics Data System (ADS)
Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Liu, Wei
2016-11-01
Welded joints between dissimilar materials provide many advantages in the power, automotive, chemical, and spacecraft industries. Weld bead integrity, which is determined by the process parameters, plays a significant role in welding quality during fiber laser welding (FLW) of dissimilar materials. In this paper, an optimization method that takes the integrity of the weld bead and the weld area into consideration is proposed for FLW of dissimilar materials, low carbon steel and stainless steel. The relationships between weld bead integrity and process parameters are modeled by a genetic algorithm optimized back-propagation neural network (GA-BPNN). The particle swarm optimization (PSO) algorithm is then applied to the GA-BPNN predictions to optimize the objective. Through the optimization process, the desired weld bead with good integrity and minimum weld area is obtained, with excellent corresponding microstructure and microhardness. The mechanical properties of the optimized joints are greatly improved compared with those of the un-optimized welded joints. Moreover, the effects of the significant factors are analyzed using a statistical approach, and laser power (LP) is identified as the most significant factor for weld bead integrity and weld area. The results indicate that the proposed method is effective for improving the reliability and stability of welded joints in practical production.
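A basic particle swarm optimizer of the kind used in the final step takes only a few lines; in this sketch a simple analytic function stands in for the trained GA-BPNN surrogate, and the bounds and swarm settings are illustrative:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize objective(x) over box bounds with a basic PSO."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Stand-in for the surrogate: a weld-area-like bowl over (laser power, speed).
surrogate = lambda p: (p[0] - 2.2) ** 2 + 0.5 * (p[1] - 30) ** 2
print(pso(surrogate, bounds=[(1.0, 4.0), (10.0, 50.0)]))
```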
NASA Astrophysics Data System (ADS)
Taner, M. U.; Ray, P.; Brown, C.
2016-12-01
Hydroclimatic nonstationarity due to climate change poses challenges for long-term water infrastructure planning in river basin systems. While designing strategies that are flexible or adaptive holds intuitive appeal, developing well-performing strategies requires rigorous quantitative analysis that addresses uncertainties directly while making the best use of scientific information on the expected evolution of future climate. Multi-stage robust optimization (RO) offers a potentially effective and efficient technique for addressing the problem of staged basin-level planning under climate change; however, the necessity of assigning probabilities to future climate states or scenarios is an obstacle to implementation, given that methods to reliably assign such probabilities are not well developed. We present a method that overcomes this challenge by creating a bottom-up RO-based framework that decreases the dependency on probability distributions of future climate and instead employs them after optimization to aid selection among competing alternatives. The iterative process yields a vector of 'optimal' decision pathways, each under an associated set of probabilistic assumptions. In the final phase, the vector of optimal decision pathways is evaluated to identify the solutions that are least sensitive to the scenario probabilities and most likely conditional on the climate information. The framework is illustrated for the planning of new dam and hydro-agricultural expansion projects in the Niger River Basin over a 45-year planning period from 2015 to 2060.
NASA Astrophysics Data System (ADS)
Hu, Xuemin; Chen, Long; Tang, Bo; Cao, Dongpu; He, Haibo
2018-02-01
This paper presents a real-time dynamic path planning method for autonomous driving that avoids both static and moving obstacles. The proposed method determines not only an optimal path, but also the appropriate acceleration and speed for the vehicle. In this method, we first construct a center line from a set of predefined waypoints, which are usually obtained from a lane-level map. A series of path candidates are generated by arc length and offset relative to the center line in the s-ρ coordinate system, and all of these candidates are then converted into Cartesian coordinates. The optimal path is selected considering the total cost of static safety, comfort, and dynamic safety; meanwhile, the appropriate acceleration and speed for the optimal path are also identified. Various types of roads, including single-lane roads and multi-lane roads with static and moving obstacles, are designed to test the proposed method. The simulation results demonstrate the effectiveness of the proposed method and indicate its wide practical applicability to autonomous driving.
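A simplified sketch of the candidate-generation step: pick a lookahead arc length s along the centerline, sample lateral offsets ρ, and map each (s, ρ) pair back to Cartesian coordinates; the waypoints and offset grid are illustrative:

```python
import numpy as np

def path_candidates(waypoints, s_ahead=20.0, offsets=np.linspace(-3, 3, 7)):
    """Generate Cartesian end points of candidate paths in the s-rho frame."""
    wp = np.asarray(waypoints, dtype=float)
    seg = np.diff(wp, axis=0)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    # Point on the center line at arc length s_ahead (linear interpolation).
    x = np.interp(s_ahead, s, wp[:, 0])
    y = np.interp(s_ahead, s, wp[:, 1])
    i = np.searchsorted(s, s_ahead) - 1            # segment containing s_ahead
    heading = np.arctan2(seg[i, 1], seg[i, 0])
    normal = np.array([-np.sin(heading), np.cos(heading)])
    # Offset the center-line point along the local normal for each rho.
    return np.array([[x, y] + rho * normal for rho in offsets])

centerline = [(0, 0), (10, 1), (20, 3), (30, 6), (40, 10)]
print(path_candidates(centerline))
```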
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value, and identified a few maximally different alternatives from that region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region; then run a random distance in a random direction to a new hit point, and repeat until the desired number of alternatives is generated. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within the bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to the optimal solutions. We also discuss extensions to handle non-linear equality constraints.
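A compact sketch of the hit-and-run step for a region defined by a feasibility test (which would encode the original constraints plus the objective-tolerance constraint); the discretized chord search below is a crude stand-in for the slice-sampling step described above, and the example problem is illustrative:

```python
import numpy as np

def hit_and_run(feasible, x0, n_samples=1000, step_max=10.0, seed=0):
    """Sample a (possibly non-convex) near-optimal region.

    feasible(x) returns True when x satisfies the original constraints
    plus the objective-tolerance constraint."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                  # random direction on the sphere
        # Discretized feasible chord through x along d; t = 0 is always kept.
        ts = np.linspace(-step_max, step_max, 201)
        ok = np.array([t for t in ts if feasible(x + t * d)])
        x = x + rng.choice(ok) * d              # jump to a random feasible hit
        samples.append(x.copy())
    return np.array(samples)

# Near-optimal region of: min x1 + x2 subject to 0 <= x <= 4,
# keeping the objective within a tolerance of 2.0 of its optimum (0).
feas = lambda x: (x >= 0).all() and (x <= 4).all() and x.sum() <= 2.0
print(hit_and_run(feas, x0=[0.5, 0.5])[:3])
```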
Optimizing substance detection by integration of canine-human team with machine technology
NASA Astrophysics Data System (ADS)
Prestrude, Al M.; Ternes, J. W.
1994-02-01
There are several promising methods and technologies for substance detection. The oldest of these is the trained detector or 'sniffer' dog. We summarize what is known about the capabilities of dogs in substance detection and recommend comparative testing of the canine-human team against current technology to identify the optimum combination of methods for maximizing the detection of explosives and contraband.
Assessing the Value of Information for Identifying Optimal Floodplain Management Portfolios
NASA Astrophysics Data System (ADS)
Read, L.; Bates, M.; Hui, R.; Lund, J. R.
2014-12-01
Floodplain management is a complex portfolio problem that can be analyzed from an integrated perspective incorporating both structural and nonstructural options. One method to identify effective strategies for preparing for, responding to, and recovering from floods is to optimize a portfolio of temporary (emergency) and permanent floodplain management options. A risk-based optimization approach to this problem assigns probabilities to specific flood events and calculates the associated expected damages. This approach is currently limited by: (1) the assumption of perfect flood forecast information, i.e., implementing temporary management activities according to the actual flood event may differ from optimizing based on forecasted information; and (2) the inability to assess system resilience across a range of possible future events (a risk-centric approach). Resilience is defined here as the ability of a system to absorb and recover from a severe disturbance or extreme event; in our analysis, it is a system property that requires integration of the physical, social, and information domains. This work employs a 3-stage linear program to identify the optimal mix of floodplain management options, using conditional probabilities to represent perfect and imperfect flood stages (forecast vs. actual events). We assess the value of information in terms of minimizing damage costs for two theoretical cases, urban and rural systems. We use portfolio analysis to explore how the set of optimal management options differs depending on whether the goal is for the system to be risk-averse to a specified event or resilient over a range of events.
Selecting the selector: Comparison of update rules for discrete global optimization
Theiler, James; Zimmer, Beate G.
2017-05-24
In this paper, we compare some well-known Bayesian global optimization methods in four distinct regimes, corresponding to high and low levels of measurement noise and to high and low levels of "quenched noise" (which term we use to describe the roughness of the function we are trying to optimize). We isolate the two stages of this optimization in terms of a "regressor," which fits a model to the data measured so far, and a "selector," which identifies the next point to be measured. Finally, the focus of this paper is to investigate the choice of selector when the regressor is well matched to the data.
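The regressor/selector split maps directly onto standard Bayesian-optimization code; a minimal sketch with a Gaussian-process regressor and an expected-improvement selector, where the toy objective and all settings are illustrative:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def select_next(X_obs, y_obs, candidates):
    """Regressor: GP fit to data so far. Selector: expected improvement."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y_obs.min()                         # minimization convention
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return candidates[np.argmax(ei)]

# 1-D toy run on a rough ("quenched noise") objective:
f = lambda x: np.sin(3 * x) + 0.1 * np.sin(40 * x)
X = np.array([[0.1], [0.9]]); y = f(X).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(10):
    x_next = select_next(X, y, grid)
    X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next))
print(X[np.argmin(y)], y.min())                # best point found so far
```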
Research using qualitative, quantitative or mixed methods and choice based on the research.
McCusker, K; Gunaydin, S
2015-10-01
Research is fundamental to the advancement of medicine and critical to identifying the optimal therapies unique to particular societies. This is easily observed through the changes in pharmacology, surgical technique and medical equipment used today versus just a few years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results, thus improving health science and clinical practice and advancing health policy. While such advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored. Research itself therefore needs to be evaluated to identify the most efficient methods. The primary objective of this paper is to examine a specific research methodology when applied to the area of clinical research, especially extracorporeal circulation and its prognosis for the future. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Ulvestad, A.; Menickelly, M.; Wild, S. M.
2018-01-01
Defects such as dislocations impact materials properties and their response during external stimuli. Imaging these defects in their native operating conditions to establish the structure-function relationship and, ultimately, to improve performance via defect engineering has remained a considerable challenge for both electron-based and x-ray-based imaging techniques. While Bragg coherent x-ray diffractive imaging (BCDI) is successful in many cases, nuances in identifying the dislocations has left manual identification as the preferred method. Derivative-based methods are also used, but they can be inaccurate and are computationally inefficient. Here we demonstrate a derivative-free method that is both more accurate and more computationally efficient than either derivative- or human-based methods for identifying 3D dislocation lines in nanocrystal images produced by BCDI. We formulate the problem as a min-max optimization problem and show exceptional accuracy for experimental images. We demonstrate a 227x speedup for a typical experimental dataset with higher accuracy over current methods. We discuss the possibility of using this algorithm as part of a sparsity-based phase retrieval process. We also provide MATLAB code for use by other researchers.
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve with direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different optimization objectives and different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables, and the process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in the single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in the framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref. 5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required by the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
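The distinguishing feature - a single interleaved sequence of state and multiplier updates, with no inner-loop optimization - can be illustrated on a toy problem. The sketch below uses a plain gradient step for the state and the standard first-order augmented-Lagrangian multiplier update; it is a simplified stand-in, not the Han-Tapia update the paper employs.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + 2*x2^2  subject to g(x) = x1 + x2 - 1 = 0.
f_grad = lambda x: np.array([2 * x[0], 4 * x[1]])
g = lambda x: x[0] + x[1] - 1.0
g_grad = np.array([1.0, 1.0])

x = np.zeros(2)       # state
lam = 0.0             # multiplier
rho, step = 10.0, 0.02

# Diagonalized scheme: one state step and one multiplier step per iteration,
# with no inner-loop optimization between multiplier updates.
for _ in range(2000):
    grad_L = f_grad(x) + (lam + rho * g(x)) * g_grad  # gradient of augmented Lagrangian
    x -= step * grad_L
    lam += rho * g(x)                                 # first-order multiplier update

print("x* ≈", x, "lambda* ≈", lam)  # analytic optimum: x = (2/3, 1/3), lambda = -4/3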
Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina
2016-01-01
The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein sub-Golgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
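The classification pipeline in this abstract (oversampling, recursive feature elimination with a random forest, then RF classification) maps directly onto standard Python tooling. The sketch below substitutes synthetic data for the CSP-based evolutionary features and g-gap dipeptide compositions, so it shows the workflow, not the paper's results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE

# Imbalanced stand-in for the cis-/trans-Golgi benchmark (CSP features not reproduced).
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           weights=[0.8, 0.2], random_state=1)

X_res, y_res = SMOTE(random_state=1).fit_resample(X, y)   # balance the minority class

rf = RandomForestClassifier(n_estimators=200, random_state=1)
selector = RFE(rf, n_features_to_select=20).fit(X_res, y_res)  # RF-RFE feature search
X_sel = selector.transform(X_res)

acc = cross_val_score(rf, X_sel, y_res, cv=10).mean()
print(f"10-fold CV accuracy on selected features: {acc:.3f}")
```

Note that applying SMOTE before cross-validation, as done here for brevity, leaks synthetic samples across folds; an imblearn Pipeline that resamples inside each training fold is the safer construction.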
Identifiability and Identification of Trace Continuous Pollutant Source
Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao
2014-01-01
Accidental pollution events often threaten people's health and lives, and prompt identification of the pollutant source is necessary so that remedial actions can be taken. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous emission pollutant source in an enclosed space. The location probability model is set up first, and then the identification method is realized by searching for a global optimal objective value of the location probability. To discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced in order to quantitatively analyze the impact of the velocity field on the identification performance. Based on this concept, some simulation cases were conducted. The application conditions of this method are obtained from the simulation studies. To verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The result showed that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions. PMID:24892041
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying a better data source and allocating a higher priority for them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
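A drastically simplified version of the weight-optimization step is sketched below: each product's merging weight is taken inversely proportional to its tracked mean-squared error against a reference, a crude stand-in for the paper's Bayesian performance tracing and trend analysis. All fields are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 2.0, size=(30, 50, 50))   # "reference" precipitation (t, y, x)

# Three products already resampled to a common grid, with different error levels.
products = [truth + rng.normal(0, s, truth.shape) for s in (0.5, 1.0, 2.0)]

# Performance tracing: per-time-step RMSE of each product against the reference.
rmse = np.array([np.sqrt(np.mean((p - truth) ** 2, axis=(1, 2))) for p in products])

# Merging weights per time step: inverse mean-squared error, normalized to sum to 1.
w = 1.0 / rmse ** 2
w /= w.sum(axis=0, keepdims=True)

merged = sum(w[i][:, None, None] * products[i] for i in range(3))
print("per-product weights at t=0:", np.round(w[:, 0], 3))
print("merged RMSE:", np.sqrt(np.mean((merged - truth) ** 2)))
```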
Scott, WE; Weegman, BP; Balamurugan, AN; Ferrer-Fabrega, J; Anazawa, T; Karatzas, T; Jie, T; Hammer, BE; Matsumoto, S; Avgoustiniatos, ES; Maynard, KS; Sutherland, DER; Hering, BJ; Papas, KK
2014-01-01
Background Porcine islet xenotransplantation is emerging as a potential alternative for allogeneic clinical islet transplantation. Optimization of porcine islet isolation in terms of yield and quality is critical for the success and cost effectiveness of this approach. Incomplete pancreas distension and inhomogeneous enzyme distribution have been identified as key factors for limiting viable islet yield per porcine pancreas. The aim of this study was to explore the utility of Magnetic Resonance Imaging (MRI) as a tool to investigate the homogeneity of enzyme delivery in porcine pancreata. Traditional and novel methods for enzyme delivery aimed at optimizing enzyme distribution were examined. Methods Pancreata were procured from Landrace pigs via en bloc viscerectomy. The main pancreatic duct was then cannulated with an 18g winged catheter and MRI performed at 1.5 T. Images were collected before and after ductal infusion of chilled MRI contrast agent (gadolinium) in physiological saline. Results Regions of the distal aspect of the splenic lobe and portions of the connecting lobe and bridge exhibited reduced delivery of solution when traditional methods of distension were utilized. Use of alternative methods of delivery (such as selective re-cannulation and distension of identified problem regions) resolved these issues and MRI was successfully utilized as a guide and assessment tool for improved delivery. Conclusion Current methods of porcine pancreas distension do not consistently deliver enzyme uniformly or adequately to all regions of the pancreas. Novel methods of enzyme delivery should be investigated and implemented for improved enzyme distribution. MRI serves as a valuable tool to visualize and evaluate the efficacy of current and prospective methods of pancreas distension and enzyme delivery. PMID:24986758
An effective model for ergonomic optimization applied to a new automotive assembly line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio
2016-06-08
An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper, a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations governing manual work performed under correct conditions. The model includes a schematic and systematic method for analyzing the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.
An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.
Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur
2017-01-01
Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
Williams, Perry J.; Kendall, William L.
2017-01-01
Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application for solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including: the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy: the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
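The contrast between the two strategies can be shown in a few lines of code. The sketch below scores a 1-D decision (a scaled population target) against two invented quadratic objectives, picks one solution by the a priori weighted-sum method, and enumerates the Pareto set for a posteriori inspection; the objective functions are placeholders, not the paper's cackling goose models.

```python
import numpy as np

targets = np.linspace(0, 1, 200)               # candidate population targets (scaled)
f1 = (targets - 0.8) ** 2                      # objective 1: subsistence-harvest shortfall
f2 = (targets - 0.2) ** 2                      # objective 2: crop-depredation cost
F = np.column_stack([f1, f2])

# A priori strategy: weighted-sum scalarization with preferences fixed in advance.
w = np.array([0.6, 0.4])
best_apriori = targets[np.argmin(F @ w)]

# A posteriori strategy: enumerate the Pareto (non-dominated) set, then choose.
def pareto_mask(F):
    dominated = np.zeros(len(F), dtype=bool)
    for i in range(len(F)):
        dominated[i] = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
    return ~dominated

pareto = targets[pareto_mask(F)]
print(f"weighted-sum choice: {best_apriori:.3f}")
print(f"Pareto set spans [{pareto.min():.3f}, {pareto.max():.3f}] "
      f"({len(pareto)} solutions)")
```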
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2017-11-07
Hemineglect, defined as a failure to attend to the contralesional side of space, is a prevalent and disabling post-stroke deficit. Conventional hemineglect assessments lack sensitivity as they contain mainly non-functional tasks performed in near-extrapersonal space, using static, two-dimensional methods. This is of concern given that hemineglect is a strong predictor for functional deterioration, limited post-stroke recovery, and difficulty in community reintegration. With the emerging field of virtual reality, several virtual tools have been proposed and have reported better sensitivity in neglect-related deficit detection than conventional methods. However, these and future virtual reality-based tools are yet to be implemented in clinical practice. The present study aimed to explore the barriers/facilitators perceived by clinicians in the use of virtual reality for hemineglect assessment, and to identify features of an optimal virtual assessment. A qualitative descriptive process, in the form of focus groups, a self-administered questionnaire, and individual interviews, was used. Two focus groups (n = 11 clinicians) were conducted and experts in the field (n = 3) were individually interviewed. Several barriers and facilitators, including personal, institutional, client suitability, and equipment factors, were identified. Clinicians and experts in the field reported numerous features for the virtual tool optimization. Factors identified through this study lay the foundation for the development of a knowledge translation initiative towards an implementation of a virtual assessment for hemineglect. Addressing the identified barriers/facilitators during implementation and incorporating the optimal features in the design of the virtual assessment could assist and promote its eventual adoption in clinical settings. Implications for rehabilitation: A multimodal and active knowledge translation intervention built on the presently identified modifiable factors is suggested to be implemented to support the clinical integration of a virtual reality-based assessment for post-stroke hemineglect. To amplify application and usefulness of a virtual reality-based tool in the assessment of post-stroke hemineglect, optimal features identified in the present study should be incorporated in the design of such technology.
Mode perturbation method for optimal guided wave mode and frequency selection.
Philtron, J H; Rose, J L
2014-09-01
With a thorough understanding of guided wave mechanics, researchers can predict which guided wave modes will have a high probability of success in a particular nondestructive evaluation application. However, work continues to find optimal mode and frequency selection for a given application. This "optimal" mode could give the highest sensitivity to defects or the greatest penetration power, increasing inspection efficiency. Since material properties used for modeling work may be estimates, in many cases guided wave mode and frequency selection can be adjusted for increased inspection efficiency in the field. In this paper, a novel mode and frequency perturbation method is described and used to identify optimal mode points based on quantifiable wave characteristics. The technique uses an ultrasonic phased array comb transducer to sweep in phase velocity and frequency space. It is demonstrated using guided interface waves for bond evaluation. After searching nearby mode points, an optimal mode and frequency can be selected which has the highest sensitivity to a defect, or gives the greatest penetration power. The optimal mode choice for a given application depends on the requirements of the inspection. Copyright © 2014 Elsevier B.V. All rights reserved.
Guglielmo, F; Bergemann, S E; Gonthier, P; Nicolotti, G; Garbelotto, M
2007-11-01
The goal of this research was the development of a PCR-based assay to identify important decay fungi from wood of hardwood tree species in northern temperate regions. Eleven taxon-specific primers were designed for PCR amplification of either nuclear or mitochondrial ribosomal DNA regions of Armillaria spp., Ganoderma spp., Hericium spp., Hypoxylon thouarsianum var. thouarsianum, Inonotus/Phellinus-group, Laetiporus spp., Perenniporia fraxinea, Pleurotus spp., Schizophyllum spp., Stereum spp. and Trametes spp. Multiplex PCR reactions were developed and optimized to detect fungal DNA and identify each taxon with a sensitivity of at least 1 pg of target DNA in the template. This assay correctly identified the agents of decay in 82% of tested wood samples. The development and optimization of multiplex PCRs allowed for reliable identification of wood rotting fungi directly from wood. Early detection of wood decay fungi is crucial for assessment of tree stability in urban landscapes. Furthermore, this method may prove useful for prediction of the severity and the evolution of decay in standing trees.
Jung, Melissa R; Horgen, F David; Orski, Sara V; Rodriguez C, Viviana; Beers, Kathryn L; Balazs, George H; Jones, T Todd; Work, Thierry M; Brignac, Kayla C; Royer, Sarah-Jeanne; Hyrenbach, K David; Jensen, Brenda A; Lynch, Jennifer M
2018-02-01
Polymer identification of plastic marine debris can help identify its sources, degradation, and fate. We optimized and validated a fast, simple, and accessible technique, attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR), to identify polymers contained in plastic ingested by sea turtles. Spectra of consumer good items with known resin identification codes #1-6 and several #7 plastics were compared to standard and raw manufactured polymers. High temperature size exclusion chromatography measurements confirmed ATR FT-IR could differentiate these polymers. High-density (HDPE) and low-density polyethylene (LDPE) discrimination is challenging, but a clear step-by-step guide is provided that identified 78% of ingested PE samples. The optimal cleaning methods consisted of wiping ingested pieces with water or cutting. Of 828 ingested plastic pieces from 50 Pacific sea turtles, 96% were identified by ATR FT-IR as HDPE, LDPE, unknown PE, polypropylene (PP), PE and PP mixtures, polystyrene, polyvinyl chloride, and nylon. Published by Elsevier Ltd.
León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.
2013-01-01
The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921
Assessment of Trading Partners for China's Rare Earth Exports Using a Decision Analytic Approach
He, Chunyan; Lei, Yalin; Ge, Jianping
2014-01-01
Chinese rare earth export policies currently result in accelerating its depletion. Thus, adopting an optimal export trade selection strategy is crucial to determining and ultimately identifying the ideal trading partners. This paper introduces a multi-attribute decision-making methodology which is then used to select the optimal trading partner. In the method, an evaluation criteria system is established to assess the seven top trading partners based on three dimensions: political relationships, economic benefits and industrial security. Specifically, a simple additive weighting model derived from an additive utility function is utilized to calculate, rank and select alternatives. Results show that Japan would be the optimal trading partner for Chinese rare earths. The criteria evaluation method of trading partners for China's rare earth exports provides the Chinese government with a tool to enhance rare earth industrial policies. PMID:25051534
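The simple additive weighting step is the computational core of the assessment and fits in a dozen lines. The decision matrix, criteria weights, and partner labels below are invented placeholders; the paper's actual criteria system scores seven partners on political, economic, and industrial-security dimensions.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate trading partners,
# columns = criteria (political relationship, economic benefit, industrial security).
partners = ["A", "B", "C", "D"]
scores = np.array([[0.7, 0.9, 0.6],
                   [0.8, 0.6, 0.7],
                   [0.5, 0.8, 0.9],
                   [0.6, 0.7, 0.5]])
weights = np.array([0.3, 0.4, 0.3])            # criteria weights (sum to 1)

norm = scores / scores.max(axis=0)             # benefit-type linear normalization
saw = norm @ weights                           # simple additive weighting score
for p, s in sorted(zip(partners, saw), key=lambda t: -t[1]):
    print(f"{p}: {s:.3f}")
```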
glmnetLRC f/k/a lrc package: Logistic Regression Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-09
Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds additional model-fitting features to the existing glmnet and bestglm R packages. This package was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross validation with a customizable loss function. It also identifies the optimal threshold for binary classification.
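The package itself is R, but the core idea - selecting model parameters and a classification threshold by cross-validation under a user-defined loss - translates directly. The Python sketch below tunes only the decision threshold of a logistic regression under an invented asymmetric loss; it is an analogue of the idea, not the package's API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Custom loss: false negatives cost 5, false positives cost 1 (invented weights).
loss = lambda y_true, y_hat: np.mean(5.0 * ((y_hat == 0) & (y_true == 1)) +
                                     1.0 * ((y_hat == 1) & (y_true == 0)))

taus = np.linspace(0.05, 0.95, 19)
cv_loss = np.zeros_like(taus)
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    p = LogisticRegression(max_iter=1000).fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
    cv_loss += [loss(y[te], (p >= t).astype(int)) for t in taus]
tau_opt = taus[np.argmin(cv_loss)]
print(f"optimal threshold under asymmetric loss: {tau_opt:.2f}")
```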
Selection of sampling rate for digital control of aircraft
NASA Technical Reports Server (NTRS)
Katz, P.; Powell, J. D.
1974-01-01
The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model that includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady state Kalman filter, and the mean response to external disturbances are calculated.
An optimized ensemble local mean decomposition method for fault detection of mechanical components
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang
2017-03-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operation conditions.
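ELMD implementations are not part of the common Python scientific stack, so the sketch below shows only the outer parameter search: sweep candidate noise amplitudes, score each with a Relative RMSE criterion, and keep the maximizer. elmd_first_component is a hypothetical placeholder that must be replaced by a real ELMD routine; the moving-average "decomposition" inside it exists only to make the skeleton runnable.

```python
import numpy as np

def elmd_first_component(signal, noise_amp, rng):
    """Hypothetical stand-in for ELMD: returns a first decomposed component.
    Replace with a real ensemble local mean decomposition implementation."""
    noisy = signal + rng.normal(0, noise_amp, signal.size)
    trend = np.convolve(noisy, np.ones(25) / 25, mode="same")  # crude detrending
    return noisy - trend

def relative_rmse(x, comp):
    return np.sqrt(np.mean((x - comp) ** 2)) / np.sqrt(np.mean(x ** 2))

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)  # toy vibration

# OELMD-style outer search: pick the noise amplitude maximizing Relative RMSE.
amps = [0.01, 0.05, 0.1, 0.2, 0.4]
scores = [relative_rmse(signal, elmd_first_component(signal, a, rng)) for a in amps]
print("candidate amplitudes:", amps)
print("Relative RMSE:", np.round(scores, 3))
print("selected noise amplitude:", amps[int(np.argmax(scores))])
```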
Chen, Sha; Wu, Ben-Hong; Fang, Jin-Bao; Liu, Yan-Ling; Zhang, Hao-Hao; Fang, Lin-Chuan; Guan, Le; Li, Shao-Hua
2012-03-02
The extraction protocol of flavonoids from lotus (Nelumbo nucifera) leaves was optimized through an orthogonal design. Among the factors compared (solvent, solvent:tissue ratio, extraction time, and temperature), the solvent was the most important. The highest yield of flavonoids was achieved with 70% methanol-water and a solvent:tissue ratio of 30:1 at 4 °C for 36 h. The optimized analytical method for HPLC was a multi-step gradient elution using 0.5% formic acid (A) and CH₃CN containing 0.1% formic acid (B), at a flow rate of 0.6 mL/min. Using this optimized method, thirteen flavonoids were simultaneously separated and identified by high performance liquid chromatography coupled with photodiode array detection/electrospray ionization mass spectrometry (HPLC/DAD/ESI-MS(n)). Five of the bioactive compounds are reported in lotus leaves for the first time. The flavonoid content of the leaves of three representative cultivars was assessed under the optimized extraction and HPLC analytical conditions, and the seed-producing cultivar 'Baijianlian' had the highest flavonoid content compared with rhizome-producing 'Zhimahuoulian' and wild floral cultivar 'Honglian'. Copyright © 2012 Elsevier B.V. All rights reserved.
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
NASA Astrophysics Data System (ADS)
Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.
2013-10-01
Based on rainfall intensity-duration-frequency (IDF) curves fitted at several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and the rainfall variogram structure, using a variance-reduction method. Hydrological variability was taken into account by considering several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function; a short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21,000 km²). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau, available for 14 tipping-bucket rain gauges. The recording period ran from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, a very significant year in Tunisia because of an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for minimum spatial density. It is therefore proposed to augment it by 25, 50, 100 and 160%, the last being the rate that would meet WMO requirements. Results suggest that, for a given augmentation level, robust networks remain stable overall across the two time horizons.
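The search itself is a combinatorial simulated annealing over candidate sites. The sketch below keeps that skeleton but replaces the kriging-variance objective with a nearest-station distance proxy, since reproducing the cross-variogram kriging model is beyond a few lines; coordinates, counts, and the cooling schedule are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)
existing = rng.uniform(0, 100, (13, 2))        # 13 stations of the base network (km, toy)
candidates = rng.uniform(0, 100, (200, 2))     # candidate sites for new gauges
k = 7                                          # roughly a 50% augmentation

def mean_variance_proxy(sel):
    """Proxy for mean kriging variance: average squared distance from a dense
    grid of prediction points to the nearest station (true kriging omitted)."""
    stations = np.vstack([existing, candidates[sel]])
    gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((grid[:, None, :] - stations[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

sel = rng.choice(len(candidates), k, replace=False)
obj, T = mean_variance_proxy(sel), 1.0
for _ in range(3000):                          # simulated annealing over site subsets
    new = sel.copy()
    new[rng.integers(k)] = rng.integers(len(candidates))
    if len(set(new)) < k:                      # skip moves that duplicate a site
        continue
    o = mean_variance_proxy(new)
    if o < obj or rng.random() < np.exp((obj - o) / T):   # Metropolis acceptance
        sel, obj = new, o
    T *= 0.999                                 # geometric cooling
print("selected candidate indices:", sorted(sel), "objective:", round(obj, 2))
```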
Development of gel-filter method for high enrichment of low-molecular weight proteins from serum.
Chen, Lingsheng; Zhai, Linhui; Li, Yanchang; Li, Ning; Zhang, Chengpu; Ping, Lingyan; Chang, Lei; Wu, Junzhu; Li, Xiangping; Shi, Deshun; Xu, Ping
2015-01-01
The human serum proteome has been extensively screened for biomarkers. However, the large dynamic range of protein concentrations in serum and the presence of highly abundant, large-molecular-weight proteins make identification and detection of changes in the amount of low-molecular-weight proteins (LMW, molecular weight ≤ 30 kDa) difficult. Here, we developed a gel-filter method including four layers of tricine SDS-PAGE-based gels of different concentrations to block high-molecular-weight proteins and enrich LMW proteins. By utilizing this method, we identified 1,576 proteins (n = 2) from 10 μL serum. Among them, 559 (n = 2) proteins belonged to LMW proteins. Furthermore, this gel-filter method could identify 67.4% and 39.8% more LMW proteins than the representative methods of glycine SDS-PAGE and optimized-DS, respectively. By utilizing a SILAC-AQUA approach with a labeled recombinant protein as internal standard, the recovery rate for GST spiked in serum during the treatment of gel-filter, optimized-DS, and ProteoMiner was 33.1 ± 0.01%, 18.7 ± 0.01%, and 9.6 ± 0.03%, respectively. These results demonstrate that the gel-filter method offers a rapid, highly reproducible and efficient approach for screening biomarkers from serum through proteomic analyses.
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified; in this work, conventional compliance (secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function, it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
On the Use of CAD and Cartesian Methods for Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.
2004-01-01
The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) Component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) The use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and parameter values of the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Sharif, K M; Rahman, M M; Azmir, J; Khatib, A; Sabina, E; Shamsudin, S H; Zaidul, I S M
2015-12-01
Multivariate analysis of thin-layer chromatography (TLC) images was modeled to predict antioxidant activity of Pereskia bleo leaves and to identify the contributing compounds of the activity. TLC was developed in an optimized mobile phase using the 'PRISMA' optimization method, and the image was then converted to wavelet signals and imported for multivariate analysis. An orthogonal partial least square (OPLS) model was developed consisting of the wavelet-converted TLC image and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging activity of 24 different preparations of P. bleo as the x- and y-variables, respectively. The quality of the constructed OPLS model (1 + 1 + 0) with one predictive and one orthogonal component was evaluated by internal and external validity tests. The validated model was then used to identify the contributing spot from the TLC plate, which was then analyzed by GC-MS after trimethylsilyl derivatization. Glycerol and amine compounds were mainly found to contribute to the antioxidant activity of the sample. An alternative method to predict the antioxidant activity of a new sample of P. bleo leaves has been developed. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT
2018-02-01
Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto optimal solutions. This work focuses on optimal multi-response evaluation of process parameters in generating responses like surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing tangential and orthogonal turn-mill operations on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate and depth of cut are considered as the process parameters; brass is machined under dry conditions with high-speed steel end-milling cutters using a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the multiple objectives 'Ra', 'H' and 'Vib' and to identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, viz. grey relational analysis (GRA).
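GRA, the comparison technique named at the end, reduces to a short, fully specified computation. The sketch below runs it on an invented 9-run response table for Ra, H, and Vib (smaller-better, larger-better, smaller-better respectively), with the customary distinguishing coefficient of 0.5.

```python
import numpy as np

# Hypothetical turn-mill responses for 9 runs: Ra (smaller-better),
# H (larger-better), Vib (smaller-better).
R = np.array([[1.8, 182, 12.0], [1.5, 190, 10.5], [2.1, 175, 14.2],
              [1.2, 205, 9.8],  [1.6, 188, 11.1], [1.9, 180, 13.0],
              [1.4, 198, 10.0], [2.0, 178, 13.5], [1.3, 201, 9.5]])
kind = ["min", "max", "min"]

# Grey relational normalization per response (ideal value maps to 1).
N = np.empty_like(R, dtype=float)
for j, k in enumerate(kind):
    lo, hi = R[:, j].min(), R[:, j].max()
    N[:, j] = (hi - R[:, j]) / (hi - lo) if k == "min" else (R[:, j] - lo) / (hi - lo)

delta = 1.0 - N                                  # deviation from the ideal sequence
zeta = 0.5                                       # distinguishing coefficient
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = xi.mean(axis=1)                          # grey relational grade per run
print("best run:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))
```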
Metabolomic Tools to Assess the Chemistry and Bioactivity of Endophytic Aspergillus Strain.
Tawfike, Ahmed F; Tate, Rothwelle; Abbott, Gráinne; Young, Louise; Viegelmann, Christina; Schumacher, Marc; Diederich, Marc; Edrada-Ebel, RuAngelie
2017-10-01
Endophytic fungi associated with medicinal plants are a potential source of novel chemistry and biology that may find applications as pharmaceutical and agrochemical drugs. In this study, a combination of metabolomics and bioactivity-guided approaches was employed to isolate secondary metabolites with cytotoxicity against cancer cells from an endophytic Aspergillus aculeatus. The endophyte was isolated from the Egyptian medicinal plant Terminalia laxiflora and identified using molecular biological methods. Metabolomics and dereplication studies were accomplished by utilizing the MZmine software coupled with the universal Dictionary of Natural Products database. Metabolic profiling, with the aid of multivariate data analysis, was performed at different stages of the growth curve to choose the optimized method suitable for up-scaling. The optimized culture method yielded a crude extract abundant with biologically-active secondary metabolites. Crude extracts were fractionated using different high-throughput chromatographic techniques. Purified compounds were identified by HR-ESI-MS, 1D- and 2D-NMR. This study introduced a new method of dereplication utilizing both high-resolution mass spectrometry and NMR spectroscopy. The metabolites were putatively identified by applying a chemotaxonomic filter. We also present a short review on the diverse chemistry of terrestrial endophytic strains of Aspergillus, which has become a part of our dereplication work and will be of wide interest to those working in this field. © 2017 Wiley-VHCA AG, Zurich, Switzerland.
Optimizing Vetoes for Gravitational-wave Transient Searches
NASA Technical Reports Server (NTRS)
Essick, R.; Blackburn, Lindy L.; Katsavounidis, E.
2014-01-01
Interferometric gravitational-wave detectors like LIGO, GEO600 and Virgo record a surplus of information above and beyond possible gravitational-wave events. These auxiliary channels capture information about the state of the detector and its surroundings which can be used to infer potential terrestrial noise sources of some gravitational-wave-like events. We present an algorithm addressing the ordering (or equivalently optimizing) of such information from auxiliary systems in gravitational-wave detectors to establish veto conditions in searches for gravitational-wave transients. The procedure was used to identify vetoes for searches for unmodelled transients by the LIGO and Virgo collaborations during their science runs from 2005 through 2007. In this work we present the details of the algorithm; we also use a limited amount of data from LIGO's past runs in order to examine the method, compare it with other methods, and identify its potential to characterize the instruments themselves. We examine the dependence of Receiver Operating Characteristic curves on the various parameters of the veto method and on its implementation on real data. We find that the method robustly determines important auxiliary channels, ordering them by the apparent strength of their correlations to the gravitational-wave channel. This list can substantially reduce the background of noise events in the gravitational-wave data. In this way it can identify the source of glitches in the detector as well as assist in establishing confidence in the detection of gravitational-wave transients.
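A greedy, hierarchical version of such a veto-ordering procedure is easy to sketch: rank auxiliary channels by their efficiency-to-deadtime ratio on the not-yet-vetoed data, apply the best channel's veto, and repeat until no channel is useful. Everything below (channel count, correlations, the stopping threshold of 2) is synthetic; it mirrors the flavor of the algorithm, not the collaborations' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_events = 5000
gw_glitch = rng.random(n_events) < 0.1                 # GW-channel noise events (labels)

# Synthetic auxiliary channels: each flags a subset of events, some correlated
# with the glitches, some not.
channels = {f"aux{i}": (gw_glitch & (rng.random(n_events) < c)) |
                       (rng.random(n_events) < 0.01)
            for i, c in enumerate([0.6, 0.3, 0.05, 0.0])}

alive = np.ones(n_events, dtype=bool)
order = []
for _ in range(len(channels)):
    def score(flags):
        # Efficiency-to-deadtime ratio on the events not vetoed so far.
        dead = (flags & alive).sum() / max(alive.sum(), 1)
        eff = (flags & alive & gw_glitch).sum() / max((alive & gw_glitch).sum(), 1)
        return eff / max(dead, 1e-6)
    name = max(set(channels) - set(order), key=lambda n: score(channels[n]))
    if score(channels[name]) < 2.0:                    # stop when no useful channel left
        break
    order.append(name)
    alive &= ~channels[name]                           # apply veto, then re-rank
print("veto order:", order, "| glitches removed:",
      (~alive & gw_glitch).sum(), "of", gw_glitch.sum())
```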
Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao
2017-01-01
Authorship attribution is to identify the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts, etc., but can also be used to identify source-code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious-code tracking to solving authorship disputes or detecting software plagiarism. This paper aims to propose a new method to identify the programmer of Java source-code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead. PMID:29095934
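The PSO-trained network is straightforward to caricature in numpy. The sketch below evolves the weights of a one-hidden-layer classifier over 19 synthetic "style metric" features with plain global-best PSO; the paper's hybrid additionally refines particles with back-propagation, which is omitted here, and all data and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy stand-in for the 19-dimensional source-code metrics of 3 authors.
X = rng.normal(size=(120, 19)) + np.repeat(np.eye(3, 19) * 3, 40, axis=0)
y = np.repeat(np.arange(3), 40)

H = 8                                # hidden units
n_w = 19 * H + H + H * 3 + 3         # total weight-vector length

def accuracy(w):
    """Decode a flat particle into network weights and score training accuracy."""
    W1 = w[:19 * H].reshape(19, H); b1 = w[19 * H:19 * H + H]
    W2 = w[19 * H + H:19 * H + H + H * 3].reshape(H, 3); b2 = w[-3:]
    logits = np.tanh(X @ W1 + b1) @ W2 + b2
    return (logits.argmax(1) == y).mean()

# Plain global-best PSO over the network weights (no BP refinement here).
P = rng.normal(size=(30, n_w)); V = np.zeros_like(P)
pbest = P.copy(); pbest_f = np.array([accuracy(p) for p in P])
for _ in range(200):
    g = pbest[pbest_f.argmax()]                       # global best particle
    V = (0.7 * V + 1.5 * rng.random(P.shape) * (pbest - P)
               + 1.5 * rng.random(P.shape) * (g - P))
    P += V
    f = np.array([accuracy(p) for p in P])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = P[improved], f[improved]
print("training accuracy:", pbest_f.max())
```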
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design method is introduced. The method combines groundwater flow and transport results with genetic algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The flow and contamination simulation results are introduced as input to the optimization model, which uses a GA to identify the optimal monitoring network design from several candidate monitoring locations. The design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, a GA approach capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10¹⁸ feasible solutions is discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, appropriate GA parameter values must be specified. A sensitivity analysis of genetic algorithm parameters such as the random seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
Fatigue design of a cellular phone folder using regression model-based multi-objective optimization
NASA Astrophysics Data System (ADS)
Kim, Young Gyun; Lee, Jongsoo
2016-08-01
In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine-tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples.
Zhou, Fangbin; Zhou, Yaying; Yang, Ming; Wen, Jinli; Dong, Jun; Tan, Wenyong
2018-01-01
Circulating endothelial cells (CECs) and their subpopulations could be potential novel biomarkers for various malignancies. However, reliable enumeration methods are needed to further improve their clinical utility. This study aimed to optimize a flow cytometric method (FCM) assay for CECs and their subpopulations in the peripheral blood of patients with solid cancers. An FCM assay was used to detect and identify CECs. A panel of 60 blood samples, including 44 from metastatic cancer patients and 16 from healthy controls, was used in this study. Several key issues of CEC enumeration were integrated and investigated, including sample material and anticoagulant selection, optimal titration of antibodies, lysis/wash procedures of blood sample preparation, conditions of sample storage, sufficient cell events to enhance the signal, fluorescence-minus-one controls instead of isotype controls to reduce background noise, optimal selection of cell surface markers, and the reproducibility of the method. Wilcoxon and Mann-Whitney U tests were used to determine statistically significant differences. In this validation study, we refined a five-color FCM method to detect CECs and their subpopulations in the peripheral blood of patients with solid tumors. Several key technical issues regarding preanalytical elements, FCM data acquisition, and analysis were addressed. Furthermore, we clinically validated the utility of our method. The baseline levels of mature CECs, endothelial progenitor cells, and activated CECs were higher in cancer patients than in healthy subjects (P<0.01). However, there was no significant difference in resting CEC levels between healthy subjects and cancer patients (P=0.193). We integrated and comprehensively addressed significant technical issues found in previously published assays and validated the reproducibility and sensitivity of our proposed method. Future work is required to explore the potential of our optimized method in clinical oncologic applications.
Resonant power processors. II - Methods of control
NASA Technical Reports Server (NTRS)
Oruganti, R.; Lee, F. C.
1984-01-01
The nature of resonant converter control is discussed. Employing the state-portrait, different control methods for the series resonant converter are identified and their performance evaluated on the basis of stability, response to control and load changes, and range of operation. A new control method, optimal-trajectory control, is proposed which, by utilizing the state trajectories as control laws, continuously monitors the energy level of the resonant tank. The method is shown to have superior control properties, especially under transient operation.
A classification scheme for risk assessment methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamp, Jason Edwin; Campbell, Philip LaRoche
2004-08-01
This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses; those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation at hand. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method'. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of the report is organized as follows. In Section 2 we provide context: what a 'method' is and where it fits. In Section 3 we present background for our classification scheme: other schemes we have found, and the fundamental nature of methods and their necessary incompleteness. In Section 4 we present our classification scheme in the form of a matrix, followed by an analogy that should aid understanding of the scheme, concluding with an explanation of the two dimensions and the nine types. In Section 5 we present examples of each classification type. In Section 6 we present conclusions.
2014-01-01
Background The Theoretical Domains Framework (TDF) is a set of 14 domains of behavior change that provide a framework for the critical issues and factors influencing optimal knowledge translation. Considering that a previous study has identified optimal knowledge translation techniques for each TDF domain, it was hypothesized that the TDF could be used to contextualize and interpret findings from a behavioral and educational needs assessment. To illustrate this hypothesis, findings and recommendations drawn from a 2012 national behavioral and educational needs assessment conducted with healthcare providers who treat and manage Growth and Growth Hormone Disorders are discussed using the TDF. Methods This needs assessment utilized a mixed-methods research approach that included a combination of: [a] data sources (Endocrinologists (n:120), Pediatric Endocrinologists (n:53), Pediatricians (n:52)), [b] data collection methods (focus groups, interviews, online survey), and [c] analysis methodologies (qualitative, analyzed through thematic analysis; quantitative, analyzed using frequencies, cross-tabulations, and gap analysis). Triangulation was used to generate trustworthy findings on the clinical practice gaps of endocrinologists, pediatric endocrinologists, and general pediatricians in their provision of care to adult patients with adult growth hormone deficiency or acromegaly, or children/teenagers with pediatric growth disorders. The identified gaps were then broken down into key underlying determinants, categorized according to the TDF domains, and linked to optimal behavioral change techniques. Results The needs assessment identified 13 gaps, each with one or more underlying determinants. Overall, these determinants were mapped to 9 of the 14 TDF domains. The Beliefs about Consequences domain was identified as a contributing determinant to 7 of the 13 challenges. Five of the gaps could be related to the Skills domain, while three were linked to the Knowledge domain. Conclusions The TDF categorization of the needs assessment findings allowed recommendation of appropriate behavior change techniques for each underlying determinant, and facilitated communication and understanding of the identified issues to a broader audience. This approach provides a means for health education researchers to categorize gaps and challenges identified through educational needs assessments, and facilitates the application of these findings by educators and knowledge translators by linking the gaps to recommended behavioral change techniques. PMID:25060235
Reliability Methods for Shield Design Process
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Wilson, J. W.
2002-01-01
Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phases of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of the two most commonly identified uncertainties in radiation shield design: the shielding properties of the materials used, and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response, especially to high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.
Scott, William E; Weegman, Bradley P; Balamurugan, Appakalai N; Ferrer-Fabrega, Joana; Anazawa, Takayuki; Karatzas, Theodore; Jie, Tun; Hammer, Bruce E; Matsumoto, Shuchiro; Avgoustiniatos, Efstathios S; Maynard, Kristen S; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K
2014-01-01
Porcine islet xenotransplantation is emerging as a potential alternative for allogeneic clinical islet transplantation. Optimization of porcine islet isolation in terms of yield and quality is critical for the success and cost-effectiveness of this approach. Incomplete pancreas distention and inhomogeneous enzyme distribution have been identified as key factors for limiting viable islet yield per porcine pancreas. The aim of this study was to explore the utility of magnetic resonance imaging (MRI) as a tool to investigate the homogeneity of enzyme delivery in porcine pancreata. Traditional and novel methods for enzyme delivery aimed at optimizing enzyme distribution were examined. Pancreata were procured from Landrace pigs via en bloc viscerectomy. The main pancreatic duct was then cannulated with an 18-g winged catheter and MRI performed at 1.5-T. Images were collected before and after ductal infusion of chilled MRI contrast agent (gadolinium) in physiological saline. Regions of the distal aspect of the splenic lobe and portions of the connecting lobe and bridge exhibited reduced delivery of solution when traditional methods of distention were utilized. Use of alternative methods of delivery (such as selective re-cannulation and distention of identified problem regions) resolved these issues, and MRI was successfully utilized as a guide and assessment tool for improved delivery. Current methods of porcine pancreas distention do not consistently deliver enzyme uniformly or adequately to all regions of the pancreas. Novel methods of enzyme delivery should be investigated and implemented for improved enzyme distribution. MRI serves as a valuable tool to visualize and evaluate the efficacy of current and prospective methods of pancreas distention and enzyme delivery. © 2014 John Wiley & Sons A/S Published by John Wiley & Sons Ltd.
On process optimization considering LCA methodology.
Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío
2012-04-15
The goal of this work is to survey the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept for defining the system boundaries is the most used approach in practice, instead of the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is expressed linearly through the characterization factors; synergistic effects of the contaminants are thus neglected. Among the LCIA methods, the eco-indicator 99, which is based on the endpoint category and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most of the analyzed cases. SimaPro is the most used software for LCA applications in the analyzed literature. Multi-objective optimization is the most common approach for dealing with this kind of problem, and the ε-constraint method is the most applied technique for generating the Pareto set. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing the costs of environmental externalities. Finally, a trend toward dealing with multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results when data are available. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zoka, Yoshifumi; Yorino, Naoto; Kawano, Koki; Suenari, Hiroyasu
This paper proposes a fast computation method for Available Transfer Capability (ATC) with respect to thermal and voltage magnitude limits. ATC is formulated as an optimization problem. To achieve efficiency in the N-1 outage contingency calculations, linear sensitivity methods are applied to screen and rank all contingency selections with respect to the thermal and voltage magnitude margins and thereby identify the severest case. In addition, homotopy functions are used for the generator QV constraints to reduce the maximum error of the linear estimation. The Primal-Dual Interior Point Method (PDIPM) is then used to solve the optimization problem for the severest case only, so that the ATC solution can be obtained efficiently. The effectiveness of the proposed method is demonstrated on the IEEE 30-, 57-, and 118-bus systems.
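Aside: the screening-and-ranking step described above can be illustrated with a toy linear-sensitivity calculation. The sketch below estimates post-outage line flows with line-outage distribution factors (LODFs) and ranks contingencies by their worst thermal loading; the matrix and flows are made-up numbers, and the paper's voltage-margin screening and homotopy correction are not modeled.

    # Toy sketch of contingency screening by linear sensitivity factors.
    # Post-outage flow on line l when line k is outaged is estimated as
    # f_l + LODF[l, k] * f_k; contingencies are ranked by worst loading.
    import numpy as np

    flows = np.array([80.0, 60.0, 90.0])        # base-case MW flows on lines 0..2
    limits = np.array([100.0, 100.0, 100.0])    # thermal limits
    lodf = np.array([[0.0, 0.4, 0.3],           # line-outage distribution factors:
                     [0.5, 0.0, 0.6],           # lodf[l, k] = added share of line k's
                     [0.2, 0.3, 0.0]])          # flow on line l when k is outaged

    def severity(k):
        """Largest post-outage loading (fraction of limit) over remaining lines."""
        post = flows + lodf[:, k] * flows[k]
        post[k] = 0.0                            # the outaged line carries no flow
        return (np.abs(post) / limits).max()

    ranking = sorted(range(len(flows)), key=severity, reverse=True)
    print("contingencies, severest first:", ranking)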
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, often beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
Xiao, Xin-Yu; Cui, Long-Hai; Zhou, Xin-Xin; Wu, Yan; Ge, Fa-Huan
2011-05-01
The orthogonal test and supercritical carbon dioxide fluid extraction were used for the first time to optimize the extraction of the essential oil from Plumeria rubra var. actifolia. Compared with steam distillation, the optimal operating parameters of the extraction were as follows: extraction pressure 25 MPa, extraction temperature 45 degrees C; separator I pressure 12 MPa, separator I temperature 55 degrees C; separator II pressure 6 MPa, separator II temperature 30 degrees C. Under these conditions the yield of the essential oil was 5.8927%. The components were separated and identified by GC-MS. Fifty-three components of the oil obtained by the SFE method were identified, and their relative contents were determined by the normalization method. The main components were 1,6,10-dodecatrien-3-ol, 3,7,11-trimethyl-; benzoic acid, 2-hydroxy-, phenylmethyl ester; and 1,2-benzenedicarboxylic acid, bis(2-methylpropyl) ester. 1,2-Benzenedicarboxylic acid, bis(2-methylpropyl) ester took up 66.11% of the total amount, and the results differed considerably from those of the SD method.
Assessing pretreatment reactor scaling through empirical analysis
Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik; ...
2016-10-10
Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave(R) reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the sets of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield, respectively. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest-scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave. The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and was within the near-optimal space for total sugar yield for the LHR. This indicates that the ASE is a good tool for cost-effectively finding near-optimal conditions for operating pilot-scale systems, which may be used as starting points for further optimization. Additionally, using a severity-factor approach to optimization was found to be inadequate compared to a multivariate optimization method. As a result, the ASE and the LHR were able to enable significantly higher total sugar yields after enzymatic hydrolysis relative to the ZCR, despite having similar optimal conditions and total xylose yields. This underscores the importance of incorporating mechanical disruption into pretreatment reactor designs to achieve high enzymatic digestibilities.
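Aside: the response-surface modeling used here can be sketched as a quadratic least-squares fit over the reaction conditions, with the near-optimal region defined, as in the study, by predicted yields within one standard deviation of the optimum. The data, factor ranges, and coefficients below are synthetic placeholders, not the study's measurements.

    # Sketch of a quadratic response-surface fit for yield vs. (time, temperature),
    # with the "near-optimal" region flagged as predicted yields within one
    # residual standard deviation of the optimum. Data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.uniform(5, 30, 40)          # residence time, min
    T = rng.uniform(150, 200, 40)       # temperature, deg C
    yield_obs = 90 - 0.05*(t - 18)**2 - 0.02*(T - 175)**2 + rng.normal(0, 1, 40)

    # Design matrix for a full quadratic model in two factors.
    X = np.column_stack([np.ones_like(t), t, T, t*T, t**2, T**2])
    beta, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
    resid_sd = np.std(yield_obs - X @ beta)

    # Evaluate the surface on a grid and flag the near-optimal region.
    tg, Tg = np.meshgrid(np.linspace(5, 30, 60), np.linspace(150, 200, 60))
    G = np.column_stack([np.ones(tg.size), tg.ravel(), Tg.ravel(),
                         (tg*Tg).ravel(), (tg**2).ravel(), (Tg**2).ravel()])
    pred = (G @ beta).reshape(tg.shape)
    near_optimal = pred >= pred.max() - resid_sd
    print("optimum ~%.1f%% yield; %d of %d grid points near-optimal"
          % (pred.max(), near_optimal.sum(), near_optimal.size))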
NASA Astrophysics Data System (ADS)
Archer, Cristina; Ghaisas, Niranjan
2015-04-01
The energy generation at a wind farm is controlled primarily by the average wind speed at hub height. However, two other factors impact wind farm performance: 1) the layout of the wind turbines, in terms of spacing between turbines along and across the prevailing wind direction, staggering or aligning of consecutive rows, and angles between rows, columns, and the prevailing wind direction; and 2) atmospheric stability, which is a measure of whether vertical motion is enhanced (unstable), suppressed (stable), or neither (neutral). Studying both factors and their complex interplay with Large-Eddy Simulation (LES) is a valid approach because it produces high-resolution, 3D, turbulent fields, such as wind velocity, temperature, and momentum and heat fluxes, and it properly accounts for the interactions between wind turbine blades and the surrounding atmospheric and near-surface properties. However, LES are computationally expensive, and simulating all the possible combinations of wind directions, atmospheric stabilities, and turbine layouts to identify the optimal wind farm configuration is practically infeasible today. A new, geometry-based method is proposed that is computationally inexpensive and that combines simple geometric quantities with a minimal number of LES simulations to identify the optimal wind turbine layout, taking into account not only the actual frequency distribution of wind directions (i.e., wind rose) at the site of interest, but also atmospheric stability. The geometry-based method is calibrated with LES of the Lillgrund wind farm conducted with the Software for Offshore/onshore Wind Farm Applications (SOWFA), based on the open-access OpenFOAM libraries. The geometric quantities that offer the best correlations (>0.93) with the LES results are the blockage ratio, defined as the fraction of the swept area of a wind turbine that is blocked by an upstream turbine, and the blockage distance, the weighted distance from a given turbine to all upstream turbines that can potentially block it. Based on blockage ratio and distance, an optimization procedure is proposed that explores many different layout variables and identifies, given actual wind direction and stability distributions, the optimal wind farm layout, i.e., the one with the highest wind energy production. The optimization procedure is applied to both the calibration wind farm (Lillgrund) and a test wind farm (Horns Rev), and a number of layouts more efficient than the existing ones are identified. The optimization procedure based on geometric models proposed here can be applied very quickly (within a few hours) to any proposed wind farm, once enough information on wind direction frequency and, if available, atmospheric stability frequency has been gathered and once the number of turbines and/or the areal extent of the wind farm have been identified.
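Aside: the blockage-ratio quantity can be illustrated geometrically. The sketch below approximates it as the overlap area of two equal rotor disks, with the upstream rotor projected along the wind direction onto the downstream rotor plane; this simplification ignores wake growth and is not necessarily the authors' exact definition.

    # Simplified sketch of the blockage-ratio idea: the fraction of a turbine's
    # rotor disk shadowed by an upstream rotor, approximated as the overlap
    # area of two equal circles (upstream rotor projected along the wind).
    import math

    def circle_overlap(d, r):
        """Overlap area of two circles of radius r whose centers are d apart."""
        if d >= 2*r:
            return 0.0
        if d == 0.0:
            return math.pi * r**2
        return 2*r**2*math.acos(d / (2*r)) - 0.5*d*math.sqrt(4*r**2 - d**2)

    def blockage_ratio(turbine, upstream, wind_dir, rotor_r):
        """Fraction of 'turbine' rotor area blocked by 'upstream' for one wind
        direction. Positions are (x, y) in meters; wind_dir is a unit vector."""
        dx = turbine[0] - upstream[0]
        dy = turbine[1] - upstream[1]
        downwind = dx*wind_dir[0] + dy*wind_dir[1]         # along-wind separation
        if downwind <= 0:
            return 0.0                                     # not actually upstream
        crosswind = abs(dy*wind_dir[0] - dx*wind_dir[1])   # lateral offset
        return circle_overlap(crosswind, rotor_r) / (math.pi * rotor_r**2)

    # Example: 500 m downwind, 30 m lateral offset, 45 m rotor radius.
    print(blockage_ratio((500.0, 30.0), (0.0, 0.0), (1.0, 0.0), 45.0))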
Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel
2016-04-01
Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR. Molecular identification based on 16S rDNA sequencing was applied to identify positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of probiotic bacteria for the denitrification process. Strains harboring denitrification genes were identified as: TE1, Agrococcus sp LN828197; TD3, Cronobacter sakazakii LN828198; and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene; however, only C. sakazakii LN828198 and Agrococcus sp LN828197 harbored the nirK and narG genes, respectively. Moreover, the studied bacteria were able to form biofilm on abiotic surfaces to different degrees. Process optimization showed that the most significant reduction of nitrate was 100%, with 14.98% COD consumption and 5.57 mg/l nitrite accumulation. The response values were optimized, and the optimal combination was 78.79% C. sakazakii LN828198, 21.21% P. pentosaceus LN828199 and absence (0%) of Agrococcus sp LN828197 (curve values). Copyright © 2016 Elsevier Ltd. All rights reserved.
Conditional anomaly detection methods for patient-management alert systems
Valko, Michal; Cooper, Gregory; Seybert, Amy; Visweswaran, Shyam; Saul, Melissa; Hauskrecht, Milos
2010-01-01
Anomaly detection methods can be very useful in identifying unusual or interesting patterns in data. A recently proposed conditional anomaly detection framework extends anomaly detection to the problem of identifying anomalous patterns on a subset of attributes in the data. The anomaly always depends (is conditioned) on the value of remaining attributes. The work presented in this paper focuses on instance-based methods for detecting conditional anomalies. The methods rely on the distance metric to identify examples in the dataset that are most critical for detecting the anomaly. We investigate various metrics and metric learning methods to optimize the performance of the instance-based anomaly detection methods. We show the benefits of the instance-based methods on two real-world detection problems: detection of unusual admission decisions for patients with community-acquired pneumonia and detection of unusual orders of an HPF4 test that is used to confirm Heparin induced thrombocytopenia, a life-threatening condition caused by the Heparin therapy. PMID:25392850
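Aside: a minimal instance-based scoring rule of the kind discussed above can be sketched as follows: a case's label is compared against the labels of its nearest neighbors in the context-attribute space, with closer neighbors weighted more heavily. The data are synthetic and a plain Euclidean metric is used, whereas the paper also learns the metric.

    # Instance-based sketch of conditional anomaly detection: a decision (label)
    # is anomalous if it disagrees with the decisions of the most similar past
    # cases. The metric and data here are illustrative stand-ins.
    import numpy as np

    def conditional_anomaly_score(x, label, X_hist, y_hist, k=5):
        """Distance-weighted disagreement with the k nearest historical cases."""
        d = np.linalg.norm(X_hist - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)                 # closer cases weigh more
        disagree = (y_hist[idx] != label).astype(float)
        return float(np.sum(w * disagree) / np.sum(w))

    rng = np.random.default_rng(1)
    X_hist = rng.normal(size=(200, 4))            # past patients' attributes
    y_hist = (X_hist[:, 0] > 0).astype(int)       # past management decisions
    x_new = np.array([1.5, 0.0, 0.0, 0.0])
    print(conditional_anomaly_score(x_new, label=0, X_hist=X_hist, y_hist=y_hist))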
Optimal coherent control of dissipative N -level systems
NASA Astrophysics Data System (ADS)
Jirari, H.; Pötz, W.
2005-07-01
General optimal coherent control of dissipative N -level systems in the Markovian time regime is formulated within Pontryagin's principle and the Lindblad equation. In the present paper, we study feasibility and limitations of steering of dissipative two-, three-, and four-level systems from a given initial pure or mixed state into a desired final state under the influence of an external electric field. The time evolution of the system is computed within the Lindblad equation and a conjugate gradient method is used to identify optimal control fields. The influence of both field-independent population and polarization decay on achieving the objective is investigated in systematic fashion. It is shown that, for realistic dephasing times, optimum control fields can be identified which drive the system into the target state with very high success rate and in economical fashion, even when starting from a poor initial guess. Furthermore, the optimal fields obtained give insight into the system dynamics. However, if decay rates of the system cannot be subjected to electromagnetic control, the dissipative system cannot be maintained in a specific pure or mixed state, in general.
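Aside: for reference, the Markovian master equation underlying this work is the standard Lindblad form; the symbols below are generic (rho the density matrix, H_0 the system Hamiltonian, mu the dipole operator coupling to the control field E(t), L_k the Lindblad operators with rates gamma_k), not the paper's exact notation:

    \dot{\rho}(t) = -\frac{i}{\hbar}\left[ H_0 + \mu E(t),\, \rho(t) \right]
                    + \sum_k \gamma_k \left( L_k \rho L_k^\dagger
                    - \tfrac{1}{2}\{ L_k^\dagger L_k,\, \rho \} \right)

The optimal-control task described in the abstract is then to choose E(t) minimizing a cost functional that penalizes the distance from the target state, with gradients evaluated through this equation.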
Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach.
Nielsen, Morten; Lundegaard, Claus; Worning, Peder; Hvid, Christina Sylvester; Lamberth, Kasper; Buus, Søren; Brunak, Søren; Lund, Ole
2004-06-12
Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution complicating such predictions. Thus, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates novel features optimized for the task of recognizing the binding motif of MHC classes I and II. The method locates the binding motif in a set of sequences and characterizes the motif in terms of a weight-matrix. Subsequently, the weight-matrix can be applied to effectively identifying potential MHC binding peptides and to guiding the process of rational vaccine design. We apply the motif sampler method to the complex problem of MHC class II binding. The input to the method is amino acid peptide sequences extracted from the public databases of SYFPEITHI and MHCPEP and known to bind to the MHC class II complex HLA-DR4(B1*0401). Prior identification of information-rich (anchor) positions in the binding motif is shown to improve the predictive performance of the Gibbs sampler. Similarly, a consensus solution obtained from an ensemble average over suboptimal solutions is shown to outperform the use of a single optimal solution. In a large-scale benchmark calculation, the performance is quantified using relative operating characteristic (ROC) curve plots, and we make a detailed comparison of the performance with that of both the TEPITOPE method and a weight-matrix derived using the conventional alignment algorithm of ClustalW. The calculation demonstrates that the predictive performance of the Gibbs sampler is higher than that of ClustalW and in most cases also higher than that of the TEPITOPE method.
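Aside: the core Gibbs sampling loop for motif discovery can be sketched compactly: hold out one sequence, build a position weight matrix from the motif windows of the remaining sequences, and resample the held-out sequence's motif start in proportion to its score under that matrix. A 4-letter alphabet and toy sequences are used for brevity; peptide work uses the 20 amino acids, and the paper adds anchor-position weighting and ensemble averaging on top of this basic loop.

    # Minimal Gibbs motif sampler sketch: resample the motif start in one
    # sequence at a time, from a weight matrix built on the other sequences.
    import random

    ALPHA = "ACGT"
    random.seed(0)

    def weight_matrix(seqs, starts, skip, w):
        counts = [{a: 1.0 for a in ALPHA} for _ in range(w)]   # +1 pseudocounts
        for i, (s, st) in enumerate(zip(seqs, starts)):
            if i == skip:
                continue
            for j in range(w):
                counts[j][s[st + j]] += 1.0
        return [{a: c[a] / sum(c.values()) for a in ALPHA} for c in counts]

    def gibbs_sample(seqs, w, iters=500):
        starts = [random.randrange(len(s) - w + 1) for s in seqs]
        for _ in range(iters):
            i = random.randrange(len(seqs))
            pwm = weight_matrix(seqs, starts, skip=i, w=w)
            scores = []
            for st in range(len(seqs[i]) - w + 1):
                p = 1.0
                for j in range(w):
                    p *= pwm[j][seqs[i][st + j]]
                scores.append(p)
            total = sum(scores)
            starts[i] = random.choices(range(len(scores)),
                                       weights=[s / total for s in scores])[0]
        return starts

    seqs = ["TTTACGTAAG", "GGACGTTTTT", "CCCCACGTCC", "ACGTGGGGGG"]
    print(gibbs_sample(seqs, w=4))  # should converge near the shared ACGT core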
Optimization of monopiles for offshore wind turbines.
Kallehave, Dan; Byrne, Byron W; LeBlanc Thilsted, Christian; Mikkelsen, Kristian Kousgaard
2015-02-28
The offshore wind industry currently relies on subsidy schemes to be competitive with fossil-fuel-based energy sources. For the wind industry to survive, it is vital that costs are significantly reduced for future projects. This can be partly achieved by introducing new technologies and partly through optimization of existing technologies and design methods. One of the areas where costs can be reduced is in the support structure, where better designs, cheaper fabrication and quicker installation might all be possible. The prevailing support structure design is the monopile structure, where the simple design is well suited to mass-fabrication, and the installation approach, based on conventional impact driving, is relatively low-risk and robust for most soil conditions. The range of application of the monopile for future wind farms can be extended by using more accurate engineering design methods, specifically tailored to offshore wind industry design. This paper describes how state-of-the-art optimization approaches are applied to the design of current wind farms and monopile support structures and identifies the main drivers where more accurate engineering methods could impact on a next generation of highly optimized monopiles. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2016-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history and maintains accuracy over time. The method allows particles to efficiently split and merge when necessary, as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
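Aside: the two ingredients discussed above, kernel density estimation over particle positions and particle splitting as the plume dilutes, can be sketched in one dimension as follows. The bandwidth uses Silverman's rule and the split criterion is a naive density threshold; the paper's adaptive branching and kernel-shape adaptation are considerably more elaborate.

    # Sketch of KDE-smoothed particle concentrations with a naive split rule:
    # particles in low-density regions split, to maintain resolution as the
    # plume dilutes (particle masses would be halved to conserve solute).
    import numpy as np

    def kde_1d(x_particles, x_eval):
        n = x_particles.size
        h = 1.06 * x_particles.std() * n ** (-1 / 5)      # Silverman's rule
        u = (x_eval[:, None] - x_particles[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(2)
    particles = rng.normal(0.0, 1.0, 500)                  # particle positions

    dens = kde_1d(particles, particles)                    # density at particles
    low = dens < 0.05                                      # sparse-plume regions
    children = particles[low] + rng.normal(0, 0.05, low.sum())
    particles = np.concatenate([particles, children])
    print("split", low.sum(), "particles; new count:", particles.size)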
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study is dedicated to the issue of pipeline transportation system maintenance. The article identifies two classes of technical-and-economic indices, which are used to select an optimal pipeline transportation system structure. It then describes various system maintenance strategies and the criteria for selecting among them. In practice, however, these maintenance strategies often prove insufficiently effective because maintenance intervals are not optimal. This problem can be solved by introducing an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal duration of verification inspections of technical systems are considered.
Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
Yang, Fan; Xiao, Deyun; Shah, Sirish L.
2009-01-01
To improve fault detection reliability, sensor locations should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires the minimization of undetectability and false alarm probability due to random factors in sensor readings, which is not only related to the sensor readings themselves but is also affected by fault propagation. This paper introduces reliability criteria expressed in terms of the missed/false alarm probability of each sensor and the system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, the proposed method is illustrated on a boiler system. PMID:22291524
Soft Snakes: Construction, Locomotion, and Control
NASA Astrophysics Data System (ADS)
Branyan, Callie; Courier, Taylor; Fleming, Chloe; Remaley, Jacquelin; Hatton, Ross; Menguc, Yigit
We fabricated modular bidirectional silicone pneumatic actuators to build a soft snake robot, applying geometric models of serpenoid swimmers to identify theoretically optimal gaits for serpentine locomotion. With the introduction of magnetic connections and elliptical cross-sections in fiber-reinforced modules, we can vary the number of continuum segments in the snake body to achieve more supple serpentine motion in granular media. The performance of these gaits is observed using a motion capture system, and efficiency is assessed in terms of pressure input and net displacement. The gaits are optimized using our geometric soap-bubble method of gait optimization, demonstrating the applicability of this tool to soft robot control and coordination.
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
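Aside: the fitting step underlying Look-Locker-type T1 mapping such as TAPIR can be sketched as a three-parameter exponential recovery fit with the standard apparent-T1 correction. The model, synthetic data, and correction below are the generic Look-Locker treatment, not TAPIR's exact sequence or its inversion-efficiency mapping.

    # Sketch of a three-parameter Look-Locker style fit for fast T1 mapping:
    # S(t) = A - B * exp(-t / T1_star), with the apparent T1* corrected to
    # T1 = T1_star * (B / A - 1). Synthetic data are used throughout.
    import numpy as np
    from scipy.optimize import curve_fit

    def recovery(t, A, B, T1_star):
        return A - B * np.exp(-t / T1_star)

    t = np.linspace(0.05, 3.0, 30)                  # sampling times, s
    true = dict(A=1.0, B=1.9, T1_star=0.6)          # B/A ~ 2 for a good inversion
    signal = recovery(t, **true) + np.random.default_rng(3).normal(0, 0.01, t.size)

    (A, B, T1_star), _ = curve_fit(recovery, t, signal, p0=(1.0, 2.0, 0.5))
    T1 = T1_star * (B / A - 1.0)
    print("fitted T1* = %.3f s, corrected T1 = %.3f s" % (T1_star, T1))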
Algorithm For Optimal Control Of Large Structures
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Garba, John A.; Utku, Senol
1989-01-01
The cost of computation appears competitive with that of other methods. The problem is to compute the optimal control of the forced response of a structure with n degrees of freedom, identified in terms of a smaller number, r, of vibrational modes. The article begins with the Hamilton-Jacobi formulation of mechanics and the use of a quadratic cost functional. Complexity is reduced by an alternative approach in which the quadratic cost functional is expressed in terms of the control variables only. This leads to an iterative solution of a second-order time-integral matrix Volterra equation of the second kind containing the optimal control vector. The cost of the algorithm, measured in terms of the number of computations required, is of the order of, or less than, the cost of prior algorithms applied to similar problems.
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating a high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constrained condition. GA was then used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
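Aside: the ensemble-surrogate (ES) construction can be sketched by combining two regressors with weights inversely proportional to their cross-validated error. In the sketch below, scikit-learn's GaussianProcessRegressor and KernelRidge stand in for the paper's kriging and KELM surrogates, the data are synthetic stand-ins for simulator runs, and the weighting scheme is one plausible choice rather than necessarily the authors'.

    # Sketch of an ensemble surrogate: weight two surrogates by inverse
    # cross-validated MSE and average their predictions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(60, 3))                  # e.g., surfactant rates
    y = 100 / (1 + np.exp(-(X.sum(axis=1) - 1.5) * 3))   # removal rate, %

    models = {"KRG": GaussianProcessRegressor(), "KELM": KernelRidge(kernel="rbf")}
    weights = {}
    for name, m in models.items():
        mse = -cross_val_score(m, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        weights[name] = 1.0 / mse
        m.fit(X, y)

    total = sum(weights.values())
    x_new = np.array([[0.5, 0.6, 0.4]])
    es_pred = sum(weights[n] / total * models[n].predict(x_new)[0] for n in models)
    print("ensemble-surrogate prediction: %.1f%% removal" % es_pred)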
Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.
2015-01-01
In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures are dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from Magnetic Resonance Elastography (MRE) data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases on the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416
Multi-objective community detection based on memetic algorithm.
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is adopted to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it speeds up convergence to better partitions and makes the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which are beneficial for analyzing networks at multiple resolution levels.
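Aside: the label-propagation local search move can be sketched on a toy graph: starting from a candidate partition (in the memetic algorithm, an individual from the evolutionary population), each node repeatedly adopts the label most common among its neighbors, with ties favoring the current label, until no label changes. The graph and seed partition below are illustrative.

    # Sketch of the label-propagation rule used as a local search move.
    # Toy graph: two 4-cliques joined by the single edge (3, 4).
    from collections import Counter

    adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
           4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}

    # Candidate partition with node 3 misassigned to the right-hand community.
    labels = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b", 6: "b", 7: "b"}

    changed = True
    while changed:
        changed = False
        for v in adj:
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts, key=lambda l: (counts[l], l == labels[v]))
            if best != labels[v]:
                labels[v], changed = best, True

    print(labels)   # node 3 is pulled back into community "a"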
Cost optimization for buildings with hybrid ventilation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Kun; Lu, Yan
A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
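Aside: the total-cost trade-off described above can be sketched as a brute-force search over a cooling setpoint and on/off hours, with total cost equal to an energy term plus a discomfort term. Both cost models below are made-up placeholders, not the disclosed method.

    # Tiny sketch: total cost = energy cost + thermal discomfort cost,
    # minimized by exhaustive search over setpoint and start/end hours.
    from itertools import product

    def energy_cost(setpoint, start, end, outdoor=30.0, price=0.12):
        hours = max(0, end - start)
        return price * hours * 0.8 * max(0.0, outdoor - setpoint)   # cooling load

    def discomfort_cost(setpoint, start, end, occupied=(9, 17),
                        comfort=23.0, penalty=0.5):
        unserved = max(0, start - occupied[0]) + max(0, occupied[1] - end)
        deviation = abs(setpoint - comfort) * (occupied[1] - occupied[0])
        return penalty * (unserved * 5.0 + deviation)

    best = min(product(range(20, 27), range(0, 12), range(12, 24)),
               key=lambda s: energy_cost(*s) + discomfort_cost(*s))
    print("setpoint %d degC, on at %02d:00, off at %02d:00" % best)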
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Committee Report: Metrics & Methods for MF/UF System Optimization
After a membrane filtration (i.e., microfiltration (MF) and ultrafiltration (UF)) system is designed, installed, and commissioned, it is essential that the plant is well-maintained in order to proactively identify potential design or equipment problems and ensure its proper opera...
NASA Astrophysics Data System (ADS)
Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.
2015-03-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
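Aside: the matching-pursuit idea at the heart of the solver can be sketched generically: given a dose-influence matrix D whose columns correspond to candidate seed positions and a prescribed dose d, repeatedly add the column most correlated with the residual. This is plain nonnegative matching pursuit on synthetic data; the paper's solver variant and the TG-43 dose model are not reproduced here.

    # Generic matching-pursuit sketch for sparse planning: pick few columns of a
    # dose-influence matrix D whose weighted sum approximates a prescribed dose d.
    import numpy as np

    def matching_pursuit(D, d, n_atoms=5):
        Dn = D / np.linalg.norm(D, axis=0)        # unit-norm columns
        residual = d.astype(float).copy()
        x = np.zeros(D.shape[1])
        for _ in range(n_atoms):
            k = np.argmax(Dn.T @ residual)        # best-matching candidate
            coef = Dn[:, k] @ residual
            if coef <= 0:                         # keep weights nonnegative
                break
            x[k] += coef / np.linalg.norm(D[:, k])
            residual -= coef * Dn[:, k]
        return x, residual

    rng = np.random.default_rng(5)
    D = np.abs(rng.normal(size=(200, 50)))        # dose from 50 candidate seeds
    d = D @ (3.0 * (rng.uniform(size=50) > 0.9))  # dose achievable with ~5 seeds
    x, r = matching_pursuit(D, d, n_atoms=8)
    print("active positions:", np.nonzero(x)[0],
          "residual norm: %.2f" % np.linalg.norm(r))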
Intracardiac Shunting and Stroke in Children: A Systematic Review
Dowling, Michael M.; Ikemba, Catherine M.
2017-01-01
In adults, patent foramen ovale or other potential intracardiac shunts are established risk factors for stroke via paradoxical embolization. Stroke is less common in children and risk factors differ. The authors examined the literature on intracardiac shunting and stroke in children, identifying the methods employed, the prevalence of detectable intracardiac shunts, associated conditions, and treatments. PubMed searches with keywords related to intracardiac shunting and stroke in children identified articles of interest. Additional articles were identified via citations in these articles or in reviews. The authors found that studies of intracardiac shunting in children with stroke are limited. No controlled studies were identified. Detection methods vary, and the prevalence of echocardiographically detectable intracardiac shunting appears lower than reported in adults and autopsy studies. Defining the role of intracardiac shunting in pediatric stroke will require controlled studies with unified detection methods in populations stratified by additional risk factors for paradoxical embolization. Optimal treatment is unclear. PMID:21212453
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
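Aside: the linearly scaling end of the spectrum described above can be sketched as a greedy selection: rank conformations by individual AUC, then add them to the ensemble only while the ensemble AUC improves, scoring each molecule by its best docking score across the selected conformations. The scores below are synthetic, and this greedy rule is a simplified stand-in for the paper's O(N) method.

    # Greedy AUC-driven ensemble selection sketch for virtual screening.
    import numpy as np

    def auc(scores, active):
        """AUC via the rank-sum statistic; lower score = predicted more active."""
        order = np.argsort(scores)                 # best scores first
        ranks = np.empty_like(order, dtype=float)
        ranks[order] = np.arange(1, len(scores) + 1)
        n_act, n_dec = active.sum(), (~active).sum()
        u = n_act * n_dec + n_act * (n_act + 1) / 2 - ranks[active].sum()
        return u / (n_act * n_dec)

    rng = np.random.default_rng(6)
    n_mol, n_conf = 300, 12
    active = rng.uniform(size=n_mol) < 0.1
    # Each conformation scores actives somewhat better (more negative), noisily.
    S = rng.normal(0, 1, (n_mol, n_conf)) \
        - 1.2 * active[:, None] * rng.uniform(0, 1, n_conf)

    ensemble, best_auc = [], 0.0
    for k in np.argsort([-auc(S[:, j], active) for j in range(n_conf)]):
        trial = ensemble + [k]
        a = auc(S[:, trial].min(axis=1), active)   # best-score fusion
        if a > best_auc:
            ensemble, best_auc = trial, a
    print("selected conformations:", ensemble, "AUC: %.3f" % best_auc)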
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhao, Changhong; Zamzam, Admed S.
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
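Aside: the distributed coordination idea can be sketched with a toy consensus ADMM on a single shared variable: the water and power operators each minimize their own (here quadratic) cost of the shared pumping level, and the ADMM iterations drive their local copies into agreement. The cost functions are illustrative scalars, not the paper's OWPF formulation.

    # Toy consensus ADMM between two operators sharing a pumping level p.
    # Local objectives: f_w(p) = (p - 5)^2   (water side prefers ~5 MW),
    #                   f_g(p) = 2*(p - 3)^2 (grid side prefers ~3 MW).
    rho = 1.0
    p_w = p_g = z = 0.0          # local copies and consensus variable
    u_w = u_g = 0.0              # scaled dual variables
    for _ in range(50):
        # Closed-form proximal updates of the quadratic local problems.
        p_w = (2*5 + rho*(z - u_w)) / (2 + rho)
        p_g = (4*3 + rho*(z - u_g)) / (4 + rho)
        z = 0.5 * (p_w + u_w + p_g + u_g)          # consensus (averaging) step
        u_w += p_w - z
        u_g += p_g - z
    print("agreed pumping level: %.3f MW" % z)     # minimizer of f_w + f_g: 11/3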
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II is capable of fine-tuning variables to determine a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on a multiobjective optimization problem is illustrated through two carefully referenced examples. PMID:19543537
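The core NSGA-II ingredient, non-dominated sorting, can be sketched in a few lines; the two synthetic minimization objectives below stand in for the competing batch-plant performance measures.

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.uniform(0, 1, (50, 2))          # rows: candidate designs, cols: objectives (minimize)

def pareto_front(F):
    n = len(F)
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        # i is dominated if some j is no worse in every objective and better in one
        dominated[i] = any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                           for j in range(n) if j != i)
    return np.where(~dominated)[0]

front = pareto_front(F)
print("non-dominated designs:", front)
# A decision maker (e.g., via PROMETHEE II outranking) then picks one compromise
# solution from this front.
```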
Shindoh, Junichi; Loyer, Evelyne M.; Kopetz, Scott; Boonsirikamchai, Piyaporn; Maru, Dipen M.; Chun, Yun Shin; Zimmitti, Giuseppe; Curley, Steven A.; Charnsangavej, Chusilp; Aloia, Thomas A.; Vauthey, Jean-Nicolas
2012-01-01
Purpose The purposes of this study were to confirm the prognostic value of an optimal morphologic response to preoperative chemotherapy in patients undergoing chemotherapy with or without bevacizumab before resection of colorectal liver metastases (CLM) and to identify predictors of the optimal morphologic response. Patients and Methods The study included 209 patients who underwent resection of CLM after preoperative chemotherapy with oxaliplatin- or irinotecan-based regimens with or without bevacizumab. Radiologic responses were classified as optimal or suboptimal according to the morphologic response criteria. Overall survival (OS) was determined, and prognostic factors associated with an optimal response were identified in multivariate analysis. Results An optimal morphologic response was observed in 47% of patients treated with bevacizumab and 12% of patients treated without bevacizumab (P < .001). The 3- and 5-year OS rates were higher in the optimal response group (82% and 74%, respectively) compared with the suboptimal response group (60% and 45%, respectively; P < .001). On multivariate analysis, suboptimal morphologic response was an independent predictor of worse OS (hazard ratio, 2.09; P = .007). Receipt of bevacizumab (odds ratio, 6.71; P < .001) and largest metastasis before chemotherapy of ≤ 3 cm (odds ratio, 2.12; P = .025) were significantly associated with optimal morphologic response. The morphologic response showed no specific correlation with conventional size-based RECIST criteria, and it was superior to RECIST in predicting major pathologic response. Conclusion Independent of preoperative chemotherapy regimen, optimal morphologic response is sufficiently correlated with OS to be considered a surrogate therapeutic end point for patients with CLM. PMID:23150701
Methods to enable the design of bioactive small molecules targeting RNA
Disney, Matthew D.; Yildirim, Ilyas; Childs-Disney, Jessica L.
2014-01-01
RNA is an immensely important target for small molecule therapeutics or chemical probes of function. However, methods that identify, annotate, and optimize RNA-small molecule interactions that could enable the design of compounds that modulate RNA function are in their infancies. This review describes recent approaches that have been developed to understand and optimize RNA motif-small molecule interactions, including Structure-Activity Relationships Through Sequencing (StARTS), quantitative structure-activity relationships (QSAR), chemical similarity searching, structure-based design and docking, and molecular dynamics (MD) simulations. Case studies described include the design of small molecules targeting RNA expansions, the bacterial A-site, viral RNAs, and telomerase RNA. These approaches can be combined to afford a synergistic method to exploit the myriad of RNA targets in the transcriptome. PMID:24357181
Wischmeyer, Paul E; Carli, Franco; Evans, David C; Guilbert, Sarah; Kozar, Rosemary; Pryor, Aurora; Thiele, Robert H; Everett, Sotiria; Grocott, Mike; Gan, Tong J; Shaw, Andrew D; Thacker, Julie K M; Miller, Timothy E; Hedrick, Traci L; McEvoy, Matthew D; Mythen, Michael G; Bergamaschi, Roberto; Gupta, Ruchir; Holubar, Stefan D; Senagore, Anthony J; Abola, Ramon E; Bennett-Guerrero, Elliott; Kent, Michael L; Feldman, Liane S; Fiore, Julio F
2018-06-01
Perioperative malnutrition has proven to be challenging to define, diagnose, and treat. Despite these challenges, it is well known that suboptimal nutritional status is a strong independent predictor of poor postoperative outcomes. Although perioperative caregivers consistently recognize the importance of nutrition screening and optimization in the perioperative period, implementation of evidence-based perioperative nutrition guidelines and pathways in the United States has been quite limited and needs to be addressed in surgery-focused recommendations. The second Perioperative Quality Initiative brought together a group of international experts with the objective of providing consensus recommendations on this important topic, with the goals of (1) developing guidelines for screening of nutritional status to identify patients at risk for adverse outcomes due to malnutrition; (2) addressing optimal methods of providing nutritional support and optimizing nutrition status preoperatively; and (3) identifying when and how to optimize nutrition delivery in the postoperative period. Discussion led to strong recommendations for implementation of routine preoperative nutrition screening to identify patients in need of preoperative nutrition optimization. Postoperatively, nutrition delivery should be restarted immediately after surgery. The key role of oral nutrition supplements, enteral nutrition, and parenteral nutrition (implemented in that order) in most perioperative patients was advocated, with protein delivery being more important than total calorie delivery. Finally, the role of often-inadequate nutrition intake in the posthospital setting was discussed, and the role of postdischarge oral nutrition supplements was emphasized.
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. The analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce variability and assist in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the medical researcher's need to spend the monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e., to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
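A minimal numeric sketch of this allocation problem, under assumed variance components, unit costs, and power-function cost exponents (the quantities varied across the paper's 225 scenarios): enumerate allocations that fit the budget and keep the one minimizing the variance of the exposure mean.

```python
from itertools import product

sb2, so2, se2 = 4.0, 1.0, 1.0           # between-subject / between-occasion / error variances
c = (20.0, 5.0, 1.0)                    # assumed unit costs per stage
a = (1.0, 0.8, 1.2)                     # assumed cost-function exponents (non-linear)
budget = 2000.0

def variance(n, m, r):
    # Variance of the grand mean in a three-stage nested model:
    # n subjects, m occasions per subject, r samples per occasion.
    return sb2 / n + so2 / (n * m) + se2 / (n * m * r)

def cost(n, m, r):
    return c[0] * n**a[0] + c[1] * (n * m)**a[1] + c[2] * (n * m * r)**a[2]

feasible = ((variance(n, m, r), n, m, r)
            for n, m, r in product(range(1, 200), range(1, 20), range(1, 20))
            if cost(n, m, r) <= budget)
v, n, m, r = min(feasible)
print(f"optimal: n={n} subjects, m={m} occasions, r={r} samples, "
      f"var={v:.4f}, cost={cost(n, m, r):.0f}")
```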
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Target identification on the sea battlefield is a prerequisite for assessing the enemy in modern naval battle. In this paper, a collaborative identification method based on a convolutional neural network is proposed to identify typical sea-battlefield targets. Different from traditional single-input/single-output identification methods, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
Teleconnection Paths via Climate Network Direct Link Detection.
Zhou, Dong; Gozolchiani, Avi; Ashkenazy, Yosef; Havlin, Shlomo
2015-12-31
Teleconnections describe remote connections (typically thousands of kilometers) of the climate system. These are of great importance in climate dynamics as they reflect the transportation of energy and climate change on global scales (like the El Niño phenomenon). Yet, the path of influence propagation between such remote regions, and weighting associated with different paths, are only partially known. Here we propose a systematic climate network approach to find and quantify the optimal paths between remotely distant interacting locations. Specifically, we separate the correlations between two grid points into direct and indirect components, where the optimal path is found based on a minimal total cost function of the direct links. We demonstrate our method using near surface air temperature reanalysis data, on identifying cross-latitude teleconnections and their corresponding optimal paths. The proposed method may be used to quantify and improve our understanding regarding the emergence of climate patterns on global scales.
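After the direct-link strengths are estimated, the optimal-path step is a shortest-path computation. A hedged sketch using Dijkstra's algorithm on a small synthetic network, assuming a link cost of one minus the direct-correlation strength (the paper's cost function may differ):

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)
n = 30
W = rng.uniform(0, 1, (n, n)); W = (W + W.T) / 2   # symmetric direct-link strengths
cost = 1.0 - W                                      # strong link = cheap step

def dijkstra(src, dst):
    dist = [float("inf")] * n
    prev = [None] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, i = heapq.heappop(pq)
        if i == dst:
            break
        if d > dist[i]:
            continue                                # stale queue entry
        for j in range(n):
            if j != i and d + cost[i, j] < dist[j]:
                dist[j] = d + cost[i, j]
                prev[j] = i
                heapq.heappush(pq, (dist[j], j))
    path, k = [], dst
    while k is not None:                            # walk back to the source
        path.append(k); k = prev[k]
    return path[::-1], dist[dst]

path, total = dijkstra(0, n - 1)
print("optimal path:", path, "total cost: %.3f" % total)
```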
Generalized rules for the optimization of elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Eyal, Eran; Bahar, Ivet
2009-03-01
Elastic network models (ENMs) are widely employed for approximating the coarse-grained equilibrium dynamics of proteins using only a few parameters. An area of current focus is improving the predictive accuracy of ENMs by fine-tuning their force constants to fit specific systems. Here we introduce a set of general rules for assigning ENM force constants to residue pairs. Using a novel method, we construct ENMs that optimally reproduce experimental residue covariances from NMR models of 68 proteins. We analyze the optimal interactions in terms of amino acid types, pair distances and local protein structures to identify key factors in determining the effective spring constants. When applied to several unrelated globular proteins, our method shows an improved correlation with experiment over a standard ENM. We discuss the physical interpretation of our findings as well as its implications in the fields of protein folding and dynamics.
Thermodynamic Studies for Drug Design and Screening
Garbett, Nichola C.; Chaires, Jonathan B.
2012-01-01
Introduction A key part of drug design and development is the optimization of molecular interactions between an engineered drug candidate and its binding target. Thermodynamic characterization provides information about the balance of energetic forces driving binding interactions and is essential for understanding and optimizing molecular interactions. Areas covered This review discusses the information that can be obtained from thermodynamic measurements and how this can be applied to the drug development process. Current approaches for the measurement and optimization of thermodynamic parameters are presented, specifically higher-throughput and calorimetric methods. Relevant literature for this review was identified in part by bibliographic searches for the period 2004-2011 using the Science Citation Index and PUBMED and the keywords listed below. Expert opinion The most effective drug design and development platform comes from an integrated process utilizing all available information from structural, thermodynamic and biological studies. Continuing evolution in our understanding of the energetic basis of molecular interactions and advances in thermodynamic methods for widespread application are essential to realize the goal of thermodynamically driven drug design. Comprehensive thermodynamic evaluation is vital early in the drug development process to speed drug development towards an optimal energetic interaction profile while retaining good pharmacological properties. Practical thermodynamic approaches, such as enthalpic optimization, thermodynamic optimization plots and the enthalpic efficiency index, have now matured to provide proven utility in the design process. Improved throughput in calorimetric methods remains essential for even greater integration of thermodynamics into drug design. PMID:22458502
Silva, Aleidy; Lee, Bai-Yu; Clemens, Daniel L; Kee, Theodore; Ding, Xianting; Ho, Chih-Ming; Horwitz, Marcus A
2016-04-12
Tuberculosis (TB) remains a major global public health problem, and improved treatments are needed to shorten duration of therapy, decrease disease burden, improve compliance, and combat emergence of drug resistance. Ideally, the most effective regimen would be identified by a systematic and comprehensive combinatorial search of large numbers of TB drugs. However, optimization of regimens by standard methods is challenging, especially as the number of drugs increases, because of the extremely large number of drug-dose combinations requiring testing. Herein, we used an optimization platform, feedback system control (FSC) methodology, to identify improved drug-dose combinations for TB treatment using a fluorescence-based human macrophage cell culture model of TB, in which macrophages are infected with isopropyl β-D-1-thiogalactopyranoside (IPTG)-inducible green fluorescent protein (GFP)-expressing Mycobacterium tuberculosis (Mtb). On the basis of only a single screening test and three iterations, we identified highly efficacious three- and four-drug combinations. To verify the efficacy of these combinations, we further evaluated them using a methodologically independent assay for intramacrophage killing of Mtb; the optimized combinations showed greater efficacy than the current standard TB drug regimen. Surprisingly, all top three- and four-drug optimized regimens included the third-line drug clofazimine, and none included the first-line drugs isoniazid and rifampin, which had insignificant or antagonistic impacts on efficacy. Because top regimens also did not include a fluoroquinolone or aminoglycoside, they are potentially of use for treating many cases of multidrug- and extensively drug-resistant TB. Our study shows the power of an FSC platform to identify promising previously unidentified drug-dose combinations for treatment of TB.
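The FSC loop can be sketched generically: propose dose combinations, score them with the assay, and let the scores steer the next proposals. Below, a differential-evolution-style update stands in for the search engine, and a synthetic response surface stands in for the GFP-Mtb macrophage readout; both are illustrative assumptions, not the authors' platform.

```python
import numpy as np

rng = np.random.default_rng(4)
n_drugs = 4

def assay(doses):
    # Synthetic killing efficacy with drug-drug interactions; a stand-in for
    # the wet-lab readout, not a pharmacological model.
    w = np.array([0.2, 0.9, 0.5, 0.7])
    return float(w @ doses - 0.4 * doses[0] * doses[1] + 0.3 * doses[2] * doses[3])

pop = rng.uniform(0, 1, (20, n_drugs))              # doses normalized to [0, 1]
for round_ in range(3):                             # few iterations often suffice
    scores = np.array([assay(x) for x in pop])
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(a + 0.8 * (b - c), 0, 1)    # mutate and clip to dose range
        if assay(trial) > scores[i]:
            pop[i] = trial                          # feedback: keep improvements only
best = pop[np.argmax([assay(x) for x in pop])]
print("best dose combination:", np.round(best, 2))
```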
Yang, Xue-Dong; Tang, Xu-Yan; Sang, Lin
2012-11-01
To establish a method for rapid identification of micro-constituents in monoammonium glycyrrhizinate by high-pressure solid phase extraction-high performance liquid chromatography-mass spectrometry. An HPLC preparative chromatograph was adopted to determine the optimal method for high-pressure solid phase extraction under optimal conditions. A 5C18-MS-II column (20.0 mm x 20.0 mm) was used as the extraction column, with 35% acetonitrile-acetic acid solution (pH 2.20) as eluent at a flow rate of 16 mL x min(-1). The sample size was 0.5 mL, and the extraction cycle was 4.5 min. The extract was then analyzed by high performance liquid chromatography-mass spectrometry (HPLC-MS) after being concentrated 100-fold. Under the optimal conditions of high-pressure solid phase extraction-high performance liquid chromatography-mass spectrometry, 10 components were rapidly identified from monoammonium glycyrrhizinate raw materials. Among them, the chemical structures of six micro-constituents were identified as 3-O-[beta-D-glucuronopyranosyl-beta-D-glucuronopyranosyl]-30-O-beta-D-apiopyranosylglycyrrhetic/3-O-[beta-D-glucuronopyranosyl-beta-D-glucuronopyranosyl]-30-O-beta-D-arabinopyranosylglycyrrhetic, glycyrrhizic saponin F3, 22-hydroxyglycyrrhizin/18alpha-glycyrrhizic saponin G2, 3-O-[beta-D-rhamnopyranosyl]-24-hydroxyglycyrrhizin, glycyrrhizic saponin J2, and glycyrrhizic saponin B2 by MS(n) spectra analysis and reference to the literature. Four main chemical components were identified as glycyrrhizic saponin G2, 18beta-glycyrrhizic acid, uralglycyrrhizic saponin B and 18alpha-glycyrrhizic acid by liquid chromatography, MS(n) and ultraviolet spectra information and comparison with reference substances. The method can be used to identify chemical constituents in monoammonium glycyrrhizinate quickly and effectively, without any reference substance, which provides a basis for quality control and safe application of monoammonium glycyrrhizinate-related products.
Using Impact Modulation to Identify Loose Bolts on a Satellite
2011-10-21
for public release; distribution is unlimited ... the literature to be an effective damage detection method for cracks, delamination, and fatigue in... to identify loose bolts and fatigue damage using optimized sensor locations, using a Support Vector Machines algorithm to classify the damage. Finally... [48] did preliminary work which showed that VM is effective in detecting fatigue cracks in engineering components despite changes in actuator location
Portfolio evaluation of health programs: a reply to Sendi et al.
Bridges, John F P; Terris, Darcey D
2004-05-01
Sendi et al. (Soc. Sci. Med. 57 (2003) 2207) extend previous research on cost-effectiveness analysis to the evaluation of a portfolio of interventions with risky outcomes, using a "second best" approach that can identify improvements in efficiency in the allocation of resources. This method, however, cannot be used to directly identify the optimal solution to the resource allocation problem. Theoretically, a stricter adherence to the foundations of portfolio theory would permit direct optimization in portfolio selection; however, when we include uncertainty in our analysis in addition to the traditional concept of risk (which is often mislabelled uncertainty), complexities are introduced that create significant hurdles in the development of practical applications of portfolio theory for health care policy decision making.
NASA Astrophysics Data System (ADS)
Møll Nilsen, Halvor; Lie, Knut-Andreas; Andersen, Odd
2015-06-01
MRST-co2lab is a collection of open-source computational tools for modeling large-scale and long-time migration of CO2 in conductive aquifers, combining ideas from basin modeling, computational geometry, hydrology, and reservoir simulation. Herein, we employ the methods of MRST-co2lab to study long-term CO2 storage on the scale of hundreds of megatonnes. We consider public data sets of two aquifers from the Norwegian North Sea and use geometrical methods for identifying structural traps, percolation-type methods for identifying potential spill paths, and vertical-equilibrium methods for efficient simulation of structural, residual, and solubility trapping in a thousand-year perspective. In particular, we investigate how data resolution affects estimates of storage capacity and discuss workflows for identifying good injection sites and optimizing injection strategies.
Cheng, Qiang; Zhou, Hongbo; Cheng, Jie
2011-06-01
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward selecting the globally optimal subset of features efficiently, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach to optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time-consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure for the disease. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy in detecting Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among the sample sets. Optimal wavenumbers for the 2 sample sets selected by the second-derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods were further used to quantitatively detect the oilseed rape leaves infected by SSR, including partial least squares-discriminant analysis, the support vector machine and the extreme learning machine. The discriminant models using the full spectra and the optimal wavenumbers of the 2 sample sets were effective, with classification accuracies over 80%. The discriminant results for the 2 sample sets varied due to variations in the samples. The use of two sample sets validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the selected optimal wavenumbers in different sample sets make it feasible to simplify the models and build practical models. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
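A hedged sketch of such a chemometric workflow: Savitzky-Golay second-derivative preprocessing, selection of the most class-separating wavenumbers, and a discriminant model. The synthetic "spectra", band positions, and model settings are assumptions, not the study's data or tuned models.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(5)
wn = np.linspace(900, 1800, 300)                    # wavenumber axis, cm^-1

def spectrum(infected):
    base = np.exp(-((wn - 1650) / 80) ** 2)         # shared absorption band
    disease = 0.3 * infected * np.exp(-((wn - 1100) / 30) ** 2)  # assumed disease band
    return base + disease + rng.normal(0, 0.02, wn.size)

y = rng.integers(0, 2, 120)                         # 1 = SSR-infected leaf
X = np.array([spectrum(v) for v in y])
# Second-derivative spectra emphasize band-shape differences.
X2 = savgol_filter(X, window_length=15, polyorder=3, deriv=2, axis=1)
sep = np.abs(X2[y == 1].mean(0) - X2[y == 0].mean(0))   # class-mean separation
top = np.argsort(sep)[-20:]                         # crude optimal-wavenumber pick
Xtr, Xte, ytr, yte = train_test_split(X2[:, top], y, random_state=0)
print("test accuracy: %.2f" % SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte))
```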
Method of interplanetary trajectory optimization for the spacecraft with low thrust and swing-bys
NASA Astrophysics Data System (ADS)
Konstantinov, M. S.; Thein, M.
2017-07-01
The method developed to avoid the complexity of solving the multipoint boundary value problem while optimizing interplanetary trajectories of a spacecraft with electric propulsion and a sequence of swing-bys is presented in the paper. This method is based on the use of preliminary problem solutions for the impulsive trajectories. The preliminary problem analyzed at the first stage of the study is formulated so that the analysis and optimization of a particular flight path is treated as an unconstrained minimization in the space of the selectable parameters. The existing methods can effectively solve this problem and make it possible to identify rational flight paths (the sequence of swing-bys) and to obtain the initial approximation for the main characteristics of the flight path (dates, values of the hyperbolic excess velocity, etc.). These characteristics can be used to optimize the trajectory of the spacecraft with electric propulsion. The special feature of the work is the introduction of a second (intermediate) stage of the research. At this stage some characteristics of the analyzed flight path (e.g. dates of swing-bys) are fixed and the problem is formulated so that the trajectory of the spacecraft with electric propulsion is optimized on selected sites of the flight path. The end-to-end optimization is carried out at the third (final) stage of the research. The distinctive feature of this stage is the analysis of the full set of optimality conditions for the considered flight path. The analysis of the characteristics of the optimal flight trajectories to Jupiter with Earth, Venus and Mars swing-bys for the spacecraft with electric propulsion is presented. The paper shows that a spacecraft weighing more than 7150 kg can be delivered into the vicinity of Jupiter along a trajectory with two Earth swing-bys by use of a space transportation system based on the "Angara A5" rocket launcher, the chemical upper stage "KVTK" and an electric propulsion system with input electrical power of 100 kW.
Investigation of metabolic objectives in cultured hepatocytes.
Uygun, Korkut; Matthew, Howard W T; Huang, Yinlun
2007-06-15
Using optimization based methods to predict fluxes in metabolic flux balance models has been a successful approach for some microorganisms, enabling construction of in silico models and even inference of some regulatory motifs. However, this success has not been translated to mammalian cells. The lack of knowledge about metabolic objectives in mammalian cells is a major obstacle that prevents utilization of various metabolic engineering tools and methods for tissue engineering and biomedical purposes. In this work, we investigate and identify possible metabolic objectives for hepatocytes cultured in vitro. To achieve this goal, we present a special data-mining procedure for identifying metabolic objective functions in mammalian cells. This multi-level optimization based algorithm enables identifying the major fluxes in the metabolic objective from MFA data in the absence of information about critical active constraints of the system. Further, once the objective is determined, active flux constraints can also be identified and analyzed. This information can be potentially used in a predictive manner to improve cell culture results or clinical metabolic outcomes. As a result of the application of this method, it was found that in vitro cultured hepatocytes maximize oxygen uptake, coupling of urea and TCA cycles, and synthesis of serine and urea. Selection of these fluxes as the metabolic objective enables accurate prediction of the flux distribution in the system given a limited amount of flux data; thus presenting a workable in silico model for cultured hepatocytes. It is observed that an overall homeostasis picture is also emergent in the findings.
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process of positioning and moving to a target red ball by the NAO robot through its camera system is analyzed and improved using the dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and experimented with. Since the error of the current NAO robot is not governed by a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments; the Hough circle transform was adopted to identify the ball, while the HSV color space model was used to identify red color. Our results showed that the dual camera ranging method reduced the variance of the error in ball tracking from 0.68 to 0.20.
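A hedged OpenCV sketch of this pipeline: HSV thresholding for red, Hough circle detection in each camera, and the standard rectified-stereo relation Z = f*B/d. The focal length, baseline, and HSV thresholds are placeholders, not the NAO's calibration.

```python
import cv2
import numpy as np

F_PX, BASELINE_M = 600.0, 0.10          # assumed focal length (px) and baseline (m)

def ball_center(img_bgr):
    """Return the (x, y) pixel center of the red ball, or None."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.medianBlur(mask, 5)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=15, minRadius=5, maxRadius=120)
    if circles is None:
        return None
    x, y, _r = circles[0][0]
    return float(x), float(y)

def stereo_range_m(left_bgr, right_bgr):
    """Range from the ball's horizontal disparity in a rectified stereo pair."""
    l, r = ball_center(left_bgr), ball_center(right_bgr)
    if l is None or r is None:
        return None
    d = abs(l[0] - r[0])                # disparity in pixels
    return F_PX * BASELINE_M / d if d > 0 else None

# Usage: z = stereo_range_m(cv2.imread("left.png"), cv2.imread("right.png"))
```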
Recent Advances in Stellarator Optimization
NASA Astrophysics Data System (ADS)
Gates, David; Brown, T.; Breslau, J.; Landreman, M.; Lazerson, S. A.; Mynick, H.; Neilson, G. H.; Pomphrey, N.
2016-10-01
Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on optimization of neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. One criticism that has been levelled at this method of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real space constraints on the locations of the coils. As an initial exercise, a constraint that the windings be vertical was placed on the large major radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. We have also explored possibilities for generating an experimental database that could check whether the reduction in turbulent transport that is predicted by GENE as a function of local shear would be consistent with experiments. To this end, a series of equilibria that can be made in the now latent QUASAR experiment have been identified. This work was supported by U.S. DoE Contract #DE-AC02-09CH11466.
Rapid glucosinolate detection and identification using accurate mass MS-MS
USDA-ARS?s Scientific Manuscript database
Currently, there is a demand for accurate evaluation of brassica plant species for their glucosinolate content. An optimized method has been developed for detecting and identifying glucosinolates in plant extracts using MS-MS fragmentation with ion trap collision induced dissociation (CID) and higher...
Synthesis of actual knowledge on machine-tool monitoring methods and equipment
NASA Astrophysics Data System (ADS)
Tanguy, J. C.
1988-06-01
Problems connected with the automatic supervision of production were studied. Many different automatic control devices are now able to identify defects in the tools, but the solutions proposed to detect optimal limits in the utilization of a tool are not satisfactory.
Yang, Jian; Liu, Chuangui; Wang, Boqian; Ding, Xianting
2017-10-13
Superhydrophobic surfaces, as promising micro/nano materials, have tremendous applications in biological and artificial investigations. The electrohydrodynamics (EHD) technique is a versatile and effective method for fabricating micro- to nanoscale fibers and particles from a variety of materials. A combination of critical parameters during the electrospinning process, such as mass fraction, ratio of N,N-Dimethylformamide (DMF) to Tetrahydrofuran (THF), inner diameter of the needle, feed rate, receiving distance, applied voltage and temperature, determines the morphology of the electrospun membranes, which in turn determines the superhydrophobic property of the membrane. In this study, we applied a recently developed feedback system control (FSC) scheme for rapid identification of the optimal combination of these controllable parameters to fabricate a superhydrophobic surface by a one-step electrospinning method without any further modification. Within five rounds of experiments testing forty-six data points in total, the FSC scheme successfully identified an optimal parameter combination that generated electrospun membranes with a static water contact angle of 160 degrees or larger. Scanning electron microscope (SEM) imaging indicates that the FSC-optimized surface attains a unique morphology. The optimized setup introduced here therefore serves as a one-step, straightforward, and economic approach to fabricate superhydrophobic surfaces with the electrospinning approach.
A predictive machine learning approach for microstructure optimization and materials design
NASA Astrophysics Data System (ADS)
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; Agrawal, Ankit; Sundararaghavan, Veera; Choudhary, Alok
2015-06-01
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. Experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
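The three-stage framework (random data generation, feature selection, classification) can be sketched as follows; the toy forward property model is an assumption standing in for the Fe-Ga microstructure-property theory, not the paper's physics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (2000, 30))       # stage 1: random microstructure descriptors
# Assumed forward property model: only a few descriptors actually matter.
prop = 2 * X[:, 0] - X[:, 3] + 0.5 * X[:, 7] ** 2
y = (prop > 1.0).astype(int)            # 1 = meets the property target

sel = SelectKBest(f_classif, k=5).fit(X, y)                            # stage 2
clf = RandomForestClassifier(random_state=0).fit(sel.transform(X), y)  # stage 3

# The trained classifier screens a large candidate space far faster than
# evaluating the forward model (or a simulation) on every candidate.
candidates = rng.uniform(0, 1, (20000, 30))
hits = candidates[clf.predict(sel.transform(candidates)) == 1]
print(f"{len(hits)} of 20000 candidates predicted to satisfy the target")
```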
Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan
2017-09-01
While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to the common TiO2 rutile phase, and the kinetic stability of one of them is further proved by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and it is deployed in three case studies wherein spaces are comprised of both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex-variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Body-Cooling Paradigm in Sport: Maximizing Safety and Performance During Competition.
Adams, William M; Hosokawa, Yuri; Casa, Douglas J
2016-12-01
Although body cooling has both performance and safety benefits, knowledge on optimizing cooling during specific sport competition is limited. To identify when, during sport competition, body cooling is optimal, and to identify the optimal body-cooling modalities to enhance safety and maximize sport performance. A comprehensive literature search was conducted to identify articles with specific context regarding body cooling, sport performance, and cooling modalities used during sport competition. The scientific peer-reviewed literature was searched to examine the influence of body cooling on exercise performance. Subsequently, a literature search was done to identify effective cooling modalities that have been shown to improve exercise performance. The cooling modalities that are most effective in cooling the body during sport competition depend on the sport, timing of cooling, and feasibility based on the constraints of the sport's rules and regulations. Factoring in the length of breaks (halftime, substitutions, etc.), the equipment worn during competition, and the cooling modalities that offer the greatest potential to cool must be considered in each individual sport. Scientific evidence supports using body cooling as a method of improving performance during sport competition. Developing a strategy to use cooling modalities that are scientifically evidence-based to improve performance while maximizing the athlete's safety warrants further investigation.
Challenges in Building Disease-Based National Health Accounts
Rosen, Allison B.; Cutler, David M.
2012-01-01
Background Measuring spending on diseases is critical to assessing the value of medical care. Objective To review the current state of cost of illness (COI) estimation methods, identifying their strengths, limitations and uses. We briefly describe the current National Health Expenditure Accounts (NHEA), and then go on to discuss the addition of COI estimation to the NHEA. Conclusion Recommendations are made for future research aimed at identifying the best methods for developing and using disease-based national health accounts to optimize the information available to policymakers as they struggle with difficult resource allocation decisions. PMID:19536017
Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.
1996-01-01
A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.
Control-enhanced multiparameter quantum estimation
NASA Astrophysics Data System (ADS)
Liu, Jing; Yuan, Haidong
2017-10-01
Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit, but also have higher stability with respect to the inaccuracy of the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a very accurate time point due to the response time of the measurement apparatus.
SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
2015-06-15
Purpose: This work is to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement from existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm utilizes a priori beam angle templates as the initial guess, and iteratively generates angular updates for this initial set, namely the angle generation method, with improved dose conformality as quantitatively measured by the objective function. That is, during each iteration, we select “the test angle” in the initial set, and use group-sparsity based fluence map optimization to identify “the candidate angle” for updating “the test angle”, for which all the angles in the initial set except “the test angle”, namely “the fixed set”, are set free, i.e., with no group-sparsity penalty, and the rest of the angles including “the test angle” during this iteration are in “the working set”. Then “the candidate angle” is selected with the smallest objective function value from the angles in “the working set” with locally maximal group sparsity, and replaces “the test angle” if “the fixed set” with “the candidate angle” has a smaller objective function value by solving the standard fluence map optimization (with no group-sparsity regularization). Similarly, other angles in the initial set are in turn selected as “the test angle” for angular updates, and this chain of updates is iterated until no further new angular update is identified for a full loop. Results: The tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm. For example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality relative to the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
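A hedged sketch of the group-sparsity machinery behind the candidate-angle identification: beamlet weights are grouped by beam angle, and a proximal-gradient loop on 0.5*||Ax - d||^2 + lam*sum_g ||x_g||_2 shrinks unused angles toward zero. The dose-influence matrix, group sizes, and penalty weight are synthetic assumptions, not the clinical problem or the authors' solver.

```python
import numpy as np

rng = np.random.default_rng(7)
n_angles, per_angle, n_vox = 12, 8, 200
A = rng.uniform(0, 1, (n_vox, n_angles * per_angle))   # synthetic dose-influence matrix
x_true = np.zeros(n_angles * per_angle)
for k in (1, 5, 9):                                    # dose achievable from 3 angles
    x_true[k * per_angle:(k + 1) * per_angle] = rng.uniform(0.5, 1.0, per_angle)
d = A @ x_true                                         # prescribed dose

lam = 50.0                                             # group-sparsity weight (assumed)
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # 1/L step for the smooth part
x = np.zeros(A.shape[1])
for _ in range(2000):
    z = np.clip(x - step * (A.T @ (A @ x - d)), 0, None)   # gradient step, keep x >= 0
    for k in range(n_angles):                          # block soft-threshold per angle
        blk = slice(k * per_angle, (k + 1) * per_angle)
        nrm = np.linalg.norm(z[blk])
        z[blk] *= 0.0 if nrm == 0 else max(0.0, 1 - step * lam / nrm)
    x = z
norms = [np.linalg.norm(x[k * per_angle:(k + 1) * per_angle])
         for k in range(n_angles)]
print("group norms by angle:", np.round(norms, 3))     # near-zero = angle dropped
```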
Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin
2016-01-01
Independent component analysis (ICA), as a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of ICA-based brain-computer interface (BCI) systems. In this study, we proposed a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based motor imagery BCIs (MIBCIs). Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of ICA-based MIBCIs. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.
Alekseeva, M G; Mavletova, D A; Kolchina, N V; Nezametdinova, V Z; Danilenko, V N
2015-10-01
Previously, we identified six serine/threonine protein kinases (STPK) of Bifidobacterium and named them Pkb1-Pkb6. In the present study, we optimized methods for isolation of the six STPK catalytic domain proteins of B. longum B379M: a method for isolation of Pkb3 and Pkb4 under native conditions, a method for isolation of Pkb5 under denaturing conditions, and a method for isolation of Pkb1, Pkb2, and Pkb6 from inclusion bodies. The dialysis conditions for the renaturation of the proteins were optimized. All of the enzymes were isolated in quantities sufficient for study of protein activity. The proteins were homogeneous according to SDS-PAGE. The autophosphorylation ability of Pkb1, Pkb3, Pkb4, and Pkb6 was investigated for the first time. Autophosphorylation was detected only for the Pkb3 catalytic domain.
Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis
NASA Astrophysics Data System (ADS)
Dion, J.-L.; Tawfiq, I.; Chevallier, G.
2012-01-01
This work is a contribution to the field of Operational Modal Analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components that are amplitude- and frequency-modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components and structural modes. The detection method proposed is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
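The discriminating statistic can be sketched with the common STFT-based definition SK(f) = E|X(f,t)|^4 / (E|X(f,t)|^2)^2 - 2 (the paper's Optimized Spectral Kurtosis refines this): a steady harmonic drives SK toward -1 while broadband Gaussian response stays near 0. The signal and parameters below are illustrative.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(8)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, t.size)   # harmonic + broadband response

f, _, X = stft(x, fs=fs, nperseg=200)        # 5 Hz bins, so 50 Hz falls on a bin
P2 = np.mean(np.abs(X) ** 2, axis=1)
P4 = np.mean(np.abs(X) ** 4, axis=1)
SK = P4 / P2 ** 2 - 2                        # ~0 for Gaussian noise, -> -1 for a sine

k50 = np.argmin(np.abs(f - 50))
print("SK at 50 Hz: %.2f  (harmonic -> -1)" % SK[k50])
print("median SK elsewhere: %.2f  (noise/modes -> 0)" % np.median(np.delete(SK, k50)))
```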
A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
NASA Astrophysics Data System (ADS)
Gillis, Nicolas; Luce, Robert
2018-01-01
A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2007-07-01
We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization to track possible changes of correlation signs is able to identify possible transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
NASA Astrophysics Data System (ADS)
Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.
2012-12-01
Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum spatial density. So, it is proposed to virtually augment it by 25, 50, 100 and 160%, the last being the rate that would meet WMO requirements. Results suggest that, for a given augmentation, robust networks remain stable overall for the two time horizons.
Optimal False Discovery Rate Control for Dependent Data
Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe
2013-01-01
This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma and it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
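For contrast, a minimal sketch of the canonical p-value based baseline that such procedures are compared against, the Benjamini-Hochberg step-up rule; the simulated p-values are illustrative, not the neuroblastoma data.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling FDR at level q (BH step-up)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m        # BH line: q * i / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                   # reject the k smallest p-values
    return rejected

rng = np.random.default_rng(9)
p = np.concatenate([rng.uniform(0, 1, 900),      # true nulls
                    rng.beta(0.1, 5, 100)])      # signals: concentrated near zero
print("discoveries:", benjamini_hochberg(p, q=0.1).sum())
```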
A systematic optimization for graphene-based supercapacitors
NASA Astrophysics Data System (ADS)
Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho
2017-08-01
Increasing the energy-storage density of supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but systematic optimization of the mixing ratio needed to maximize the performance of each candidate material has received insufficient attention, which hinders technological progress. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of the three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). Using an extreme-vertices design, the optimized proportion is determined to be rGO:AB:PVDF = 0.95:0.00:0.05. The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.
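A mixture-design analysis of this kind can be sketched as follows: fit a Scheffé quadratic mixture model to design-point responses and search the simplex for the predicted optimum. The design points and capacitance values below are invented stand-ins, and the extreme-vertices construction itself is abstracted into a candidate grid.

```python
import numpy as np

# Illustrative (rGO, AB, PVDF) design points and responses (F/g, invented).
design = np.array([[0.90, 0.05, 0.05], [0.80, 0.15, 0.05], [0.80, 0.05, 0.15],
                   [0.95, 0.00, 0.05], [0.85, 0.10, 0.05], [0.85, 0.05, 0.10],
                   [0.87, 0.08, 0.05]])
response = np.array([150.0, 120.0, 110.0, 180.0, 135.0, 125.0, 140.0])

def scheffe_terms(x):
    """Quadratic Scheffe mixture model terms: a, b, c, ab, ac, bc."""
    a, b, c = x.T
    return np.column_stack([a, b, c, a * b, a * c, b * c])

coef, *_ = np.linalg.lstsq(scheffe_terms(design), response, rcond=None)

# Evaluate the fitted surface on a simplex grid and report the best mix.
grid = np.array([(a, b, 1 - a - b)
                 for a in np.linspace(0, 1, 101)
                 for b in np.linspace(0, 1 - a, int(round((1 - a) * 100)) + 1)])
pred = scheffe_terms(grid) @ coef
print("predicted best mix (rGO, AB, PVDF):", grid[pred.argmax()])
```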
A multidimensional model of optimal participation of children with physical disabilities.
Kang, Lin-Ju; Palisano, Robert J; King, Gillian A; Chiarello, Lisa A
2014-01-01
To present a conceptual model of optimal participation in recreational and leisure activities for children with physical disabilities. The conceptualization of the model was based on review of contemporary theories and frameworks, empirical research and the authors' practice knowledge. A case scenario is used to illustrate application to practice. The model proposes that optimal participation in recreational and leisure activities involves the dynamic interaction of multiple dimensions and determinants of participation. The three dimensions of participation are physical, social and self-engagement. Determinants of participation encompass attributes of the child, family and environment. Experiences of optimal participation are hypothesized to result in long-term benefits including better quality of life, a healthier lifestyle and emotional and psychosocial well-being. Consideration of relevant child, family and environment determinants of dimensions of optimal participation should assist children, families and health care professionals to identify meaningful goals and outcomes and guide the selection and implementation of innovative therapy approaches and methods of service delivery. Implications for Rehabilitation Optimal participation is proposed to involve the dynamic interaction of physical, social and self-engagement and attributes of the child, family and environment. The model emphasizes the importance of self-perceptions and participation experiences of children with physical disabilities. Optimal participation may have a positive influence on quality of life, a healthy lifestyle and emotional and psychosocial well-being. Knowledge of child, family, and environment determinants of physical, social and self-engagement should assist children, families and professionals in identifying meaningful goals and guiding innovative therapy approaches.
Yan, Yin-zhuo; Qian, Yu-lin; Ji, Feng-di; Chen, Jing-yu; Han, Bei-zhong
2013-05-01
Koji-making is a key process for production of high-quality soy sauce. The microbial composition during koji-making was investigated by culture-dependent and culture-independent methods to determine the predominant bacterial and fungal populations. The culture-dependent methods used were direct culture with colony morphology observation, and PCR amplification of 16S/26S rDNA fragments followed by sequencing analysis. The culture-independent method was based on the analysis of 16S/26S rDNA clone libraries. There were differences between the results obtained by the different methods; however, sufficient overlap existed to identify potentially significant microbial groups. Sixteen and 20 different bacterial species were identified using the culture-dependent and culture-independent methods, respectively, of which 7 species were identified by both. The most predominant bacterial genera were Weissella and Staphylococcus. Six different fungal species were identified by each of the culture-dependent and culture-independent methods, but only 3 species were identified by both. The most predominant fungi were Aspergillus and Candida species. This work illustrates the importance of a comprehensive polyphasic approach in the analysis of microbial composition during soy sauce koji-making, knowledge of which will enable further optimization of microbial composition and quality control of koji to upgrade traditional Chinese soy sauce products. Copyright © 2013 Elsevier Ltd. All rights reserved.
Boulet, Louis-Philippe; Dorval, Eileen; Labrecque, Manon; Turgeon, Michel; Montague, Terrence; Thivierge, Robert L
2008-01-01
BACKGROUND AND OBJECTIVES: Asthma care in Canada and around the world persistently falls short of optimal treatment. To optimize care, a systematic approach to identifying such shortfalls or ‘care gaps’, in which all stakeholders of the health care system (including patients) are involved, was proposed. METHODS: Several projects of a multipartner, multidisciplinary disease management program, developed to optimize asthma care in Quebec, were conducted over a period of eight years. First, two population maps were produced to identify regional variations in asthma-related morbidity and to prioritize interventions for improving treatment. Second, current care was evaluated in a physician-patient cohort, confirming the many care gaps in asthma management. Third, two series of peer-reviewed outcome studies, targeting high-risk populations and specific asthma care gaps, were conducted. Finally, a process to integrate the best interventions into the health care system and an agenda for further research on optimal asthma management were proposed. RESULTS: Key observations from these studies included the identification of specific patterns of noncompliance in using inhaled corticosteroids, the failure of increased access to spirometry in asthma education centres to increase the number of education referrals, the transient improvement in educational abilities of nurses involved with an asthma hotline telephone service, and the beneficial effects of practice tools aimed at facilitating the assessment of asthma control and treatment needs by general practitioners. CONCLUSIONS: Disease management programs such as Towards Excellence in Asthma Management can provide valuable information on optimal strategies for improving treatment of asthma and other chronic diseases by identifying care gaps, improving guidelines implementation and optimizing care. PMID:18818784
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment: the system responds rapidly to an input stimulus and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, multi-peak optimization problem for which satisfactory solutions, especially high-quality ones, are difficult to acquire. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. The particle crossover operation and elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it detected more high-quality solutions within an acceptable time than previous methods. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance in designing GRNs with superior robust adaptation.
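A sketch of the initialization idea, assuming SciPy's qmc module: a basic global-best PSO whose initial population is a Latin hypercube sample (the paper's best-neighbor topology, crossover, and elitism are omitted for brevity). The 12-dimensional quadratic objective is a toy stand-in for the Michaelis-Menten fitting problem.

```python
import numpy as np
from scipy.stats import qmc

def pso_lhs(f, bounds, n_particles=30, iters=200, seed=0):
    """Global-best PSO whose initial population is drawn by Latin
    hypercube sampling for better coverage of the search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = qmc.scale(qmc.LatinHypercube(d=len(lo), seed=seed).random(n_particles),
                  lo, hi)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                 # linearly decaying inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()]
    return g, pbest_f.min()

# Toy 12-parameter objective standing in for the GRN fitting problem.
bounds = np.tile([[0.0, 10.0]], (12, 1))
best, val = pso_lhs(lambda p: np.sum((p - 3.7) ** 2), bounds)
print(val)  # near 0
```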
Jung, Melissa R.; Horgen, F. David; Orski, Sara V.; Rodriguez, Viviana; Beers, Kathryn L.; Balazs, George H.; Jones, T. Todd; Work, Thierry M.; Brignac, Kayla C.; Royer, Sarah-Jeanne; Hyrenbach, David K.; Jensen, Brenda A.; Lynch, Jennifer M.
2018-01-01
Polymer identification of plastic marine debris can help identify its sources, degradation, and fate. We optimized and validated a fast, simple, and accessible technique, attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR), to identify polymers contained in plastic ingested by sea turtles. Spectra of consumer-good items with known resin identification codes #1–6 and several #7 plastics were compared to standard and raw manufactured polymers. High-temperature size exclusion chromatography measurements confirmed that ATR FT-IR could differentiate these polymers. Discriminating high-density (HDPE) from low-density polyethylene (LDPE) is challenging, but a clear step-by-step guide is provided that identified 78% of ingested PE samples. The optimal cleaning methods consisted of wiping ingested pieces with water or cutting them. Of 828 ingested plastic pieces from 50 Pacific sea turtles, 96% were identified by ATR FT-IR as HDPE, LDPE, unknown PE, polypropylene (PP), PE and PP mixtures, polystyrene, polyvinyl chloride, and nylon.
NASA Astrophysics Data System (ADS)
Li, Peng-fei; Zhou, Xiao-jun
2015-12-01
Subsea tunnel lining structures should be designed to sustain the loads transmitted from the surrounding ground and groundwater during excavation. Extremely high pore-water pressure reduces the effective strength of the country rock that surrounds a tunnel, thereby lowering the arching effect and stratum stability of the structure. In this paper, the mechanical behavior and shape optimization of the lining structure for the Xiang'an tunnel, excavated in weathered slots, are examined. Eight cross sections with different geometric parameters are adopted to study the mechanical behavior and shape optimization of the lining structure. The hyperstatic reaction method is used through the finite element analysis software ANSYS. The mechanical behavior of the lining structure is evidently affected by the geometric parameters of the cross-sectional shape. The minimum safety factor of the lining structure elements is set as the objective function, and the tunnel shape that maximizes the minimum safety factor is identified. The minimum safety factor increases significantly after optimization. The optimized cross section significantly improves the mechanical characteristics of the lining structure and effectively reduces its deformation. The optimization process and program are parameterized so that the method can be applied to the optimal design of other similar structures. The results obtained from this study enhance our understanding of the mechanical behavior of subsea tunnel lining structures and are also beneficial to the optimal design of lining structures in general.
Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk
NASA Astrophysics Data System (ADS)
Long, C. C.; Marsden, A. L.; Bazilevs, Y.
2014-10-01
In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps, the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
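A simplified version of such a direct gradient search, on the FitzHugh-Nagumo model with a piecewise-constant stimulus: the cost is stimulus energy plus a penalty unless the membrane variable spikes, and the gradient is taken by finite differences rather than the paper's more efficient formulation. Parameters, thresholds, and the penalty weight are illustrative.

```python
import numpy as np

def simulate(stim, dt=0.2, a=0.7, b=0.8, tau=12.5):
    """Euler integration of the FitzHugh-Nagumo model driven by a
    piecewise-constant stimulus; returns the peak membrane variable."""
    v, w, vmax = -1.2, -0.6, -np.inf
    for I in stim:
        dv = v - v**3 / 3.0 - w + I
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        vmax = max(vmax, v)
    return vmax

def cost(stim, dt=0.2):
    # Stimulus energy plus a penalty unless the outcome condition
    # (a spike, here v exceeding 1.0) is achieved.
    return dt * np.sum(stim**2) + 100.0 * max(0.0, 1.0 - simulate(stim, dt))

rng = np.random.default_rng(3)
stim = 0.1 * rng.standard_normal(100)       # stochastically seeded waveform
eps, lr = 1e-3, 0.1
for it in range(150):                       # finite-difference gradient descent
    grad = np.empty_like(stim)
    for k in range(stim.size):
        d = np.zeros_like(stim)
        d[k] = eps
        grad[k] = (cost(stim + d) - cost(stim - d)) / (2.0 * eps)
    stim -= lr * grad
print("final cost:", round(cost(stim), 3))  # decreases as the waveform reshapes
```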
Evaluating remote sensing methods for targeting erosion in riparian corridors
USDA-ARS?s Scientific Manuscript database
State agencies in the United States and other groups developing water quality programs have begun using satellite imagery with hydrologic/water quality modeling to identify possible critical source areas of erosion. To optimize the use of available funds, quantitative targeting of areas with the hig...
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type are key to ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
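A sketch of the pipeline under stated assumptions (PyWavelets for the packet decomposition, scikit-learn's random forest; the vibration data below are random stand-ins): compute the WTFER feature vector per signal, train the forest, then prune the feature space by impurity-based importance.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wtfer(signal, wavelet="db4", level=3):
    """Wavelet packet time-frequency energy rate: relative energy of each
    terminal node of the wavelet packet tree (the WTFER feature vector)."""
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
    energies = np.array([np.sum(np.asarray(node.data) ** 2)
                         for node in wp.get_level(level, order="freq")])
    return energies / energies.sum()

# Random stand-ins for the HVCB vibration signals and six fault labels.
rng = np.random.default_rng(0)
signals = rng.standard_normal((120, 1024))
y = rng.integers(0, 6, 120)

X = np.vstack([wtfer(s) for s in signals])
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Feature-space optimization: keep the energy-rate features the forest
# ranks above average importance, then retrain on the reduced space.
keep = forest.feature_importances_ > forest.feature_importances_.mean()
forest_opt = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
print(keep.sum(), "of", X.shape[1], "energy-rate features retained")
```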
[Study on application of SVM in prediction of coronary heart disease].
Zhu, Yue; Wu, Jianghua; Fang, Ying
2013-12-01
Based on blood pressure, plasma lipid, glucose (Glu) and uric acid (UA) data from physical examinations, a support vector machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. Firstly, SVM classifiers were built using a radial basis kernel function, a linear kernel function and a polynomial kernel function, respectively. Secondly, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. Compared with an artificial neural network with back propagation (BP), linear discriminant analysis, logistic regression and non-optimized SVM, the optimized RBF-SVM model showed superior classification performance, with accuracy, sensitivity and specificity of 94.51%, 92.31% and 96.67%, respectively. It is concluded that SVM is a valid method for assisting the diagnosis of CHD.
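A minimal sketch of PSO-tuned SVM hyperparameters, with a synthetic dataset standing in for the physical-examination data and cross-validated accuracy as the fitness; the swarm coefficients and search box are generic choices, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the physical-examination data (BP, lipids, Glu, UA).
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def fitness(p):
    """Negative 5-fold CV accuracy of an RBF-SVM at C = 10^p0, gamma = 10^p1."""
    return -cross_val_score(SVC(C=10.0 ** p[0], gamma=10.0 ** p[1]),
                            X, y, cv=5).mean()

rng = np.random.default_rng(0)
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])    # log10 search box
x = rng.uniform(lo, hi, (20, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
g = pbest[pbest_f.argmin()]
for _ in range(20):                                       # basic PSO loop
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()]
print("best CV accuracy:", -pbest_f.min(), "(C, gamma) =", 10.0 ** g)
```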
NASA Astrophysics Data System (ADS)
Bortolotti, P.; Adolphs, G.; Bottasso, C. L.
2016-09-01
This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in the selection of the different materials of the blade, while providing indications to composite manufacturers on optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters amongst the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, its spar caps and shell skin laminates being subjected to optimization. The procedure identifies a blade optimum for a new spar cap laminate characterized by a higher longitudinal Young's modulus and higher cost than the initial one, which nevertheless induces both cost and mass savings at the blade level. In terms of shell skin, the adoption of a laminate with intermediate properties between a bi-axial one and a tri-axial one also leads to slight structural improvements.
Zou, Meng; Liu, Zhaoqi; Zhang, Xiang-Sun; Wang, Yong
2015-10-15
In prognosis and survival studies, an important goal is to identify multi-biomarker panels with predictive power using molecular characteristics or clinical observations. Such analysis is often challenged by censored, small-sample-size, but high-dimensional genomic profiles or clinical data. Therefore, sophisticated models and algorithms are in pressing need. In this study, we propose a novel Area Under Curve (AUC) optimization method for multi-biomarker panel identification named Nearest Centroid Classifier for AUC optimization (NCC-AUC). Our method is motivated by the connection between the AUC score for classification accuracy evaluation and Harrell's concordance index in survival analysis. This connection allows us to convert the survival time regression problem into a binary classification problem. An optimization model is then formulated to directly maximize AUC and simultaneously minimize the number of selected features to construct a predictor in the nearest centroid classifier framework. NCC-AUC shows strong performance on both genomic data of breast cancer and clinical data of stage IB non-small-cell lung cancer (NSCLC). On the genomic data, NCC-AUC outperforms Support Vector Machine (SVM) and Support Vector Machine-based Recursive Feature Elimination (SVM-RFE) in classification accuracy. It tends to select a multi-biomarker panel with low average redundancy and enriched biological meaning. NCC-AUC also separates low- and high-risk cohorts more significantly than the widely used Cox model (Cox proportional-hazards regression model) and the L1-Cox model (L1-penalized Cox model). These performance gains are quite robust across 5 subtypes of breast cancer. Further, on an independent clinical dataset, NCC-AUC outperforms SVM and SVM-RFE in predictive accuracy and is consistently better than the Cox and L1-Cox models in grouping patients into high- and low-risk categories. In summary, NCC-AUC provides a rigorous optimization framework to systematically reveal multi-biomarker panels from genomic and clinical data. It can serve as a useful tool to identify prognostic biomarkers for survival analysis. NCC-AUC is available at http://doc.aporc.org/wiki/NCC-AUC. ywang@amss.ac.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
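The flavor of the method can be sketched with a nearest-centroid decision score and a greedy AUC-driven feature search; note that the published method solves a joint optimization (maximize AUC, minimize panel size) rather than this greedy stand-in, and the data here are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ncc_score(X, y, features):
    """Nearest-centroid decision value on a feature subset: difference of
    distances to the class-0 and class-1 centroids (higher = class 1)."""
    Xf = X[:, features]
    c0, c1 = Xf[y == 0].mean(axis=0), Xf[y == 1].mean(axis=0)
    return np.linalg.norm(Xf - c0, axis=1) - np.linalg.norm(Xf - c1, axis=1)

# Synthetic data: only features 3, 17, 42 carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
y = (X[:, [3, 17, 42]].sum(axis=1) + 0.5 * rng.standard_normal(100) > 0).astype(int)

# Greedy panel construction: add the feature that most improves AUC.
panel, auc, improved = [], 0.5, True
while improved:
    improved = False
    for f in range(X.shape[1]):
        if f in panel:
            continue
        a = roc_auc_score(y, ncc_score(X, y, panel + [f]))
        if a > auc + 1e-3:
            panel, auc, improved = panel + [f], a, True
print("panel:", sorted(panel), "AUC:", round(auc, 3))
```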
NASA Astrophysics Data System (ADS)
Kwon, O.; Kim, W.; Kim, J.
2017-12-01
Construction of subsea tunnels has recently increased globally. For safe construction, identifying geological structures, including faults, at the design and construction stages is critically important. Unlike tunnels on land, however, data on geological structure are very difficult to obtain because of the limits of geological surveys at sea. This study addresses these difficulties by developing technology to identify the geological structure of the seabed automatically from echo-sounding data. When investigating a potential site for a deep subsea tunnel, borehole and geophysical investigations face technical and economic limits. Echo-sounding data, by contrast, are easily obtainable, and their reliability is high compared with the above approaches. This study is aimed at developing an algorithm that identifies large-scale geological structure of the seabed using a geostatistical approach, building on the structural-geology principle that topographic features indicate geological structure. The basic concept of the algorithm is as follows: (1) convert the seabed topography to grid data using the echo-sounding data, (2) apply a moving window of optimal size to the grid data, (3) estimate the spatial statistics of the grid data in the window area, (4) set a percentile standard for the spatial statistics, (5) display the values satisfying the standard on the map, (6) visualize the geological structure on the map. The important elements of this study are the optimal size of the moving window, the choice of spatial statistics, and the optimal percentile standard; numerous simulations were run to determine them. Eventually, a user program based on R was developed around the optimal analysis algorithm. The program identifies the variation of various spatial statistics and, because the type of spatial statistic and the percentile standard are easy to designate, allows geological structure to be analyzed readily as these settings vary. This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government. (Project Number: 13 Construction Research T01)
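Steps (2)-(5) reduce to a few lines of array code. The sketch below (window size, statistic, and percentile are illustrative choices) flags cells of a synthetic bathymetry grid whose local standard deviation exceeds a percentile standard; the flags cluster along a fault-like scarp.

```python
import numpy as np

def structure_map(bathy, win=5, stat=np.std, pct=95):
    """Flag grid cells whose local spatial statistic exceeds a percentile
    threshold: steps (2)-(5) of the algorithm sketched above."""
    h, w = bathy.shape
    r = win // 2
    s = np.full((h, w), np.nan)
    for i in range(r, h - r):                  # moving window, step (2)
        for j in range(r, w - r):
            s[i, j] = stat(bathy[i - r:i + r + 1, j - r:j + r + 1])  # step (3)
    thresh = np.nanpercentile(s, pct)          # percentile standard, step (4)
    return s >= thresh                         # candidate structure cells, step (5)

# Synthetic seabed: smooth slope plus a fault-like scarp along one diagonal.
yy, xx = np.mgrid[0:200, 0:200]
bathy = 0.02 * xx + np.where(yy > xx, 5.0, 0.0)
mask = structure_map(bathy)
print(mask.sum(), "cells flagged")             # flags cluster along the scarp
```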
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space…
Chen, Shuang; Sha, Sha; Qian, Michael; Xu, Yan
2017-12-01
This study investigated the aroma contribution of volatile sulfur compounds (VSCs) in Moutai liquors. The VSCs were analyzed using headspace solid-phase microextraction-gas chromatography-pulsed flame photometric detection (HS-SPME-GC-PFPD). The influences of SPME fibers, ethanol content in the sample, pre-incubation time, and extraction temperature and time on the extraction of VSCs were optimized. The VSCs were optimally extracted using a divinylbenzene/carboxen/polydimethylsiloxane fiber, by incubating 10 mL diluted Chinese liquor (5% vol.) with 3 g NaCl at 30 °C for 15 min, followed by a subsequent extraction for 40 min at 30 °C. The optimized method was further validated. A total of 13 VSCs were identified and quantified in Moutai liquors. The aroma contribution of these VSCs was evaluated by their odor activity values (OAVs), with the result that 7 of the 13 VSCs had OAVs > 1. In particular, 2-furfurylthiol, methanethiol, dimethyl trisulfide, ethanethiol, and methional had relatively high OAVs and could be the key aroma contributors to Moutai liquors. In this study, a method for analyzing volatile sulfur compounds in Chinese liquors has been developed. This method will allow an in-depth study of the aroma contribution of volatile sulfur compounds in Chinese liquors. Seven volatile sulfur compounds were identified as potential key aroma contributors for Moutai liquors, which can aid the quality control of Moutai liquors. © 2017 Institute of Food Technologists®.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
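The Pareto machinery at the heart of this approach is easy to state: a weight setting is kept only if no other setting is at least as good on both objectives and strictly better on one. A sketch over hypothetical (MRE, MCE) pairs:

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated (MRE, MCE) pairs: a point is dominated
    if another point is at least as good in both objectives and strictly
    better in at least one (both objectives are minimized)."""
    pts = np.asarray(points)
    front = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            front.append(i)
    return front

# Hypothetical (reconstruction error, classification error) pairs for
# candidate auto-encoder weight settings.
objs = [(0.10, 0.30), (0.12, 0.20), (0.20, 0.05), (0.15, 0.25), (0.30, 0.04)]
print(pareto_front(objs))  # -> [0, 1, 2, 4]: the Pareto-optimal trade-offs
```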
NASA Astrophysics Data System (ADS)
Weerasinghe, Harshi; Schneider, Uwe A.
2010-05-01
Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing problems of water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources. This would in turn keep social welfare and economic benefit in optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, and land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydraulic soil groups, and for mean annual precipitation levels, is derived by the SCS-CN method. Using overlay and decision tree algorithms in GIS, potential water storage sites are identified for constructing regional reservoirs. Subsequently, sites are prioritized based on runoff generation potential (m3 per unit area) and geographical suitability for constructing storage structures. The results of the spatial analysis are used as input for the optimization model. Allocation of resources and appropriate dimensions for dams and associated structures are identified using the optimization model, which evaluates the capability of alternative reservoirs for cost-efficient water management. The Geographic Information System is used to store, analyze, and integrate spatially explicit and non-spatial attribute information, whereas the algebraic modeling platform is used to develop the dynamic optimization model. The results of this methodology are validated over space against satellite remote sensing data and existing data on reservoir capacities and runoff. The method is also suitable for the design of on-farm water storage structures, water distribution networks, and moisture conservation structures in a global context.
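The SCS-CN step is a closed-form calculation; a sketch with the standard initial abstraction Ia = 0.2 S (values in mm, curve number illustrative):

```python
def scs_cn_runoff(p_mm, cn):
    """SCS curve-number direct runoff (mm) for storm depth p_mm and curve
    number cn, using the usual initial abstraction Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0          # potential maximum retention S in mm
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Example: a 60 mm storm on land with CN = 80 (moderate runoff potential).
print(round(scs_cn_runoff(60.0, 80), 1), "mm of runoff")  # about 20 mm
```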
Multiplex detection of agricultural pathogens
Siezak, Thomas R.; Gardner, Shea; Torres, Clinton; Vitalis, Elizabeth; Lenhoff, Raymond J.
2013-01-15
Described are kits and methods useful for detection of agricultural pathogens in a sample. Genomic sequence information from agricultural pathogens was analyzed to identify signature sequences, e.g., polynucleotide sequences useful for confirming the presence or absence of a pathogen in a sample. Primer and probe sets were designed and optimized for use in a PCR based, multiplexed Luminex assay and/or an array assay to successfully identify the presence or absence of pathogens in a sample.
Optimal topologies for maximizing network transmission capacity
NASA Astrophysics Data System (ADS)
Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.
2018-04-01
It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply them on the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
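A sketch of the rewiring idea, assuming NetworkX and the common estimate that the critical packet-generation rate under shortest-path routing scales as (N − 1)/B_max, with B_max the largest node betweenness: greedily accept rewires that raise this capacity while preserving connectivity. This is a simplified stand-in for the paper's procedure.

```python
import random
import networkx as nx

def capacity(g):
    """Critical packet-generation rate estimate: (N - 1) / max betweenness."""
    bc = nx.betweenness_centrality(g, normalized=False)
    return (g.number_of_nodes() - 1) / max(bc.values())

random.seed(0)
g = nx.barabasi_albert_graph(100, 2, seed=0)
best = capacity(g)
for _ in range(200):                            # greedy link rewiring
    u, v = random.choice(list(g.edges()))
    a, b = random.sample(list(g.nodes()), 2)
    if g.has_edge(a, b) or (a, b) in ((u, v), (v, u)):
        continue
    g.remove_edge(u, v)
    g.add_edge(a, b)
    if not nx.is_connected(g):                  # keep the network connected
        g.remove_edge(a, b)
        g.add_edge(u, v)
        continue
    c = capacity(g)
    if c > best:
        best = c                                # keep improving rewires
    else:                                       # revert non-improving rewires
        g.remove_edge(a, b)
        g.add_edge(u, v)
print("optimized capacity estimate:", round(best, 3))
```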
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
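In the same spirit (though not RODEo itself), a sequential Gaussian-process design loop with an expected-improvement acquisition can locate a process optimum in far fewer runs than a full factorial; the etch-rate response and normalized parameter ranges below are invented stand-ins.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def etch_rate(x):
    """Hypothetical black-box response over normalized (power, pressure)."""
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2) + 0.02 * np.sin(20 * x[0])

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (5, 2))                  # small initial design
y = np.array([etch_rate(x) for x in X])
cand = rng.uniform(0, 1, (500, 2))             # candidate recipes

for _ in range(15):                            # sequential design loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = cand[ei.argmax()]
    X = np.vstack([X, x_next])
    y = np.append(y, etch_rate(x_next))

# 20 total experiments, versus 25+ for even a coarse two-factor full factorial.
print("best recipe:", X[y.argmax()], "rate:", round(y.max(), 4))
```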
Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging
NASA Astrophysics Data System (ADS)
Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping
2011-03-01
In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler and Doppler variance images were compared. Blood vessels down to the capillary level could be resolved with the optimized optical coherence color Doppler and Doppler variance methods. For in vivo imaging of human eyes, the bulk-motion-induced bulk phase must be identified and removed before applying the color Doppler method. The Doppler variance method was found to be insensitive to bulk motion and can be used without removing the bulk phase. A novel, simple and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers; the algorithm is based on the OCT signal intensity difference between layers. A spectrometer-based Fourier-domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The three-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 × 5 mm2.
NASA Astrophysics Data System (ADS)
Ginting, E.; Tambunanand, M. M.; Syahputri, K.
2018-02-01
Evolutionary Operation (EVOP) is a method designed to be applied during routine plant operation to enable high productivity. Quality is one of the critical factors for a company to win the competition. Product quality was therefore investigated by gathering the company's production data and making direct observations on the factory floor, especially in the drying department, which identified the problem of high water content in the mosquito incense coil. PT. X, which produces mosquito coils, attempted to reduce product defects caused by inaccurate operating conditions. Water content is a key quality parameter of the insect repellent: if the moisture content is too high, the product molds and breaks easily; if it is too low, the product breaks easily and burns for fewer hours. Three factors affect the optimal water content: stirring time, drying temperature, and drying time. To obtain the required conditions, the EVOP method is used. EVOP is an efficient technique for optimizing two or three experimental parameters using two-level factorial designs with a center point. The optimal operating conditions found in the experiment are a stirring time of 20 minutes, a drying temperature of 65°C, and a drying time of 130 minutes. The EVOP analysis yields an optimum water content of 6.90%, which approaches the production plant's target of 7%.
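The underlying design is a two-level factorial with a center point. A sketch, with illustrative factor ranges and water-content measurements, that enumerates the 2³ + 1 runs of one EVOP cycle and computes the main effects:

```python
import itertools
import numpy as np

# Two-level factorial design with a center point over the three EVOP factors,
# coded -1/+1 around a hypothetical current operating condition.
lows  = {"stir_min": 15, "temp_C": 60, "dry_min": 120}
highs = {"stir_min": 25, "temp_C": 70, "dry_min": 140}

runs = [dict(zip(lows, combo)) for combo in
        itertools.product(*[(lows[k], highs[k]) for k in lows])]
runs.append({k: (lows[k] + highs[k]) / 2 for k in lows})   # 2^3 + 1 = 9 runs

# Illustrative water-content measurements (%) for the 9 runs of one cycle.
y = np.array([7.6, 7.1, 7.3, 6.8, 7.4, 7.0, 7.2, 6.9, 7.05])

# Main effect of each factor = mean(y at high) - mean(y at low),
# computed as a contrast over the 8 factorial runs (center point excluded).
coded = np.array([[1 if r[k] == highs[k] else -1 for k in lows]
                  for r in runs[:8]])
effects = {k: (y[:8] * coded[:, i]).mean() * 2 for i, k in enumerate(lows)}
print(effects)   # shift each factor in the direction that moves y toward 7%
```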
Andrews, J R
1981-01-01
Two methods dominate cancer treatment--one, the traditional best practice, individualized treatment method and two, the a priori determined decision method of the interinstitutional, cooperative, clinical trial. In the first, choices are infinite and can be made at the time of treatment; in the second, choices are finite and are made in advance of treatment on a random basis. Neither method systematically selects, identifies, or formalizes the optimum level of effect in the treatment chosen. Of the two, it can be argued that the first, other things being equal, is more likely to select the optimum treatment. The determination of level of effect for the optimization of cancer treatment requires the generation of dose-response relationships for both benefit and risk and the introduction of benefit and risk considerations and judgements. The clinical trial, as presently constituted, does not yield this kind of information, it being, generally, of the binary yes or no, better or worse type. The best practice, individualized treatment method can yield, when adequately documented, both a range of dose-response relationships and a variety of benefit and risk considerations. The presentation will be limited to a consideration of a single modality of cancer treatment, radiation therapy, but an analogy with other modalities of cancer treatment will be inferred. Criteria for optimization will be developed and graphic means for its identification and formalization will be demonstrated with examples taken from the radiotherapy literature. The general problem of optimization theory and practice will be discussed; the necessity for its exploration in relation to the increasing complexity of cancer treatment will be developed; and recommendations for clinical research will be made including a proposal for the support of clinics as an alternative to the support of programs.
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization-based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced-dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
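A toy version of the association test, assuming each admissible region can be parameterized as a curve with Gaussian uncertainty: minimize the squared Mahalanobis distance over the two curve parameters, then apply a chi-square test at the minimum. The geometry and covariances below are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Hypothetical one-parameter admissible regions (curves in a 2-D state
# space), standing in for the uncertain admissible regions produced by
# two optical observations.
def region_a(s):
    return np.array([s, 0.5 * s**2])

def region_b(t):
    return np.array([t + 0.1, 0.5 * t**2 + 0.05])

P = np.diag([0.02, 0.02])            # combined systemic covariance (invented)
P_inv = np.linalg.inv(P)

def d2(u):
    """Squared Mahalanobis distance between the two regions at (s, t)."""
    r = region_a(u[0]) - region_b(u[1])
    return r @ P_inv @ r

res = minimize(d2, x0=np.zeros(2))   # local minimum over (s, t)
p_value = chi2.sf(res.fun, df=2)     # binary hypothesis test at the minimum
print("min squared distance:", round(res.fun, 3),
      "associated:", p_value > 0.05)
```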
Fu, Yue; Chai, Tianyou
2016-12-01
Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed, and a new analytical method to prove its convergence is presented. Then, based on the SPUA and without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or, for linear systems as a special case, the generalized algebraic Riccati equation, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair are re-expressed using the approximation theorem, and the unknown constants or weights of each are identified simultaneously by the recursive least squares method. The convergence of the online algorithm to the optimal solutions is proved. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.
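The recursive least squares update used for the weight identification is standard; a self-contained sketch on synthetic linear measurements:

```python
import numpy as np

def rls_identify(phi, y, lam=0.99):
    """Recursive least squares: identify weights w minimizing the discounted
    squared error of y[k] ~ phi[k] @ w, updating one sample at a time as an
    online algorithm would."""
    n = phi.shape[1]
    w = np.zeros(n)
    P = 1e3 * np.eye(n)                    # large initial covariance
    for k in range(len(y)):
        f = phi[k]
        g = P @ f / (lam + f @ P @ f)      # gain vector
        w = w + g * (y[k] - f @ w)         # innovation update
        P = (P - np.outer(g, f @ P)) / lam
    return w

# Check on synthetic data: recover w_true from noisy linear measurements.
rng = np.random.default_rng(0)
phi = rng.standard_normal((500, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = phi @ w_true + 0.01 * rng.standard_normal(500)
print(rls_identify(phi, y))  # close to [1.5, -2.0, 0.5]
```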
Li, Xiang; Yang, Zhibo; Chen, Xuefeng
2014-01-01
The active structural health monitoring (SHM) approach for the complex composite laminate structures of wind turbine blades (WTBs) addresses the important and complicated problem of signal noise. After illustrating the wind energy industry's development perspectives and its crucial requirement for SHM, an improved redundant second generation wavelet transform (IRSGWT) pre-processing algorithm based on neighboring coefficients is introduced for weak-signal denoising. The method avoids the drawbacks of conventional wavelet methods, which lose information in transforms, and the shortcomings of redundant second generation wavelet (RSGWT) denoising, which can lead to error propagation. For large-scale WTB composites, minimizing the number of sensors while ensuring accuracy is also a key issue. A sparse sensor array optimization for WTB composites is proposed that can reduce the number of transducers required. The optimized eight-transducer configuration identifies the correct position of simulated damage (a mass load) on composite laminates with anisotropic characteristics more accurately than a non-optimized array, despite using half the transducers of the full sixteen-transducer array. It can help guarantee more flexible and reliable monitoring of the areas that suffer damage most frequently. The proposed methods are verified experimentally on specimens of carbon fiber reinforced resin composite laminates. PMID:24763210
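A neighboring-coefficient shrinkage can be sketched with an ordinary (non-redundant) wavelet transform via PyWavelets; this NeighShrink-style rule is a stand-in for the paper's improved redundant second-generation transform:

```python
import numpy as np
import pywt

def neigh_shrink(signal, wavelet="db4", level=4, win=3):
    """Neighboring-coefficient wavelet denoising: shrink each detail
    coefficient by the energy of its local neighborhood, with the universal
    threshold lam = 2 sigma^2 ln(n)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    lam = 2.0 * sigma**2 * np.log(len(signal))
    out = [coeffs[0]]                                     # keep approximation
    for d in coeffs[1:]:
        pad = np.pad(d**2, win // 2, mode="edge")
        s2 = np.convolve(pad, np.ones(win), mode="valid")  # neighborhood energy
        out.append(d * np.maximum(0.0, 1.0 - lam / np.maximum(s2, 1e-12)))
    return pywt.waverec(out, wavelet)

# Lamb-wave-like burst buried in noise.
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
den = neigh_shrink(noisy)
print(np.linalg.norm(den[:1024] - clean) < np.linalg.norm(noisy - clean))  # True
```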
Routing performance analysis and optimization within a massively parallel computer
Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen
2013-04-16
An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Hara, Matthew J.; Murray, Nathaniel J.; Carter, Jennifer C.; ...
2018-02-24
Zirconium-89 (89Zr), produced by the (p, n) reaction from naturally monoisotopic yttrium (natY), is a promising positron-emitting isotope for immunoPET imaging. Its long half-life of 78.4 h is sufficient for evaluating slow physiological processes. A prototype automated fluidic system, coupled to on-line and in-line detectors, has been constructed to facilitate development of new 89Zr purification methodologies. The highly reproducible reagent delivery platform and near-real-time monitoring of column effluents allow for efficient method optimization. The separation of Zr from dissolved Y metal targets was evaluated using several anion exchange resins. Each resin was evaluated against its ability to quantitatively capture Zr from a load solution high in dissolved Y. The most appropriate anion exchange resin for this application was identified, and the separation method was optimized. The method is capable of a high Y decontamination factor (>10^5) and has been shown to remove Fe, an abundant contaminant in Y foils, from the 89Zr elution fraction. Finally, the method was evaluated using cyclotron-bombarded Y foil targets, achieving >95% recovery of the 89Zr present in the foils. The anion exchange column method described here is intended to be the first 89Zr isolation stage in a dual-column purification process.
Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step applies the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered cerebral sources; the two frequency windows are defined by optimizing convolutional filters. The second step uses a regression method to reconstruct the signal independently within various frequency windows. The method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts, with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can easily be used in clinical EEG recordings. PMID:25298967
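The CSP step reduces to a generalized symmetric eigenproblem. A sketch, with synthetic multichannel data standing in for the two frequency-windowed EEG conditions:

```python
import numpy as np
from scipy.linalg import eigh

def csp(x_a, x_b):
    """Common spatial patterns via the generalized eigenproblem
    Ca w = l (Ca + Cb) w. With eigenvalues sorted in decreasing order, the
    leading spatial filters maximize variance in condition A relative to B
    (here: the slow cerebral band relative to the muscle-artifact band)."""
    ca, cb = np.cov(x_a), np.cov(x_b)
    vals, vecs = eigh(ca, ca + cb)       # generalized symmetric eigenproblem
    order = np.argsort(vals)[::-1]
    return vecs[:, order], vals[order]

# Synthetic 8-channel EEG: band-specific "sources" mixed into channels.
rng = np.random.default_rng(0)
mixing = rng.standard_normal((8, 8))
slow = rng.standard_normal((8, 5000)) * np.linspace(2, 0.1, 8)[:, None]
fast = rng.standard_normal((8, 5000)) * np.linspace(0.1, 2, 8)[:, None]
W, lam = csp(mixing @ slow, mixing @ fast)
print(lam[:3])   # close to 1: components dominated by the slow condition
```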
Non-Viral Transfection Methods Optimized for Gene Delivery to a Lung Cancer Cell Line
Salimzadeh, Loghman; Jaberipour, Mansooreh; Hosseini, Ahmad; Ghaderi, Abbas
2013-01-01
Background Mehr-80 is a newly established adherent human large cell lung cancer cell line that has not been transfected until now. This study aims to define the optimal transfection conditions and the effects of some critical elements for enhancing gene delivery to this cell line by utilizing different non-viral transfection procedures. Methods In the current study, calcium phosphate (CaP), DEAE-dextran, superfect, electroporation and lipofection transfection methods were used to optimize delivery of a plasmid construct that expressed Green Fluorescent Protein (GFP). Transgene expression was detected by fluorescence microscopy and flow cytometry. Toxicities of the methods were estimated by trypan blue staining. In order to evaluate the expression level of the transfected gene, we used a plasmid construct that expressed the Stromal cell-Derived Factor-1 (SDF-1) gene and measured its expression by real-time PCR. Results Mean levels of GFP-expressing cells 48 hr after transfection were 8.4% (CaP), 8.2% (DEAE-dextran), 4.9% (superfect), 34.1% (electroporation), and 40.1% (lipofection). Lipofection produced the most intense SDF-1 expression of the analyzed methods. Conclusion This study has shown that the lipofection and electroporation methods were more efficient at gene delivery to Mehr-80 cells. The quantity of DNA per transfection, reagent concentration, and incubation time were identified as essential factors for successful transfection in all of the studied methods. PMID:23799175
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
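A loose sketch of the segment-statistics idea (the published method fits trends between the segment statistics to locate background, rather than using the fixed percentile and multiplier below):

```python
import numpy as np

def sft_threshold(img, seg=16):
    """Sketch of Segment and Fit Thresholding: compute per-segment statistics,
    treat the dimmest segments as background, and derive a global threshold
    from the background statistics."""
    h, w = img.shape
    means, sds = [], []
    for i in range(0, h - seg + 1, seg):
        for j in range(0, w - seg + 1, seg):
            block = img[i:i + seg, j:j + seg]
            means.append(block.mean())
            sds.append(block.std())
    means, sds = np.array(means), np.array(sds)
    bg = means < np.percentile(means, 50)     # dim segments = background
    mu, sigma = means[bg].mean(), sds[bg].mean()
    return img > mu + 5 * sigma               # signal pixels

# Synthetic microarray-like image: noisy background plus one bright spot.
rng = np.random.default_rng(0)
img = rng.normal(10, 2, (256, 256))
img[100:110, 100:110] += 40
mask = sft_threshold(img)
print(mask.sum(), "signal pixels")            # roughly the 100 spot pixels
```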
Farouk, Abd-ElAziem; Batcha, Mohamed F; Greiner, Ralf; Salleh, Hamzah M; Salleh, Mohamad R; Sirajudin, Abdur R
2006-09-01
To develop a molecular technique that is fast and reliable in detecting porcine contamination or ingredients in foods. The method involved DNA amplification using polymerase chain reaction (PCR) technology: the sequence of a gene found uniquely in pork was identified and used to design specific primers for the PCR. The extraction of DNA was optimized with respect to PCR, and detection limits were established. The optimized method was then used to identify pork in food products obtained from various local hypermarkets. The latest results were confirmed in triplicate on 20 April 2006 at the Molecular Biology Laboratory, International Islamic University, Malaysia. The method was shown to be robust and reliable. Out of 30 food samples not expected to contain pork material, 3 were shown to be contaminated with pork material: two chocolates and one chicken nugget. Two food products labeled as halal tested positive for porcine ingredients, while another, which carried no halal logo but originated outside Malaysia and is exported to many Middle Eastern nations, also tested positive.
Optimizing Telehealth Strategies for Subspecialty Care: Recommendations from Rural Pediatricians
Demirci, Jill R.; Bogen, Debra L.; Mehrotra, Ateev; Miller, Elizabeth
2015-01-01
Abstract Background: Telehealth offers strategies to improve access to subspecialty care for children in rural communities. Rural pediatrician experiences and preferences regarding the use of these telehealth strategies for children's subspecialty care needs are not known. We elicited rural pediatrician experiences and preferences regarding different pediatric subspecialty telehealth strategies. Materials and Methods: Seventeen semistructured telephone interviews were conducted with rural pediatricians from 17 states within the United States. Interviewees were recruited by e-mails to a pediatric rural health listserv and to rural pediatricians identified through snowball sampling. Themes were identified through thematic analysis of interview transcripts. Institutional Review Board approval was obtained. Results: Rural pediatricians identified several telehealth strategies to improve access to subspecialty care, including physician access hotlines, remote electronic medical record access, electronic messaging systems, live video telemedicine, and telehealth triage systems. Rural pediatricians provided recommendations for optimizing the utility of each of these strategies based on their experiences with different systems. Rural pediatricians preferred specific telehealth strategies for specific clinical contexts, resulting in a proposed framework describing the complementary role of different telehealth strategies for pediatric subspecialty care. Finally, rural pediatricians identified additional benefits associated with the use of telehealth strategies and described a desire for telehealth systems that enhanced (rather than replaced) personal relationships between rural pediatricians and subspecialists. Conclusions: Rural pediatricians described complementary roles for different subspecialty care telehealth strategies. Additionally, rural pediatricians provided recommendations for optimizing individual telehealth strategies. Input from rural pediatricians will be crucial for optimizing specific telehealth strategies and designing effective telehealth systems. PMID:25919585
The need for efficient methods of screening chemicals for the potential to cause developmental neurotoxicity is paramount. We previously described optimization of an HCA assay for proliferation and apoptosis in ReNcell CX cells (ReN), identifying appropriate controls. Utility of ...
High throughput protein production screening
Beernink, Peter T [Walnut Creek, CA]; Coleman, Matthew A [Oakland, CA]; Segelke, Brent W [San Ramon, CA]
2009-09-08
Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic sequences or eukaryotic sequences, including human cDNAs, using PCR and IVT methods and detecting the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high throughput analysis of protein expression levels, interactions, and functional states.
1993-06-04
In a paper entitled "Understanding and Developing Combat Power" by Colonel Huba Wass de Czege, a method identifying analytical techniques for... reiterates several important doctrinal and theoretical requirements for the development of an optimal evaluation criteria model. Although... "Méthode de Raisonnement Tactique" (The Tactical Reasoning Method) is a version of concurrent COA analysis under conditions of uncertainty.
Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua
2017-07-01
Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initial method of adaptive inertia weight based on a chaotic map is proposed. To compare and prove that the swarm's convergence occurs before stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched using self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
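To make the chaotic-map modification concrete, below is a minimal sketch of a PSO loop whose inertia weight is driven by a logistic chaotic map, one common form of this idea; it is not the authors' implementation, and the Prandtl-Ishlinskii hysteresis model is replaced here by a generic least-squares identification objective on made-up data.

```python
import numpy as np

def chaotic_mpso(objective, dim, n_particles=30, iters=200,
                 lb=-5.0, ub=5.0, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO with a logistic-map chaotic inertia weight (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    z = 0.7  # state of the logistic chaotic map
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)           # logistic map iteration
        w = 0.4 + 0.5 * z                 # chaotic inertia weight in [0.4, 0.9]
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy identification problem: recover (a, b) in y = a*u + b*u**3 from samples.
u = np.linspace(-1, 1, 50)
y = 2.0 * u + 0.5 * u ** 3
err = lambda p: np.sum((y - (p[0] * u + p[1] * u ** 3)) ** 2)
params, loss = chaotic_mpso(err, dim=2)
```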
Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass
Paris, Michael T.; Lafleur, Benoit; Dubin, Joel A.
2017-01-01
Background Ultrasound is a non‐invasive and readily available tool that can be prospectively applied at the bedside to assess muscle mass in clinical settings. The four‐site protocol, which images two anatomical sites on each quadriceps, may be a viable bedside method, but its ability to predict musculature has not been compared against whole‐body reference methods. Our primary objectives were to (i) compare the four‐site protocol's ability to predict appendicular lean tissue mass from dual‐energy X‐ray absorptiometry; (ii) optimize the predictability of the four‐site protocol with additional anatomical muscle thicknesses and easily obtained covariates; and (iii) assess the ability of the optimized protocol to identify individuals with low lean tissue mass. Methods This observational cross‐sectional study recruited 96 university- and community-dwelling adults. Participants underwent ultrasound scans for assessment of muscle thickness and whole‐body dual‐energy X‐ray absorptiometry scans for assessment of appendicular lean tissue. Ultrasound protocols included (i) the nine‐site protocol, which images nine anterior and posterior muscle groups in supine and prone positions, and (ii) the four‐site protocol, which images two anterior sites on each quadriceps muscle group in a supine position. Results The four‐site protocol was strongly associated (R² = 0.72) with appendicular lean tissue mass, but Bland–Altman analysis displayed wide limits of agreement (−5.67, 5.67 kg). Incorporating the anterior upper arm muscle thickness, and covariates age and sex, alongside the four‐site protocol, improved the association (R² = 0.91) with appendicular lean tissue and displayed narrower limits of agreement (−3.18, 3.18 kg). The optimized protocol demonstrated a strong ability to identify low lean tissue mass (area under the curve = 0.89). Conclusions The four‐site protocol can be improved with the addition of the anterior upper arm muscle thickness, sex, and age when predicting appendicular lean tissue mass. This optimized protocol can accurately identify low lean tissue mass, while still being easily applied at the bedside. PMID:28722298
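For readers who want the prediction-and-agreement workflow in code, the sketch below fits a linear model from hypothetical ultrasound thicknesses plus age and sex and computes Bland–Altman 95% limits of agreement; all data and coefficients are synthetic stand-ins, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: four-site mean thickness, anterior upper-arm thickness,
# age, and sex (0/1); y is the DXA appendicular lean tissue mass in kg.
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(3.5, 0.6, 96), rng.normal(2.8, 0.5, 96),
                     rng.uniform(20, 80, 96), rng.integers(0, 2, 96).astype(float)])
y = 5.0 * X[:, 0] + 2.0 * X[:, 1] - 0.05 * X[:, 2] + 4.0 * X[:, 3] + rng.normal(0, 1.5, 96)

model = LinearRegression().fit(X, y)
pred = model.predict(X)

# Bland-Altman 95% limits of agreement between predicted and reference values.
diff = pred - y
half_width = 1.96 * diff.std(ddof=1)
loa = (diff.mean() - half_width, diff.mean() + half_width)
print(f"R^2 = {model.score(X, y):.2f}, LoA = ({loa[0]:.2f}, {loa[1]:.2f}) kg")
```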
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-16
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
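The core building block named here, Richardson-Lucy deconvolution, iteratively updates an estimate by re-blurring it and comparing with the observed image. A minimal sketch of the standard iteration follows; the paper's contribution, applying RL-based deconvolution within two separate 2D-SIM filtering steps, is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Standard Richardson-Lucy iteration for a known point spread function."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)   # compare data with re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur a synthetic image with a separable window, then restore it.
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 1.0
psf = np.outer(np.hanning(9), np.hanning(9))
psf /= psf.sum()
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```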
Back analysis of geomechanical parameters in underground engineering using artificial bee colony.
Zhu, Changxing; Zhao, Hongbo; Zhao, Ming
2014-01-01
Accurate geomechanical parameters are critical in tunneling excavation, design, and supporting. In this paper, a displacements back analysis based on artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC was used as global optimal algorithm to search the unknown geomechanical parameters for the problem with analytical solution. To the problem without analytical solution, optimal back analysis is time-consuming, and least square support vector machine (LSSVM) was used to build the relationship between unknown geomechanical parameters and displacement and improve the efficiency of back analysis. The proposed method was applied to a tunnel with analytical solution and a tunnel without analytical solution. The results show the proposed method is feasible.
After the strike: using facilitation in a residency training program.
Andres, D; Hamoline, D; Sanders, M; Anderson, J
1998-03-10
Methods of alternative dispute resolution, including facilitation, can be used to identify and resolve areas of conflict. Facilitation was used by the University of Saskatchewan's Department of Family Medicine (Saskatoon division) after the strike by residents in July and August 1995 so as to allow optimal use of the remaining educational time. Through facilitation, experiences of the strike and areas of potential conflict were explored. Participants had a broad range of responses to the strike. Specific coping strategies were developed to deal with identified concerns. Although outcomes were not measured formally, levels of trust improved and collegial relationships were restored. Because so many changes occur in health care and medical education, conflict inevitably arises. Facilitation offers one way of dealing with change constructively, thereby making possible the optimal use of educational time.
Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali
2017-09-01
In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems having saturated control signal. By employing two Neural Networks (NNs), the solution of Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of NNs' weights, identification error, and system states is guaranteed using Lyapunov's direct method. Finally, simulation results are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Singular perturbation techniques for real time aircraft trajectory optimization and control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1982-01-01
The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.
Optimized mixed Markov models for motif identification
Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping
2006-01-01
Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
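For readers unfamiliar with Markov motif models, the sketch below trains a position-specific first-order Markov model on aligned motif instances and scores a candidate by log-likelihood; this illustrates the kind of dependency structure such models capture, not OMiMa's mixture or its automatic model selection.

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def train_first_order(seqs, pseudo=1.0):
    """Position-specific first-order Markov model for fixed-length motifs."""
    L = len(seqs[0])
    start = np.full(4, pseudo)
    trans = np.full((L - 1, 4, 4), pseudo)   # P(x_{i+1} | x_i) at each position
    for s in seqs:
        start[IDX[s[0]]] += 1
        for i in range(L - 1):
            trans[i, IDX[s[i]], IDX[s[i + 1]]] += 1
    start /= start.sum()
    trans /= trans.sum(axis=2, keepdims=True)
    return start, trans

def log_likelihood(seq, start, trans):
    ll = np.log(start[IDX[seq[0]]])
    for i in range(len(seq) - 1):
        ll += np.log(trans[i, IDX[seq[i]], IDX[seq[i + 1]]])
    return ll

motifs = ["ACGTGA", "ACGTCA", "ACGAGA", "TCGTGA"]   # toy aligned instances
start, trans = train_first_order(motifs)
print(log_likelihood("ACGTGA", start, trans))
```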
Gagnon, Marie-Pierre; Légaré, France; Fortin, Jean-Paul; Lamothe, Lise; Labrecque, Michel; Duplantie, Julie
2008-01-01
Background E-health is increasingly valued for supporting: 1) access to quality health care services for all citizens; 2) information flow and exchange; 3) integrated health care services; and 4) interprofessional collaboration. Nevertheless, several questions remain on the factors allowing an optimal integration of e-health in health care policies, organisations and practices. An evidence-based integrated strategy would maximise the efficacy and efficiency of e-health implementation. However, decisions regarding e-health applications are usually not evidence-based, which can lead to a sub-optimal use of these technologies. This study aims at understanding factors influencing the application of scientific knowledge for an optimal implementation of e-health in the health care system. Methods A three-year multi-method study is being conducted in the Province of Quebec (Canada). Decision-making at each decisional level (political, organisational and clinical) is analysed based on specific approaches. At the political level, critical incidents analysis is being used. This method will identify how decisions regarding the implementation of e-health could be influenced or not by scientific knowledge. Then, interviews with key decision-makers will look at how knowledge was actually used to support their decisions, and what factors influenced its use. At the organisational level, e-health projects are being analysed as case studies in order to explore the use of scientific knowledge to support decision-making during the implementation of the technology. Interviews with promoters, managers and clinicians will be carried out in order to identify factors influencing the production and application of scientific knowledge. At the clinical level, questionnaires are being distributed to clinicians involved in e-health projects in order to analyse factors influencing knowledge application in their decision-making. Finally, a triangulation of the results will be done using mixed methodologies to allow a transversal analysis of the results at each of the decisional levels. Results This study will identify factors influencing the use of scientific evidence and other types of knowledge by decision-makers involved in planning, financing, implementing and evaluating e-health projects. Conclusion These results will be highly relevant to inform decision-makers who wish to optimise the implementation of e-health in the Quebec health care system. This study is extremely relevant given the context of major transformations in the health care system where e-health becomes a must. PMID:18435853
2013-01-01
Background Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729
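The inner MOMA problem named above is a quadratic program: minimize the Euclidean distance from the wild-type flux distribution subject to steady-state and bound (knockout) constraints. A minimal sketch on a toy network follows, using a general-purpose solver rather than the paper's adaptive piecewise linearization; the stoichiometric matrix, wild-type fluxes, and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy network: metabolites (rows) x reactions (columns).
S = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
v_wt = np.array([10.0, 6.0, 4.0, 6.0])        # wild-type flux distribution
bounds = [(0, 10), (0, 10), (0, 0), (0, 10)]  # knockout: reaction 3 forced to zero

# MOMA inner problem: stay as close as possible to v_wt at steady state (S v = 0).
res = minimize(lambda v: np.sum((v - v_wt) ** 2),
               x0=np.array([6.0, 6.0, 0.0, 6.0]),
               constraints={"type": "eq", "fun": lambda v: S @ v},
               bounds=bounds, method="SLSQP")
print(res.x)  # MOMA-predicted mutant fluxes (here all active fluxes settle at 22/3)
```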
Ho, Steven C L; Yang, Yuansheng
2014-08-01
Promoters are essential on plasmid vectors to initiate transcription of the transgenes when generating therapeutic recombinant proteins expressing mammalian cell lines. High and sustained levels of gene expression are desired during therapeutic protein production while gene expression is useful for cell engineering. As many finely controlled promoters exhibit cell and product specificity, new promoters need to be identified, optimized and carefully evaluated before use. Suitable promoters can be identified using techniques ranging from simple molecular biology methods to modern high-throughput omics screenings. Promoter engineering is often required after identification to either obtain high and sustained expression or to provide a wider range of gene expression. This review discusses some of the available methods to identify and engineer promoters for therapeutic recombinant protein expression in mammalian cells.
Zhang, Bin; He, Xin; Ouyang, Fusheng; Gu, Dongsheng; Dong, Yuhao; Zhang, Lu; Mo, Xiaokai; Huang, Wenhui; Tian, Jie; Zhang, Shuixing
2017-09-10
We aimed to identify optimal machine-learning methods for radiomics-based prediction of local failure and distant failure in advanced nasopharyngeal carcinoma (NPC). We enrolled 110 patients with advanced NPC. A total of 970 radiomic features were extracted from MRI images for each patient. Six feature selection methods and nine classification methods were evaluated in terms of their performance. We applied 10-fold cross-validation as the criterion for feature selection and classification. We repeated each combination 50 times to obtain the mean area under the curve (AUC) and test error. We observed that the combination method Random Forest (RF) + RF (AUC, 0.8464 ± 0.0069; test error, 0.3135 ± 0.0088) had the highest prognostic performance, followed by RF + Adaptive Boosting (AdaBoost) (AUC, 0.8204 ± 0.0095; test error, 0.3384 ± 0.0097), and Sure Independence Screening (SIS) + Linear Support Vector Machines (LSVM) (AUC, 0.7883 ± 0.0096; test error, 0.3985 ± 0.0100). Our radiomics study identified optimal machine-learning methods for the radiomics-based prediction of local failure and distant failure in advanced NPC, which could enhance the applications of radiomics in precision oncology and clinical practice. Copyright © 2017 Elsevier B.V. All rights reserved.
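As an illustration of the winning "RF + RF" combination, the sketch below wires random-forest feature selection and a random-forest classifier into a single pipeline (so selection happens inside each fold) and evaluates it with repeated 10-fold cross-validated AUC; the data are random stand-ins with the study's dimensions, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Hypothetical stand-in for the 110-patient, 970-feature radiomics matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(110, 970))
y = rng.integers(0, 2, 110)

# RF-based feature selection followed by an RF classifier ("RF + RF").
pipe = Pipeline([
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
aucs = [cross_val_score(pipe, X, y, scoring="roc_auc",
                        cv=StratifiedKFold(10, shuffle=True, random_state=r)).mean()
        for r in range(5)]  # the paper repeats 50 times; 5 here for brevity
print(np.mean(aucs))
```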
National survey on management of weight reduction in PCOS women in the United Kingdom.
Sharma, Aarti; Walker, Dawn-Marie; Atiomo, William
2010-10-01
To identify the most commonly used methods for weight reduction in women with polycystic ovarian syndrome (PCOS) utilized by obstetricians and gynaecologists in the United Kingdom (UK). Permission was sought from the Royal College of Obstetricians and Gynaecologists (RCOG) to conduct an electronic survey of all consultants practising in the UK. The questionnaire was anonymous and an electronic link was sent via email to the 1140 consultants whose details were provided by the RCOG. A 27-item questionnaire was developed. The variables evaluated were first-line methods of weight reduction used, proportion of women with PCOS seen that were obese, whether the patients had tried other weight reduction methods before seeking help, the optimal dietary advice and optimal composition, the optimal duration and frequency of exercise suggested, BMI used for suggesting weight loss, percentage of women in whom weight loss worked, length of time allowed prior to suggesting another method, methods considered most effective by patients, use of metformin for weight loss, criteria used for prescribing metformin, first-line anti-obesity drugs preferred if any, second- and third-line methods used, referral to other specialists and criteria for referral for bariatric surgery. Responses were categorical and are reported as proportions. One hundred and seven (9.4%) consultants responded to the questionnaire. One hundred and four (97%) provided advice on diet and 101 (94%) advice on exercise as their first-line strategy for weight management. Fifty-one (47.7%) stated that they provided specific information on an optimal dietary intake, 53 (49.9%) on the optimal dietary composition and 61 (57%) on the optimal duration and frequency of exercise per week. The commonest second-line methods used were anti-obesity drugs and metformin and the most popular third-line management options were anti-obesity drugs and bariatric surgery. The results suggest that the information provided to women with PCOS on weight management is variable, and highlight the need for specific guidelines and further research on weight management in women with this condition. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-
Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees. An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment quality. The performance of our automated method was evaluated by comparing the automatically identified best-quality (AI-BQ) segments selected by the computer to those selected by the observers. Results: For the 20 test cases, 254 groups of corresponding vessel segments were identified after multiple phase registration and recursive matching. The AI-BQ segments agreed with the radiologist's top 2 ranked segments in 78.3% of the 254 groups (Cohen's kappa 0.60), and with the 4 nonradiologist observers in 76.8%, 84.3%, 83.9%, and 85.8% of the 254 groups. In addition, 89.4% of the AI-BQ segments agreed with at least two observers' top 2 rankings, and 96.5% agreed with at least one observer's top 2 rankings. In comparison, agreement between the four observers' top ranked segment and the radiologist's top 2 ranked segments was 79.9%, 80.7%, 82.3%, and 76.8%, respectively, with kappa values ranging from 0.56 to 0.68. Conclusions: The performance of our automated method for selecting the best-quality coronary segments from a multiple-phase cCTA acquisition was comparable to the selection made by human observers. This study demonstrates the potential usefulness of the automated method in clinical practice, enabling interpreting physicians to fully utilize the best available information in cCTA for diagnosis of coronary disease, without requiring manual search through the multiple phases and minimizing the variability in image phase selection for evaluation of coronary artery segments across the diversity of human readers with variations in expertise.
Multiplex detection of agricultural pathogens
McBride, Mary Teresa; Slezak, Thomas Richard; Messenger, Sharon Lee
2010-09-14
Described are kits and methods useful for detection of seven agricultural pathogens (BPSV, BHV, BVD, FMDV, BTV, SVD, and VESV) in a sample. Genomic sequence information from 7 agricultural pathogens was analyzed to identify signature sequences, e.g., polynucleotide sequences useful for confirming the presence or absence of a pathogen in a sample. Primer and probe sets were designed and optimized for use in a PCR-based, multiplexed Luminex assay to successfully identify the presence or absence of pathogens in a sample.
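Signature identification of this kind starts from subsequences present in the target genome and absent from all backgrounds. The toy sketch below finds such unique k-mers with set arithmetic; real pipelines also screen near-matches, melting temperature, and primer constraints, and the sequences here are made up.

```python
def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def signature_kmers(target, backgrounds, k=20):
    """k-mers unique to the target genome: candidate signature regions."""
    sig = kmers(target, k)
    for g in backgrounds:
        sig -= kmers(g, k)
    return sig

# Toy example with short stand-in "genomes".
target = "ATGCGTACGTTAGCGGATCCGATATCGGGCTTAA"
others = ["ATGCGTACGTTAGCGGATCC", "GATATCGGGCTTAACCGTAA"]
print(sorted(signature_kmers(target, others, k=10))[:3])
```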
Occurrence of mycotoxins and yeasts and moulds identification in corn silages in tropical climate.
Carvalho, B F; Ávila, C L S; Krempser, P M; Batista, L R; Pereira, M N; Schwan, R F
2016-05-01
This study aimed to identify yeasts and moulds and to detect mycotoxins in corn silages in southern Minas Gerais, Brazil. Corn silages from 36 farms were sampled to analyse dry matter, crude protein, ether extract, ash, neutral detergent fibre, nonfibre carbohydrates and mycotoxins contents, yeasts and moulds population, pH and temperature values. The mycotoxins found at high frequency were aflatoxin, in 77·7% of analysed samples, ochratoxin (33·3%) and zearalenone (22·2%). There was no significant correlation between the mycotoxin concentration and the presence of moulds. The pH was negatively correlated with ochratoxin concentration. Aspergillus fumigatus was identified in all silages that presented growth of moulds. Ten different yeast species were identified using the culture-dependent method: Candida diversa, Candida ethanolica, Candida rugosa, Issatchenkia orientalis, Kluyveromyces marxianus, Pichia manshurica, Pichia membranifaciens, Saccharomyces cerevisiae, Trichosporon asahii and Trichosporon japonicum. Another six yeast species were identified using the culture-independent method. A high mycotoxin contamination rate (91·6% of the analysed silages) was observed. The results indicated that conventional culturing and PCR-DGGE should be combined to optimally describe the microbiota associated with corn silage. This study provides information about corn silage fermentation dynamics, and our findings are relevant to the optimization of this silage fermentation. © 2016 The Society for Applied Microbiology.
Dinarvand, Mojdeh; Rezaee, Malahat; Masomian, Malihe; Jazayeri, Seyed Davoud; Zareian, Mohsen; Abbasi, Sahar; Ariff, Arbakariya B.
2013-01-01
This study addressed the extraction of intracellular inulinase (exo- and endoinulinase) and invertase, as well as the optimization of medium composition for maximum production of intra- and extracellular enzymes from Aspergillus niger ATCC 20611. Of two different methods for extraction of intracellular enzymes, the ultrasonic method was found more effective. Response surface methodology (RSM) with a five-variable, three-level central composite design (CCD) was employed to optimize the medium composition. The effect of five main reaction parameters, including sucrose, yeast extract, NaNO3, Zn+2, and Triton X-100, on the production of enzymes was analyzed. A modified quadratic model was fitted to the data with a coefficient of determination (R²) greater than 0.90 for all responses. Intra- and extracellular inulinase and invertase production increased by 8.4 to 16 times in the medium optimized by RSM (10% (w/v) sucrose, 2.5% (w/v) yeast extract, 2% (w/v) NaNO3, 1.5 mM (v/v) Zn+2, and 1% (v/v) Triton X-100), around 1.2 to 1.3 times greater than in the medium optimized one-factor-at-a-time. The results of this bioprocess optimization can be useful in scale-up fermentation and the food industry. PMID:24151605
Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.
Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen
2017-02-01
Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would however like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next we show its superiority by comparing its performance against other comparable methods on a medium sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but is also orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used in identifying knockouts which will force optimal desired behaviors in large and genome scale metabolic networks. It will be even more useful as larger metabolic models of industrially relevant organisms become available.
Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon
2011-04-04
A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.
Raisch, D W
1990-04-01
The purpose of this literature review is to develop a model of methods to be used to influence prescribing. Four bodies of literature were identified as being important for developing the model: (1) Theoretical prescribing models furnish information concerning factors that affect prescribing and how prescribing decisions are made. (2) Theories of persuasion provide insight into important components of educational communications. (3) Research articles of programs to improve prescribing identify types of programs that have been found to be successful. (4) Theories of human inference describe how judgments are formulated and identify errors in judgment that can play a role in prescribing. This review is presented in two parts. This article reviews prescribing models, theories of persuasion, studies of administrative programs to control prescribing, and sub-optimally designed studies of educational efforts to influence drug prescribing.
Innovative model-based flow rate optimization for vanadium redox flow batteries
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2016-11-01
In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The obtained results show that efficiency is increased by up to 1.2 percentage points and discharge capacity by up to 1.0 kWh, or 5.4%. Detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
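For reference, the conventional baseline mentioned above couples Faraday's first law to a constant flow factor. A hedged sketch of that heuristic follows; the one-electron assumption, the (1 − SoC) availability term, and the flow factor value are common textbook choices, not taken from this paper, whose point is precisely to replace the constant factor with model-based optimization.

```python
F = 96485.0  # Faraday constant, C/mol

def conventional_flow_rate(current_a, n_cells, conc_mol_per_l, soc,
                           flow_factor=7.0, charging=True):
    """Electrolyte flow (L/s) from Faraday's first law scaled by a flow factor.
    Assumes a one-electron vanadium reaction; the usable reactant concentration
    shrinks as (1 - SoC) when charging and as SoC when discharging."""
    usable = max((1.0 - soc) if charging else soc, 1e-3)  # avoid SoC-limit blowup
    q_stoich = n_cells * current_a / (F * conc_mol_per_l * usable)
    return flow_factor * q_stoich

# Example: 10-cell stack at 100 A, 1.6 mol/L vanadium, 50% SoC -> ~0.09 L/s.
print(conventional_flow_rate(100.0, 10, 1.6, 0.5))
```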
Study of optimal laser parameters for cutting QFN packages by Taguchi's matrix method
NASA Astrophysics Data System (ADS)
Li, Chen-Hao; Tsai, Ming-Jong; Yang, Ciann-Dong
2007-06-01
This paper reports the study of optimal laser parameters for cutting QFN (Quad Flat No-lead) packages by using a diode pumped solid-state laser system (DPSSL). The QFN cutting path includes two different materials, which are the encapsulated epoxy and a copper lead frame substrate. The Taguchi experimental method with the L9(3⁴) orthogonal array is employed to obtain optimal combinatorial parameters. A quantified mechanism was proposed for examining the laser cutting quality of a QFN package. The influences of the various factors such as laser current, laser frequency, and cutting speed on the laser cutting quality are also examined. From the experimental results, the factors affecting the cutting quality, in order of decreasing significance, are found to be (a) laser frequency, (b) cutting speed, and (c) laser driving current. The optimal parameters were obtained at a laser frequency of 2 kHz, a cutting speed of 2 mm/s, and a driving current of 29 A. Besides identifying this sequence of dominance, the matrix experiment also determines the best level for each control factor. The verification experiment confirms that the application of laser cutting technology to QFN is very successful when using the optimal laser parameters predicted from the matrix experiments.
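To show the mechanics of such a matrix experiment, the sketch below evaluates larger-the-better signal-to-noise ratios over the standard L9(3⁴) array and picks the best level of each factor from the main effects; the response values are hypothetical, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels coded 0..2); columns are factors.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Hypothetical cut-quality scores for the nine runs (larger is better).
y = np.array([62.0, 70.0, 75.0, 68.0, 80.0, 74.0, 71.0, 77.0, 83.0])
sn = 10.0 * np.log10(y ** 2)  # larger-the-better S/N for a single replicate

# Main effect: mean S/N at each level of each factor; best level maximizes it.
for fac in range(L9.shape[1]):
    means = [sn[L9[:, fac] == lvl].mean() for lvl in range(3)]
    print(f"factor {fac}: level means {np.round(means, 2)}, "
          f"best level {int(np.argmax(means))}")
```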
Optimizing the response to surveillance alerts in automated surveillance systems.
Izadi, Masoumeh; Buckeridge, David L
2011-02-28
Although much research effort has been directed toward refining algorithms for disease outbreak alerting, considerably less attention has been given to the response to alerts generated from statistical detection algorithms. Given the inherent inaccuracy in alerting, it is imperative to develop methods that help public health personnel identify optimal policies in response to alerts. This study evaluates the application of dynamic decision making models to the problem of responding to outbreak detection methods, using anthrax surveillance as an example. Adaptive optimization through approximate dynamic programming is used to generate a policy for decision making following outbreak detection. We investigate the degree to which the model can tolerate noise theoretically, in order to keep near optimal behavior. We also evaluate the policy from our model empirically and compare it with current approaches in routine public health practice for investigating alerts. Timeliness of outbreak confirmation and total costs associated with the decisions made are used as performance measures. Using our approach, on average, 80 per cent of outbreaks were confirmed prior to the fifth day post-attack, with considerably less cost than response strategies currently in use. Experimental results are also provided to illustrate the robustness of the adaptive optimization approach and to show the realization of the derived error bounds in practice. Copyright © 2011 John Wiley & Sons, Ltd.
A predictive machine learning approach for microstructure optimization and materials design
Liu, Ruoqian; Kumar, Abhishek; Chen, Zhengzhang; ...
2015-06-23
This paper addresses an important materials engineering question: How can one identify the complete space (or as much of it as possible) of microstructures that are theoretically predicted to yield the desired combination of properties demanded by a selected application? We present a problem involving design of magnetoelastic Fe-Ga alloy microstructure for enhanced elastic, plastic and magnetostrictive properties. While theoretical models for computing properties given the microstructure are known for this alloy, inversion of these relationships to obtain microstructures that lead to desired properties is challenging, primarily due to the high dimensionality of microstructure space, multi-objective design requirement and non-uniqueness of solutions. These challenges render traditional search-based optimization methods incompetent in terms of both searching efficiency and result optimality. In this paper, a route to address these challenges using a machine learning methodology is proposed. A systematic framework consisting of random data generation, feature selection and classification algorithms is developed. In conclusion, experiments with five design problems that involve identification of microstructures that satisfy both linear and nonlinear property constraints show that our framework outperforms traditional optimization methods with the average running time reduced by as much as 80% and with optimality that would not be achieved otherwise.
Carroll, Sean Michael; Chubiz, Lon M.; Agashe, Deepa; Marx, Christopher J.
2015-01-01
Bioengineering holds great promise to provide fast and efficient biocatalysts for methanol-based biotechnology, but necessitates proven methods to optimize physiology in engineered strains. Here, we highlight experimental evolution as an effective means for optimizing an engineered Methylobacterium extorquens AM1. Replacement of the native formaldehyde oxidation pathway with a functional analog substantially decreased growth in an engineered Methylobacterium, but growth rapidly recovered after six hundred generations of evolution on methanol. We used whole-genome sequencing to identify the basis of adaptation in eight replicate evolved strains, and examined genomic changes in light of other growth and physiological data. We observed great variety in the numbers and types of mutations that occurred, including instances of parallel mutations at targets that may have been “rationalized” by the bioengineer, plus other “illogical” mutations that demonstrate the ability of evolution to expose unforeseen optimization solutions. Notably, we investigated mutations to RNA polymerase, which provided a massive growth benefit but are linked to highly aberrant transcriptional profiles. Overall, we highlight the power of experimental evolution to present genetic and physiological solutions for strain optimization, particularly in systems where the challenges of engineering are too many or too difficult to overcome via traditional engineering methods. PMID:27682084
Hart, Lucas; Mackenzie, Ashley; Purcell, Maureen; Thompson, Rachel L.; Hershberger, Paul
2017-01-01
Methods for a plaque neutralization test (PNT) were optimized for the detection and quantification of viral hemorrhagic septicemia virus (VHSV) neutralizing activity in the plasma of Pacific Herring Clupea pallasii. The PNT was complement dependent, as neutralizing activity was attenuated by heat inactivation; further, neutralizing activity was mostly restored by the addition of exogenous complement from specific-pathogen-free Pacific Herring. Optimal methods included the overnight incubation of VHSV aliquots in serial dilutions (starting at 1:16) of whole test plasma containing endogenous complement. The resulting viral titers were then enumerated using a viral plaque assay in 96-well microplates. Serum neutralizing activity was virus-specific as plasma from viral hemorrhagic septicemia (VHS) survivors demonstrated only negligible reactivity to infectious hematopoietic necrosis virus, a closely related rhabdovirus. Among Pacific Herring that survived VHSV exposure, neutralizing activity was detected in the plasma as early as 37 d postexposure and peaked at approximately 64 d postexposure. The onset of neutralizing activity was slightly delayed in fish reared at 7.4°C relative to those in warmer temperatures (9.9°C and 13.1°C); however, neutralizing activity persisted for at least 345 d postexposure in all temperature treatments. It is anticipated that this novel ability to assess VHSV neutralizing activity in Pacific Herring will enable retrospective comparisons between prior VHS infections and year-class recruitment failures. Additionally, the optimized PNT could be employed as a forecasting tool capable of identifying the potential for future VHS epizootics in wild Pacific Herring populations.
Detecting event-related changes in organizational networks using optimized neural network models.
Li, Ze; Sun, Duoyong; Zhu, Renqi; Lin, Zihan
2017-01-01
Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers the correlation and could be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes could be effectively useful in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods using two cases. The results suggested that the proposed method could identify organizational events based on a correlation between the organizational networks and events. The results also suggested that the proposed method not only has a higher precision but also has a better robustness than the previously used techniques.
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
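The side-spectrum fitting idea can be illustrated compactly: simulate one modal coordinate, estimate its PSD, and least-squares fit a single-degree-of-freedom PSD model to one side of the resonance peak only. The sketch below does this with a generic curve fitter; the blind source separation, pattern-search, and clustering stages of the paper's pipeline are omitted, and all numbers are illustrative.

```python
import numpy as np
from scipy import signal
from scipy.optimize import curve_fit

def sdof_psd(f, fn, zeta, s0):
    # One-sided PSD shape of a SDOF modal coordinate under broadband excitation.
    r = (f / fn) ** 2
    return s0 / ((1.0 - r) ** 2 + (2.0 * zeta * np.sqrt(r)) ** 2)

rng = np.random.default_rng(0)
fs, fn_true, zeta_true = 50.0, 2.0, 0.01
t = np.arange(0, 600.0, 1.0 / fs)
wn = 2.0 * np.pi * fn_true
sys = signal.TransferFunction([wn ** 2], [1.0, 2.0 * zeta_true * wn, wn ** 2])
_, x, _ = signal.lsim(sys, rng.normal(size=t.size), t)  # simulated modal response

f, pxx = signal.welch(x, fs=fs, nperseg=4096)
peak = int(np.argmax(pxx))
left = slice(max(peak - 40, 1), peak + 1)           # left-side spectrum only
p0 = [f[peak], 0.02, pxx[peak] * (2 * 0.02) ** 2]   # rough starting guesses
popt, _ = curve_fit(sdof_psd, f[left], pxx[left], p0=p0,
                    bounds=([0.1, 1e-4, 0.0], [10.0, 0.5, np.inf]))
print(f"fn ~ {popt[0]:.3f} Hz, zeta ~ {popt[1]:.4f}")
```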
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
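The poling idea itself fits in a few lines: re-solve the FBA problem repeatedly while adding a repulsion penalty around every flux vector already found. The sketch below uses a toy three-metabolite network, a Gaussian-style repulsion term, and a general nonlinear solver; the authors' exact penalty form and networks differ, and the penalty weight here is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Toy network: A is taken up (v1), split to B (v2) and C (v3), then exported (v4, v5).
S = np.array([[1.0, -1.0, -1.0,  0.0,  0.0],
              [0.0,  1.0,  0.0, -1.0,  0.0],
              [0.0,  0.0,  1.0,  0.0, -1.0]])
c = np.array([0.0, 0.0, 0.0, 1.0, 0.0])   # FBA objective: maximize export of B (v4)
bounds = [(0.0, 10.0)] * 5
cons = {"type": "eq", "fun": lambda v: S @ v}

solutions, lam = [], 50.0
for _ in range(5):
    def obj(v):
        # Linear FBA objective plus a poling penalty repelling past solutions.
        poling = sum(np.exp(-np.sum((v - s) ** 2)) for s in solutions)
        return -(c @ v) + lam * poling
    res = minimize(obj, x0=np.full(5, 1.0), bounds=bounds,
                   constraints=cons, method="SLSQP")
    solutions.append(res.x)
print(np.round(np.array(solutions), 2))  # a small characteristic set of flux vectors
```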
Comparison of Sample and Detection Quantification Methods for Salmonella Enterica from Produce
NASA Technical Reports Server (NTRS)
Hummerick, M. P.; Khodadad, C.; Richards, J. T.; Dixit, A.; Spencer, L. M.; Larson, B.; Parrish, C., II; Birmele, M.; Wheeler, Raymond
2014-01-01
The purpose of this study was to identify and optimize fast, reliable sampling and detection methods for pathogens that may be present on produce grown in small vegetable production units on the International Space Station (ISS), i.e., in a field setting. Microbiological testing is necessary before astronauts are allowed to consume produce grown on ISS, where currently two vegetable production units, Lada and Veggie, are deployed.
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO based approach have been compared to those of other well-known optimization methods such as Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven superior to that obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among the other relevant techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
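Whatever the metaheuristic (CSO, RGA, PSO, or DE), the fitness function is the same: the deviation of a candidate tap vector's frequency response from the ideal brick-wall response. A minimal sketch of such a low-pass fitness function follows; the grid size, cutoff, and error norm are illustrative choices.

```python
import numpy as np
from scipy.signal import freqz

def lowpass_fitness(h, cutoff=0.25, n_grid=256):
    """Sum of squared deviations between |H(e^jw)| of candidate taps h and an
    ideal low-pass response with normalized cutoff (1.0 = Nyquist)."""
    w, H = freqz(h, worN=n_grid)
    ideal = (w / np.pi <= cutoff).astype(float)
    return np.sum((np.abs(H) - ideal) ** 2)

# Any swarm/evolutionary optimizer minimizes this over the tap vector h;
# e.g., a 20-tap moving-average filter scores worse than a windowed-sinc design.
taps = np.ones(20) / 20.0
print(lowpass_fitness(taps))
```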
Optimizing The DSSC Fabrication Process Using Lean Six Sigma
NASA Astrophysics Data System (ADS)
Fauss, Brian
Alternative energy technologies must become more cost effective to achieve grid parity with fossil fuels. Dye sensitized solar cells (DSSCs) are an innovative third generation photovoltaic technology, which is demonstrating tremendous potential to become a revolutionary technology due to recent breakthroughs in cost of fabrication. The study here focused on quality improvement measures undertaken to improve fabrication of DSSCs and enhance process efficiency and effectiveness. Several quality improvement methods were implemented to optimize the seven individual steps of the DSSC fabrication process. Lean Manufacturing's 5S method successfully increased efficiency in all of the processes. Six Sigma's DMAIC methodology was used to identify and eliminate each of the root causes of defects in the critical titanium dioxide deposition process. These optimizations resulted in the following significant improvements in the production process: (1) fabrication time of the DSSCs was reduced by 54%; (2) fabrication procedures were improved to the extent that all critical defects in the process were eliminated; (3) the yield of functioning DSSCs increased from 17% to 90%.
A computational study of thrust augmenting ejectors based on a viscous-inviscid approach
NASA Technical Reports Server (NTRS)
Lund, Thomas S.; Tavella, Domingo A.; Roberts, Leonard
1987-01-01
A viscous-inviscid interaction technique is advocated as both an efficient and accurate means of predicting the performance of two-dimensional thrust augmenting ejectors. The flow field is subdivided into a viscous region that contains the turbulent jet and an inviscid region that contains the ambient fluid drawn into the device. The inviscid region is computed with a higher-order panel method, while an integral method is used for the description of the viscous part. The strong viscous-inviscid interaction present within the ejector is simulated in an iterative process where the two regions influence each other en route to a converged solution. The model is applied to a variety of parametric and optimization studies involving ejectors having either one or two primary jets. The effects of nozzle placement, inlet and diffuser shape, free stream speed, and ejector length are investigated. The inlet shape for single jet ejectors is optimized for various free stream speeds and Reynolds numbers. Optimal nozzle tilt and location are identified for various dual-ejector configurations.
Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.
Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen
2018-05-01
In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for knowledge of the system dynamics or for an identifier. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. The explicit updating rules for these three neural networks are provided based on the data generated during the online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.
A Global Optimization Methodology for Rocket Propulsion Applications
NASA Technical Reports Server (NTRS)
2001-01-01
While the response surface (RS) method is an effective method in engineering optimization, its accuracy is often affected by the use of a limited number of data points for model construction. In this chapter, the issues related to the accuracy of RS approximations and possible ways of improving the RS model using appropriate treatments, including the iteratively re-weighted least squares (IRLS) technique and radial-basis neural networks, are investigated. A main interest is to identify ways of offering the RS method added capabilities so that accuracy can be selectively improved in regions of importance. An example is to target the high-efficiency region of a fluid machinery design space so that the predictive power of the RS can be maximized where it matters most. Analytical models based on polynomials, with controlled levels of noise, are used to assess the performance of these techniques.
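As an illustration of one of the treatments above, the sketch below fits a quadratic response surface by iteratively re-weighted least squares, down-weighting large residuals with Huber-style weights so that a few noisy points distort the surface less. The test function, weighting rule, and tolerances are illustrative assumptions, not the report's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=60)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0.0, 0.1, size=60)
y[::10] += 2.0                       # inject a few outliers

X = np.column_stack([np.ones_like(x), x, x**2])   # quadratic RS basis
w = np.ones_like(y)                               # start with ordinary LS
for _ in range(20):
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    r = y - X @ beta
    c = 1.345 * np.median(np.abs(r)) / 0.6745     # Huber tuning constant
    w_new = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
    if np.max(np.abs(w_new - w)) < 1e-8:          # weights have converged
        break
    w = w_new
print("IRLS coefficients:", np.round(beta, 3))    # close to [1, 2, -0.5]
```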
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, which results in a bang-singular-bang optimal control.
Zhang, Shanxin; Zhou, Zhiping; Chen, Xinmeng; Hu, Yong; Yang, Lindong
2017-08-07
DNase I hypersensitive sites (DHSs) are accessible chromatin regions hypersensitive to cleavage by DNase I endonucleases. DHSs are indicative of cis-regulatory DNA elements (CREs), all of which play important roles in global gene expression regulation, so recognizing DHSs in a genome is helpful for discovering CREs. To accelerate such investigations, developing cost-effective computational methods to identify DHSs is an important complement to experiments. However, there is a lack of tools for identifying DHSs in plant genomes. Here we present pDHS-SVM, a computational predictor to identify plant DHSs. To integrate global sequence-order information and local DNA properties, reverse-complement k-mers and dinucleotide-based auto covariance of DNA sequences were applied to construct the feature space. In this work, fifteen physical-chemical properties of dinucleotides were used and a Support Vector Machine (SVM) was employed. To further improve the performance of the predictor and extract an optimized subset of nucleotide physical-chemical properties positive for DHSs, a heuristic nucleotide physical-chemical property selection algorithm was introduced. With the optimized subset of properties, experimental results on Arabidopsis thaliana and rice (Oryza sativa) showed that pDHS-SVM could achieve accuracies up to 87.00% and 85.79%, respectively. The results indicate the effectiveness of the proposed method for predicting DHSs. Furthermore, pDHS-SVM could provide a helpful complement for predicting CREs in plant genomes. Our implementation of the proposed method pDHS-SVM is freely available as source code at https://github.com/shanxinzhang/pDHS-SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
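To convey the flavor of this kind of feature construction, the sketch below builds reverse-complement-collapsed k-mer frequencies for DNA sequences and trains an SVM with scikit-learn. The toy sequences, the choice of k = 2, and the RBF kernel are illustrative assumptions, not the published pDHS-SVM feature set.

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(kmer):
    return kmer.translate(COMP)[::-1]

def rc_kmer_features(seq, k=2):
    # Collapse each k-mer with its reverse complement into one canonical key,
    # then return normalized frequencies in a fixed order.
    canon = sorted({min("".join(p), revcomp("".join(p)))
                    for p in product("ACGT", repeat=k)})
    counts = dict.fromkeys(canon, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[min(kmer, revcomp(kmer))] += 1
    total = max(1, len(seq) - k + 1)
    return np.array([counts[c] / total for c in canon])

# Toy training data: AT-rich "positives" vs GC-rich "negatives".
seqs = ["ATATATATAT", "AATTAATTAA", "GCGCGCGCGC", "GGCCGGCCGG"]
labels = [1, 1, 0, 0]
X = np.array([rc_kmer_features(s) for s in seqs])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(clf.predict([rc_kmer_features("ATATAATTAT")]))   # expect class 1
```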
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of an evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into a sensitive group and an insensitive group; the first is retained while the other is eliminated for a given scientific study. However, these approaches ignore the loss of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings, with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods emphasizing parameter interactions.
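A minimal sketch of the iterative remove-the-least-important loop, using the SALib package to estimate Sobol' total-order indices on a toy model. The test function, sample size, midpoint fixing of eliminated parameters, and stopping rule (stop at two parameters) are assumptions for illustration, not the DGSAM implementation itself.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

def model(X):
    # Toy model: parameter 0 dominates, parameter 1 is moderate,
    # the rest contribute mainly through a weak interaction.
    return 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] * X[:, 3]

ALL = ["p0", "p1", "p2", "p3"]
names = list(ALL)
while len(names) > 2:
    problem = {"num_vars": len(names), "names": names,
               "bounds": [[0.0, 1.0]] * len(names)}
    X = saltelli.sample(problem, 1024)
    # Re-embed the sample into the full parameter space, fixing the
    # eliminated parameters at their midpoint (0.5).
    Xfull = np.full((X.shape[0], len(ALL)), 0.5)
    for j, n in enumerate(names):
        Xfull[:, ALL.index(n)] = X[:, j]
    Si = sobol.analyze(problem, model(Xfull))
    least = names[int(np.argmin(Si["ST"]))]    # smallest total-order index
    print(f"removing {least} (ST={Si['ST'].min():.3f})")
    names.remove(least)
print("retained parameters:", names)
```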
2012-01-01
Background There are numerous applications for Health Information Systems (HIS) that support specific tasks in the clinical workflow. The Lean method has been used increasingly to optimize clinical workflows, by removing waste and shortening the delivery cycle time. There are a limited number of studies on Lean applications related to HIS. Therefore, we applied the Lean method to evaluate the clinical processes related to HIS, in order to evaluate its efficiency in removing waste and optimizing the process flow. This paper presents the evaluation findings of these clinical processes, with regard to a critical care information system (CCIS), known as IntelliVue Clinical Information Portfolio (ICIP), and recommends solutions to the problems that were identified during the study. Methods We conducted a case study under actual clinical settings, to investigate how the Lean method can be used to improve the clinical process. We used observations, interviews, and document analysis, to achieve our stated goal. We also applied two tools from the Lean methodology, namely the Value Stream Mapping and the A3 problem-solving tools. We used eVSM software to plot the Value Stream Map and A3 reports. Results We identified a number of problems related to inefficiency and waste in the clinical process, and proposed an improved process model. Conclusions The case study findings show that the Value Stream Mapping and the A3 reports can be used as tools to identify waste and integrate the process steps more efficiently. We also proposed a standardized and improved clinical process model and suggested an integrated information system that combines database and software applications to reduce waste and data redundancy. PMID:23259846
Identifying ideal brow vector position: empirical analysis of three brow archetypes.
Hamamoto, Ashley A; Liu, Tiffany W; Wong, Brian J
2013-02-01
Surgical browlifts counteract the effects of aging, correct ptosis, and optimize forehead aesthetics. While surgeons have control over brow shape, the metrics defining ideal brow shape are subjective. This study aims to empirically determine whether three expert brow design strategies are aesthetically equivalent by using expert focus group analysis and relating these findings to brow surgery. A comprehensive literature search identified three dominant brow design methods (Westmore, Lamas and Anastasia) that are heavily cited, referenced or internationally recognized in either the medical literature or the lay media. Using their respective guidelines, brow shape was modified for 10 synthetic female faces, yielding 30 images. A focus group of 50 professional makeup artists ranked the three images for each of the 10 faces to generate ordinal attractiveness scores. The contemporary methods employed by Anastasia and Lamas produce a brow arch more lateral than Westmore's classic method. Although the more laterally located brow arch is considered the current trend in facial aesthetics, this style was not empirically supported. No single method was consistently rated most or least attractive by the focus group, and no significant difference in attractiveness score for the different methods was observed (p = 0.2454). Although each method of brow placement has been promoted as the "best" approach, no single brow design method achieved statistical significance in optimizing attractiveness. Each can be used effectively as a guide in designing eyebrow shape during browlift procedures, making it possible to use the three methods interchangeably.
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Howells, Tim; Johnson, Ulf; McKelvey, Tomas; Enblad, Per
2015-02-01
The objective of this study was to identify the optimal frequency range for computing the pressure reactivity index (PRx). PRx is a clinical method for assessing cerebral pressure autoregulation based on the correlation of spontaneous variations of arterial blood pressure (ABP) and intracranial pressure (ICP). Our hypothesis was that optimizing the methodology for computing PRx in this way could produce a more stable, reliable and clinically useful index of autoregulation status. The study cohort was a series of 131 traumatic brain injury patients. Pressure reactivity indices were computed in various frequency bands during the first 4 days following injury using bandpass filtering of the input ABP and ICP signals. Patient outcome was assessed using the extended Glasgow Outcome Scale (GOSe). The optimization criterion was the strength of the correlation with GOSe of the mean index value over the first 4 days following injury. Stability of the indices was measured as the mean absolute deviation of the minute-by-minute index value from 30-min moving averages. The optimal index frequency range for prediction of outcome was identified as 0.018-0.067 Hz (oscillations with periods from 55 to 15 s). The index based on this frequency range correlated with GOSe with ρ=-0.46, compared to -0.41 for standard PRx, and reduced the 30-min variation by 23%.
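For illustration, the sketch below computes a PRx-style index as the moving correlation between ABP and ICP after restricting both signals to a chosen frequency band with a Butterworth bandpass filter. The sampling rate, filter order, window length, and synthetic signals are assumptions, not the study's clinical pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0                      # Hz; assume 1 sample/s trend data
lo, hi = 0.018, 0.067         # the optimal band identified in the study
b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")

rng = np.random.default_rng(1)
t = np.arange(4 * 3600)                       # 4 hours of samples
slow = np.sin(2 * np.pi * t / 30.0)           # shared slow oscillation (30 s)
abp = 80 + 5 * slow + rng.normal(0, 1, t.size)
icp = 12 + 2 * slow + rng.normal(0, 1, t.size)

abp_f, icp_f = filtfilt(b, a, abp), filtfilt(b, a, icp)

window = 300                                   # 5-minute correlation window
prx = np.array([np.corrcoef(abp_f[i:i + window], icp_f[i:i + window])[0, 1]
                for i in range(0, t.size - window, window)])
print(f"mean PRx-like index: {prx.mean():.2f}")   # near +1: impaired reactivity
```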
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou
2013-01-01
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to rapidly identify neighbor nodes. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients of neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491
Sathishkumar, Thiyagarajan; Baskar, Ramakrishnan; Aravind, Mohan; Tilak, Suryanarayanan; Deepthi, Sri; Bharathikumar, Vellalore Maruthachalam
2013-01-01
Flavonoids are exploited as antioxidant, antimicrobial, antithrombogenic, antiviral, and antihypercholesterolemic agents. Conventional extraction techniques such as the soxhlet or shake flask methods normally provide low yields of flavonoids with structural loss, and these techniques may therefore be considered inefficient. In this regard, an attempt was made to optimize flavonoid extraction using an orthogonal design of experiment and subsequent structural elucidation by high-performance liquid chromatography-diode array detector-electron spray ionization/mass spectrometry (HPLC-DAD-ESI/MS). The shake flask method of flavonoid extraction provided a yield of 1.2 ± 0.13 mg/g tissue. Of the two solvents tried for extraction optimization, namely ethanol and ethyl acetate, ethanol (80.1 mg/g tissue) proved better than ethyl acetate (20.5 mg/g tissue). The optimal extraction conditions were found to be 85°C for 3 hours with a material ratio of 1:20, 75% ethanol, and 1 cycle of extraction. Seven phenolics, including robinin, quercetin, rutin, sinapoyl-hexoside, dicaffeic acid, and two unknown compounds, were identified for the first time in the flowers of T. heyneana. The study also concluded that an L16 orthogonal design of experiment is a more effective method for flavonoid extraction than the shake flask method. PMID:25969771
Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike
2017-01-01
During recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions associated with the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and Ursem's hill-valley function, in order to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes that a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
Stakeholder Perspectives on Optimizing Communication in a School-Centered Asthma Program
ERIC Educational Resources Information Center
Snieder, Hylke M.; Nickels, Sarah; Gleason, Melanie; McFarlane, Arthur; Szefler, Stanley J.; Allison, Mandy A.
2017-01-01
Background: School-centered asthma programs (SAPs) can be an effective intervention to improve asthma control for underserved populations but little is known about how key stakeholders communicate within these programs. Therefore, our aim was to identify key components of effective communication in a SAP. Methods: Primary care providers (PCPs),…
Why was the work done?
To be able to identify, on a proteomic level, cytochromes P450 (CYP) and UDP-glucuronosyltransferases (UGT) in mouse liver microsomes for the conazole exposure study IRP # NHEERL-ECD-SCN-CZ-2002-01-R1_Addendum 1. The new enrichment method was necessary beca...
Landscape silviculture for late-successional reserve management
S Hummel; R.J. Barbour
2007-01-01
The effects of different combinations of multiple, variable-intensity silvicultural treatments on fire and habitat management objectives were evaluated for a ±6,000 ha forest reserve using simulation models and optimization techniques. Our methods help identify areas within the reserve where opportunities exist to minimize conflict between the dual landscape objectives...
Prototype Testing in Instructional Development. SWRL Working Papers: 1972.
ERIC Educational Resources Information Center
Niedermeyer, Fred C., Ed.
When properly implemented, prototype testing appears to provide one of the most direct and economical methods for identifying means to optimize the effectiveness of a product, and ultimately to validate a product's effect. The nine papers in this volume exemplify several categories of prototype testing conducted at different stages of the…
Intra- to Multi-Decadal Temperature Variability over the Continental United States: 1896-2012
USDA-ARS?s Scientific Manuscript database
The Optimal Ranking Regime (ORR) method was used to identify intra- to multi-decadal (IMD) time windows containing significant ranking sequences in U.S. climate division temperature data. The simplicity of the ORR procedure’s output – a time series’ most significant non-overlapping periods of high o...
Effectively Serving AB 540 and Undocumented Students at a Hispanic Serving Institution
ERIC Educational Resources Information Center
Person, Dawn; Gutierrez Keeton, Rebecca; Medina, Noemy; Gonzalez, Jacquelyn; Minero, Laura P.
2017-01-01
This mixed-methods study examined the experiences of undocumented students at a 4-year Hispanic Serving Institution. Barriers identified by these students included a lack of resources and minimal career opportunities after graduation. Faculty and staff perceived this historically underserved population as exhibiting high levels of optimism and…
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
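The sketch below conveys the progressive-sampling idea on which such methods build: candidate (algorithm, hyper-parameter) configurations are first scored on small training subsamples, and only the more promising ones are re-evaluated on larger samples. The candidate grid, halving schedule, and dataset are assumptions; the paper's actual method couples progressive sampling with Bayesian optimization rather than a fixed grid.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 1.0, 100.0)]
candidates += [RandomForestClassifier(n_estimators=n, random_state=0)
               for n in (10, 50, 200)]

n = 500                                   # initial subsample size
while len(candidates) > 1 and n <= len(X_tr):
    scores = []
    for model in candidates:
        model.fit(X_tr[:n], y_tr[:n])     # train on the current subsample only
        scores.append(model.score(X_te, y_te))
    order = np.argsort(scores)[::-1]      # keep the better half
    candidates = [candidates[i] for i in order[:max(1, len(candidates) // 2)]]
    n *= 4                                # grow the subsample
best = candidates[0].fit(X_tr, y_tr)
print("selected:", best, "test accuracy:", best.score(X_te, y_te))
```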
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
Progress feedback and the OQ-system: The past and the future.
Lambert, Michael J
2015-12-01
A serious problem in routine clinical practice is clinician optimism about the benefit clients derive from the therapy they offer compared to measured benefits. The consequence of seeing the silver lining is a failure to identify cases that, in the end, leave treatment worse off than when they started or are simply unaffected. It has become clear that some methods of measuring, monitoring, and providing feedback to clinicians about client mental health status over the course of routine care improve treatment outcomes for clients at risk of treatment failure (Shimokawa, Lambert, & Smart, 2010) and thus offer a remedy for therapist optimism by identifying cases at risk for poor outcomes. The current article presents research findings related to the use of the Outcome Questionnaire-45 and Clinical Support Tools for this purpose. The necessary characteristics of feedback systems that work to benefit clients' well-being are identified. In addition, suggestions for future research and use in routine care are presented. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Pudong; Zhou, Jiayuan; Shi, Runhe; Zhang, Chao; Liu, Chaoshun; Sun, Zhibin; Gao, Wei
2016-09-01
The aim of this work was to identify coastal wetland plants from hyperspectral data using both a Bayes method and a BP neural network, in order to determine the better classification method. For this purpose, we chose two dominant plants (invasive S. alterniflora and native P. australis) in the Yangtze Estuary; the leaf spectral reflectance of P. australis and S. alterniflora was measured with an ASD field spectrometer. We tested the Bayes method and the BP neural network for the identification of these two species. Results showed that three bands (555 nm, 711 nm, and 920 nm) could be identified as the sensitive bands to use as input parameters for the two methods. The Bayes method and the BP neural network prediction model both performed well (88.57% prediction accuracy for Bayes, about 80% for the BP neural network), but the Bayes method gave higher accuracy and stability.
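A minimal sketch of the Bayes side of such a comparison: a Gaussian naive Bayes classifier trained on reflectance values at the three sensitive bands. The synthetic reflectance values below are assumptions standing in for the field-measured spectra.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic reflectance at 555, 711 and 920 nm for the two species;
# the means are illustrative, not the measured values.
spartina = rng.normal([0.10, 0.35, 0.55], 0.03, size=(40, 3))
phragmites = rng.normal([0.12, 0.30, 0.62], 0.03, size=(40, 3))
X = np.vstack([spartina, phragmites])
y = np.array([0] * 40 + [1] * 40)     # 0 = S. alterniflora, 1 = P. australis

clf = GaussianNB()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2%}")
```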
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to the global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling from the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in a significantly smaller number of solution evaluations.
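The sketch below implements a DDS-style search in which the probability of perturbing each decision variable is weighted by a supplied sensitivity score, instead of DDS's usual uniform inclusion probability. The objective function, bounds, sensitivity scores, and perturbation size are illustrative assumptions, not the study's calibration setup.

```python
import numpy as np

def dds_with_sensitivity(f, lo, hi, sens, n_eval=500, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = lo + rng.random(lo.size) * (hi - lo)   # random initial solution
    fbest, xbest = f(x), x.copy()
    w = sens / sens.sum()                      # sensitivity-based weights
    for i in range(1, n_eval):
        p = 1.0 - np.log(i) / np.log(n_eval)   # DDS inclusion probability
        # Weighted inclusion: with uniform w this reduces to standard DDS.
        mask = rng.random(lo.size) < p * w * lo.size
        if not mask.any():                     # always perturb at least one DV
            mask[rng.choice(lo.size, p=w)] = True
        xnew = xbest.copy()
        step = r * (hi - lo) * rng.normal(size=lo.size)
        xnew[mask] = np.clip(xbest[mask] + step[mask], lo[mask], hi[mask])
        fnew = f(xnew)
        if fnew < fbest:                       # greedy acceptance
            fbest, xbest = fnew, xnew
    return xbest, fbest

sphere = lambda x: float(np.sum(x**2))
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
sens = np.array([0.4, 0.3, 0.15, 0.1, 0.05])   # assumed sensitivity scores
x, fx = dds_with_sensitivity(sphere, lo, hi, sens)
print(np.round(x, 3), f"f = {fx:.4f}")
```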
NASA Astrophysics Data System (ADS)
Jalligampala, Archana; Sekhar, Sudarshan; Zrenner, Eberhart; Rathbun, Daniel L.
2017-04-01
To further improve the quality of visual percepts elicited by microelectronic retinal prosthetics, substantial efforts have been made to understand how retinal neurons respond to electrical stimulation. It is generally assumed that a sufficiently strong stimulus will recruit most retinal neurons. However, recent evidence has shown that the responses of some retinal neurons decrease with excessively strong stimuli (a non-monotonic response function). Therefore, it is necessary to identify stimuli that can be used to activate the majority of retinal neurons even when such non-monotonic cells are part of the neuronal population. Taking these non-monotonic responses into consideration, we establish the optimal voltage stimulation parameters (amplitude, duration, and polarity) for epiretinal stimulation of network-mediated (indirect) ganglion cell responses. We recorded responses from 3958 mouse retinal ganglion cells (RGCs) in both a healthy (wild type, WT) and a degenerating (rd10) mouse model of retinitis pigmentosa, using flat-mounted retinas on a microelectrode array. Rectangular monophasic voltage-controlled pulses were presented with varying voltage, duration, and polarity. We found that in 4-5-week-old rd10 mice the RGC thresholds were comparable to those of WT. There was marked response variability among mouse RGCs. To account for this variability, we interpolated the percentage of RGCs activated at each point in the voltage-polarity-duration stimulus space, thus identifying the optimal voltage-controlled pulse (-2.4 V, 0.88 ms). The identified optimal voltage pulse can activate at least 65% of potentially responsive RGCs in both mouse strains. Furthermore, this pulse is well within the range of stimuli demonstrated to be safe and effective for retinal implant patients. Such optimized stimuli, and the underlying method used to identify them, support a high yield of responsive RGCs and will serve as an effective guideline for future in vitro investigations of retinal electrostimulation by establishing standard stimuli for each unique experimental condition.
Durana, Nieves; García, José Antonio; Gómez, María Carmen; Alonso, Lucio
2018-01-01
Thermal desorption (TD) coupled with gas chromatography/mass spectrometry (TD-GC/MS) is a simple alternative that overcomes the main drawbacks of the solvent extraction-based method: long extraction times, extensive sample manipulation, and large amounts of solvent waste. This work describes the optimization of TD-GC/MS for the measurement of airborne polycyclic aromatic hydrocarbons (PAHs) in the particulate phase. The performance of the method was tested with Standard Reference Material (SRM) 1649b urban dust and compared with the conventional method (Soxhlet extraction-GC/MS), showing better recovery (mean of 97%), precision (mean of 12%), and accuracy (±25%) for the determination of 14 EPA PAHs. Furthermore, another 15 nonpriority PAHs were identified and quantified using their relative response factors (RRFs). Finally, the proposed method was successfully applied to the quantification of PAHs in real 8-h samples (PM10), demonstrating its capability for the determination of these compounds in short-term monitoring. PMID:29854561
Clustering PPI data by combining FA and SHC method.
Lei, Xiujuan; Ying, Chao; Wu, Fang-Xiang; Xu, Jin
2015-01-01
Clustering is one of the main methods for identifying functional modules from protein-protein interaction (PPI) data. Nevertheless, traditional clustering methods may not be effective for clustering PPI data. In this paper, we propose a novel method for clustering PPI data that combines the firefly algorithm (FA) with a synchronization-based hierarchical clustering (SHC) algorithm. First, the PPI data are preprocessed via spectral clustering (SC), which transforms the high-dimensional similarity matrix into a low-dimensional matrix. Then the SHC algorithm is used to perform clustering. In SHC, hierarchical clustering is achieved by continuously enlarging the neighborhood radius of synchronized objects; however, a hierarchical search has difficulty finding the optimal neighborhood radius of synchronization, and its efficiency is not high. We therefore adopt the firefly algorithm to determine the optimal threshold of the neighborhood radius of synchronization automatically. The proposed algorithm is tested on the MIPS PPI dataset. The results show that our proposed algorithm outperforms the traditional algorithms in precision, recall and f-measure value.
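A minimal sketch of using a firefly algorithm to tune a single scalar such as a neighborhood-radius threshold: brighter (better-scoring) fireflies attract dimmer ones. The quality function here is a toy stand-in for a clustering criterion, and the population size, absorption coefficient, and step size are illustrative assumptions.

```python
import numpy as np

def firefly_1d(score, lo, hi, n=15, iters=60, beta0=1.0, gamma=2.0,
               alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = lo + rng.random(n) * (hi - lo)         # candidate radii
    for _ in range(iters):
        s = np.array([score(v) for v in x])    # brightness of each firefly
        for i in range(n):
            for j in range(n):
                if s[j] > s[i]:                # j is brighter: move i toward j
                    r2 = (x[i] - x[j]) ** 2
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal()
        x = np.clip(x, lo, hi)
        alpha *= 0.97                          # cool the random walk
    s = np.array([score(v) for v in x])
    return x[np.argmax(s)]

# Toy clustering-quality surrogate peaking at radius 0.35 (assumed).
quality = lambda r: -(r - 0.35) ** 2
best = firefly_1d(quality, 0.0, 1.0)
print(f"optimal neighborhood radius ≈ {best:.3f}")
```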
Predicting cancerlectins by the optimal g-gap dipeptides
NASA Astrophysics Data System (ADS)
Lin, Hao; Liu, Wei-Xin; He, Jiao; Liu, Xin-Hui; Ding, Hui; Chen, Wei
2015-12-01
Cancerlectins play a key role in the process of tumor cell differentiation. A full understanding of the function of cancerlectins is therefore significant because it sheds light on future directions for cancer therapy. However, traditional wet-experimental methods are money- and time-consuming, so it is highly desirable to develop an effective and efficient computational tool to identify cancerlectins. In this study, we developed a sequence-based method to discriminate between cancerlectins and non-cancerlectins. Analysis of variance (ANOVA) was used to choose the optimal feature set derived from the g-gap dipeptide composition. The jackknife cross-validated results showed that the proposed method achieved an accuracy of 75.19%, which is superior to other published methods. For the convenience of other researchers, an online web server, CaLecPred, was established and can be freely accessed at http://lin.uestc.edu.cn/server/CalecPred. We believe that CaLecPred is a powerful tool for studying cancerlectins and for guiding related experimental validations.
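The g-gap dipeptide composition underlying this predictor counts ordered residue pairs separated by g intervening positions. The sketch below computes it for a protein sequence; the example sequence and the choice of g are arbitrary.

```python
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def g_gap_dipeptide_composition(seq, g=2):
    # Frequency of each ordered pair (a, b) occurring as a...b with
    # exactly g residues between them, normalized by the pair count.
    pairs = {a + b: 0 for a, b in product(AMINO, repeat=2)}   # 400 features
    n_pairs = len(seq) - g - 1
    for i in range(n_pairs):
        pairs[seq[i] + seq[i + g + 1]] += 1
    return {k: v / n_pairs for k, v in pairs.items()}

comp = g_gap_dipeptide_composition("MKVLAAGICKVLAAGM", g=2)
top = sorted(comp.items(), key=lambda kv: -kv[1])[:3]
print(top)   # the most frequent 2-gap dipeptides in the toy sequence
```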
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging
Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei
2016-01-01
Goal The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection has been proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion The MSF based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052
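One building block above, choosing informative bands via mutual information, can be sketched with scikit-learn. The synthetic hypercube and number of selected bands are assumptions; the paper computes mutual information using markers derived from an SVM probability map, rather than raw labels as here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 50
X = rng.normal(0.0, 1.0, size=(n_pixels, n_bands))   # synthetic spectra
y = rng.integers(0, 2, size=n_pixels)                # 0 = healthy, 1 = tumor
X[y == 1, 10] += 1.5                                 # bands 10 and 27 carry
X[y == 1, 27] += 1.0                                 # the tumor signal

mi = mutual_info_classif(X, y, random_state=0)
optimal_bands = np.argsort(mi)[::-1][:5]             # keep the top 5 bands
print("selected band indices:", optimal_bands)       # expect 10 and 27 first
```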
Wang, Hong-Hua
2014-01-01
A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Different from the traditional linear model, the model of a PV module is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by simulation of the collective behavior of real fish swarms, is proposed to extract the parameters of the PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated with various parameters of a PV module under different environmental conditions, and the testing results are compared with those of other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233
Advanced Interactive Display Formats for Terminal Area Traffic Control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Shaviv, G. E.
1999-01-01
This research project deals with an on-line dynamic method for automated viewing parameter management in perspective displays. Perspective images are optimized such that a human observer will perceive relevant spatial geometrical features with minimal errors. In order to compute the errors with which observers reconstruct spatial features from perspective images, a visual spatial-perception model was formulated. The model was employed as the basis of an optimization scheme aimed at seeking the optimal projection parameter setting. These ideas are implemented in the context of an air traffic control (ATC) application. A concept, referred to as an active display system, was developed. This system uses heuristic rules to identify relevant geometrical features of the three-dimensional air traffic situation. Agile, on-line optimization was achieved by a specially developed and custom-tailored genetic algorithm (GA), which deals with the multi-modal characteristics of the objective function and exploits its time-evolving nature.
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area has been titled morphing as an independent variable and formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
Kai, Wang; Peisheng, Yan
2016-01-01
Lipases can catalyze the hydrolysis of glycerol, esters and long-chain fatty acids. A lipase-producing isolate, M35-15, was screened and identified as Thalassospira permensis using 16S rRNA gene sequence analysis. To our knowledge this is the first report of Thalassospira permensis producing lipases. In this paper, the optimization of medium composition to increase bacterial lipase production was achieved using statistical methods. Firstly, the key ingredients were selected by a Plackett-Burman experimental design; then the levels of the ingredients were optimized using the central composite design of Response Surface Methodology. The predicted optimal lipase activity was 11.49 U under a medium composition of 5.15 g/l glucose, 11.74 g/l peptone, 6.74 g/l yeast powder and 22.90 g/l olive oil emulsifier. PMID:27285376
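The response-surface step of such an optimization can be sketched as fitting a quadratic model to designed experiments and solving for its stationary point. The two-factor toy data below are assumptions, not the paper's actual central composite design.

```python
import numpy as np

# Coded factor levels (e.g., glucose and peptone) and measured activity.
# A real central composite design includes factorial, axial and center points.
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
                [0, 0], [0, 0], [0, 0]])
act = np.array([6.1, 8.0, 7.2, 9.8, 6.5, 9.0, 6.8, 8.6, 11.2, 11.4, 11.3])

x1, x2 = pts[:, 0], pts[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
b0, b1, b2, b12, b11, b22 = np.linalg.lstsq(A, act, rcond=None)[0]

# Stationary point: solve grad = 0 for the fitted quadratic surface.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
print("optimal coded levels:", np.round(opt, 3))
```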
Optimal Design of Grid-Stiffened Panels and Shells With Variable Curvature
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Jaunky, Navin
2001-01-01
A design strategy for optimal design of composite grid-stiffened structures with variable curvature subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. Stiffening configuration is herein defined as a design variable that indicates the combination of axial, transverse and diagonal stiffeners in the stiffened panel. The design optimization process is adapted to identify the lightest-weight stiffening configuration and stiffener spacing for grid-stiffened composite panels given the overall panel dimensions, in-plane design loads, material properties, and boundary conditions of the grid-stiffened panel or shell.
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
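For orientation, the sketch below implements the classic OCBA allocation for selecting the single best design; the top-m subset variant studied in the paper generalizes these ratios. The designs, noise model, initial sample size, and budget are toy assumptions.

```python
import numpy as np

def ocba_fractions(means, stds):
    # Classic OCBA budget fractions for selecting the single best (max mean):
    # N_i proportional to (std_i / gap_i)^2 for i != best, and
    # N_best = std_best * sqrt(sum over i != best of (N_i / std_i)^2).
    b = int(np.argmax(means))
    delta = means - means[b]
    frac = np.zeros_like(means, dtype=float)
    others = [i for i in range(len(means)) if i != b]
    for i in others:
        frac[i] = (stds[i] / delta[i]) ** 2
    frac[b] = stds[b] * np.sqrt(sum((frac[i] / stds[i]) ** 2 for i in others))
    return frac / frac.sum()

rng = np.random.default_rng(0)
true_means = np.array([1.0, 1.5, 2.0, 2.3, 2.4])   # unknown to the procedure
samples = [list(rng.normal(m, 1.0, 10)) for m in true_means]  # n0 = 10 each

budget = 500
while sum(len(s) for s in samples) < budget:
    means = np.array([np.mean(s) for s in samples])
    stds = np.array([np.std(s, ddof=1) + 1e-9 for s in samples])
    frac = ocba_fractions(means, stds)
    counts = np.array([len(s) for s in samples])
    k = int(np.argmax(frac * budget - counts))     # most underfunded design
    samples[k].append(rng.normal(true_means[k], 1.0))
print("selected best design:", int(np.argmax([np.mean(s) for s in samples])))
```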
Kong, Fansheng; Yu, Shujuan; Feng, Zeng; Wu, Xinlan
2015-01-01
Objective: To optimize the extraction of antioxidant compounds from guava (Psidium guajava L.) leaves, and to show that guava leaves are a potential source of antioxidant compounds. Materials and Methods: The bioactive polysaccharide compounds of guava leaves (P. guajava L.) were obtained using ultrasonic-assisted extraction. Extraction was carried out according to a Box-Behnken central composite design, with temperature (20–60°C), time (20–40 min) and power (200–350 W) as independent variables. The extraction process was optimized using response surface methodology for the highest crude extraction yield of bioactive polysaccharide compounds. Results: The optimal conditions were identified as 55°C, 30 min, and 240 W. 1,1-diphenyl-2-picryl-hydrazyl and hydroxyl free radical scavenging assays were conducted. Conclusion: The quantification results showed that guava leaves are a potential source of antioxidant compounds. PMID:26246720
NASA Astrophysics Data System (ADS)
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.
1983-05-01
[Report documentation page residue; recoverable subject terms: Health Care; Appointment Absenteeism.]
Lazure, Patrice; Bartel, Robert C; Biller, Beverly M K; Molitch, Mark E; Rosenthal, Stephen M; Ross, Judith L; Bernsten, Brock D; Hayes, Sean M
2014-07-24
The Theoretical Domains Framework (TDF) is a set of 14 domains of behavior change that provide a framework for the critical issues and factors influencing optimal knowledge translation. Considering that a previous study has identified optimal knowledge translation techniques for each TDF domain, it was hypothesized that the TDF could be used to contextualize and interpret findings from a behavioral and educational needs assessment. To illustrate this hypothesis, findings and recommendations drawn from a 2012 national behavioral and educational needs assessment conducted with healthcare providers who treat and manage Growth and Growth Hormone Disorders, will be discussed using the TDF. This needs assessment utilized a mixed-methods research approach that included a combination of: [a] data sources (Endocrinologists (n:120), Pediatric Endocrinologists (n:53), Pediatricians (n:52)), [b] data collection methods (focus groups, interviews, online survey), [c] analysis methodologies (qualitative - analyzed through thematic analysis, quantitative - analyzed using frequencies, cross-tabulations, and gap analysis). Triangulation was used to generate trustworthy findings on the clinical practice gaps of endocrinologists, pediatric endocrinologists, and general pediatricians in their provision of care to adult patients with adult growth hormone deficiency or acromegaly, or children/teenagers with pediatric growth disorders. The identified gaps were then broken into key underlying determinants, categorized according to the TDF domains, and linked to optimal behavioral change techniques. The needs assessment identified 13 gaps, each with one or more underlying determinant(s). Overall, these determinants were mapped to 9 of the 14 TDF domains. The Beliefs about Consequences domain was identified as a contributing determinant to 7 of the 13 challenges. Five of the gaps could be related to the Skills domain, while three were linked to the Knowledge domain. The TDF categorization of the needs assessment findings allowed recommendation of appropriate behavior change techniques for each underlying determinant, and facilitated communication and understanding of the identified issues to a broader audience. This approach provides a means for health education researchers to categorize gaps and challenges identified through educational needs assessments, and facilitates the application of these findings by educators and knowledge translators, by linking the gaps to recommended behavioral change techniques.
Assessing Feedback in a Mobile Videogame
Brand, Leah; Beltran, Alicia; Hughes, Sheryl; O'Connor, Teresia; Baranowski, Janice; Nicklas, Theresa; Chen, Tzu-An; Dadabhoy, Hafza R.; Diep, Cassandra S.; Buday, Richard
2016-01-01
Background: Player feedback is an important part of serious games, although there is no consensus regarding its delivery or optimal content. “Mommio” is a serious game designed to help mothers motivate their preschoolers to eat vegetables. The purpose of this study was to assess optimal format and content of player feedback for use in “Mommio.” Materials and Methods: The current study posed 36 potential “Mommio” gameplay feedback statements to 20 mothers using a Web survey and interview. Mothers were asked about the meaning and helpfulness of each feedback statement. Results: Several themes emerged upon thematic analysis, including identifying an effective alternative in the case of corrective feedback, avoiding vague wording, using succinct and correct grammar, avoiding provocation of guilt, and clearly identifying why players' game choice was correct or incorrect. Conclusions: Guidelines are proposed for future feedback statements. PMID:27058403
After the strike: using facilitation in a residency training program
Andres, D; Hamoline, D; Sanders, M; Anderson, J
1998-01-01
Methods of alternative dispute resolution, including facilitation, can be used to identify and resolve areas of conflict. Facilitation was used by the University of Saskatchewan's Department of Family Medicine (Saskatoon division) after the strike by residents in July and August 1995 so as to allow optimal use of the remaining educational time. Through facilitation, experiences of the strike and areas of potential conflict were explored. Participants had a broad range of responses to the strike. Specific coping strategies were developed to deal with identified concerns. Although outcomes were not measured formally, levels of trust improved and collegial relationships were restored. Because so many changes occur in health care and medical education, conflict inevitably arises. Facilitation offers one way of dealing with change constructively, thereby making possible the optimal use of educational time. PMID:9526479
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, we develop a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
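For reference, fitting a seasonal ARIMA of a given order to a monthly count series takes a few lines with statsmodels. The synthetic series and the (1,0,1)x(1,1,1,12) order below are placeholders, since the paper selects orders and estimates parameters robustly via its DESA hybrid rather than by default maximum likelihood.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range("2010-01-01", periods=96, freq="MS")
seasonal = 50 + 30 * np.sin(2 * np.pi * months.month / 12)
cases = pd.Series(seasonal + rng.normal(0, 5, 96), index=months)
cases.iloc[40] += 80                      # an additive outlier

model = SARIMAX(cases, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)               # default (non-robust) ML estimation
print(fit.forecast(steps=6).round(1))     # six-month-ahead forecast
```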
Lin, Bo-Cheng; Chen, Chao-Wen; Chen, Chien-Chou; Kuo, Chiao-Ling; Fan, I-Chun; Ho, Chi-Kung; Liu, I-Chuan; Chan, Ta-Chien
2016-05-25
The occurrence of out-of-hospital cardiac arrest (OHCA) is a critical life-threatening event which frequently warrants early defibrillation with an automated external defibrillator (AED). The optimal allocation of a limited number of AEDs across various types of communities is challenging. We propose a two-stage modeling framework, including spatial accessibility evaluation and priority ranking, to identify the largest gaps between demand and supply for allocating AEDs. In this study, a total of 6135 OHCA patients were defined as demand, and the existing 476 publicly available AED locations and 51 emergency medical service (EMS) stations were defined as supply. To identify the demand for AEDs, Bayesian spatial analysis with the integrated nested Laplace approximation (INLA) method is applied to estimate the composite spatial risks from multiple factors. Population density, the proportion of elderly people, and land use classifications are identified as risk factors. Then, the multi-criterion two-step floating catchment area (MC2SFCA) method is used to measure the spatial accessibility of AEDs between the spatial risks and the supply of AEDs. Priority ranking is utilized to prioritize deployment of AEDs among communities because of limited resources. Among the 6135 OHCA patients, 56.85% were older than 65 years, and 79.04% were in a residential area. The spatial distribution of OHCA incidents was found to be concentrated in the metropolitan area of Kaohsiung City, Taiwan. According to the posterior means estimated by INLA, the spatial effects, including population density, proportion of elderly people, and land use classifications, are positively associated with OHCA incidence. Utilizing the MC2SFCA for spatial accessibility, we found that the supply of AEDs is less than demand in most areas, especially in rural areas. Under limited resources, we identify priority places for deploying AEDs based on transportation time to the nearest hospital and the population size of the communities. The proposed method will be beneficial for optimizing resource allocation while considering multiple local risks. The optimized deployment of AEDs can broaden EMS coverage and minimize the problems of disparity in urban areas and deficiency in rural areas.
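The accessibility step can be illustrated with a basic two-step floating catchment area computation: step one assigns each AED a supply-to-demand ratio over the demand it can reach; step two sums, for each demand location, the ratios of all reachable AEDs. The coordinates, catchment radius, and demand weights are toy assumptions, and the paper's MC2SFCA additionally weights demand by the INLA-estimated risk.

```python
import numpy as np

demand_xy = np.array([[0, 0], [1, 0], [0, 1], [3, 3], [4, 3]], dtype=float)
demand_w = np.array([100, 80, 60, 120, 90], dtype=float)     # population at risk
supply_xy = np.array([[0.5, 0.5], [3.5, 3.0]], dtype=float)  # AED locations
radius = 1.5                                                 # catchment (km)

d = np.linalg.norm(demand_xy[:, None, :] - supply_xy[None, :, :], axis=2)
reach = d <= radius                          # (demand, supply) reachability

# Step 1: each AED's supply-to-demand ratio within its catchment.
ratio = 1.0 / np.array([demand_w[reach[:, j]].sum()
                        for j in range(len(supply_xy))])

# Step 2: accessibility of each demand point = sum of reachable AED ratios.
access = (reach * ratio).sum(axis=1)
print(np.round(access, 5))   # low values flag priority areas for new AEDs
```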
Anaerobic digestion of food waste: A review focusing on process stability.
Li, Lei; Peng, Xuya; Wang, Xiaoming; Wu, Di
2018-01-01
Food waste (FW) is rich in biomass energy, and increasing numbers of national programs are being established to recover energy from FW using anaerobic digestion (AD). However, process instability is a common operational issue for AD of FW. Process monitoring and control, as well as microbial management, can be used to control instability and increase the energy conversion efficiency of anaerobic digesters. Here, we review research progress related to these methods and identify existing limitations to efficient AD; recommendations for future research are also discussed. Process monitoring and control are suitable for evaluating the current operational status of digesters, whereas microbial management can facilitate early diagnosis and process optimization. Optimizing and combining these two methods are necessary to improve AD efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directed differentiation of embryonic stem cells using a bead-based combinatorial screening method.
Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen
2014-01-01
We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported.
Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric
2016-01-01
This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
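For readers unfamiliar with the identification step, the following is a hedged sketch of a generic recursive least squares (RLS) update of the kind used to estimate aerodynamic coefficient parameters in real time; the regressor structure and drag model of the actual GTM/VCCTEF study are not reproduced, and all numbers are illustrative.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.995):
    """One RLS step with forgetting factor lam: y ~ x @ theta."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam    # covariance update
    return theta, P

theta = np.zeros(3)
P = np.eye(3) * 1e3
true_theta = np.array([0.02, -0.5, 1.2])   # hypothetical drag-model parameters
rng = np.random.default_rng(1)
for _ in range(500):
    x = rng.normal(size=3)                 # regressor (placeholder flight data)
    y = x @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)                               # converges toward true_theta
```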
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to accurately identify the uncertainties, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
O'Hara, Matthew J; Murray, Nathaniel J; Carter, Jennifer C; Morrison, Samuel S
2018-04-13
Zirconium-89 (89Zr), produced by the (p,n) reaction on naturally monoisotopic yttrium (natY), is a promising positron-emitting isotope for immunoPET imaging. Its long half-life of 78.4 h is sufficient for evaluating slow physiological processes. A prototype automated fluidic system, coupled to on-line and in-line detectors, has been constructed to facilitate development of new 89Zr purification methodologies. The highly reproducible reagent delivery platform and near-real-time monitoring of column effluents allow for efficient method optimization. The separation of Zr from dissolved Y metal targets was evaluated using several anion exchange resins. Each resin was evaluated for its ability to quantitatively capture Zr from a load solution high in dissolved Y. The most appropriate anion exchange resin for this application was identified, and the separation method was optimized. The method is capable of a high Y decontamination factor (>10^5) and has been shown to remove Fe, an abundant contaminant in Y foils, from the 89Zr elution fraction. Finally, the method was evaluated using cyclotron-bombarded Y foil targets; the method was shown to achieve >95% recovery of the 89Zr present in the foils. The anion exchange column method described here is intended to be the first 89Zr isolation stage in a dual-column purification process. Copyright © 2018 Elsevier B.V. All rights reserved.
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures and compared with various existing algorithms, and it performs better than the existing algorithms.
Optimization of x-ray capillary optics for mammography
NASA Astrophysics Data System (ADS)
Ross, Richard E.; Bradford, Carla D.; Peppler, Walter W.
2002-05-01
The purpose of this study is to develop a full-field digital mammography system utilizing capillary optics. Specific aims are to identify optic properties that affect image quality and to optimize those properties in the design of a multi-element capillary array. It has been shown that polycapillary optics significantly improve mammographic image quality through increased resolution and reduced x-ray scatter. For practical clinical application, much larger multi-element optics will be required. This study quantified the contributing factors to the multi-element optic MTF and investigated methods to determine optimal parameters for a practical design. Individual optics and a prototype multi-element array of linearly tapered optics with a common focal point were investigated. A conventional (Mo/Mo) mammography tube and computed radiography system were used. The system and optic MTF were measured using the angled slit method with a slit camera (10 micron slit). MTF measurements were performed with both stationary and scanned optics. Contributions to MTF included: distortion within individual optics, misalignment between optics, capillary channel size, and vibration. Measurement techniques used to identify and quantify the contributions to optic MTF included a phantom chosen specifically for polycapillary optics. This phantom provided a method for assessing the coherence among capillaries within an optic as well as the relative alignment of the optics within the array. In addition, modifications to the scanning procedure allowed for the isolation and quantification of several contributors to the system MTF. Specifically, measurements were made using a stationary optic, a scanning optic, and an optic placed at multiple locations within the imaged field of view. These techniques yielded the optic MTF, the degradation of MTF due to loss of coherence within the optic, and the degradation of MTF due to vibration of the scanning mechanism. Distortion within individual optics was typically quite small. However, MTF degradation resulting from twist was significant in some optics. MTF degradation due to misalignment was relatively large in the prototype triad. Modeling found that misalignment up to 50 microns reduced MTF by less than 10 percent up to 3 cycles/mm. Channel diameters of 52 microns and 85 microns reduced MTF by 9 percent to 20 percent at 5 cycles/mm and provided an optimal tradeoff between transmission and MTF. Vibration was identified as a significant source of MTF degradation but can easily be reduced with simple modifications. In spite of some reduced optic MTF values, system MTF was always significantly improved, in some cases almost by the magnification ratio. These results allow for accurate modeling of optic performance and optimization of design parameters. This study demonstrates that a multi-element array can be produced with nearly optimal properties. A large area array suitable for clinical trial is feasible and is the next step in this program.
NASA Astrophysics Data System (ADS)
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicate the applicability of the acquired band combinations for accurately and stably extracting water extents in Landsat imagery.
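As a rough illustration of the winning approach, here is a minimal particle swarm optimization (PSO) sketch that fits coefficients of a linear band combination separating water from non-water training pixels; the bands, features, fitness function, and data are placeholders rather than the study's.

```python
import numpy as np

def pso(fitness, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize fitness over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pval.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([fitness(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()]
    return g

rng = np.random.default_rng(2)
bands_water = rng.normal(0.2, 0.05, (40, 4))   # 40 water pixels, 4 toy bands
bands_land = rng.normal(0.5, 0.05, (40, 4))    # 40 non-water pixels

def accuracy(c):
    """Fitness: classify as water when band combination + bias is negative."""
    sw = bands_water @ c[:4] + c[4]
    sl = bands_land @ c[:4] + c[4]
    return np.mean(sw < 0) + np.mean(sl >= 0)  # max value 2.0

coef = pso(accuracy, dim=5)
print(accuracy(coef))
```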
Zhang, Jian; Gao, Bo; Chai, Haiting; Ma, Zhiqiang; Yang, Guifu
2016-08-26
DNA-binding proteins (DBPs) play fundamental roles in many biological processes. Therefore, the development of effective computational tools for identifying DBPs is highly desirable. In this study, we proposed an accurate method for the prediction of DBPs. Firstly, we focused on the challenge of improving DBP prediction accuracy with information solely from the sequence. Secondly, we used multiple informative features to encode the protein, including the evolutionary conservation profile, secondary structure motifs, and physicochemical properties. Thirdly, we introduced a novel improved Binary Firefly Algorithm (BFA) to remove redundant or noisy features as well as to select optimal parameters for the classifier. The experimental results of our predictor on two benchmark datasets outperformed many state-of-the-art predictors, which reveals the effectiveness of our method. The promising prediction performance on a newly compiled independent testing dataset from PDB and a large-scale dataset from UniProt proved the good generalization ability of our method. In addition, the BFA developed in this research has great potential for practical applications in optimization, especially in feature selection problems. A highly accurate method was proposed for the identification of DBPs. A user-friendly web-server named iDbP (identification of DNA-binding Proteins) was constructed and provided for academic use.
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
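For concreteness, a hedged sketch of the independent-case Lancaster procedure follows; the paper's correlated version further adjusts the null distribution for between-gene correlation, which is omitted here. With all weights equal to 2, the statistic reduces to Fisher's method, which provides a quick correctness check.

```python
import numpy as np
from scipy import stats

def lancaster(pvals, weights):
    """Combine p-values: transform each to a chi-square deviate with
    weights[i] degrees of freedom, sum, and refer to chi2(sum(weights))."""
    pvals, weights = np.asarray(pvals), np.asarray(weights)
    t = stats.chi2.isf(pvals, df=weights).sum()   # isf(p) = upper-tail quantile
    return stats.chi2.sf(t, df=weights.sum())     # combined p-value

# With df = 2, chi2.isf(p, 2) = -2 ln(p), i.e., Fisher's method.
print(lancaster([0.01, 0.2, 0.5], [2, 2, 2]))
```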
A Comparison Study for DNA Motif Modeling on Protein Binding Microarray.
Wong, Ka-Chun; Li, Yue; Peng, Chengbin; Wong, Hau-San
2016-01-01
Transcription factor binding sites (TFBSs) are relatively short (5-15 bp) and degenerate. Identifying them is a computationally challenging task. In particular, protein binding microarray (PBM) is a high-throughput platform that can measure the DNA binding preference of a protein in a comprehensive and unbiased manner; for instance, a typical PBM experiment can measure binding signal intensities of a protein to all possible DNA k-mers (k = 8∼10). Since proteins can often bind to DNA with different binding intensities, one of the major challenges is to build TFBS (also known as DNA motif) models which can fully capture the quantitative binding affinity data. To learn DNA motif models from the non-convex objective function landscape, several optimization methods are compared and applied to the PBM motif model building problem. In particular, representative methods from different optimization paradigms have been chosen for modeling performance comparison on hundreds of PBM datasets. The results suggest that the multimodal optimization methods are very effective for capturing the binding preference information from PBM data. In particular, we observe a general performance improvement if choosing di-nucleotide modeling over mono-nucleotide modeling. In addition, the models learned by the best-performing method are applied to two independent applications: PBM probe rotation testing and ChIP-Seq peak sequence prediction, demonstrating its biological applicability.
Distribution path robust optimization of electric vehicle with multiple distribution centers
Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi
2018-01-01
To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, optimization of the distribution path problem of EVs with multiple distribution centers, considering the charging facilities, is necessary. With minimum transport time as the goal, a robust optimization model of the EV distribution path with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme initially contains all road-by-road path data using the three-segment mixed coding and decoding method. During genetic manipulation, different crossover and mutation operations are carried out on different chromosomes, while, during population evolution, infeasible solutions are naturally avoided. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm in this study, and the concrete transportation paths are utilized in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model. PMID:29518169
Real-time energy-saving metro train rescheduling with primary delay identification
Li, Keping; Schonfeld, Paul
2018-01-01
This paper aims to reschedule online metro trains in delay scenarios. A graph representation and a mixed integer programming model are proposed to formulate the optimization problem. The solution approach is a two-stage optimization method. In the first stage, based on a proposed train state graph and system analysis, the primary and flow-on delays are specifically analyzed and identified with a critical path algorithm. For the second stage a hybrid genetic algorithm is designed to optimize the schedule, with the delay identification results as input. Then, based on the infrastructure data of Beijing Subway Line 4 of China, case studies are presented to demonstrate the effectiveness and efficiency of the solution approach. The results show that the algorithm can quickly and accurately identify primary delays among different types of delays. The economic cost of energy consumption and total delay is considerably reduced (by more than 10% in each case). The computation time of the Hybrid-GA is low enough for rescheduling online. Sensitivity analyses further demonstrate that the proposed approach can be used as a decision-making support tool for operators. PMID:29474471
Selection of optimal multispectral imaging system parameters for small joint arthritis detection
NASA Astrophysics Data System (ADS)
Dolenec, Rok; Laistler, Elmar; Stergar, Jost; Milanic, Matija
2018-02-01
Early detection and treatment of arthritis are essential for a successful outcome of the treatment, but detection has proven very challenging with existing diagnostic methods. Novel methods based on optical imaging of the affected joints are becoming an attractive alternative. A non-contact multispectral imaging (MSI) system for imaging small joints of human hands and feet is being developed. In this work, a numerical simulation of the MSI system is presented; the purpose of the simulation is to determine the optimal design parameters. Inflamed and unaffected human joint models were constructed with realistic geometry and tissue distributions, based on an MRI scan of a human finger with a spatial resolution of 0.2 mm. The light transport simulation is based on a weighted-photon 3D Monte Carlo method utilizing CUDA GPU acceleration. Uniform illumination of the finger within the 400-1100 nm spectral range was simulated, and the photons exiting the joint were recorded using different acceptance angles. From the obtained reflectance and transmittance images, the spectral and spatial features most indicative of inflammation were identified, and the optimal acceptance angle and spectral bands were determined. This study demonstrates that proper selection of MSI system parameters critically affects the ability of an MSI system to discriminate between unaffected and inflamed joints. The presented system design optimization approach could be applied to other pathologies.
Sreenivasa, Manish; Millard, Matthew; Felis, Martin; Mombaur, Katja; Wolf, Sebastian I.
2017-01-01
Predicting the movements, ground reaction forces and neuromuscular activity during gait can be a valuable asset to the clinical rehabilitation community, both to understand pathology, as well as to plan effective intervention. In this work we use an optimal control method to generate predictive simulations of pathological gait in the sagittal plane. We construct a patient-specific model corresponding to a 7-year old child with gait abnormalities and identify the optimal spring characteristics of an ankle-foot orthosis that minimizes muscle effort. Our simulations include the computation of foot-ground reaction forces, as well as the neuromuscular dynamics using computationally efficient muscle torque generators and excitation-activation equations. The optimal control problem (OCP) is solved with a direct multiple shooting method. The solution of this problem is physically consistent synthetic neural excitation commands, muscle activations and whole body motion. Our simulations produced similar changes to the gait characteristics as those recorded on the patient. The orthosis-equipped model was able to walk faster with more extended knees. Notably, our approach can be easily tuned to simulate weakened muscles, produces physiologically realistic ground reaction forces and smooth muscle activations and torques, and can be implemented on a standard workstation to produce results within a few hours. These results are an important contribution toward bridging the gap between research methods in computational neuromechanics and day-to-day clinical rehabilitation. PMID:28450833
Aircraft Dynamic Modeling in Turbulence
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin
2012-01-01
A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
Enhancing a method for extracting social networks based on relation existence
NASA Astrophysics Data System (ADS)
Elfida, Maria; Matyuso Nasution, M. K.; Sitompul, O. S.
2018-01-01
Obtaining trustworthy information about a social network extracted from the Web requires a reliable method, but optimal results require a method that can overcome the complexity of the information resources. This paper intends to reveal ways to overcome the constraints of social network extraction that lead to high complexity by identifying relationships among social actors. By changing the treatment of the procedure used, we obtain a complexity smaller than that of the previous procedure. This has also been demonstrated in an experiment using the denial sample.
ATLAS, an integrated structural analysis and design system. Volume 6: Design module theory
NASA Technical Reports Server (NTRS)
Backman, B. F.
1979-01-01
The automated design theory underlying the operation of the ATLAS Design Module is described. The methods, applications, and limitations associated with the fully stressed design, the thermal fully stressed design, and a regional optimization algorithm are presented. A discussion of the convergence characteristics of the fully stressed design is also included. Derivations and concepts specific to the ATLAS design theory are shown, while conventional terminology and established methods are identified by references.
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.
Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly affects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appears to cause either false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers, and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values.
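The absolute quantification underlying ddPCR follows standard Poisson partitioning statistics, sketched below; the droplet counts and the nominal droplet volume are illustrative assumptions, not values from this study.

```python
import math

def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_ul=0.00085):
    """Mean copies per droplet: lambda = -ln(fraction of negative droplets).
    droplet_volume_ul is a nominal ~0.85 nL droplet (instrument-dependent)."""
    lam = -math.log((n_total - n_positive) / n_total)
    return lam / droplet_volume_ul

print(ddpcr_copies_per_ul(n_positive=4200, n_total=15000))  # copies per microliter
```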
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
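A minimal sketch of the parameter-space-mapping idea follows: evaluate the deviation between calculated and target properties over a grid of Lennard-Jones (epsilon, sigma) combinations and locate the region where all error surfaces are simultaneously small. The property models and parameter ranges below are placeholders, not the simulation-based calculations used in the paper.

```python
import numpy as np

eps = np.linspace(0.1, 1.0, 50)       # kJ/mol (hypothetical range)
sig = np.linspace(0.30, 0.40, 50)     # nm (hypothetical range)
E, S = np.meshgrid(eps, sig, indexing="ij")

def predicted_density(E, S):          # placeholder property model
    return 1200 * (S / 0.35) ** -3 * (E / 0.5) ** 0.1

def predicted_dhvap(E, S):            # placeholder property model
    return 40 * (E / 0.5) ** 0.8

targets = {"density": (predicted_density, 1100.0),
           "dhvap": (predicted_dhvap, 38.0)}

# Combine per-property relative-error surfaces; the optimal region is where
# all surfaces are simultaneously small.
err = sum(((f(E, S) - t) / t) ** 2 for f, t in targets.values())
i, j = np.unravel_index(err.argmin(), err.shape)
print(f"optimal region near eps={eps[i]:.2f} kJ/mol, sigma={sig[j]:.3f} nm")
```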
NASA Astrophysics Data System (ADS)
Torres, Veronica C.; Wilson, Todd; Staneviciute, Austeja; Byrne, Richard W.; Tichauer, Kenneth M.
2018-03-01
Skull base tumors are particularly difficult to visualize and access for surgeons because of the crowded environment and close proximity of vital structures, such as cranial nerves. As a result, accidental nerve damage is a significant concern and the likelihood of tumor recurrence is increased because of more conservative resections that attempt to avoid injuring these structures. In this study, a paired-agent imaging method with direct administration of fluorophores is applied to enhance cranial nerve identification. Here, a control imaging agent (ICG) accounts for non-specific uptake of the nerve-targeting agent (Oxazine 4), and ratiometric data analysis is employed to approximate binding potential (BP, a surrogate of targeted biomolecule concentration). For clinical relevance, animal experiments and simulations were conducted to identify parameters for an optimized stain and rinse protocol using the developed paired-agent method. Numerical methods were used to model the diffusive and kinetic behavior of the imaging agents in tissue, and simulation results revealed that there are various combinations of stain time and rinse number that provide improved contrast of cranial nerves, as suggested by optimal measures of BP and contrast-to-noise ratio.
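As a hedged sketch of the ratiometric analysis, paired-agent studies commonly approximate binding potential as the normalized targeted-to-control ratio minus one; the arrays and the non-binding normalization region below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
targeted = rng.uniform(0.8, 1.2, (64, 64))   # Oxazine 4 (nerve-targeted) image
targeted[20:30, 20:30] *= 2.0                # synthetic "nerve" region
control = rng.uniform(0.8, 1.2, (64, 64))    # ICG (untargeted control) image

# Normalize in a region assumed to have no specific binding, then
# approximate BP (a surrogate of targeted biomolecule concentration).
norm = targeted[:10, :10].mean() / control[:10, :10].mean()
bp = targeted / (control * norm) - 1.0
print(bp[25, 25], bp[50, 50])                # elevated inside the "nerve" block
```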
Urban Rain Gauge Siting Selection Based on Gis-Multicriteria Analysis
NASA Astrophysics Data System (ADS)
Fu, Yanli; Jing, Changfeng; Du, Mingyi
2016-06-01
With increasingly rapid urbanization and climate change, urban rainfall monitoring, as well as urban waterlogging, has received wide attention. Because conventional siting selection methods do not take geographic surroundings and spatial-temporal scale into consideration for urban rain gauge site selection, this paper primarily aims at finding appropriate siting selection rules and methods for rain gauges in urban areas. Additionally, to optimize gauge locations, a spatial decision support system (DSS) aided by a geographical information system (GIS) has been developed. In terms of a series of criteria, the rain gauge optimal site-search problem can be addressed by multicriteria decision analysis (MCDA). A series of spatial analytical techniques are required for MCDA to identify the prospective sites. On the GIS platform, spatial kernel density analysis is used to reflect population density, and GIS buffer analysis is used to optimize locations with respect to the rain gauge signal transmission character. Experimental results show that the rules and the proposed method are appropriate for rain gauge site selection in urban areas, which is significant for the siting selection of urban hydrological facilities and infrastructure, such as water gauges.
Exchange inlet optimization by genetic algorithm for improved RBCC performance
NASA Astrophysics Data System (ADS)
Chorkawy, G.; Etele, J.
2017-09-01
A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air massflows to between 1% and 9% of numerically simulated values depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist while showing the ability to handle cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air breathing engine based on a hydrogen fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
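For illustration, a minimal real-parameter genetic algorithm (tournament selection, blend crossover, Gaussian mutation) is sketched below on a placeholder fitness; the paper's variable selection pressure, variable mutation probability, and exchange-inlet design model are not reproduced.

```python
import numpy as np

def ga(fitness, dim, pop=40, gens=200, pm=0.1, seed=0):
    """Maximize fitness over [0, 1]^dim with a basic real-coded GA."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (pop, dim))
    f = np.array([fitness(p) for p in x])
    for _ in range(gens):
        # binary tournament selection
        a, b = rng.integers(pop, size=(2, pop))
        parents = np.where((f[a] > f[b])[:, None], x[a], x[b])
        # blend (arithmetic) crossover between pairs of parents
        w = rng.random((pop, dim))
        child = w * parents + (1 - w) * parents[::-1]
        # Gaussian mutation with probability pm per gene
        mut = rng.random((pop, dim)) < pm
        child = np.clip(child + mut * rng.normal(0, 0.1, (pop, dim)), 0, 1)
        fc = np.array([fitness(p) for p in child])
        keep = fc > f                          # elitist replacement
        x[keep], f[keep] = child[keep], fc[keep]
    return x[f.argmax()], f.max()

best, val = ga(lambda p: -np.sum((p - 0.7) ** 2), dim=6)  # toy fitness
print(best.round(2), val)
```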
Tsai, Dung-Ying; Chen, Chien-Liang; Ding, Wang-Hsien
2014-07-01
A simple and effective method for the rapid determination of five salicylate and benzophenone-type UV-absorbing substances in marketed fish is described. The method involves the use of matrix solid-phase dispersion (MSPD) prior to determination by on-line silylation gas chromatography tandem mass spectrometry (GC-MS/MS). The parameters that affect the extraction efficiency were optimized using a Box-Behnken design method. The optimal extraction conditions involved dispersing 0.5 g of freeze-dried powdered fish with 1.0 g of Florisil using a mortar and pestle. This blend was then transferred to a solid-phase extraction (SPE) cartridge containing 1.0 g of octadecyl-bonded silica (C18) as the clean-up co-sorbent. The target analytes were then eluted with 7 mL of acetonitrile. The extract was derivatized on-line in the GC injection port by reaction with a trimethylsilylating (TMS) reagent. The TMS derivatives were then identified and quantitated by GC-MS/MS. The limits of quantitation (LOQs) were less than 0.1 ng/g. Copyright © 2014 Elsevier Ltd. All rights reserved.
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
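The kind of optimizer comparison reported above can be mimicked on a toy problem; in the sketch below, a linear least-squares misfit stands in for the waveform misfit (which would require a wave-equation solver), and limited-memory BFGS is compared against nonlinear conjugate gradients via SciPy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
G = rng.normal(size=(80, 40))        # linearized forward operator (placeholder)
m_true = rng.normal(size=40)         # "true" model
d = G @ m_true                       # observed data

misfit = lambda m: 0.5 * np.sum((G @ m - d) ** 2)
grad = lambda m: G.T @ (G @ m - d)   # adjoint-based gradient

for method in ("L-BFGS-B", "CG"):
    res = minimize(misfit, np.zeros(40), jac=grad, method=method)
    print(method, "evaluations:", res.nfev, "final misfit:", res.fun)
```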
Establishment of the optimum two-dimensional electrophoresis system of ovine ovarian tissue.
Jia, J L; Zhang, L P; Wu, J P; Wang, J; Ding, Q
2014-08-26
Lambing performance of sheep is the most important economic trait and is regarded as a critical factor affecting productivity in the sheep industry. The ovary plays a central role in the lambing trait. To establish the optimum two-dimensional electrophoresis (2-DE) system for ovine ovarian tissue, the common protein extraction methods for animal tissue (trichloroacetic acid/acetone precipitation and direct schizolysis methods) were used to extract ovine ovarian protein, and 17-cm nonlinear immobilized pH 3-10 gradient strips were used for 2-DE. The sample handling, the loading quantity of the protein sample, and the isoelectric focusing (IEF) steps were manipulated and optimized in this study. The results indicate that the direct schizolysis III method, a 200-μg protein loading quantity, and IEF protocol II (20°C active hydration for 14 h → 500 V for 1 h → 1000 V for 1 h → 1000-9000 V for 6 h → 80,000 V·h → 500 V for 24 h) are optimal for 2-DE analysis of ovine ovarian tissue. Therefore, a 2-DE proteomics protocol for ovine ovarian tissue was preliminarily established under the conditions optimized in this study; meanwhile, the conditions identified herein could provide a reference for ovarian sample preparation and 2-DE using tissues from other animals.
A method for estimating mount isolations of powertrain mounting systems
NASA Astrophysics Data System (ADS)
Qin, Wu; Shangguan, Wen-Bin; Luo, Guohai; Xie, Zhengchao
2018-07-01
A method for calculating the isolation ratios of mounts in a powertrain mounting system (PMS) is proposed, assuming the powertrain is a rigid body and using the identified powertrain excitation forces and the measured IPI (input point inertance) of the mounting points at the body side. With the measured accelerations of mounts at the powertrain and body sides of one vehicle (Vehicle A), the excitation forces of the powertrain are first identified using the conventional method. Another vehicle (Vehicle B) has the same powertrain as Vehicle A but a different body and mount configuration. The accelerations of mounts at the powertrain side of the PMS on Vehicle B are calculated using the powertrain excitation forces identified from Vehicle A. The identified powertrain forces are validated by comparing the calculated and measured accelerations of mounts at the powertrain side on Vehicle B. A method for calculating the acceleration of a mounting point at the body side of Vehicle B is presented using the identified powertrain excitation forces and the measured IPI at the connecting point between the car body and the mount. Using the calculated accelerations of mounts at the powertrain and body sides in different directions, the isolation ratios of a mount are then estimated. The isolation ratios are validated by experiment, which verifies the proposed methods for estimating the isolation ratios of mounts. The developed method is beneficial for optimizing mount stiffness to meet mount isolation requirements before a prototype is built.
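One common way to express a mount isolation ratio is the level difference in decibels between the active (powertrain-side) and passive (body-side) accelerations; the paper's exact definition may differ, so the sketch below is only indicative.

```python
import numpy as np

def isolation_db(a_powertrain, a_body):
    """Isolation as the dB level difference between active- and
    passive-side acceleration amplitudes at one mount and direction."""
    return 20 * np.log10(np.abs(a_powertrain) / np.abs(a_body))

print(isolation_db(a_powertrain=2.0, a_body=0.1))  # m/s^2 amplitudes -> ~26 dB
```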
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D.; Phillips, Mark H.
2015-11-15
Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (Td), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (Dmean ≤ 45 Gy), lungs (Dmean ≤ 20 Gy), cord (Dmax ≤ 45 Gy), esophagus (Dmax ≤ 63 Gy), and unspecified tissues (D05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of Td (3-100 days), tumor lag-time (Tk = 0-10 days), and the size of tumors on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the Td and Tk used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating tumors with Td less than 10 days, there was no significant increase in tumor BED, but the treatment course could be shortened without a loss in tumor BED. The improvement in the tumor mean BED was more pronounced with smaller tumors (p-value = 0.08). Conclusions: Spatiotemporal optimization of patient plans has the potential to significantly improve local tumor control (larger BED/EUD) in patients with a favorable geometry, such as smaller tumors with larger distances between the tumor target and nearby OARs. In patients with a less favorable geometry and for fast growing tumors, plans optimized using spatiotemporal optimization and conventional (spatial-only) optimization are equivalent (negligible differences in tumor BED/EUD). However, spatiotemporal optimization yields shorter treatment courses than conventional spatial-only optimization. Personalized, spatiotemporal optimization of treatment schedules can increase patient convenience and help with the efficient allocation of clinical resources. Spatiotemporal optimization can also help identify a subset of patients that might benefit from nonconventional (large dose per fraction) treatments that are ineligible for the current practice of stereotactic body radiation therapy.
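The tumor BED referred to above is conventionally computed from the linear-quadratic model with a repopulation term; a standard form consistent with the quantities named in the abstract (fraction number N, fraction size d, α/β, doubling time Td, lag time Tk, overall treatment time T) is sketched below, though the paper's exact model may add further detail.

```latex
\mathrm{BED} = N d \left(1 + \frac{d}{\alpha/\beta}\right)
             - \frac{\ln 2}{\alpha}\,\frac{\max\left(0,\; T - T_k\right)}{T_d}
```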
Mudge, Elizabeth; Paley, Lori; Schieber, Andreas; Brown, Paula N
2015-10-01
Seeds of milk thistle, Silybum marianum (L.) Gaertn., are used for treatment and prevention of liver disorders and were identified as a high-priority ingredient requiring a validated analytical method. An AOAC International expert panel reviewed existing methods and made recommendations concerning method optimization prior to validation. A series of extraction and separation studies were undertaken on the selected method for determining flavonolignans from milk thistle seeds and finished products to address the review panel recommendations. Once optimized, a single-laboratory validation study was conducted. The method was assessed for repeatability, accuracy, selectivity, LOD, LOQ, analyte stability, and linearity. Flavonolignan content ranged from 1.40 to 52.86% in raw materials and dry finished products and from 36.16 to 1570.7 μg/mL in liquid tinctures. Repeatability for the individual flavonolignans in raw materials and finished products ranged from 1.03 to 9.88% RSDr, with HorRat values between 0.21 and 1.55. Calibration curves for all flavonolignan concentrations had correlation coefficients of >99.8%. The LODs for the flavonolignans ranged from 0.20 to 0.48 μg/mL at 288 nm. Based on the results of this single-laboratory validation, this method is suitable for the quantitation of the six major flavonolignans in milk thistle raw materials and finished products, as well as multicomponent products containing dandelion, schizandra berry, and artichoke extracts. It is recommended that this method be adopted as a First Action Official Method by AOAC International.
Linguistic methodology for the analysis of aviation accidents
NASA Technical Reports Server (NTRS)
Goguen, J. A.; Linde, C.
1983-01-01
A linguistic method for the analysis of small-group discourse was developed, and the use of this method on transcripts of commercial air transport accidents is demonstrated. The method identifies the discourse types that occur and determines their linguistic structure; it identifies significant linguistic variables based upon these structures or other linguistic concepts such as speech act and topic; it tests hypotheses that support the significance and reliability of these variables; and it indicates the implications of the validated hypotheses. These implications fall into three categories: (1) to train crews to use more nearly optimal communication patterns; (2) to use linguistic variables as indices for aspects of crew performance such as attention; and (3) to provide guidelines for the design of aviation procedures and equipment, especially those that involve speech.
Cepeda-Vázquez, Mayela; Blumenthal, David; Camel, Valérie; Rega, Barbara
2017-03-01
Furan, a possibly carcinogenic compound to humans, and furfural, a naturally occurring volatile contributing to aroma, can both be found in thermally treated foods. These process-induced compounds, formed by closely related reaction pathways, play an important role as markers of food safety and quality. A method capable of simultaneously quantifying both molecules is thus highly relevant for developing mitigation strategies while preserving the sensory properties of food. We have developed a unique, reliable, and sensitive headspace trap (HS trap) extraction method coupled to GC-MS for the simultaneous quantification of furan and furfural in a solid processed food (sponge cake). HS trap extraction has been optimized using an optimal design of experiments (O-DOE) approach, considering four instrumental and two sample preparation variables, as well as a blocking factor identified during preliminary assays. Multicriteria and multiple response optimization was performed based on a desirability function, yielding the following conditions: thermostatting temperature, 65°C; thermostatting time, 15 min; number of pressurization cycles, 4; dry purge time, 0.9 min; water/sample amount ratio (dry basis), 16; and total amount (water + sample, dry basis), 10 g. The performance of the optimized method was also assessed: repeatability (RSD: ≤3.3% for furan and ≤2.6% for furfural), intermediate precision (RSD: 4.0% for furan and 4.3% for furfural), linearity (R²: 0.9957 for furan and 0.9996 for furfural), LOD (0.50 ng g⁻¹ dry basis for furan and 10.2 ng g⁻¹ dry basis for furfural), and LOQ (0.99 ng g⁻¹ dry basis for furan and 41.1 ng g⁻¹ dry basis for furfural). A matrix effect was observed, mainly for furan. Finally, the optimized method was applied to other sponge cakes with different matrix characteristics and levels of analytes. Copyright © 2016. Published by Elsevier B.V.
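The desirability-based multicriteria step works by mapping each response onto [0, 1] and maximizing their geometric mean; a minimal Derringer-style sketch follows, with purely illustrative responses and targets.

```python
import numpy as np

def d_larger_is_better(y, low, high, s=1.0):
    """Desirability for a response to be maximized: 0 below low, 1 above high."""
    return np.clip((y - low) / (high - low), 0, 1) ** s

# Hypothetical peak-area responses for one combination of extraction settings
furan_signal = 0.8e5
furfural_signal = 2.3e6

d1 = d_larger_is_better(furan_signal, low=0, high=1e5)
d2 = d_larger_is_better(furfural_signal, low=0, high=3e6)
D = (d1 * d2) ** 0.5      # overall desirability (geometric mean)
print(D)                  # the O-DOE search maximizes D over the design space
```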
Novel inter-crystal scattering event identification method for PET detectors
NASA Astrophysics Data System (ADS)
Lee, Min Sun; Kang, Seung Kwan; Lee, Jae Sung
2018-06-01
Here, we propose a novel method to identify inter-crystal scattering (ICS) events from a PET detector that is applicable even to light-sharing designs. In the proposed method, the detector observation is treated as a linear problem, and ICS events are identified by solving this problem. Two ICS identification methods were suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated in simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10, and 12 × 12 crystal arrays to simulate a one-to-one coupling detector and two light-sharing detectors, respectively. The identification rate (the rate at which the identified ICS events correctly include the true first interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm³ LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. As a result, the simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events into the first interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detectors were 1.95 and 2.25 mm FWHM without ICS recovery, respectively. These values improved to 1.72 and 1.83 mm after ICS recovery, respectively. In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors. We experimentally validated that ICS recovery based on the proposed identification method led to improved resolution.
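A hedged sketch of the linear-problem view follows: the sensor readout y is modeled as A @ x, where column j of A is the mean light pattern of crystal j and x holds per-crystal energy deposits. Solving under a non-negativity constraint (one simple convex formulation; the paper also uses a pseudoinverse) lets two large entries of x flag an ICS event. The Gaussian light-spread model of A is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import nnls

pix = np.stack(np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)),
               axis=-1).reshape(-1, 2)         # 8 x 8 photosensor pixel grid
cry = np.stack(np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10)),
               axis=-1).reshape(-1, 2)         # 10 x 10 crystal grid
d2 = ((pix[:, None, :] - cry[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / (2 * 0.15 ** 2))              # assumed Gaussian light spread
A /= A.sum(axis=0)                             # normalize per-crystal collection

x_true = np.zeros(100)
x_true[[44, 45]] = [350.0, 161.0]              # 511 keV split over two crystals
rng = np.random.default_rng(5)
y = A @ x_true + rng.normal(0, 0.5, 64)        # noisy sensor readout

x_hat, _ = nnls(A, y)                          # non-negative estimate of deposits
print(np.argsort(x_hat)[-2:], np.round(np.sort(x_hat)[-2:], 1))  # top-2 crystals
```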
Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.
2017-01-01
Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
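The virtual-population step amounts to nonparametric bootstrap resampling of the experimental sample followed by refitting and re-optimizing; the sketch below illustrates the pattern with a placeholder one-parameter "model fit" and decision rule, not the paper's tumor-immune model.

```python
import numpy as np

rng = np.random.default_rng(6)
tumor_growth = rng.normal(0.30, 0.08, size=10)   # per-mouse growth rates (toy)

def fit_and_pick_protocol(sample):
    """Placeholder model fit and protocol choice for one virtual population."""
    rate = sample.mean()
    return "high-DC" if rate > 0.28 else "high-OV"   # hypothetical decision rule

# Amplify the sample into 1000 virtual populations by resampling with replacement
picks = [fit_and_pick_protocol(rng.choice(tumor_growth, size=tumor_growth.size))
         for _ in range(1000)]
frac = np.mean([p == "high-DC" for p in picks])
print(f"protocol chosen in {frac:.0%} of virtual populations")  # robustness proxy
```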
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Harper, Ross J.; Perr, Jeannette M.; Almirall, Jose R.
2003-09-01
A comprehensive study and comparison is underway using biological detectors and instrumental methods for the rapid detection of ignitable liquid residues (ILR) and high explosives. Headspace solid phase microextraction (SPME) has been demonstrated to be an effective sampling method helping to identify active odor signature chemicals used by detector dogs to locate forensic specimens as well as a rapid pre-concentration technique prior to instrumental detection. Common ignitable liquids and common military and industrial explosives have been studied including trinitrotoluene, tetryl, RDX, HMX, EGDN, PETN and nitroglycerine. This study focuses on identifying volatile odor signature chemicals present, which can be used to enhance the level and reliability of detection of ILR and explosives by canines and instrumental methods. While most instrumental methods currently in use focus on particles and on parent organic compounds, which are often involatile, characteristic volatile organics are generally also present and can be exploited to enhance detection particularly for well-concealed devices. Specific examples include the volatile odor chemicals 2-ethyl-1-hexanol and cyclohexanone, which are readily available in the headspace of the high explosive composition C-4; whereas, the active chemical cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) is not. The analysis and identification of these headspace 'fingerprint' organics is followed by double-blind dog trials of the individual components using certified teams in an attempt to isolate and understand the target compounds to which dogs are sensitive. Studies to compare commonly used training aids with the actual target explosive have also been undertaken to determine their suitability and effectiveness. The optimization of solid phase microextraction (SPME) combined with ion trap mobility spectrometry (ITMS) and gas chromatography/mass spectrometry/mass spectrometry (GC/MSn) is detailed including interface development and comparisons of limits of detection. These instrumental methods are being optimized in order to detect the same target odor chemicals used by detector dogs to reliably locate explosives and ignitable liquids.
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
Design and operation of a bio-inspired micropump based on blood-sucking mechanism of mosquitoes
NASA Astrophysics Data System (ADS)
Leu, Tzong-Shyng; Kao, Ruei-Hung
2018-05-01
The study is to develop a novel bionic micropump, mimicking blood-suck mechanism of mosquitos with a similar efficiency of 36%. The micropump is produced by using micro-electro-mechanical system (MEMS) technology, PDMS (polydimethylsiloxane) to fabricate the microchannel, and an actuator membrane made by Fe-PDMS. It employs an Nd-FeB permanent magnet and PZT to actuate the Fe-PDMS membrane for generating flow rate. A lumped model theory and the Taguchi method are used for numerical simulation of pulsating flow in the micropump. Also focused is to change the size of mosquito mouth for identifying the best waveform for the transient flow processes. Based on computational results of channel size and the Taguchi method, an optimization actuation waveform is identified. The maximum pumping flow rate is 23.5 μL/min and the efficiency is 86%. The power density of micropump is about 8 times of that produced by mosquito’s suction. In addition to using theoretical design of the channel size, also combine with Taguchi method and asymmetric actuation to find the optimization actuation waveform, the experimental result shows the maximum pumping flowrate is 23.5 μL/min and efficiency is 86%, moreover, the power density of micropump is 8 times higher than mosquito’s.
PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine
Manavalan, Balachandran; Shin, Tae H.; Lee, Gwang
2018-01-01
Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html. PMID:29616000
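The overall recipe (feature encoding, feature selection, SVM, leave-one-out cross-validation) can be illustrated with scikit-learn on synthetic data, as sketched below; PVP-SVM's actual features, selection protocol, and tuned hyperparameters are not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 300))       # 120 sequences, 300 encoded features (toy)
y = rng.integers(0, 2, size=120)      # virion / non-virion labels
X[y == 1, :20] += 0.8                 # make 20 features weakly informative

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=136),   # keep 136 features, as above
                    SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```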
Gerbershagen, H J; Rothaug, J; Kalkman, C J; Meissner, W
2011-10-01
Cut-off points (CPs) of the numeric rating scale (NRS 0-10) are regularly used in postoperative pain treatment. However, there is insufficient evidence to identify the optimal CP between mild and moderate pain. A total of 435 patients undergoing general, trauma, or oral and maxillofacial surgery were studied. To determine the optimal CP for pain treatment, four approaches were used: first, patients estimated their tolerable postoperative pain intensity before operation; secondly, 24 h after surgery, they indicated whether they would have preferred to receive more analgesics; thirdly, satisfaction with pain treatment was analysed; and fourthly, multivariate analysis was used to calculate the optimal CP for pain intensities in relation to pain-related interference with movement, breathing, sleep, and mood. The estimated tolerable postoperative pain before operation was median (range) NRS 4.0 (0-10). Patients who would have liked more analgesics reported significantly higher average pain since surgery [median NRS 5.0 (0-9)] compared with those without this request [NRS 3.0 (0-8)]. Patients satisfied with pain treatment reported an average pain intensity of median NRS 3.0 (0-8) compared with less satisfied patients with NRS 5.0 (2-9). Analysis of average postoperative pain in relation to pain-related interference with mood and activity indicated pain categories of NRS 0-2, mild; 3-4, moderate; and 5-10, severe pain. Three of the four methods identified a treatment threshold of average pain of NRS≥4. This was considered to identify patients with pain of moderate-to-severe intensity. This cut-off was identified as the tolerable pain threshold.
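The fourth approach, choosing CPs that best separate interference outcomes, can be illustrated with a toy calculation. The data are synthetic, and the paper's multivariate analysis is reduced here to a one-way ANOVA F statistic on a single interference score.

```python
# Sketch: choosing NRS cut-points between mild/moderate/severe pain.
# Assumptions: synthetic data; candidate (CP1, CP2) pairs are scored by
# how strongly the resulting three categories separate an interference
# score, using scipy's one-way ANOVA.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
pain = rng.integers(0, 11, size=435)                  # NRS 0-10
interference = 0.8 * pain + rng.normal(0, 1.5, 435)   # synthetic interference

best = None
for cp1 in range(1, 9):
    for cp2 in range(cp1 + 1, 10):
        groups = [interference[pain <= cp1],
                  interference[(pain > cp1) & (pain <= cp2)],
                  interference[pain > cp2]]
        if min(len(g) for g in groups) < 2:
            continue
        F, _ = f_oneway(*groups)
        if best is None or F > best[0]:
            best = (F, cp1, cp2)

F, cp1, cp2 = best
print(f"best cut-points: mild 0-{cp1}, moderate {cp1+1}-{cp2}, "
      f"severe {cp2+1}-10 (F={F:.1f})")
```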
Optimal fault-tolerant control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2017-10-01
For solid oxide fuel cell (SOFC) development, load tracking, heat management, the air excess ratio constraint, high efficiency, low cost, and fault diagnosis are six key issues. However, no literature studies control techniques that combine optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers, and a control loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air-compressor-fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller, and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperatures, and air excess ratio at the desired values while simultaneously achieving the maximum efficiency and the minimum unit cost, both under normal SOFC operation and under an air compressor fault.
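A minimal sketch of the PID building block used in such a control loop is shown below. The first-order plant and the gains are invented for illustration; the paper's SOFC plant model and tuning are not reproduced.

```python
# Sketch: a discrete PID tracking controller of the kind used in the SOFC
# control loop. Assumptions: generic first-order plant, hand-tuned gains.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# First-order plant: dy/dt = (-y + u) / tau
dt, tau, y = 0.1, 2.0, 0.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
for _ in range(300):
    u = pid.step(setpoint=1.0, measurement=y)
    y += dt * (-y + u) / tau
print(f"output after 30 s: {y:.4f}")   # settles near the setpoint 1.0
```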
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.
In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. Considering a microemulsion electrokinetic chromatography method (MEEKC), the performance of the electrophoretic run depends on the proportions of mixture components (MCs) of the microemulsion and on the values of process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously changed according to a MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the mixture MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.
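The core of an MPV model fit can be sketched in a few lines. The example below uses a toy 3-component mixture, one process variable, and a small Scheffe-style model fitted by least squares, far smaller than the paper's 46-term model; every name and value is illustrative.

```python
# Sketch: fitting a small mixture-process variable (MPV) model by least
# squares. Assumptions: 3 mixture components (rows of x sum to 1), one
# coded process variable z, Scheffe linear blending terms plus x*z
# interactions and a quadratic z effect; synthetic response.
import numpy as np

rng = np.random.default_rng(3)
n = 62
x = rng.dirichlet([2, 2, 2], size=n)   # mixture proportions, rows sum to 1
z = rng.uniform(-1, 1, size=n)         # coded process variable

# Design matrix: x1, x2, x3 (no intercept), x*z interactions, z^2
X = np.column_stack([x, x * z[:, None], z**2])
beta_true = np.array([5.0, 3.0, 1.0, 0.5, -0.4, 0.2, -1.0])
y = X @ beta_true + rng.normal(0, 0.1, n)   # synthetic peak-efficiency response

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```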
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2018-01-01
We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we have established an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid virtualization method combining virtual machines and containers, which reduces the infrastructure load and the response time in the virtual data center network. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.
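A minimal neural-network flow classifier along these lines might look as follows; the six flow attributes and the class rule are synthetic stand-ins for the statistical flow properties described in the abstract.

```python
# Sketch: small neural-network classifier for network-flow identification.
# Assumptions: synthetic per-flow statistics (e.g., packet size,
# inter-arrival time) and labels; scikit-learn's MLPClassifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 6))                      # 6 flow attributes per sample
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic flow class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```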
Optimal laser wavelength for efficient laser power converter operation over temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Höhn, O., E-mail: oliver.hoehn@ise.fraunhofer.de; Walker, A. W.; Bett, A. W.
2016-06-13
A temperature dependent modeling study is conducted on a GaAs laser power converter to identify the optimal incident laser wavelength for optical power transmission. Furthermore, the respective temperature dependent maximal conversion efficiencies in the radiative limit as well as in a practically achievable limit are presented. The model is based on the transfer matrix method coupled to a two-diode model, and is calibrated to experimental data of a GaAs photovoltaic device over laser irradiance and temperature. Since the laser wavelength does not strongly influence the open circuit voltage of the laser power converter, the optimal laser wavelength is determined to be in the range where the external quantum efficiency is maximal, but weighted by the photon flux of the laser.
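For reference, a common form of the two-diode current-voltage characteristic that such models couple to the optics is shown below; the notation is generic and not taken from the paper.

```latex
I(V) = I_{\mathrm{ph}}
     - I_{01}\left[\exp\!\left(\frac{V + I R_s}{n_1 V_T}\right) - 1\right]
     - I_{02}\left[\exp\!\left(\frac{V + I R_s}{n_2 V_T}\right) - 1\right]
     - \frac{V + I R_s}{R_{\mathrm{sh}}}
```

Here $I_{\mathrm{ph}}$ is the photocurrent, $I_{01}$ and $I_{02}$ are saturation currents with ideality factors $n_1 \approx 1$ and $n_2 \approx 2$, $R_s$ and $R_{\mathrm{sh}}$ are the series and shunt resistances, and the thermal voltage $V_T = kT/q$ (together with the saturation currents) carries the temperature dependence.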
Dynamic optimization and its relation to classical and quantum constrained systems
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo
2017-08-01
We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
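For orientation, the Hamilton-Jacobi-Bellman equation referenced here, written for a value function V(x,t) under dynamics ẋ = f(x,u) and running cost L(x,u) (generic notation, not necessarily the paper's), reads:

```latex
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u}\left\{ L(x,u) + \frac{\partial V}{\partial x}(x,t)\, f(x,u) \right\}
```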
Gao, Zhenzhen; Chen, Jin; Qiu, Shulei; Li, Youying; Wang, Deyun; Liu, Cui; Li, Xiuping; Hou, Ranran; Yue, Chanjuan; Liu, Jie; Li, Hongquan; Hu, Yuanliang
2016-01-20
Garlic polysaccharide (GPS) was modified by selenylation using the nitric acid-sodium selenite (NA-SS), glacial acetic acid-selenous acid (GA-SA), glacial acetic acid-sodium selenite (GA-SS), and selenium oxychloride (SOC) methods, each under the nine modification conditions of an L9(3^4) orthogonal design, to obtain nine selenizing GPSs (sGPSs) per method. Their structures were identified, yields and selenium contents were determined, selenium yields were calculated, and the immune-enhancing activities of the four sGPSs with the highest selenium yields were compared, taking unmodified GPS as the control. The results showed that among the four methods the selenylation efficiency of the NA-SS method was the highest, and the activity of sGPS5 was the strongest and significantly stronger than that of unmodified GPS. This indicates that selenylation modification can significantly enhance the immune-enhancing activity of GPS; the NA-SS method is the best method, and the optimal conditions are a 0.8:1 weight ratio of sodium selenite to GPS, a reaction temperature of 70 °C, and a reaction time of 10 h. Copyright © 2015 Elsevier Ltd. All rights reserved.
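The mechanics of picking optimal levels from an L9(3^4) experiment can be sketched as follows; the yields are invented, three of the four columns are used (mirroring the three reported factors), and the larger-is-better Taguchi signal-to-noise ratio is applied.

```python
# Sketch: selecting optimal factor levels from an L9(3^4) orthogonal-array
# experiment. Assumptions: synthetic single-replicate responses; factors
# named after the abstract's optimum (weight ratio, temperature, time).
import numpy as np

# Canonical L9(3^4) array (levels 1-3); only the first three columns used.
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])
y = np.array([52, 61, 58, 70, 66, 63, 75, 68, 72])   # synthetic yields

def sn_larger_is_better(vals):
    return -10 * np.log10(np.mean(1.0 / np.asarray(vals, float) ** 2))

for f, name in enumerate(["weight ratio", "temperature", "time"]):
    sn = [sn_larger_is_better(y[L9[:, f] == lvl]) for lvl in (1, 2, 3)]
    print(f"{name}: best level = {int(np.argmax(sn)) + 1}, S/N = {np.round(sn, 2)}")
```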
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
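The interval-size versus contamination trade-off can be illustrated with a toy calculation; the Gaussian travel-time models and tolerance below are invented, not the paper's statistical models or its Cf-252 data.

```python
# Sketch: choosing a time-of-flight acceptance interval that trades
# capture of the target particles against contamination by the nuisance
# species. Assumptions: hypothetical Gaussian travel-time distributions.
import numpy as np
from scipy.stats import norm

gamma = norm(loc=3.0, scale=0.4)     # gamma travel time (ns), hypothetical
neutron = norm(loc=30.0, scale=6.0)  # neutron travel time (ns), hypothetical
max_contamination = 1e-4             # tolerated gamma mass inside interval

best = None
for lo in np.arange(10, 30, 0.5):
    for hi in np.arange(lo + 1, 60, 0.5):
        captured = neutron.cdf(hi) - neutron.cdf(lo)   # neutrons kept
        contam = gamma.cdf(hi) - gamma.cdf(lo)         # gammas leaking in
        if contam <= max_contamination and (best is None or captured > best[0]):
            best = (captured, lo, hi)

captured, lo, hi = best
print(f"optimal interval: [{lo}, {hi}] ns, neutron capture {captured:.3f}")
```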
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and the least data by maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
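As orientation, the single-observation Box-Hill discrimination value for Gaussian model predictions can be computed as below. The notation (prior model probabilities p_i, predictive means and variances, a common error variance) is generic, and the paper's multi-observation, BMA-based extension is not reproduced.

```python
# Sketch: single-observation Box-Hill discrimination function for Gaussian
# model predictions. Assumptions: generic symbols; mu_i and s2_model[i]
# are model i's predictive mean and variance, s2_err the error variance.
import numpy as np

def box_hill(p, mu, s2_model, s2_err):
    p, mu, s2 = map(np.asarray, (p, mu, s2_model))
    v = s2 + s2_err                      # total predictive variances
    d = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            if i == j:
                continue
            d += 0.5 * p[i] * p[j] * (
                (v[i] - v[j]) ** 2 / (v[i] * v[j])
                + (mu[i] - mu[j]) ** 2 * (1 / v[i] + 1 / v[j]))
    return d

# Three candidate models predicting a head observation at one location:
print(box_hill(p=[0.5, 0.3, 0.2], mu=[10.0, 12.5, 9.0],
               s2_model=[1.0, 2.0, 0.5], s2_err=0.25))
```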
ERIC Educational Resources Information Center
Thomas, Shailendra Nelle
2010-01-01
Purpose, scope, and method of study: Although computer technology has been a part of the educational community for many years, it is still not used at its optimal capacity (Gosmire & Grady, 2007b; Trotter, 2007). While teachers were identified early as playing important roles in the success of technology implementation, principals were often…
Front-End Analysis Methods for the Noncommissioned Officer Education System
2013-02-01
The Noncommissioned Officer Education System plays a crucial role in Soldier development by providing both institutional training and structured-self...created challenges with maintaining currency of institutional training. Questions have arisen regarding the optimal placement of tasks as their...relevance changes, especially considering the resources required to update institutional training. An analysis was conducted to identify the
McElvania Tekippe, Erin; Shuey, Sunni; Winkler, David W; Butler, Meghan A; Burnham, Carey-Ann D
2013-05-01
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) can be used as a method for the rapid identification of microorganisms. This study evaluated the Bruker Biotyper (MALDI-TOF MS) system for the identification of clinically relevant Gram-positive organisms. We tested 239 aerobic Gram-positive organisms isolated from clinical specimens. We evaluated 4 direct-smear methods, including "heavy" (H) and "light" (L) smears, with and without a 1-μl direct formic acid (FA) overlay. The quality measure assigned to a MALDI-TOF MS identification is a numerical value or "score." We found that a heavy smear with a formic acid overlay (H+FA) produced optimal MALDI-TOF MS identification scores and the highest percentage of correctly identified organisms. Using a score of ≥2.0, we identified 183 of the 239 isolates (76.6%) to the genus level, and of the 181 isolates resolved to the species level, 141 isolates (77.9%) were correctly identified. To maximize the number of correct identifications while minimizing misidentifications, the data were analyzed using a score of ≥1.7 for genus- and species-level identification. Using this score, 220 of the 239 isolates (92.1%) were identified to the genus level, and of the 181 isolates resolved to the species level, 167 isolates (92.2%) could be assigned an accurate species identification. We also evaluated a subset of isolates for preanalytic factors that might influence MALDI-TOF MS identification. Frequent subcultures increased the number of unidentified isolates. Incubation temperatures and subcultures of the media did not alter the rate of identification. These data define the ideal bacterial preparation, identification score, and medium conditions for optimal identification of Gram-positive bacteria by use of MALDI-TOF MS.
Kovács, A; Berkó, Sz; Csányi, E; Csóka, I
2017-03-01
The aim of our present work was to evaluate the applicability of the Quality by Design (QbD) methodology in the development and optimization of nanostructured lipid carriers containing salicylic acid (NLC SA). Within the QbD methodology, special emphasis is laid on the adaptation of the initial risk assessment step in order to properly identify the critical material attributes and critical process parameters in formulation development. NLC SA products were formulated by the ultrasonication method using Compritol 888 ATO as solid lipid, Miglyol 812 as liquid lipid, and Cremophor RH 60® as surfactant. LeanQbD Software and StatSoft Inc. Statistica for Windows 11 were employed to identify the risks. Three highly critical quality attributes (CQAs) for NLC SA were identified, namely particle size, particle size distribution, and aggregation. Five attributes of medium influence were identified, including dissolution rate, dissolution efficiency, pH, lipid solubility of the active pharmaceutical ingredient (API), and entrapment efficiency. Three critical material attributes (CMAs) and critical process parameters (CPPs) were identified: surfactant concentration, solid lipid/liquid lipid ratio, and ultrasonication time. The CMAs and CPPs are considered independent variables and the CQAs are defined as dependent variables. A 2^3 factorial design was used to evaluate the roles of the independent and dependent variables. Based on our experiments, an optimal formulation can be obtained when the surfactant concentration is set to 5%, the solid lipid/liquid lipid ratio is 7:3, and the ultrasonication time is 20 min. The optimal NLC SA showed a narrow size distribution (0.857±0.014) with a mean particle size of 114±2.64 nm. The NLC SA product showed a significantly higher in vitro drug release compared with the microparticle reference preparation containing salicylic acid (MP SA). Copyright © 2016 Elsevier B.V. All rights reserved.
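The analysis behind a 2^3 factorial design is compact enough to sketch; the responses below are invented, and the coded factors are named after the three CMAs/CPPs in the abstract purely for illustration.

```python
# Sketch: main-effect estimation from a 2^3 full factorial design.
# Assumptions: synthetic particle-size responses; factors coded -1/+1.
import itertools
import numpy as np

design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs
y = np.array([182, 155, 176, 148, 160, 121, 150, 114], float)   # sizes (nm)

for j, name in enumerate(["surfactant conc.", "lipid ratio", "sonication time"]):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.1f} nm")
```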
Optimizing Multi-Station Template Matching to Identify and Characterize Induced Seismicity in Ohio
NASA Astrophysics Data System (ADS)
Brudzinski, M. R.; Skoumal, R.; Currie, B. S.
2014-12-01
As oil and gas well completions utilizing multi-stage hydraulic fracturing have become more commonplace, the potential for seismicity induced by the deep disposal of frac-related flowback waters and the hydraulic fracturing process itself has become increasingly important. While it is rare for these processes to induce felt seismicity, the recent increase in the number of deep injection wells and volumes injected is suspected to have contributed to a substantial increase in events ≥ M 3 in the continental U.S. over the past decade. Earthquake template matching using multi-station waveform cross-correlation is an adept tool for investigating potentially induced sequences due to its proficiency at identifying similar/repeating seismic events. We have sought to refine this approach by investigating a variety of seismic sequences and determining the optimal parameters (station combinations, template lengths and offsets, filter frequencies, data access method, etc.) for identifying induced seismicity. When applied to a sequence near a wastewater injection well in Youngstown, Ohio, our optimized template matching routine yielded 566 events, while other template matching studies found ~100-200 events. We also identified 77 events on 4-12 March 2014 that are temporally and spatially correlated with active hydraulic fracturing in Poland Township, Ohio. We find similar improvement in characterizing sequences in Washington and Harrison Counties, which appear to be related to wastewater injection and hydraulic fracturing, respectively. In the Youngstown and Poland Township cases, focal mechanisms and double-difference relocation using the cross-correlation matrix find left-lateral faults striking roughly east-west near the top of the basement. We have also used template matching to determine that isolated earthquakes near several other wastewater injection wells are unlikely to be induced, based on a lack of similar/repeating sequences. Optimized template matching utilizes high-quality reliable stations within pre-existing seismic networks and is therefore a cost-efficient monitoring strategy for identifying and characterizing potentially induced seismic sequences.
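The core of waveform template matching, normalized cross-correlation of a template against continuous data, can be sketched in plain numpy. Everything here is synthetic and single-channel; real workflows like the one above stack correlation functions across stations and channels before thresholding.

```python
# Sketch: single-channel template matching via normalized
# cross-correlation. Assumptions: synthetic template and noise; the
# detection threshold 0.6 is arbitrary (practice often uses a multiple
# of the correlation trace's median absolute deviation).
import numpy as np

def normalized_xcorr(data, template):
    """Sliding normalized cross-correlation of template against data."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    out = np.empty(len(data) - n + 1)
    for k in range(len(out)):
        w = data[k:k + n]
        s = w.std()
        out[k] = 0.0 if s == 0 else np.dot((w - w.mean()) / s, t) / n
    return out

rng = np.random.default_rng(5)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
data = rng.normal(0, 0.3, 5000)
for onset in (800, 2500, 4100):            # bury three repeats in noise
    data[onset:onset + 100] += template

cc = normalized_xcorr(data, template)
print("detection indices:", np.where(cc > 0.6)[0])
```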
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
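A minimal implementation of the optimized gradient method (OGM) recursion, applied here to a least-squares problem, looks as follows. This sketch uses the standard momentum parameters and omits the slightly different final-iteration parameter used in the original analysis.

```python
# Sketch: the optimized gradient method (OGM) for smooth convex
# minimization of f(x) = 0.5 * ||Ax - b||^2. Assumptions: standard OGM
# recursion with theta-momentum; modified last-iterate theta omitted.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 20))
b = rng.normal(size=50)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient

grad = lambda x: A.T @ (A @ x - b)
x = y = np.zeros(20)
theta = 1.0
for _ in range(200):
    y_next = x - grad(x) / L         # usual gradient step
    theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
    # OGM momentum: Nesterov-style term plus an extra (theta/theta_next) term
    x = (y_next
         + (theta - 1) / theta_next * (y_next - y)
         + theta / theta_next * (y_next - x))
    y, theta = y_next, theta_next

print("distance to least-squares optimum:",
      np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```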
[Building Mass Spectrometry Spectral Libraries of Human Cancer Cell Lines].
Faktor, J; Bouchal, P
Cancer research often focuses on protein quantification in model cancer cell lines and cancer tissues. SWATH (sequential windowed acquisition of all theoretical fragment ion spectra), a state-of-the-art method, enables the quantification of all proteins included in a spectral library. A spectral library contains the fragmentation patterns of each detectable protein in a sample, and thorough spectral library preparation improves the quantitation of low-abundance proteins, which usually play an important role in cancer. Our research focused on optimizing spectral library preparation to maximize the number of proteins identified in the MCF-7 breast cancer cell line. First, we optimized sample preparation prior to entering the mass spectrometer, examining the effects of lysis buffer composition, the peptide dissolution protocol, and the material of the sample vial on the number of proteins identified in the spectral library. Next, we optimized the mass spectrometry (MS) method for spectral library data acquisition. Our thoroughly optimized protocol for spectral library building enabled the identification of 1,653 proteins (FDR < 1%) in 1 µg of MCF-7 lysate. This work contributes to the enhancement of protein coverage in SWATH digital biobanks, which enable quantification of an arbitrary protein from physically unavailable samples. In the future, high-quality spectral libraries could play a key role in preparing digital fingerprints of patient proteomes.
Diaz, Maureen H.; Waller, Jessica L.; Napoliello, Rebecca A.; Islam, Md. Shahidul; Wolff, Bernard J.; Burken, Daniel J.; Holden, Rhiannon L.; Srinivasan, Velusamy; Arvay, Melissa; McGee, Lesley; Oberste, M. Steven; Whitney, Cynthia G.; Schrag, Stephanie J.; Winchell, Jonas M.; Saha, Samir K.
2013-01-01
Identification of etiology remains a significant challenge in the diagnosis of infectious diseases, particularly in resource-poor settings. Viral, bacterial, and fungal pathogens, as well as parasites, play a role for many syndromes, and optimizing a single diagnostic system to detect a range of pathogens is challenging. The TaqMan Array Card (TAC) is a multiple-pathogen detection method that has previously been identified as a valuable technique for determining etiology of infections and holds promise for expanded use in clinical microbiology laboratories and surveillance studies. We selected TAC for use in the Aetiology of Neonatal Infection in South Asia (ANISA) study for identifying etiologies of severe disease in neonates in Bangladesh, India, and Pakistan. Here we report optimization of TAC to improve pathogen detection and overcome technical challenges associated with use of this technology in a large-scale surveillance study. Specifically, we increased the number of assay replicates, implemented a more robust RT-qPCR enzyme formulation, and adopted a more efficient method for extraction of total nucleic acid from blood specimens. We also report the development and analytical validation of ten new assays for use in the ANISA study. Based on these data, we revised the study-specific TACs for detection of 22 pathogens in NP/OP swabs and 12 pathogens in blood specimens as well as two control reactions (internal positive control and human nucleic acid control) for each specimen type. The cumulative improvements realized through these optimization studies will benefit ANISA and perhaps other studies utilizing multiple-pathogen detection approaches. These lessons may also contribute to the expansion of TAC technology to the clinical setting. PMID:23805203
Li, Xue; Ahmad, Imad A Haidar; Tam, James; Wang, Yan; Dao, Gina; Blasko, Andrei
2018-02-05
A Total Organic Carbon (TOC) based analytical method to quantitate trace residues of clean-in-place (CIP) detergents CIP100® and CIP200® on the surfaces of pharmaceutical manufacturing equipment was developed and validated. Five factors affecting the development and validation of the method were identified: diluent composition, diluent volume, extraction method, location for TOC sample preparation, and oxidant flow rate. Key experimental parameters were optimized to minimize contamination and to improve the sensitivity, recovery, and reliability of the method. The optimized concentration of the phosphoric acid in the swabbing solution was 0.05 M, and the optimal volume of the sample solution was 30 mL. The swab extraction method was 1 min of sonication. The use of a clean room, as compared to an isolated lab environment, was not required for method validation. The method was demonstrated to be linear with a correlation coefficient (R) of 0.9999. The average recoveries from stainless steel surfaces at multiple spike levels were >90%. The repeatability and intermediate precision results were ≤5% across the 2.2-6.6 ppm range (50-150% of the target maximum carryover, MACO, limit). The method was also shown to be sensitive, with a detection limit (DL) of 38 ppb and a quantitation limit (QL) of 114 ppb. The method validation demonstrated that the developed method is suitable for its intended use. The methodology developed in this study is generally applicable to the cleaning verification of any organic detergents used for the cleaning of pharmaceutical manufacturing equipment made of electropolished stainless steel material. Copyright © 2017 Elsevier B.V. All rights reserved.
Analytical sizing methods for behind-the-meter battery storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael; Yang, Tao
In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including the energy charge and the demand charge. The potential value of a BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or to guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with those obtained using mathematical programming based methods for validation.
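A much-simplified stand-in for such sizing analysis is to ask how far a given battery can shave a load profile's peak, which links battery power and energy ratings directly to demand-charge savings. The sketch below (synthetic hourly load, perfect foresight, no round-trip losses) bisects on the achievable peak threshold.

```python
# Sketch: peak shaving achievable by a battery of given power/energy,
# found by bisection on the peak threshold. Assumptions: synthetic load,
# ideal battery (no losses), perfect foresight.
import numpy as np

def can_shave_to(load, threshold, power_kw, energy_kwh, dt=1.0):
    """Simulate: discharge above threshold, recharge below, track energy."""
    soc = energy_kwh                      # start full
    for p in load:
        if p > threshold:
            need = (p - threshold) * dt
            if need > power_kw * dt or need > soc:
                return False              # battery too small for this peak
            soc -= need
        else:
            headroom = (threshold - p) * dt
            soc = min(energy_kwh, soc + min(headroom, power_kw * dt))
    return True

rng = np.random.default_rng(7)
hours = np.arange(24 * 30)
load = 400 + 150 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 20, len(hours))

lo, hi = load.mean(), load.max()
for _ in range(40):                       # bisection on the achievable peak
    mid = 0.5 * (lo + hi)
    if can_shave_to(load, mid, power_kw=100, energy_kwh=400):
        hi = mid
    else:
        lo = mid
print(f"original peak: {load.max():.0f} kW, shaved peak: {hi:.0f} kW")
```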
NASA Astrophysics Data System (ADS)
Marhadi, Kun Saptohartyadi
Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between the alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage-tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute-force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant; the alternate load paths must have a required minimum load capability. Robustness analysis of damage-tolerant optimum designs indicates that designs are tailored to the specified damage: a design optimized under one damage specification can be sensitive to other damages not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (the U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.
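The PCA step for summarizing load distributions across damage scenarios can be sketched as follows; the member-force matrix is synthetic, with one row per damage scenario, and numpy's SVD performs the PCA.

```python
# Sketch: summarizing load distributions (alternate load paths) across
# damage scenarios with principal component analysis. Assumptions:
# synthetic member forces; real analyses would take these from
# progressive-failure FEM runs.
import numpy as np

rng = np.random.default_rng(8)
n_scenarios, n_members = 40, 15
forces = rng.normal(size=(n_scenarios, n_members))       # synthetic member forces
forces[:, :3] += 3 * rng.normal(size=(n_scenarios, 1))   # a dominant shared path

X = forces - forces.mean(axis=0)                         # center the data
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 components:", np.round(explained[:3], 3))
print("dominant load-path direction (PC1 loadings):", np.round(Vt[0], 2))
```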