Sample records for optimization procedure based

  1. Computational study of engine external aerodynamics as a part of multidisciplinary optimization procedure

    NASA Astrophysics Data System (ADS)

    Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr

    2016-10-01

    The paper is devoted to the development of a methodology for optimizing the external aerodynamics of an engine. The optimization procedure is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations and uses a surrogate-based optimization method. The optimal shape design of a turbofan nacelle is considered as a test problem. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of the third-generation multidisciplinary optimization developed in the AGILE project.
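
    A minimal sketch of the surrogate-based optimization loop the abstract describes: an expensive flow solve is approximated by a cheap surrogate fitted to sampled designs, the surrogate is searched, and the true model is evaluated only at the surrogate optimum. The RANS evaluation is replaced here by an analytic stand-in, and the shape parameters, bounds, and objective are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)
bounds = np.array([[0.0, 1.0]] * 3)          # 3 hypothetical nacelle shape parameters

def expensive_objective(x):
    """Stand-in for a RANS solve returning e.g. a nacelle drag measure."""
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sin(10 * x[0])

# Initial design of experiments
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 3))
y = np.array([expensive_objective(x) for x in X])

for it in range(15):
    # Fit a surrogate to all data so far; small smoothing guards against
    # singular systems when added points nearly coincide
    surrogate = RBFInterpolator(X, y, smoothing=1e-9)
    # Minimize the cheap surrogate, starting from the best point found so far
    x0 = X[np.argmin(y)]
    res = minimize(lambda x: surrogate(x[None])[0], x0,
                   bounds=bounds, method="L-BFGS-B")
    # Evaluate the expensive model at the surrogate optimum and update the data
    X = np.vstack([X, res.x])
    y = np.append(y, expensive_objective(res.x))

print("best design:", X[np.argmin(y)], "objective:", y.min())
```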

  2. Optimal False Discovery Rate Control for Dependent Data

    PubMed Central

    Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe

    2013-01-01

    This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate, is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma and it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
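
    For reference, a sketch of the Benjamini-Hochberg step-up procedure, a standard example of the "p-value based" FDR-controlling methods the marginal procedure is compared against. The p-values below are illustrative.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    # Largest k with p_(k) <= alpha*k/m; reject the k smallest p-values
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, alpha=0.05))
```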

  3. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits is constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  4. 47 CFR 1.2202 - Competitive bidding design options.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...

  5. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effects of changes in the nozzle wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing high-Mach-number hypersonic nozzles which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and for Mach 6 and Mach 15 contoured nozzles.
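
    A minimal sketch of the nonlinear least-squares structure of this design problem: contour parameters are adjusted so the predicted test-section flow matches the desired one. The PNS solve is replaced by a cheap stand-in, and the contour parameterization and target profile are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

M_target = np.full(20, 10.0)                  # desired uniform Mach 10 exit profile

def predicted_exit_mach(params):
    """Stand-in for a parabolized Navier-Stokes solve of the nozzle flow."""
    a, b, c = params                          # hypothetical contour coefficients
    y = np.linspace(0.0, 1.0, 20)             # normalized test-section coordinate
    return 10.0 + a * (y - 0.5) + b * np.cos(np.pi * y) + 0.1 * c

def residuals(params):
    # Deviation from the desired test-section flow-field conditions
    return predicted_exit_mach(params) - M_target

result = least_squares(residuals, x0=[0.1, 0.1, 0.1])
print("optimal contour parameters:", result.x)
```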

  6. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  7. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimization Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... ...coverage based on past coverage. Approach: A literature survey was conducted to identify artificial neural network analysis techniques applicable for

  8. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable-fidelity analysis and reduces the cost of computation by using less-expensive, lower-fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design spaces and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.
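
    A sketch of one step of the response-surface strategy: fit a small neural network to sampled objective values over a partition of the design space, then search that surface for a new candidate design. The objective is a hypothetical stand-in for an expensive aerodynamic simulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def cfd_objective(x):
    """Stand-in for an expensive aerodynamic simulation."""
    return (x[0] - 0.6) ** 2 + 2.0 * (x[1] - 0.4) ** 2

X = rng.uniform(0, 1, size=(30, 2))          # sampled design points
y = np.array([cfd_objective(x) for x in X])

# Neural-net response surface over this partition of the design space
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

# Search the cheap response surface for the next candidate design
res = minimize(lambda x: net.predict(x[None])[0], x0=[0.5, 0.5],
               bounds=[(0, 1), (0, 1)])
print("candidate design from response surface:", res.x)
```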

  9. Optimization for minimum sensitivity to uncertain parameters

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw

    1994-01-01

    A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
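
    A toy sketch of the nested structure described above: the inner loop finds the optimum design for a fixed problem parameter, and the outer loop adjusts that parameter to minimize the finite-difference sensitivity of the optimum. The weight model is an illustrative stand-in for the bimetallic-beam problem.

```python
import numpy as np
from scipy.optimize import minimize_scalar, minimize

def weight(x, p):
    """Toy structural weight as a function of design x and parameter p."""
    return (x - p) ** 2 + 0.5 * p * x + 1.0

def optimum_weight(p):
    # Inner optimization: best design for the given parameter value
    inner = minimize_scalar(lambda x: weight(x, p), bounds=(0, 10), method="bounded")
    return inner.fun

def optimum_sensitivity(p, h=1e-4):
    # Finite-difference derivative of the optimum with respect to p
    return (optimum_weight(p + h) - optimum_weight(p - h)) / (2 * h)

# Outer optimization: drive the sensitivity of the optimum toward zero
outer = minimize(lambda p: optimum_sensitivity(p[0]) ** 2, x0=[1.0])
print("least-sensitive parameter value:", outer.x[0])
```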

  10. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  11. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits is constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  12. Development and application of an optimization procedure for flutter suppression using the aerodynamic energy concept

    NASA Technical Reports Server (NTRS)

    Nissim, E.; Abel, I.

    1978-01-01

    An optimization procedure is developed based on the responses of a system to continuous gust inputs. The procedure uses control law transfer functions which have been partially determined by using the relaxed aerodynamic energy approach. The optimization procedure yields a flutter suppression system which minimizes control surface activity in a gust environment. The procedure is applied to wing flutter of a drone aircraft to demonstrate a 44 percent increase in the basic wing flutter dynamic pressure. It is shown that a trailing edge control system suppresses the flutter instability over a wide range of subsonic Mach numbers and flight altitudes. Results of this study confirm the effectiveness of the relaxed energy approach.

  13. Optimizing Travel Time to Outpatient Interventional Radiology Procedures in a Multi-Site Hospital System Using a Google Maps Application.

    PubMed

    Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P

    2018-02-20

    The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03), and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
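
    A hedged sketch of the core idea: rank candidate sites by traffic-aware travel time using the Google Maps Distance Matrix web service. The endpoint and response fields follow the public API documentation, but this is not the authors' implementation; the site addresses and API key are placeholders.

```python
import requests

API_KEY = "YOUR_KEY"                          # placeholder
SITES = ["Site A address", "Site B address"]  # hypothetical IR locations

def best_site(patient_address):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={
            "origins": patient_address,
            "destinations": "|".join(SITES),
            "departure_time": "now",          # enables traffic-aware durations
            "key": API_KEY,
        },
    ).json()
    elements = resp["rows"][0]["elements"]
    # Prefer travel time in traffic (ETT) over plain distance (ETD)
    times = [e["duration_in_traffic"]["value"] for e in elements]
    return SITES[times.index(min(times))]

print(best_site("123 Patient St, New York, NY"))
```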

  14. A seismic optimization procedure for reinforced concrete framed buildings based on eigenfrequency optimization

    NASA Astrophysics Data System (ADS)

    Arroyo, Orlando; Gutiérrez, Sergio

    2017-07-01

    Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement and efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building demonstrates that the method provides benefits to a wide range of RCF structures and supports the applicability of the proposed method.

  15. Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Moin; Korte, John J.

    1996-01-01

    An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.

  16. Improvement of the insertion axis for cochlear implantation with a robot-based system.

    PubMed

    Torres, Renato; Kazmitcheff, Guillaume; De Seta, Daniele; Ferrary, Evelyne; Sterkers, Olivier; Nguyen, Yann

    2017-02-01

    It has previously been reported that alignment of the insertion axis along the basal turn of the cochlea depends on the surgeon's experience. In this experimental study, we assessed technological assistance, such as navigation or a robot-based system, to improve the insertion axis during cochlear implantation. A preoperative cone beam CT and a mastoidectomy with a posterior tympanotomy were performed on four temporal bones. The optimal insertion axis was defined as the closest axis to the scala tympani centerline avoiding the facial nerve. A neuronavigation system, a robot assistance prototype, and software allowing a semi-automated alignment of the robot were used to align an insertion tool with an optimal insertion axis. Four procedures were performed and repeated three times in each temporal bone: manual, manual navigation-assisted, robot-based navigation-assisted, and robot-based semi-automated. The angle between the optimal axis and the insertion tool axis was measured for the four procedures. The error was 8.3° ± 2.82° for the manual procedure (n = 24), 8.6° ± 2.83° for the manual navigation-assisted procedure (n = 24), 5.4° ± 3.91° for the robot-based navigation-assisted procedure (n = 24), and 3.4° ± 1.56° for the robot-based semi-automated procedure (n = 12). Higher accuracy was observed with the semi-automated robot-based technique than with the manual and manual navigation-assisted techniques (p < 0.01). Combining a navigation system with manual insertion does not improve alignment accuracy, owing to the lack of a user-friendly interface. In contrast, a semi-automated robot-based system reduces both the error and the variability of the alignment with respect to a defined optimal axis.
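
    A simple sketch of the accuracy metric used above: the angle between the optimal insertion axis and the actual tool axis, both treated as 3-D direction vectors. The example vectors are made up.

```python
import numpy as np

def insertion_angle_deg(axis_optimal, axis_tool):
    """Angle (degrees) between two 3-D axis direction vectors."""
    a = np.asarray(axis_optimal, dtype=float)
    b = np.asarray(axis_tool, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(insertion_angle_deg([0, 0, 1], [0.05, 0.03, 0.998]))  # a few degrees
```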

  17. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house-developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, sonic boom, and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin-wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.

  18. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a problem with constrained multiple objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
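
    A sketch of the Kreisselmeier-Steinhauser aggregation described above: several weighted objectives are folded into one smooth envelope function, which is then minimized with BFGS. The two objectives are illustrative stand-ins for the aerodynamic-performance and sonic-boom measures.

```python
import numpy as np
from scipy.optimize import minimize

RHO = 50.0                                    # draw-down factor of the K-S envelope

def objectives(x):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2        # e.g. a drag-like measure
    f2 = x[0] ** 2 + (x[1] - 1.0) ** 2        # e.g. a boom-loudness-like measure
    return np.array([f1, f2])

def ks(x, weights=(1.0, 1.0)):
    # KS(g) = (1/rho) * ln(sum_i exp(rho * g_i)) ~ max_i g_i
    g = np.asarray(weights) * objectives(x)
    gmax = g.max()                            # shift by max for numerical stability
    return gmax + np.log(np.sum(np.exp(RHO * (g - gmax)))) / RHO

res = minimize(ks, x0=[0.0, 0.0], method="BFGS")
print("compromise design:", res.x)
```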

  19. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error is reduced from 0.9844 inches for the starting configuration to 0.00367 inch by the end of the third optimization run.
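
    A minimal sketch of the first step described above: a linear least-squares fit of basis-function coefficients so the deformed jig shape matches a target shape. The shapes and basis functions below are arbitrary stand-ins for the mode shapes and rigid-rotation shapes named in the abstract.

```python
import numpy as np

npts = 200
s = np.linspace(0.0, 1.0, npts)               # surface coordinate
target = 0.02 * np.sin(2 * np.pi * s)         # hypothetical target trim shape
baseline = np.zeros(npts)                     # baseline jig shape

# Columns = basis shapes (stand-ins for mode shapes / rigid rotations)
B = np.column_stack([np.sin((k + 1) * np.pi * s) for k in range(6)])

# Solve min ||baseline + B c - target||_2 for the coefficients c
c, *_ = np.linalg.lstsq(B, target - baseline, rcond=None)
design_shape = baseline + B @ c
print("max trim-shape error:", np.abs(design_shape - target).max())
```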

  20. Numerical modeling and optimization of the Iguassu gas centrifuge

    NASA Astrophysics Data System (ADS)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

    The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separation power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimizing the GC is performed, yielding the maximum of the separation power. The optimization is based on the BOBYQA method, exploiting the results of the numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved by employing a direct solver in the solution of the hydrodynamical and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
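
    A hedged sketch of the final optimization step, assuming the Py-BOBYQA package (a derivative-free implementation of Powell's BOBYQA). The objective is a toy stand-in: the real one would run the hydrodynamics and diffusion solvers and return the negative separative power.

```python
import numpy as np
import pybobyqa  # assumes the Py-BOBYQA package is installed

def neg_separative_power(x):
    """Stand-in for the full GC simulation chain (hydrodynamics + diffusion)."""
    return -np.exp(-np.sum((x - 0.4) ** 2))   # smooth unimodal toy model

x0 = np.array([0.5, 0.5, 0.5])                # hypothetical internal GC parameters
lower, upper = np.zeros(3), np.ones(3)

soln = pybobyqa.solve(neg_separative_power, x0, bounds=(lower, upper))
print("optimal parameters:", soln.x, "separative power:", -soln.f)
```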

  1. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    NASA Astrophysics Data System (ADS)

    Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.

    2014-03-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (the EASY software), a fast solver (block-coupled CFD) and a flexible geometry generation tool. The EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA(PCA)) that can be used in both single-objective (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low-cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem-specific evaluation, here the solution of the Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependencies among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. Apart from being robust and fast, this method provides a large gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on B-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
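
    A minimal sketch of the metamodel-assisted evolution step: offspring are pre-screened on a cheap surrogate and only the most promising fraction receive the expensive evaluation. The objective and settings are illustrative, and the PCA preprocessing of the design variables is omitted for brevity.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def expensive_eval(x):                        # stand-in for the CFD solve
    return np.sum((x - 0.3) ** 2)

dim, pop_size, n_exact = 4, 20, 5
X = rng.uniform(0, 1, (pop_size, dim))
y = np.array([expensive_eval(x) for x in X])

for gen in range(10):
    surrogate = RBFInterpolator(X, y, smoothing=1e-9)
    parents = X[np.argsort(y)[:pop_size // 2]]
    # Mutation-only offspring, for brevity
    offspring = np.clip(parents[rng.integers(len(parents), size=pop_size)]
                        + rng.normal(0, 0.1, (pop_size, dim)), 0, 1)
    # Screen on the surrogate; evaluate only the best few exactly
    promising = offspring[np.argsort(surrogate(offspring))[:n_exact]]
    X = np.vstack([X, promising])
    y = np.append(y, [expensive_eval(x) for x in promising])

print("best design:", X[np.argmin(y)], "objective:", y.min())
```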

  2. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.

  3. A TOTP-based enhanced route optimization procedure for mobile IPv6 to reduce handover delay and signalling overhead.

    PubMed

    Shah, Peer Azmat; Hasbullah, Halabi B; Lawal, Ibrahim A; Aminu Mu'azu, Abubakar; Tang Jung, Low

    2014-01-01

    Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of a shared secret Token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintaining the status of the correspondent node's compatibility. TOTP-RO was implemented in the network simulator NS-2, and an analytical evaluation was also performed. The analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6's Return-Routability-based Route Optimization (RR-RO).
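
    A sketch of the TOTP primitive (RFC 6238, with the RFC 4226 dynamic truncation) that the TOTP-RO procedure builds on, using only the Python standard library. The shared secret is a placeholder, and binding the password to Mobile IPv6 signalling is beyond the scope of this sketch.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // timestep
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-secret-token"))           # placeholder shared secret
```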

  4. Optimal design of a hybrid MR brake for haptic wrist application

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc Hung; Nguyen, Phuong Bac; Choi, Seung-Bok

    2011-03-01

    In this work, a new configuration of a magnetorheological (MR) brake is proposed and an optimal design of the proposed MR brake for haptic wrist application is performed, considering the required braking torque, the zero-field friction torque, and the size and mass of the brake. The proposed MR brake configuration is a combination of disc-type and drum-type, referred to as a hybrid configuration in this study. After the MR brake with the hybrid configuration is proposed, the braking torque of the brake is analyzed based on the Bingham rheological model of the MR fluid. The zero-field friction torque of the MR brake is also obtained. An optimization procedure based on finite element analysis integrated with an optimization tool is developed for the MR brake. The purpose of the optimal design is to find the optimal geometric dimensions of the MR brake structure that can produce the required braking torque and minimize the uncontrollable (passive) torque of the haptic wrist. Based on the developed optimization procedure, an optimal solution of the proposed MR brake is achieved. The optimized hybrid brake is then compared with conventional types of MR brake, and its working performance is discussed.
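
    A hedged sketch of a disc-type braking-torque estimate from the Bingham model, using the standard disc-clutch form T = 2 N pi * Int_{Ri}^{Ro} (tau_y + eta*omega*r/g) r^2 dr integrated numerically. All parameter values are illustrative, and the drum contribution of the hybrid configuration is omitted.

```python
import numpy as np
from scipy.integrate import quad

tau_y = 40e3         # field-dependent yield stress of the MR fluid [Pa]
eta = 0.2            # plastic viscosity [Pa s]
omega = 10.0         # relative angular speed [rad/s]
g = 1e-3             # MR fluid gap [m]
Ri, Ro = 0.01, 0.04  # inner/outer radius of the active disc face [m]
N = 2                # number of active disc faces

# Bingham shear stress tau_y + eta*(omega*r/g) acting at radius r
integrand = lambda r: (tau_y + eta * omega * r / g) * r ** 2
T, _ = quad(integrand, Ri, Ro)
print("estimated braking torque [N m]:", 2 * np.pi * N * T)
```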

  5. Optimized tomography of continuous variable systems using excitation counting

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang

    2016-11-01

    We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound on the reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.

  6. Optimization of wearable microwave antenna with simplified electromagnetic model of the human body

    NASA Astrophysics Data System (ADS)

    Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir

    2017-12-01

    In this paper the problem of the optimal design of a wearable microwave antenna is investigated. Reference is made to a specific design, a wideband Vee antenna whose geometry is characterized by six parameters. These parameters were automatically adjusted with an evolution-strategy-based algorithm (EStra) to obtain impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band, which covers the frequency range from 2.4 GHz up to 2.5 GHz. The optimization procedure used a full-wave simulator based on the finite-difference time-domain method with a simplified human body model. The optimization procedure considered small movements of the antenna towards or away from the human body, which are likely to occur during real use. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization procedure yielded good impedance matching over a given range of antenna distances with respect to the human body.

  7. Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.

    PubMed

    Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R

    2014-01-01

    After a potential biological incident, the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we have developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0 to 60% and were significantly higher from nonporous surfaces compared to porous surfaces. In conclusion, this study presents a swabbing procedure evaluation and a simple BSA ELISA based on commercial components, which are easy to perform in a laboratory with basic facilities. The data indicate that different swabbing procedures were optimal for each of the tested swab types, and the particular swab preference depends on the surface material to be swabbed.

  8. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim of this work is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
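
    A sketch of the idea: scipy's LSQR accepts a Tikhonov damping factor, and a simplex (Nelder-Mead) search picks the damping that minimizes a quality measure. The linear system and the quality measure (residual on held-back rows) are toy stand-ins for the diffuse-optical forward model and the paper's actual criterion.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 40))                 # stand-in sensitivity matrix
x_true = rng.normal(size=40)
b = A @ x_true + 0.05 * rng.normal(size=80)   # noisy measurements
A_fit, b_fit, A_val, b_val = A[:60], b[:60], A[60:], b[60:]

def validation_error(log_damp):
    # LSQR with Tikhonov damping = regularized reconstruction
    x = lsqr(A_fit, b_fit, damp=np.exp(log_damp[0]))[0]
    return np.linalg.norm(A_val @ x - b_val)

# Simplex search over the (log) regularization parameter
res = minimize(validation_error, x0=[0.0], method="Nelder-Mead")
print("optimal damping:", np.exp(res.x[0]))
```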

  9. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.

  10. Beyond the drugs: nonpharmacologic strategies to optimize procedural care in children.

    PubMed

    Leroy, Piet L; Costa, Luciane R; Emmanouil, Dimitris; van Beukering, Alice; Franck, Linda S

    2016-03-01

    Painful and/or stressful medical procedures place a substantial burden on sick children. There is good evidence that procedural comfort can be optimized by a comprehensive comfort-directed policy containing the triad of nonpharmacological strategies (NPS) in all cases, timely or preventive procedural analgesia if pain is an issue, and procedural sedation. Based on well-established theoretical frameworks as well as an increasing body of scientific evidence, NPS must be regarded as an inextricable part of procedural comfort care. Procedural comfort care must always start with a child-friendly, nonthreatening environment in which well-being, confidence, and self-efficacy are optimized and maintained. This requires a reconsideration of the medical spaces where we provide care, reduction of sensory stimulation, normalized professional behavior, optimal logistics and coordination, and comfort-directed and age-appropriate verbal and nonverbal expression by professionals. Next, age-appropriate distraction techniques and/or hypnosis should be readily available. NPS are useful for all types of medical and dental procedures and should always precede and accompany procedural sedation. NPS should be embedded into a family-centered, care-directed policy, as it has been shown that family-centered care can lead to safer, more personalized, and more effective care, improved healthcare experiences and patient outcomes, and more responsive organizations.

  11. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimization concepts and the selection of the design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  12. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    NASA Astrophysics Data System (ADS)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

    The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now widely applied, especially in the automotive industry, since they combine the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between the electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function, while simultaneously satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced via a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.

  13. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

    This paper addresses the types of particle representation (encoding) procedures used in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an important step, since each particle in PSO must represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan; the experiments were implemented in MATLAB. The experimental results show that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
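
    A minimal sketch of the random-keys idea referenced above: a continuous particle position is decoded into a discrete operation sequence by sorting the keys, with a job-label repetition so each job appears once per operation slot. The job and operation counts are illustrative.

```python
import numpy as np

n_jobs, n_ops = 3, 2                          # each job has n_ops operations
position = np.random.default_rng(4).uniform(0, 1, n_jobs * n_ops)

# Each dimension carries a fixed job label; sorting the continuous keys
# yields a priority order in which every job appears exactly n_ops times.
job_labels = np.repeat(np.arange(n_jobs), n_ops)   # [0, 0, 1, 1, 2, 2]
operation_sequence = job_labels[np.argsort(position)]
print("decoded operation sequence (job ids):", operation_sequence)
```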

  14. Use of different sample temperatures in a single extraction procedure for the screening of the aroma profile of plant matrices by headspace solid-phase microextraction.

    PubMed

    Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo

    2011-06-17

    This study proposes a new approach to the optimization of the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on the elution temperature to provide a better understanding of the influence of the extraction parameters on the extraction efficiency, considering compounds with different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering as the response both peak area and the number of identified peaks. The newly proposed optimization approach provided an excellent alternative procedure to extract analytes with quite different volatilities in the same procedure. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  16. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  17. Optimized postweld heat treatment procedures for 17-4 PH stainless steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, A.K.; Sujith, S.; Srinivasan, G.

    1995-05-01

    The postweld heat treatment (PWHT) procedures for 17-4 PH stainless steel weldments of matching chemistry were optimized with respect to the microstructure prior to welding, based on microstructural studies and room-temperature mechanical properties. The 17-4 PH stainless steel was welded in two different prior microstructural conditions (condition A and condition H1150) and then postweld heat treated to condition H900 or condition H1150, using different heat treatment procedures. Microstructural investigations were carried out and room-temperature tensile properties were determined to study the combined effects of the prior microstructure and the PWHT procedures.

  18. A TOTP-Based Enhanced Route Optimization Procedure for Mobile IPv6 to Reduce Handover Delay and Signalling Overhead

    PubMed Central

    Shah, Peer Azmat; Hasbullah, Halabi B.; Lawal, Ibrahim A.; Aminu Mu'azu, Abubakar; Tang Jung, Low

    2014-01-01

    Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of a shared secret Token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintaining the status of the correspondent node's compatibility. TOTP-RO was implemented in the network simulator NS-2, and an analytical evaluation was also performed. The analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6's Return-Routability-based Route Optimization (RR-RO). PMID:24688398

  19. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE PAGES

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    2017-11-09

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
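
    A hedged sketch of the inner loop: a stiff forward simulation of a toy "microkinetic" ODE with a BDF integrator, wrapped in a ridge-penalized least-squares fit of the kinetic parameters. The model, data, and penalty weight are illustrative stand-ins for the framework described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_obs = np.linspace(0, 2, 10)
theta_true = np.array([3.0, 1.0])

def simulate(theta):
    # Toy two-species surface kinetics; BDF handles stiff systems
    rhs = lambda t, c: [-theta[0] * c[0], theta[0] * c[0] - theta[1] * c[1]]
    sol = solve_ivp(rhs, (0, 2), [1.0, 0.0], t_eval=t_obs, method="BDF")
    return sol.y[1]

data = simulate(theta_true) + 0.01 * np.random.default_rng(5).normal(size=10)

def objective(theta, lam=1e-2):
    misfit = np.sum((simulate(theta) - data) ** 2)
    return misfit + lam * np.sum(theta ** 2)   # ridge (L2) penalty

res = minimize(objective, x0=[1.0, 1.0], bounds=[(0, 10), (0, 10)])
print("fitted rate parameters:", res.x)
```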

  20. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  1. Approximate approach for optimization space flights with a low thrust on the basis of sufficient optimality conditions

    NASA Astrophysics Data System (ADS)

    Salmin, Vadim V.

    2017-01-01

    Low-thrust flight mechanics is a relatively new chapter of space flight mechanics, encompassing trajectory optimization problems, motion control laws, and the selection of spacecraft design parameters. Tasks associated with accounting for additional factors in the mathematical models of spacecraft motion are becoming increasingly important, as are additional restrictions on the capabilities of thrust vector control. The growing complexity of the mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality on the basis of analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. Estimation procedures are developed that make it possible to determine how close a found solution is to the optimum and to indicate ways of improving it. The paper describes these estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular the optimization of low-thrust flights between non-coplanar circular orbits, the optimization of the thrust-angle control law and trajectory of a spacecraft during interorbital flights, and the optimization of low-thrust flights between arbitrary elliptical Earth-satellite orbits.

  2. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  3. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.

  4. An integrated optimum design approach for high speed prop rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.

    1995-01-01

    The objective is to develop an optimization procedure for high-speed civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on the cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1, and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem in which performance in hover, cruise, and take-off is optimized simultaneously. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance of high-speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.

  5. Determination of full piezoelectric complex parameters using gradient-based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.

    2016-02-01

    At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by knowledge of the material properties. In the case of piezoelectric ceramics, full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful approach to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve with the results of a numerical model based on the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full set of piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. The method combines an FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is to minimize the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to account for both resonance and antiresonance frequencies). To assure convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or unfeasible solution. Two experimental examples, using PZ27 and APC850 samples, are presented to test the precision of the method and to check the dependence on the frequency range used, respectively.

  6. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, the low predictability of an aircraft's vertical profile and path deviations during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely preserving the original OPD procedure. The problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over the inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
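    The dual-decomposition step can be illustrated with a two-aircraft toy version of the separation problem (the cost weights, preferred arrival offsets, and step size below are assumed for illustration; the paper's formulation is richer):

      import numpy as np

      c = np.array([1.0, 2.0])        # per-aircraft time-cost weights (assumed)
      t = np.array([0.0, 1.0])        # preferred arrival-time offsets (assumed)
      s, lam, step = 2.0, 0.0, 0.2    # required separation, dual price, step size

      for _ in range(200):
          x1 = t[0] - lam / (2 * c[0])   # aircraft 1 minimizes c1*(x1-t1)**2 + lam*x1
          x2 = t[1] + lam / (2 * c[1])   # aircraft 2 minimizes c2*(x2-t2)**2 - lam*x2
          lam = max(0.0, lam + step * (s - (x2 - x1)))   # subgradient price update

      print(x1, x2, lam)   # converges toward the separated optimum (x2 - x1 = s)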

  7. Optimal locations and orientations of piezoelectric transducers on cylindrical shell based on gramians of contributed and undesired Rayleigh-Ritz modes using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Biglar, Mojtaba; Mirdamadi, Hamid Reza; Danesh, Mohammad

    2014-02-01

    In this study, the active vibration control and configurational optimization of a cylindrical shell are analyzed using piezoelectric transducers. The piezoelectric patches are attached to the surface of the cylindrical shell. The Rayleigh-Ritz method is used to derive the dynamic model of the cylindrical shell and the piezoelectric sensors and actuators, based on the Donnell-Mushtari shell theory. The major goal of this study is to find the optimal locations and orientations of the piezoelectric sensors and actuators on the cylindrical shell. The optimization procedure is designed based on the desired controllability and observability of each contributed and undesired mode. Further, in order to limit spillover effects, the residual modes are taken into consideration. The optimization variables are the positions and orientations of the piezoelectric patches. A genetic algorithm is utilized to evaluate the optimal configurations. In this article, to improve the maximum power and capacity of the actuators for amplitude attenuation under a negative velocity feedback strategy, we propose a new control strategy, called the "Saturated Negative Velocity Feedback Rule (SNVF)". The numerical results show that the optimization procedure is effective for vibration reduction; specifically, by locating actuators and sensors in their optimal locations and orientations, the vibrations of the cylindrical shell are suppressed more quickly.

  8. An Integrated Method Based on PSO and EDA for the Max-Cut Problem.

    PubMed

    Lin, Geng; Guan, Jian

    2016-01-01

    The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best-performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
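    A minimal sketch of a one-flip local search of the kind embedded in such metaheuristics (the paper's local search and path-relinking procedures are more elaborate; the weight matrix is assumed symmetric with zero diagonal):

      import numpy as np

      def one_flip_local_search(W, x):
          """One-flip local search for max-cut; W is a symmetric weight matrix
          with zero diagonal, x a 0/1 partition vector (modified in place)."""
          improved = True
          while improved:
              improved = False
              for i in range(len(x)):
                  same = (x == x[i])                            # vertices on i's side
                  gain = W[i, same].sum() - W[i, ~same].sum()   # cut change if i flips
                  if gain > 0:
                      x[i] ^= 1
                      improved = True
          return x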

  9. Three-Dimensional Viscous Alternating Direction Implicit Algorithm and Strategies for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4, with coarse- and fine-grid based computations performed with the Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes generated for an identical initial problem formulation but with different optimization path options (coarse or fine grid, Euler or TLNS) have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions show significant bearing on the optimization results. Also, the results demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, initially be employed to improve the optimization problem definition, the design space, and the initial shape. Optimized shapes should subsequently be analyzed using a high-fidelity (viscous with fine-grid resolution) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.

  10. A study of optimization techniques in HDR brachytherapy for the prostate

    NASA Astrophysics Data System (ADS)

    Pokharel, Ghana Shyam

    Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease. However, the optimal way to deliver higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debated. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach to delivery eliminates critical issues such as treatment setup uncertainties and target localization that arise in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, recently reported radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and robust, fast optimization and evaluation engines are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant fraction of the overall procedure time, so making treatment plan optimization automatic or semi-automatic, with sufficient speed and accuracy, was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose volume histogram (DVH)-based and more conventional variance-based objective functions. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives. Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The strategy was to create several Pareto-optimal solutions by scanning the clinically relevant part of the Pareto front, decoupling optimization from decision making so that the user could select the final solution from a pool of alternatives based on his or her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class-solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. A simulated annealing algorithm was used to find the optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in clinically reasonable time. Because this algorithm was able to create clinically acceptable plans within clinically reasonable time automatically, it is appealing for real-time procedures. Finally, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy of the prostate. With properly tuned algorithm-specific parameters, it was able to create clinically acceptable plans within clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, fewer than is generally considered optimal for such algorithms, in order to keep the time window suitable for real-time procedures. It therefore requires further study under improved conditions to realize its full potential.
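    The stochastic component of the hybrid technique can be sketched as a generic simulated annealing loop, with the cost and neighborhood callbacks standing in for the DVH-based scoring and catheter-configuration moves (a sketch, not the dissertation's implementation):

      import math, random

      def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.995, iters=5000):
          """Generic SA loop; neighbor() must return a new configuration."""
          x, fx = x0, cost(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(iters):
              y = neighbor(x)
              fy = cost(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy                 # accept improving or uphill move
                  if fx < fbest:
                      best, fbest = x, fx
              t *= alpha                        # geometric cooling schedule
          return best, fbest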

  11. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and on a population-based algorithm. The main objective of this paper is to determine the best design in terms of magnet volume in order to transmit a torque between two movers while ensuring a low slip speed and good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed. Nevertheless, in order to solve more realistic problems, and thus take into account thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.

  12. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
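    The response-surface step can be illustrated by fitting a full quadratic surface to simulated responses by least squares (two coded parameters and a mock efficiency function below, instead of the record's four-parameter central composite design):

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(15, 2))             # coded design points
      y = 0.9 - 0.1 * X[:, 0]**2 - 0.05 * X[:, 1]**2   # mock mixing-efficiency response

      # full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
      A = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares RSM coefficients

      def predict(p):
          p = np.asarray(p, float)
          return coef @ np.concatenate(([1.0], p, p**2, [p[0] * p[1]]))

      print(predict([0.0, 0.0]))   # surface value at the design-space center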

  13. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures full convergence of the process while containing computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was confirmed by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of the HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure produced a significant 51% reduction in CV% for the parameter worst-estimated by SAAM II and kept all model-parameter CV% below 20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
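    A minimal damped Gauss-Newton (Levenberg-Marquardt) iteration of the kind alternated in such procedures is sketched below; the residual and Jacobian callbacks are placeholders for the minimal-model equations, and the fixed damping is a simplification:

      import numpy as np

      def levenberg_marquardt(residual, jacobian, theta, mu=1e-3, iters=50):
          """Damped Gauss-Newton step; mu = 0 recovers the pure Gauss-Newton update."""
          for _ in range(iters):
              J, r = jacobian(theta), residual(theta)
              A = J.T @ J + mu * np.eye(len(theta))      # damped normal equations
              theta = theta + np.linalg.solve(A, -J.T @ r)
          return theta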

  14. Optimized in vitro procedure for assessing the cytocompatibility of magnesium-based biomaterials.

    PubMed

    Jung, Ole; Smeets, Ralf; Porchetta, Dario; Kopp, Alexander; Ptock, Christoph; Müller, Ute; Heiland, Max; Schwade, Max; Behr, Björn; Kröger, Nadja; Kluwe, Lan; Hanken, Henning; Hartjen, Philip

    2015-09-01

    Magnesium (Mg) is a promising biomaterial for degradable implant applications that has been extensively studied in vitro and in vivo in recent years. In this study, we developed a procedure that allows an optimized and uniform in vitro assessment of the cytocompatibility of Mg-based materials while respecting the standard protocol DIN EN ISO 10993-5:2009. The mouse fibroblast line L-929 was chosen as the preferred assay cell line, and MEM supplemented with 10% FCS, penicillin/streptomycin, and 4 mM L-glutamine as the favored assay medium. The procedure consists of (1) an indirect assessment of the effects of soluble Mg corrosion products in material extracts and (2) a direct assessment of surface compatibility in terms of cell attachment and cytotoxicity originating from active corrosion processes. The indirect assessment allows the quantification of cell proliferation (BrdU assay), viability (XTT assay), and cytotoxicity (LDH assay) of the mouse fibroblasts incubated with material extracts. The direct assessment visualizes cells attached to the test materials by means of live-dead staining. The colorimetric assays and the visual evaluation complement each other, and the combination of both provides an optimized and simple procedure for assessing the cytocompatibility of Mg-based biomaterials in vitro. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  15. An artificial system for selecting the optimal surgical team.

    PubMed

    Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco

    2015-01-01

    We introduce an intelligent system to optimize a team's composition based on the team's historical outcomes, and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of a team's outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model makes it possible to determine the probability of all possible team compositions even when no procedure has been recorded for some of them. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved in procedures with unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
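    Under an independence assumption (a simplification of the paper's model), the probability that a candidate team produces an unfavorable outcome can be computed by inclusion-exclusion over the members' individual probabilities:

      from itertools import combinations
      from math import prod

      def team_failure_prob(p):
          """P(at least one member is involved in an unfavorable outcome) by
          inclusion-exclusion, assuming independent per-member probabilities p_i."""
          total = 0.0
          for k in range(1, len(p) + 1):
              for idx in combinations(range(len(p)), k):
                  total += (-1) ** (k + 1) * prod(p[i] for i in idx)
          return total   # equals 1 - prod(1 - p_i) under independence

      print(team_failure_prob([0.1, 0.2, 0.05]))   # pick the team minimizing this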

  16. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of subset selection problems in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
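    For intuition, the classical OCBA ratios for selecting the single best design (which the paper generalizes to the top-m subset) can be computed as follows, with means and standard deviations taken as sample estimates from an initial batch of simulations:

      import numpy as np

      def ocba_budget(means, stds, budget):
          """Classical OCBA allocation for selecting the single best design
          (smaller-is-better); means are assumed distinct."""
          means, stds = np.asarray(means, float), np.asarray(stds, float)
          b = int(np.argmin(means))                  # current sample-best design
          others = [i for i in range(len(means)) if i != b]
          delta = means - means[b]                   # gaps to the best
          ratio = np.ones(len(means))
          ref = others[0]
          for i in others:
              ratio[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
          ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
          return budget * ratio / ratio.sum()

      print(ocba_budget([1.0, 1.2, 1.5], [0.3, 0.3, 0.4], budget=1000))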

  17. Base norms and discrimination of generalized quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenčová, A.

    2014-02-15

    We introduce and study norms in the space of hermitian matrices, obtained from base norms in positively generated subspaces. These norms are closely related to discrimination of so-called generalized quantum channels, including quantum states, channels, and networks. We further introduce generalized quantum decision problems and show that the maximal average payoffs of decision procedures are again given by these norms. We also study optimality of decision procedures, in particular, we obtain a necessary and sufficient condition under which an optimal 1-tester for discrimination of quantum channels exists, such that the input state is maximally entangled.

  18. K-Minimax Stochastic Programming Problems

    NASA Astrophysics Data System (ADS)

    Nedeva, C.

    2007-10-01

    The purpose of this paper is to discuss a numerical procedure based on the simplex method for stochastic optimization problems with partially known distribution functions. The convergence of this procedure is proved using conditions on the dual problems.

  19. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors

    PubMed Central

    Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver

    2015-01-01

    Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878

  20. Neural networks for structural design - An integrated system implementation

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here, Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user, an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture, and checks the accuracy of the net's predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  1. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, much research has been done to obtain an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step-size procedure, and the development of the steepest descent method with respect to its step-size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure implemented as a C++ program, apply it to an unconstrained optimization test problem with two variables, and compare the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each problem case.
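    A compact version of such an experiment, using Armijo backtracking as the step-size procedure on a hypothetical two-variable test function (written in Python here rather than the paper's C++):

      import numpy as np

      def steepest_descent(f, grad, x, tol=1e-6, max_iter=1000):
          """Steepest descent with Armijo backtracking as the step-size procedure."""
          for _ in range(max_iter):
              g = grad(x)
              if np.linalg.norm(g) < tol:
                  break
              t, beta, c = 1.0, 0.5, 1e-4
              while f(x - t * g) > f(x) - c * t * (g @ g):   # Armijo condition
                  t *= beta
              x = x - t * g
          return x

      # hypothetical two-variable test problem
      f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
      grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
      print(steepest_descent(f, grad, np.array([5.0, 5.0])))   # -> approx. (1, -2)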

  2. Optimal Artificial Boundary Condition Configurations for Sensitivity-Based Model Updating and Damage Detection

    DTIC Science & Technology

    2010-09-01

    matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column...problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the... iterative procedure to find an appropriate subset of parameters that produce an optimal solution, commonly known as forward selection. Then, the

  3. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and cell size. The major conclusions from the statistical sampling tests are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used (see the sketch below); and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
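    Conclusion (3) corresponds to the classical Neyman (optimal) allocation rule; a small helper illustrates it, with stratum sizes and standard deviations assumed known from field data:

      import numpy as np

      def neyman_allocation(n_total, stratum_sizes, stratum_stds):
          """Optimal (Neyman) allocation: sample stratum h in proportion to N_h * S_h."""
          w = np.asarray(stratum_sizes, float) * np.asarray(stratum_stds, float)
          return np.maximum(1, np.round(n_total * w / w.sum())).astype(int)

      print(neyman_allocation(60, [40, 30, 30], [4.0, 2.0, 1.0]))   # -> [38 14 7]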

  4. A design procedure and handling quality criteria for lateral directional flight control systems

    NASA Technical Reports Server (NTRS)

    Stein, G.; Henke, A. H.

    1972-01-01

    A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.

  5. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure, which uses a constrained minimization method. The sensitivity coefficients, i.e., the gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with experimental data. It is shown that the method is more efficient than traditional methods.

  6. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15, a Mach 12, and a Mach 18 helium nozzles. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.

  7. The Optimization Based Dynamic and Cyclic Working Strategies for Rechargeable Wireless Sensor Networks with Multiple Base Stations and Wireless Energy Transfer Devices

    PubMed Central

    Ding, Xu; Han, Jianghong; Shi, Lei

    2015-01-01

    In this paper, the optimal working schemes for wireless sensor networks with multiple base stations and wireless energy transfer devices are proposed. The wireless energy transfer devices also work as data gatherers while charging sensor nodes. The wireless sensor network is firstly divided into sub networks according to the concept of Voronoi diagram. Then, the entire energy replenishing procedure is split into the pre-normal and normal energy replenishing stages. With the objective of maximizing the sojourn time ratio of the wireless energy transfer device, a continuous time optimization problem for the normal energy replenishing cycle is formed according to constraints with which sensor nodes and wireless energy transfer devices should comply. Later on, the continuous time optimization problem is reshaped into a discrete multi-phased optimization problem, which yields the identical optimality. After linearizing it, we obtain a linear programming problem that can be solved efficiently. The working strategies of both sensor nodes and wireless energy transfer devices in the pre-normal replenishing stage are also discussed in this paper. The intensive simulations exhibit the dynamic and cyclic working schemes for the entire energy replenishing procedure. Additionally, a way of eliminating “bottleneck” sensor nodes is also developed in this paper. PMID:25785305

  9. Optimal tree-stem bucking of northeastern species of China

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Joseph McNeel

    2004-01-01

    An application of optimal tree-stem bucking to the northeastern tree species of China is reported. The bucking procedures used in this region are summarized, which are the basic guidelines for the optimal bucking design. The directed graph approach was adopted to generate the bucking patterns by using the network analysis labeling algorithm. A computer-based bucking...

  10. Wave drag as the objective function in transonic fighter wing optimization

    NASA Technical Reports Server (NTRS)

    Phillips, P. S.

    1984-01-01

    The original computational method for determining wave drag in a three dimensional transonic analysis method was replaced by a wave drag formula based on the loss in momentum across an isentropic shock. This formula was used as the objective function in a numerical optimization procedure to reduce the wave drag of a fighter wing at transonic maneuver conditions. The optimization procedure minimized wave drag through modifications to the wing section contours defined by a wing profile shape function. A significant reduction in wave drag was achieved while maintaining a high lift coefficient. Comparisons of the pressure distributions for the initial and optimized wing geometries showed significant reductions in the leading-edge peaks and shock strength across the span.

  11. Network placement optimization for large-scale distributed system

    NASA Astrophysics Data System (ADS)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy, and overall cost. Network placement optimization therefore represents an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy, and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search. An obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a given network and design the optimal placement efficiently.

  12. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, in view of the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables, and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548

  13. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of designing a water-supply well field, several variants of this "stack ordering" approach are tested, and the results are statistically assessed in terms of optimality and nominal reliability. The study demonstrates that simply ordering a given set of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings are promising for similar problems in water management and reliability-based design in general, particularly for non-convex problems that require heuristic search techniques.
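    A minimal sketch of the stack-ordering idea (the function names and the failure-count stopping rule are illustrative; the paper's variants are more sophisticated):

      def passes_reliability(design, stack, simulate, max_failures):
          """Evaluate realizations in their current stack order; move any realization
          that fails the design to the front of the stack so later candidate designs
          meet critical realizations first and can be discarded after few model runs."""
          failures = 0
          for realization in list(stack):
              if not simulate(design, realization):     # one expensive model run
                  failures += 1
                  stack.remove(realization)
                  stack.insert(0, realization)          # promote critical realization
                  if failures > max_failures:           # reliability target missed
                      return False
          return True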

  14. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
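    The second-moment computation at the heart of PFEM-style reliability analysis can be illustrated with first-order mean and variance propagation (the performance function and its gradient below are placeholders):

      import numpy as np

      def first_order_second_moment(g, grad_g, mu, cov):
          """First-order estimates of the mean and variance of g(X), X ~ (mu, cov),
          and the corresponding reliability index beta (failure when g < 0)."""
          mean_g = g(mu)                    # Taylor expansion about the mean
          grad = grad_g(mu)
          var_g = grad @ cov @ grad         # first-order variance propagation
          return mean_g, var_g, mean_g / np.sqrt(var_g)

      # toy performance function: capacity minus demand (placeholders)
      g = lambda x: x[0] - x[1]
      grad_g = lambda x: np.array([1.0, -1.0])
      print(first_order_second_moment(g, grad_g, np.array([10.0, 6.0]),
                                      np.diag([1.0, 0.25])))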

  15. Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers

    NASA Technical Reports Server (NTRS)

    Branner, G. R.; Chan, S.-P.

    1975-01-01

    This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.

  16. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint-handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.

  17. Interactive Reference Point Procedure Based on the Conic Scalarizing Function

    PubMed Central

    2014-01-01

    In multiobjective optimization methods, multiple conflicting objectives are typically converted into a single objective optimization problem with the help of scalarizing functions. The conic scalarizing function is a general characterization of Benson proper efficient solutions of non-convex multiobjective problems in terms of saddle points of scalar Lagrangian functions. This approach preserves convexity. The conic scalarizing function, as a part of a posteriori or a priori methods, has successfully been applied to several real-life problems. In this paper, we propose a conic scalarizing function based interactive reference point procedure where the decision maker actively takes part in the solution process and directs the search according to her or his preferences. An algorithmic framework for the interactive solution of multiple objective optimization problems is presented and is utilized for solving some illustrative examples. PMID:24723795

  18. Electro-thermal battery model identification for automotive applications

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.

    This paper describes a model identification procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid inside a large range of temperatures and state-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multiple step genetic algorithm based optimization procedure designed for large scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium ion iron-phosphate battery.
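    A one-RC equivalent-circuit cell of the kind parameterized in the paper can be simulated in a few lines (the constant parameters below are placeholders; in the identified model they are spline functions of state-of-charge, temperature, and current direction):

      import numpy as np

      def simulate_cell(current, dt, ocv, r0, r1, c1, q, soc0=1.0):
          """One-RC equivalent circuit: v = OCV(soc) - i*R0 - v1, with RC dynamics.
          current > 0 means discharge; q is the capacity in ampere-seconds."""
          soc, v1, v_out = soc0, 0.0, []
          for i in current:
              v1 += dt * (i / c1 - v1 / (r1 * c1))   # polarization branch dynamics
              soc -= dt * i / q                      # coulomb counting
              v_out.append(ocv(soc) - r0 * i - v1)   # terminal voltage
          return np.array(v_out)

      ocv = lambda soc: 3.0 + 0.7 * soc              # toy open-circuit-voltage curve
      v = simulate_cell(np.full(600, 2.0), dt=1.0, ocv=ocv,
                        r0=0.01, r1=0.02, c1=2000.0, q=8280.0)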

  19. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations at each site, and (b) a mixed-integer-program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) to perform the PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained more than 50% cost savings in 90% of the sites.

  20. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  1. [Preparation procedures of anti-complementary polysaccharides from Houttuynia cordata].

    PubMed

    Zhang, Juanjuan; Lu, Yan; Chen, Daofeng

    2012-07-01

    To establish and optimize the preparation procedures for the anti-complementary polysaccharides from Houttuynia cordata. Based on the yield and the in vitro anti-complementary activity, the conditions of the extraction and alcohol-precipitation processes were optimized by orthogonal tests. The optimal deproteinization condition was determined according to the amount of protein removed and polysaccharide retained. The best decoloring method was also optimized by an orthogonal experimental design. The optimized preparation procedures are as follows: extract the coarse powder 3 times with a 50-fold volume of water at 90 degrees C for 2 hours each time; combine the extracts and concentrate appropriately, to the equivalent of 0.12 g of H. cordata per milliliter. Add 4 volumes of 90% ethanol to the extract, allow it to stand for 24 hours to precipitate completely, filter, and wash the precipitate successively with anhydrous alcohol, acetone, and anhydrous ether. Redissolve the residue in water and add trichloroacetic acid (TCA) to a concentration of 20% to remove protein. Decolorize with activated carbon at a concentration of 3%, pH 3.0, and 50 degrees C for 50 min. These procedures were tested 3 times, resulting in an average polysaccharide yield of 4.03% (RSD 0.96%), average polysaccharide and protein contents of 80.97% (RSD 1.5%) and 2.02% (RSD 2.3%), and an average CH50 of 0.079 g·L(-1) (RSD 3.6%). The established and optimized procedures are repeatable and reliable for preparing anti-complementary polysaccharides of high quality and activity from H. cordata.

  2. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of the investigation of formal nonlinear programming-based numerical optimization techniques of helicopter airframe vibration reduction are summarized. The objective and constraint function and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  3. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
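
    A serial sketch of the two building blocks named above (pvsolve itself is not publicly available, so scipy's Choleski routines stand in for it, and the stiffness matrix is a toy unit-spring chain rather than SAP-4 output): the static analysis solves K u = f by Choleski factorization, and the same displacements are recovered as the unconstrained minimum of the quadratic strain energy via BFGS, which is the link the paper exploits between the solver and the optimizer.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

# Toy SPD "stiffness" matrix: a chain of unit springs.
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

# Static analysis: direct Choleski-based solve of K u = f.
u_direct = cho_solve(cho_factor(K), f)

# The same displacements minimize the strain energy 0.5 u'Ku - f'u,
# so an unconstrained BFGS search reproduces the direct solution.
res = minimize(lambda u: 0.5 * u @ K @ u - f @ u, np.zeros(n), method="BFGS")

print(np.allclose(u_direct, res.x, atol=1e-3))  # True
```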

  4. Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization

    PubMed Central

    Melchor, Juan; Muñoz, Rafael; Rus, Guillermo

    2017-01-01

    Torsional mechanical waves can characterize the shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectric transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms and the resonance frequency of a prior geometry, validated against a simplified semi-analytical model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem estimating the robust probability of detection (RPOD), is applied to maximize detection of pathology defined in terms of changes in shear stiffness. This study collects different design options in two separate models, for transmission and contact, respectively. The main contribution of this work is a framework establishing the forward, inverse and optimization procedures used to choose an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications. PMID:28617353

  5. A new design approach to innovative spectrometers. Case study: TROPOLITE

    NASA Astrophysics Data System (ADS)

    Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob

    2014-05-01

    Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to a final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design tasks extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.

  6. Representing and comparing protein structures as paths in three-dimensional space

    PubMed Central

    Zhi, Degui; Krishna, S Sri; Cao, Haibo; Pevzner, Pavel; Godzik, Adam

    2006-01-01

    Background Most existing formulations of protein structure comparison are based on detailed atomic-level descriptions of protein structures and bypass potential insights that arise from a higher-level abstraction. Results We propose a structure comparison approach based on a simplified representation of proteins that describes a protein's three-dimensional path by local curvature along the generalized backbone of the polypeptide. We have implemented a dynamic programming procedure that aligns the curvatures of proteins by optimizing a defined summed turning-angle deviation measure. Conclusion Although our procedure does not directly optimize global structural similarity as measured by RMSD, our benchmarking results indicate that it can recover surprisingly well the structural similarity defined by structure classification databases and traditional structure alignment programs. In addition, our program can recognize similarities between structures with extensive conformational changes that are beyond the reach of traditional structure alignment programs. We demonstrate applications of the procedure in several structure-comparison contexts. An implementation of our procedure, CURVE, is available as a public webserver. PMID:17052359
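
    A toy dynamic-programming alignment in the spirit described above (the gap penalty and the turning-angle sequences are invented, and the real CURVE scoring is more elaborate): two angle sequences are aligned globally by minimizing the summed absolute angle deviation plus gap penalties.

```python
import numpy as np

def align_angles(a, b, gap=0.4):
    """Global DP alignment of two turning-angle sequences (radians),
    minimizing total |angle deviation| plus gap penalties."""
    n, m = len(a), len(b)
    D = np.empty((n + 1, m + 1))
    D[0, :] = gap * np.arange(m + 1)
    D[:, 0] = gap * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j - 1] + abs(a[i - 1] - b[j - 1]),  # match
                          D[i - 1, j] + gap,                           # gap in b
                          D[i, j - 1] + gap)                           # gap in a
    return D[n, m]

helixish = [0.3] * 8                      # gently curving stretch
kinked   = [0.3] * 4 + [1.2] + [0.3] * 4  # same stretch with one sharp turn
print(align_angles(helixish, kinked))     # small cost: one gap absorbs the kink
```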

  7. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study developed inhouse at the University of Maryland is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and constrained optimization code CONMIN.

  8. Shape Optimization for Additive Manufacturing of Removable Partial Dentures - A New Paradigm for Prosthetic CAD/CAM

    PubMed Central

    2015-01-01

    With an ever-growing aging population and demand for denture treatments, pressure-induced mucosa lesions and residual ridge resorption remain the main sources of clinical complications. Conventional denture design and fabrication are challenged by their labor and experience intensity, urgently necessitating an automated procedure. This study aims to develop a fully automatic procedure enabling shape optimization and additive manufacturing of removable partial dentures (RPD), to maximize the uniformity of the contact pressure distribution on the mucosa, thereby reducing associated clinical complications. A 3D heterogeneous finite element (FE) model was constructed from CT scans, and the critical mucosa tissue was modeled as a hyperelastic material from in vivo clinical data. A contact shape optimization algorithm was developed based on the bi-directional evolutionary structural optimization (BESO) technique. Both initial and optimized dentures were prototyped by 3D printing technology and evaluated with in vitro tests. Through the optimization, the peak contact pressure was reduced by 70%, and the uniformity was improved by 63%. In vitro tests verified the effectiveness of this procedure, and the hydrostatic pressure induced in the mucosa is well below clinical pressure-pain thresholds (PPT), potentially lessening the risk of residual ridge resorption. This proposed computational optimization and additive fabrication procedure provides a novel method for fast denture design and adjustment at low cost, with quantitative guidelines and computer-aided design and manufacturing (CAD/CAM) for a specific patient. The integration of digitalized modeling, computational optimization, and free-form fabrication enables more efficient clinical adaptation. The customized optimal denture design is expected to minimize pain/discomfort and potentially reduce long-term residual ridge resorption. PMID:26161878

  9. Expected p-values in light of an ROC curve analysis applied to optimal multiple testing procedures.

    PubMed

    Vexler, Albert; Yu, Jihnhee; Zhao, Yang; Hutson, Alan D; Gurevich, Gregory

    2017-01-01

    Many statistical studies report p-values for inferential purposes. In several scenarios, the stochastic aspect of p-values is neglected, which may contribute to drawing wrong conclusions in real data experiments. The stochastic nature of p-values makes it difficult to use them to examine the performance of given testing procedures or the associations between investigated factors. We turn our focus to the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we prove that the EPV can be considered in the context of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. The ROC-based framework provides a new and efficient methodology for investigating and constructing statistical decision-making procedures, including: (1) evaluation and visualization of properties of the testing mechanisms, considering, e.g., partial EPVs; (2) developing optimal tests via the minimization of EPVs; (3) creation of novel methods for optimally combining multiple test statistics. We demonstrate that the proposed EPV-based approach allows us to maximize the integrated power of testing algorithms with respect to various significance levels. In an application, we use the proposed method to construct the optimal test and analyze a myocardial infarction disease dataset. We outline the usefulness of the "EPV/ROC" technique for evaluating different decision-making procedures and their constructions and properties with an eye towards practical applications.
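
    A small simulation of the EPV idea under simple assumptions (a one-sided one-sample z-test with a normal alternative; the effect size and sample size are illustrative): the EPV is the mean p-value under the alternative, and for this test it coincides with P(T0 > T1) for independent null and alternative test statistics, i.e. one minus the ROC area of the statistic, which is the connection the paper develops.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def epv_one_sided_z(effect, n=25, reps=200_000):
    """EPV = E[p-value | H1] for a one-sided one-sample z-test."""
    z1 = rng.normal(effect * np.sqrt(n), 1.0, reps)  # test stat under H1
    return norm.sf(z1).mean()

effect, n = 0.3, 25
mc = epv_one_sided_z(effect, n)
# P(T0 > T1) with T0 ~ N(0,1), T1 ~ N(effect*sqrt(n),1): equals 1 - AUC.
closed = norm.cdf(-effect * np.sqrt(n) / np.sqrt(2))
print(f"EPV (simulated) = {mc:.4f}, P(T0 > T1) = {closed:.4f}")
```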

  10. Reserve design to maximize species persistence

    Treesearch

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  11. Team-based Service Delivery for Students with Disabilities: Practice Options and Guidelines for Success.

    ERIC Educational Resources Information Center

    Ogletree, Billy T.; Bull, Jeannette; Drew, Ruby; Lunnen, Karen Y.

    2001-01-01

    This article reviews the assessment procedures, treatment procedures, and the advantages and disadvantages of three professional-family team models: multidisciplinary teams, interdisciplinary teams, and transdisciplinary teams. Guidelines for optimal team participation are provided. The importance of mission statements, communication, trust,…

  12. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for the moving average (MA) as a continuous analytical quality control instrument, and the true bias detection properties of applied MA procedures are unknown. We describe the use of bias detection curves for MA optimization and of MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for the optimization and validation of MA procedures.
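
    A toy simulation of a few points on a bias detection curve (the analyte mean, SD, truncation limits, window, and control limits below are invented, loosely sodium-like, and the real work simulates on actual patient-result streams): introduce a fixed bias into a stream of results, count how many results the truncated moving average needs before an alarm, and take the median over replicates.

```python
import numpy as np

rng = np.random.default_rng(1)

def results_to_detect(bias, mean=140.0, sd=2.0, trunc=(130.0, 150.0),
                      window=20, limits=(138.5, 141.5), n_max=5000):
    """Number of post-bias results until the moving average alarms."""
    buf = list(rng.normal(mean, sd, window))              # in-control history
    for k in range(1, n_max + 1):
        x = np.clip(rng.normal(mean, sd) + bias, *trunc)  # truncation limits
        buf = buf[1:] + [x]
        ma = float(np.mean(buf))
        if not (limits[0] <= ma <= limits[1]):
            return k
    return n_max

# One point of the bias detection curve per simulated bias level.
for bias in (1.0, 2.0, 4.0):
    med = int(np.median([results_to_detect(bias) for _ in range(200)]))
    print(f"bias {bias:+.0f} -> median results to detection: {med}")
```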

  13. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
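
    A compact sketch of the warping function described above (the landmark coordinates and the per-landmark weighting factors are placeholders; in the paper those factors come out of an evolution strategy): each point is moved by a normalized, distance-weighted exponential sum of landmark displacement vectors.

```python
import numpy as np

def warp_points(points, src_lm, dst_lm, alpha):
    """Distance-weighted exponential warp with landmark-specific factors alpha."""
    disp = dst_lm - src_lm                  # displacement vector per landmark
    warped = []
    for p in points:
        d = np.linalg.norm(src_lm - p, axis=1)
        w = np.exp(-alpha * d)              # landmark-specific falloff
        warped.append(p + (w / w.sum()) @ disp)
    return np.array(warped)

src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst = src + np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
alpha = np.array([0.5, 0.2, 0.2])           # would be tuned by the ES
print(warp_points(np.array([[2.0, 2.0], [8.0, 1.0]]), src, dst, alpha))
```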

  14. Control theory based airfoil design using the Euler equations

    NASA Technical Reports Server (NTRS)

    Jameson, Antony; Reuther, James

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using the potential flow equation with either a conformal mapping or a general coordinate system. The goal of our present work is to extend the development to treat the Euler equations in two dimensions by procedures that can readily be generalized to treat complex shapes in three dimensions. Therefore, we have developed methods which can address airfoil design through either an analytic mapping or an arbitrary grid perturbation method applied to a finite volume discretization of the Euler equations. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented for both the inverse problem and the drag minimization problem.

  15. Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui

    2015-06-01

    Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and the optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.

  16. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
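
    A toy two-stage linear program in the spirit of the DSIO idea (the influence matrix, prescription, and minimum-MU value are fabricated, and the paper's actual model is considerably richer): stage one solves an LP with free spot intensities, stage two deactivates spots below the deliverable minimum and re-solves with that lower bound enforced, so the returned plan is deliverable without post-processing.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
D = rng.uniform(0.0, 1.0, (8, 5))   # toy dose-influence matrix (voxels x spots)
d_rx = np.full(8, 10.0)             # prescription per voxel
mu_min = 2.0                        # deliverable minimum MU per active spot

def solve_lp(active, lower=0.0):
    """Minimize sum |D w - d_rx| (L1 fit via auxiliary vars t) with spot bounds."""
    n_vox, n_spot = D.shape
    c = np.r_[np.zeros(n_spot), np.ones(n_vox)]
    A_ub = np.r_[np.c_[D, -np.eye(n_vox)], np.c_[-D, -np.eye(n_vox)]]
    b_ub = np.r_[d_rx, -d_rx]
    bounds = [(lower, None) if a else (0.0, 0.0) for a in active]
    bounds += [(0.0, None)] * n_vox
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n_spot], res.fun

w1, err1 = solve_lp(active=[True] * D.shape[1])   # stage 1: free intensities
active = w1 >= mu_min                             # drop undeliverable spots
w2, err2 = solve_lp(active=active, lower=mu_min)  # stage 2: enforce MU >= mu_min
print(f"dose error: free LP {err1:.2f} -> deliverable LP {err2:.2f}")
```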

  18. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
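
    A minimal version of the operator under simple assumptions (a three-map Sierpinski-style IFS; the paper explores richer fractal families): mutation offsets are drawn by iterating a randomly chosen contractive map, centring on the attractor mean, and adding the result to the individual in place of Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# A contractive IFS whose attractor is a Sierpinski-like fractal in [0, 1]^2.
MAPS = (lambda p: 0.5 * p,
        lambda p: 0.5 * p + np.array([0.5, 0.0]),
        lambda p: 0.5 * p + np.array([0.25, 0.5]))

def ifs_offset(n_iter=20):
    """One 2-D point from the IFS attractor, centred to act as a perturbation."""
    p = rng.random(2)
    for _ in range(n_iter):
        p = MAPS[rng.integers(len(MAPS))](p)
    return p - np.array([0.5, 1.0 / 3.0])   # subtract the attractor mean

def ifs_mutate(x, scale=0.1):
    """EP-style mutation drawing paired offsets from the fractal attractor."""
    k = (len(x) + 1) // 2
    off = np.concatenate([ifs_offset() for _ in range(k)])[:len(x)]
    return x + scale * off

print(ifs_mutate(np.zeros(5)))
```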

  19. How to formulate and solve "optimal stand density over time" problems for even-aged stands using dynamic programming.

    Treesearch

    Chung M. Chen; Dietmar W. Rose; Rolfe A. Leary

    1980-01-01

    Describes how dynamic programming can be used to solve optimal stand density problems when yields are given by prior simulation or by a new stand growth equation that is a function of the decision variable. Formulations of the latter type allow use of a calculus-based search procedure; they determine exact optimal residual density at each stage.
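
    A discrete toy version of the backward-induction formulation (the density classes, growth function, and linear harvest value are invented, whereas the paper uses calibrated yield equations and a calculus-based search): the state is the stand density entering a period, the decision is the residual density after thinning, and the recursion returns the optimal residual density at each stage.

```python
# Backward-induction DP for "optimal stand density over time" (toy numbers).
states = list(range(60, 241, 30))          # density classes (trees/acre)

def grow(d):                               # assumed one-period growth response
    return min(int(d * 1.25), 240)

def solve(n_periods=4):
    value = {d: float(d) for d in states}  # terminal value: liquidate stand
    policy = []
    for _ in range(n_periods):
        new_value, stage_policy = {}, {}
        for d in states:
            g = grow(d)
            # harvest (g - r) now, carry residual density r forward
            best_v, best_r = max(((g - r) + value[r], r)
                                 for r in states if r <= g)
            new_value[d], stage_policy[d] = best_v, best_r
        value = new_value
        policy.append(stage_policy)
    return value, policy[::-1]             # policy[t][d] = residual density

value, policy = solve()
d = 120
for t, stage in enumerate(policy):
    r = stage[d]
    print(f"period {t}: density {d} grows to {grow(d)}, thin back to {r}")
    d = r
```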

  20. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties of intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of the inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimating ...

  1. A stepwise approach for the reproducible optimization of PAMO expression in Escherichia coli for whole-cell biocatalysis

    PubMed Central

    2012-01-01

    Background Baeyer-Villiger monooxygenases (BVMOs) represent a group of enzymes of considerable biotechnological relevance as illustrated by their growing use as biocatalyst in a variety of synthetic applications. However, due to their increased use the reproducible expression of BVMOs and other biotechnologically relevant enzymes has become a pressing matter while knowledge about the factors governing their reproducible expression is scattered. Results Here, we have used phenylacetone monooxygenase (PAMO) from Thermobifida fusca, a prototype Type I BVMO, as a model enzyme to develop a stepwise strategy to optimize the biotransformation performance of recombinant E. coli expressing PAMO in 96-well microtiter plates in a reproducible fashion. Using this system, the best expression conditions of PAMO were investigated first, including different host strains, temperature as well as time and induction period for PAMO expression. This optimized system was used next to improve biotransformation conditions, the PAMO-catalyzed conversion of phenylacetone, by evaluating the best electron donor, substrate concentration, and the temperature and length of biotransformation. Combining all optimized parameters resulted in a more than four-fold enhancement of the biocatalytic performance and, importantly, this was highly reproducible as indicated by the relative standard deviation of 1% for non-washed cells and 3% for washed cells. Furthermore, the optimized procedure was successfully adapted for activity-based mutant screening. Conclusions Our optimized procedure, which provides a comprehensive overview of the key factors influencing the reproducible expression and performance of a biocatalyst, is expected to form a rational basis for the optimization of miniaturized biotransformations and for the design of novel activity-based screening procedures suitable for BVMOs and other NAD(P)H-dependent enzymes as well. PMID:22720747

  2. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  3. New procedure to design low radar cross section near perfect isotropic and homogeneous triangular carpet cloaks.

    PubMed

    Sharifi, Zohreh; Atlasbaf, Zahra

    2016-10-01

    A new design procedure for near-perfect triangular carpet cloaks, fabricated using only isotropic, homogeneous materials, is proposed. This procedure enables us to fabricate a cloak with simple metamaterials or even without employing metamaterials. The proposed procedure, together with an invasive weed optimization algorithm, is used to design carpet cloaks based on quasi-isotropic metamaterial structures, Teflon and AN-73. According to the simulation results, the proposed cloaks have good invisibility properties against radar, especially monostatic radar. The procedure is a new method to derive isotropic and homogeneous parameters from transformation optics formulas, so complicated structures are not needed to fabricate the carpet cloaks.

  4. The Krigifier: A Procedure for Generating Pseudorandom Nonlinear Objective Functions for Computational Experimentation

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.

    1999-01-01

    Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
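
    A one-dimensional sketch under the report's stationary-Gaussian-process fiction (the squared-exponential kernel, quadratic trend, and grid are illustrative choices, and the report's exact construction in S-PLUS is not reproduced): a test objective is one draw from a Gaussian process added to a smooth trend, so unlimited pseudorandom objectives come from changing the seed.

```python
import numpy as np

def krigify(x, seed, sill=1.0, corr_len=0.15,
            trend=lambda x: 4.0 * (x - 0.5) ** 2):
    """One realization of trend + stationary Gaussian process on grid x."""
    rng = np.random.default_rng(seed)
    d = np.abs(x[:, None] - x[None, :])
    K = sill * np.exp(-(d / corr_len) ** 2)            # squared-exp covariance
    L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))  # jitter for stability
    return trend(x) + L @ rng.standard_normal(len(x))

x = np.linspace(0.0, 1.0, 200)
for seed in (1, 2, 3):                 # three distinct pseudorandom objectives
    f = krigify(x, seed)
    print(f"seed {seed}: minimum {f.min():.3f} near x = {x[np.argmin(f)]:.2f}")
```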

  5. Commissioning of a 3D image‐based treatment planning system for high‐dose‐rate brachytherapy of cervical cancer

    PubMed Central

    Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.

    2016-01-01

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators agreed with their values in the virtual applicator library to within 0.4 mm of the nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying the active tandem or cylinder probe length, ovoid or cylinder size, source calibration and treatment date, and differences between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image-based planning were presented. For the difference between line and volume optimizations, the average absolute percentage differences were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for a 3D image-based TPS for HDR BT for GYN cancer. PACS number(s): 87.55.D- PMID:27074463

  6. Optimal use of colonoscopy and fecal immunochemical test for population-based colorectal cancer screening: a cost-effectiveness analysis using Japanese data.

    PubMed

    Sekiguchi, Masau; Igarashi, Ataru; Matsuda, Takahisa; Matsumoto, Minori; Sakamoto, Taku; Nakajima, Takeshi; Kakugawa, Yasuo; Yamamoto, Seiichiro; Saito, Hiroshi; Saito, Yutaka

    2016-02-01

    There have been few cost-effectiveness analyses of population-based colorectal cancer screening in Japan, and there is no consensus on the optimal use of total colonoscopy and the fecal immunochemical test for colorectal cancer screening with regard to cost-effectiveness and total colonoscopy workload. The present study aimed to examine the cost-effectiveness of colorectal cancer screening using Japanese data to identify the optimal use of total colonoscopy and fecal immunochemical test. We developed a Markov model to assess the cost-effectiveness of colorectal cancer screening offered to an average-risk population aged 40 years or over. The cost, quality-adjusted life-years and number of total colonoscopy procedures required were evaluated for three screening strategies: (i) a fecal immunochemical test-based strategy; (ii) a total colonoscopy-based strategy; (iii) a strategy of adding population-wide total colonoscopy at 50 years to a fecal immunochemical test-based strategy. All three strategies dominated no screening. Among the three, Strategy 1 was dominated by Strategy 3, and the incremental cost per quality-adjusted life-years gained for Strategy 2 against Strategies 1 and 3 were JPY 293 616 and JPY 781 342, respectively. Within the Japanese threshold (JPY 5-6 million per QALY gained), Strategy 2 was the most cost-effective, followed by Strategy 3; however, Strategy 2 required more than double the number of total colonoscopy procedures than the other strategies. The total colonoscopy-based strategy could be the most cost-effective for population-based colorectal cancer screening in Japan. However, it requires more total colonoscopy procedures than the other strategies. Depending on total colonoscopy capacity, the strategy of adding total colonoscopy for individuals at a specified age to a fecal immunochemical test-based screening may be an optimal solution. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Use of multilevel modeling for determining optimal parameters of heat supply systems

    NASA Astrophysics Data System (ADS)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding the optimal parameters of a heat-supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine the optimal parameters of various types of piping systems owing to the flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing the features of the equipment used and the methods of its construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; and the SOSNA software system for determining the optimum parameters of intricate heat-supply systems, implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems of large (real) dimensionality, and they are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and Admiralty regions of St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.

  8. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation designed to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, although it over-estimates the mixing process in that case.

  9. High-performance slow light photonic crystal waveguides with topology optimized or circular-hole based material layouts

    NASA Astrophysics Data System (ADS)

    Wang, Fengwen; Jensen, Jakob S.; Sigmund, Ole

    2012-10-01

    Photonic crystal waveguides are optimized for modal confinement and loss related to slow light with high group index. A detailed comparison between optimized circular-hole based waveguides and optimized waveguides with free topology is performed. Design robustness with respect to manufacturing imperfections is enforced by considering different design realizations generated from under-, standard- and over-etching processes in the optimization procedure. A constraint ensures a certain modal confinement, and loss related to slow light with high group index is indirectly treated by penalizing field energy located in air regions. It is demonstrated that slow light with a group index up to ng = 278 can be achieved by topology optimized waveguides with promising modal confinement and restricted group-velocity-dispersion. All the topology optimized waveguides achieve a normalized group-index bandwidth of 0.48 or above. The comparisons between circular-hole based designs and topology optimized designs illustrate that the former can be efficient for dispersion engineering but that larger improvements are possible if irregular geometries are allowed.

  10. Systematic and efficient side chain optimization for molecular docking using a cheapest-path procedure.

    PubMed

    Schumann, Marcel; Armen, Roger S

    2013-05-30

    Molecular docking of small-molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
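
    A chain-simplified sketch of the idea (the true receptor interaction graph is not a simple chain, and all energies below are invented): given self-energies per rotamer and pairwise couplings between consecutive residues, a Viterbi-style cheapest path yields a near-optimal rotamer assignment in one forward pass plus a backtrack.

```python
def cheapest_rotamer_path(self_e, pair_e):
    """self_e[i][r]: self-energy of rotamer r at residue i.
    pair_e[i][r][s]: coupling of rotamer r at residue i with s at i+1."""
    cost = list(self_e[0])
    back = []
    for i in range(1, len(self_e)):
        new_cost, pointers = [], []
        for s, e_s in enumerate(self_e[i]):
            c, r = min((cost[r] + pair_e[i - 1][r][s], r)
                       for r in range(len(cost)))
            new_cost.append(c + e_s)
            pointers.append(r)
        back.append(pointers)
        cost = new_cost
    best = min(range(len(cost)), key=cost.__getitem__)
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return path[::-1], cost[best]

# Three residues with 2, 3, 2 rotamers (hypothetical energies).
self_e = [[0.0, 1.0], [0.5, 0.2, 2.0], [1.0, 0.0]]
pair_e = [[[0.1, 0.9, 0.3], [0.4, 0.2, 0.8]],
          [[0.5, 0.1], [0.3, 0.7], [0.2, 0.2]]]
print(cheapest_rotamer_path(self_e, pair_e))   # rotamer indices, total energy
```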

  11. Improved optimization of polycyclic aromatic hydrocarbons (PAHs) mixtures resolution in reversed-phase high-performance liquid chromatography by using factorial design and response surface methodology.

    PubMed

    Andrade-Eiroa, Auréa; Diévart, Pascal; Dagaut, Philippe

    2010-04-15

    A new procedure for optimizing the separation of PAHs in very complex mixtures by reversed-phase high-performance liquid chromatography (RPLC) is proposed. It is based on gradually changing the experimental conditions throughout the chromatographic run as a function of the physical properties of the compounds being eluted. Temperature and flow-rate gradients allowed the optimum resolution to be obtained in long chromatographic determinations in which PAHs of very different polarizability have to be separated. Whereas optimization of RPLC methodologies had previously been carried out regardless of the physico-chemical properties of the target analytes, we found that resolution is highly dependent on these properties. Based on the resolution criterion, the optimization process for a mixture of the 16 EPA PAHs was performed on three sets of difficult-to-separate PAH pairs: acenaphthene-fluorene (for the optimization of the first part of the chromatogram, where the light PAHs elute), and benzo[g,h,i]perylene-dibenzo[a,h]anthracene and benzo[g,h,i]perylene-indeno[1,2,3-cd]pyrene (for the optimization of the second part of the chromatogram, where the heavier PAHs elute). Two-level full factorial designs were applied to detect interactions among the variables to be optimized: flow rate, column oven temperature and mobile-phase gradient in the two parts of the studied chromatogram. Experimental data were fitted by multivariate nonlinear regression models, and optimum values of flow rate and temperature were obtained through mathematical analysis of the constructed models. An HPLC system equipped with a reversed-phase 5 μm C18, 250 mm x 4.6 mm column (with an acetonitrile/water mobile phase), a column oven, a binary pump, a photodiode array detector (PDA), and a fluorimetric detector was used in this work. Optimum resolution was achieved by operating at 1.0 mL/min in the first part of the chromatogram (until 45 min) and 0.5 mL/min in the second (from 45 min to the end) and by applying a programmed temperature gradient (15 degrees C until 30 min, progressively increasing until reaching 40 degrees C at 45 min). © 2009 Elsevier B.V. All rights reserved.
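
    A compact illustration of the screening step (the coded factor levels and the resolution readings are hypothetical): a two-level full factorial in flow rate and temperature, augmented with one centre point, fitted by least squares to expose the main effects and the flow-temperature interaction the abstract refers to.

```python
import numpy as np
from itertools import product

# 2^2 full factorial in coded units (-1/+1) for flow rate F and temperature T,
# plus one centre point.
runs = np.array(list(product([-1.0, 1.0], repeat=2)) + [(0.0, 0.0)])
resolution = np.array([1.10, 1.62, 1.33, 1.21, 1.48])  # hypothetical Rs values

# Model: Rs = b0 + b1*F + b2*T + b12*F*T  (least-squares fit)
X = np.column_stack([np.ones(len(runs)), runs, runs[:, 0] * runs[:, 1]])
beta, *_ = np.linalg.lstsq(X, resolution, rcond=None)
print(f"F effect {beta[1]:+.3f}, T effect {beta[2]:+.3f}, "
      f"FxT interaction {beta[3]:+.3f}")
```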

  12. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
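
    A self-contained sketch of the approximate moment method for a scalar output (a toy function replaces the Euler CFD code, and the derivatives that the paper obtains from sensitivity analysis are supplied analytically here): the first-order variance uses gradients, the second-order mean correction uses diagonal second derivatives, and Monte Carlo checks the approximation.

```python
import numpy as np

def approx_moments(f, grad, hess_diag, mu, sigma):
    """First/second-order statistical moments of f(X), X ~ independent normals."""
    mu, s2 = np.asarray(mu, float), np.asarray(sigma, float) ** 2
    g, h = np.asarray(grad(mu)), np.asarray(hess_diag(mu))
    mean_2nd = f(mu) + 0.5 * np.sum(h * s2)   # second-order mean
    var_1st = np.sum(g ** 2 * s2)             # first-order variance
    return mean_2nd, var_1st

# Toy "CFD output" standing in for the quasi-3D Euler code.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grad = lambda x: [2.0 * x[0], 3.0]
hess_diag = lambda x: [2.0, 0.0]

mean2, var1 = approx_moments(f, grad, hess_diag, mu=[1.0, 2.0], sigma=[0.1, 0.2])
X = np.random.default_rng(0).normal([1.0, 2.0], [0.1, 0.2], size=(200_000, 2))
print(f"mean: approx {mean2:.4f} vs MC {f(X.T).mean():.4f}; "
      f"var: approx {var1:.4f} vs MC {f(X.T).var():.4f}")
```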

  13. Application of mixture experimental design in the formulation and optimization of matrix tablets containing carbomer and hydroxy-propylmethylcellulose.

    PubMed

    Petrovic, Aleksandra; Cvetkovic, Nebojsa; Ibric, Svetlana; Trajkovic, Svetlana; Djuric, Zorica; Popadic, Dragica; Popovic, Radmila

    2009-12-01

    Using a mixture experimental design, the effect of the combination of carbomer (Carbopol® 971P NF) and hydroxypropylmethylcellulose (Methocel® K100M or Methocel® K4M) on the release profile and the mechanism of drug liberation from matrix tablets was investigated. A numerical optimization procedure was also applied to establish and obtain a formulation with the desired drug release. The amount of TP released, the release rate and the release mechanism varied with the carbomer ratio in the total matrix and the HPMC viscosity. Increasing carbomer fractions led to a decrease in drug release. Anomalous diffusion was found in all matrices containing carbomer, while Case-II transport was predominant for tablets based on HPMC only. The predicted and obtained profiles for the optimized formulations showed similarity. These results indicate that the Simplex Lattice mixture experimental design and numerical optimization procedure can be applied during development to obtain a sustained-release matrix formulation with the desired release profile.

  14. Production and characterization of alginate microcapsules produced by a vibrational encapsulation device.

    PubMed

    Mazzitelli, S; Tosi, A; Balestra, C; Nastruzzi, C; Luca, G; Mancuso, F; Calafiore, R; Calvitti, M

    2008-09-01

    The optimization, through a Design of Experiments (DoE) approach, of a microencapsulation procedure for isolated neonatal porcine islets (NPI) is described. The applied method is based on the generation of monodisperse droplets by a vibrational nozzle. An alginate/polyornithine encapsulation procedure, developed and validated in our laboratory for almost a decade, was used to embody pancreatic islets. We analyzed different experimental parameters including frequency of vibration, amplitude of vibration, polymer pumping rate, and distance between the nozzle and the gelling bath. We produced calcium-alginate gel microbeads with excellent morphological characteristics as well as a very narrow size distribution. The automatically produced microcapsules did not alter morphology, viability and functional properties of the enveloped NPI. The optimization of this automatic procedure may provide a novel approach to obtain a large number of batches possibly suitable for large scale production of immunoisolated NPI for in vivo cell transplantation procedures in humans.

  15. Optimization of locations of diffusion spots in indoor optical wireless local area networks

    NASA Astrophysics Data System (ADS)

    Eltokhey, Mahmoud W.; Mahmoud, K. R.; Ghassemlooy, Zabih; Obayya, Salah S. A.

    2018-03-01

    In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. The users' performance uniformity is addressed by using the CFO algorithm, and adopting different objective function's configurations, while considering maximization and minimization of the signal to noise ratio and the delay spread, respectively. We also investigate the effect of varying the objective function's weights on the system and the users' performance as part of the adaptation process. The results show that the proposed objective function configuration-based optimization procedure offers an improvement of 65% in the standard deviation of individual receivers' performance.

  16. Optimization of a wet microalgal lipid extraction procedure for improved lipid recovery for biofuel and bioproduct production.

    PubMed

    Sathish, Ashik; Marlar, Tyler; Sims, Ronald C

    2015-10-01

    Methods to convert microalgal biomass to bio-based fuels and chemicals are limited by several processing and economic hurdles. Research conducted in this study modified and optimized a previously published procedure capable of extracting transesterifiable lipids from wet algal biomass. This optimization resulted in the extraction of 77% of the total transesterifiable lipids, while reducing the amount of materials and the temperature required by the procedure. In addition, characterization of the side streams generated demonstrated that: (1) the C/N ratio of the residual or lipid-extracted (LE) biomass increased to 54.6 versus 10.1 for the original biomass, (2) the aqueous phase generated contains nitrogen, phosphorous, and carbon, and (3) the solid precipitate phase was composed of up to 11.2 wt% nitrogen (70% protein). The ability to isolate algal lipids and the possibility of utilizing the generated side streams as products and/or feedstock material for downstream processes helps promote the algal biorefinery concept. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Optimum Edging and Trimming of Hardwood Lumber

    Treesearch

    Carmen Regalado; D. Earl Kline; Philip A. Araman

    1992-01-01

    Before the adoption of an automated system for optimizing edging and trimming in hardwood mills, the performance of present manual systems must be evaluated to provide a basis for comparison. A study was made in which lumber values recovered in actual hardwood operations were compared to the output of a computer-based procedure for edging and trimming optimization. The...

  18. How to Optimize Learning from Animated Models: A Review of Guidelines Based on Cognitive Load

    ERIC Educational Resources Information Center

    Wouters, Pieter; Paas, Fred; van Merrienboer, Jeroen J. G.

    2008-01-01

    Animated models explicate the procedure to solve a problem, as well as the rationale behind this procedure. For abstract cognitive processes, animations might be beneficial, especially when a supportive pedagogical agent provides explanations. This article argues that animated models can be an effective instructional method, provided that they are…

  19. Numerical solutions of a control problem governed by functional differential equations

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Thrift, P. R.; Burns, J. A.; Cliff, E. M.

    1978-01-01

    A numerical procedure is proposed for solving optimal control problems governed by linear retarded functional differential equations. The procedure is based on the idea of 'averaging approximations', due to Banks and Burns (1975). For illustration, numerical results generated on an IBM 370/158 computer, which demonstrate the rapid convergence of the method are presented.

  20. Practical Procedures for Constructing Mastery Tests to Minimize Errors of Classification and to Maximize or Optimize Decision Reliability.

    ERIC Educational Resources Information Center

    Byars, Alvin Gregg

    The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…

  1. CAN-DO, CFD-based Aerodynamic Nozzle Design and Optimization program for supersonic/hypersonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; White, J. A.

    1992-01-01

    A design program is developed which incorporates a modern approach to the design of supersonic/hypersonic wind-tunnel nozzles. The approach is obtained by coupling computational fluid dynamics (CFD) with design optimization. The program can be used to design 2D or axisymmetric, supersonic or hypersonic wind-tunnel nozzles that can be modeled with a calorically perfect gas. The nozzle design is obtained by solving a nonlinear least-squares optimization problem (LSOP). The LSOP is solved using an iterative procedure which requires intermediate flowfield solutions. The nozzle flowfield is simulated by solving the Navier-Stokes equations for the subsonic and transonic flow regions and the parabolized Navier-Stokes equations for the supersonic flow regions. The advantages of this method are that the design is based on the solution of the viscous equations, eliminating the need to make separate corrections to a design contour, and that the procedure is flexible enough to apply to different types of nozzle design problems.

  2. Ternary isocratic mobile phase optimization utilizing resolution Design Space based on retention time and peak width modeling.

    PubMed

    Kawabe, Takefumi; Tomitsuka, Toshiaki; Kajiro, Toshi; Kishi, Naoyuki; Toyo'oka, Toshimasa

    2013-01-18

    An optimization procedure for ternary isocratic mobile-phase composition in HPLC methods, using a statistical prediction model and a visualization technique, is described. In this report, two prediction models were first evaluated to obtain reliable predictions. The retention time prediction model was constructed by modifying established knowledge of retention modeling with respect to ternary solvent strength changes. An excellent correlation between observed and predicted retention times was obtained for various pharmaceutical compounds by multiple regression modeling of the solvent strength parameters. The prediction model for peak width at half height employed polynomial fitting of the retention time, because a linear relationship between the peak width at half height and the retention time was not obtained even after taking into account the contribution of the extra-column effect based on a moment method. This model gave accurate predictions, with correlation coefficients between observed and predicted peak widths at half height mostly above 0.99. A procedure to visualize a resolution Design Space was then developed as the second challenge. An artificial neural network was used to link the ternary solvent strength parameters directly to the predicted resolution, determined from the accurate predictions of retention time and peak width at half height, and to visualize appropriate ternary mobile-phase compositions as the region with resolution over 1.5 on a contour profile. Using mixtures of similar pharmaceutical compounds in case studies, we verified that the prediction can find the optimal range of conditions. Observed chromatographic results under the optimal conditions mostly matched the predictions, and the average difference between observed and predicted resolution was approximately 0.3. This means that sufficient prediction accuracy could be achieved with the proposed procedure. Consequently, the resolution Design Space built on accurate predictions provides a procedure for finding the optimal range of ternary solvent strengths that achieves an appropriate separation. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
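
    A numeric stand-in for the search core (the real study polls listener paired comparisons rather than a test function; the quadratic objective and step rules here are illustrative): a compass-style pattern search that grows the step after a successful poll and shrinks it after a failed one, needing only rankings of candidate settings.

```python
import numpy as np

def adaptive_pattern_search(score, x0, step=1.0, grow=2.0, shrink=0.5, tol=1e-3):
    """Compass search with adaptive step size; `score` is minimized and only
    its ordering matters, mirroring paired-comparison judgments."""
    x = np.asarray(x0, float)
    fx = score(x)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    while step > tol:
        candidates = [x + step * d for d in dirs]
        values = [score(c) for c in candidates]
        i = int(np.argmin(values))
        if values[i] < fx:                 # success: move and enlarge step
            x, fx, step = candidates[i], values[i], step * grow
        else:                              # failure: refine around x
            step *= shrink
    return x, fx

best, val = adaptive_pattern_search(
    lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [0.0, 0.0])
print(best, val)   # converges near (1, -2)
```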

  4. Experimental design for evaluating WWTP data by linear mass balances.

    PubMed

    Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P

    2018-05-15

    A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
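
    A minimal sketch of the estimation step, assuming a textbook semi-infinite solid under constant surface heat flux as the forward model (the paper treats a finite composite specimen); the flux, sensor depth, and synthetic data below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erfc

# Least-squares estimation of thermal conductivity k and volumetric heat
# capacity C from temperature data. The forward model is the semi-infinite
# solid under constant surface flux -- a stand-in for the paper's geometry.

q, x = 500.0, 0.005                      # surface flux [W/m^2], sensor depth [m]
t = np.linspace(10, 600, 60)             # measurement times [s]

def temp_rise(params, t):
    k, C = params                        # conductivity [W/m/K], vol. heat capacity [J/m^3/K]
    a = k / C                            # thermal diffusivity [m^2/s]
    return (2 * q / k) * np.sqrt(a * t / np.pi) * np.exp(-x**2 / (4 * a * t)) \
           - (q * x / k) * erfc(x / (2 * np.sqrt(a * t)))

true = (0.60, 1.5e6)                     # "unknown" properties behind the synthetic data
data = temp_rise(true, t) + np.random.default_rng(0).normal(0, 0.05, t.size)

fit = least_squares(lambda p: temp_rise(p, t) - data, x0=(1.0, 1.0e6),
                    bounds=([0.01, 1e5], [10.0, 1e7]))
print(fit.x)                             # estimated (k, C), close to (0.60, 1.5e6)
```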

  6. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a two times differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
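
    A minimal surrogate-driven loop in the spirit of the paper, using scikit-learn's Matérn kernel with ν = 5/2 (the twice-differentiable form). Unlike the paper's optimizer, this sketch uses energies only (no gradients, no overshooting) on a hypothetical two-dimensional surface.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def energy(x):                        # hypothetical 2-D potential energy surface
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.3 * np.sin(3.0 * x[0])

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(5, 2))          # initial geometries
y = np.array([energy(x) for x in X])

# Matern with nu=2.5 is the twice-differentiable form tested in the paper.
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-8)

for _ in range(15):                   # surrogate-driven minimization loop
    gpr.fit(X, y)
    x0 = X[np.argmin(y)]              # start from the best geometry so far
    step = minimize(lambda x: gpr.predict(x.reshape(1, -1))[0], x0)
    X = np.vstack([X, step.x])        # evaluate the true surface at the GPR minimum
    y = np.append(y, energy(step.x))

print(X[np.argmin(y)], y.min())
```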

  7. Impacts of Intelligent Automated Quality Control on a Small Animal APD-Based Digital PET Scanner

    NASA Astrophysics Data System (ADS)

    Charest, Jonathan; Beaudoin, Jean-François; Bergeron, Mélanie; Cadorette, Jules; Arpin, Louis; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean

    2016-10-01

    Stable system performance is mandatory to warrant the accuracy and reliability of biological results relying on small animal positron emission tomography (PET) imaging studies. This simple requirement sets the ground for imposing routine quality control (QC) procedures to keep PET scanners at a reliable optimal performance level. However, such procedures can become burdensome for scanner operators to implement, especially taking into account the increasing number of data acquisition channels in newer generation PET scanners. In systems using pixel detectors to achieve enhanced spatial resolution and contrast-to-noise ratio (CNR), the QC workload rapidly increases to unmanageable levels due to the number of independent channels involved. An artificial intelligence based QC system, referred to as Scanner Intelligent Diagnosis for Optimal Performance (SIDOP), was proposed to help reduce the QC workload by performing automatic channel fault detection and diagnosis. SIDOP consists of four high-level modules that employ machine learning methods to perform their tasks: Parameter Extraction, Channel Fault Detection, Fault Prioritization, and Fault Diagnosis. Ultimately, SIDOP submits a prioritized faulty channel list to the operator and proposes actions to correct the faults. To validate that SIDOP can perform QC procedures adequately, it was deployed on a LabPET™ scanner and multiple performance metrics were extracted. After multiple corrections of sub-optimal scanner settings, an 8.5% (95% confidence interval (CI): [7.6, 9.3]) improvement in the CNR, a 17.0% (CI: [15.3, 18.7]) decrease in the uniformity percentage standard deviation, and a 6.8% gain in global sensitivity were observed. These results confirm that SIDOP can indeed be of assistance in performing QC procedures and in restoring performance to optimal levels.

  8. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
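
    The mechanism the paper exploits can be shown with a short Monte Carlo sketch: passing a proof test truncates the weak tail of the strength distribution, so the in-service failure probability conditional on passing drops. All distributions and the proof level below are hypothetical.

```python
import numpy as np

# Monte Carlo sketch: proof testing screens out the weakest components, so the
# in-service reliability of the surviving population improves. Hypothetical data.

rng = np.random.default_rng(0)
n = 200_000
strength = rng.normal(100.0, 10.0, n)      # component strength [MPa]
load = rng.normal(60.0, 12.0, n)           # in-service load [MPa]
proof = 85.0                               # proof test load [MPa]

pf_no_proof = np.mean(load > strength)     # failure probability without screening
passed = strength > proof                  # only proof-test survivors enter service
pf_proof = np.mean(load[passed] > strength[passed])

print(f"P(fail proof test)  = {np.mean(~passed):.3f}")
print(f"P(in-service fail)  = {pf_no_proof:.2e} without proof test")
print(f"P(in-service fail)  = {pf_proof:.2e} given proof test passed")
```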

  9. Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System

    NASA Astrophysics Data System (ADS)

    Huang, Long; Feng, Xiao; Chu, Khim H.

    2010-11-01

    Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach will also lead to cost savings that accrue due to reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits of any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that yield the minimum total cost.
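
    A minimal sketch of the optimization step, assuming a toy total cost model with hypothetical coefficients (the paper builds its model by total cost accounting for an actual paper mill): freshwater and discharge costs fall with regeneration while treatment cost rises with flowrate and efficiency.

```python
from scipy.optimize import minimize

# Toy total-cost model (hypothetical coefficients): reuse displaces freshwater
# intake and discharge, while regeneration cost grows with removal efficiency.

def total_cost(z):
    f, e = z                                   # regeneration flowrate [t/h], efficiency [-]
    freshwater = 2.0 * (50.0 - 0.8 * f * e)    # reuse displaces freshwater intake
    discharge  = 1.5 * (50.0 - f)              # untreated flow sent to discharge
    regen      = (0.6 + 4.0 * e**2) * f        # treatment cost rises with efficiency
    return freshwater + discharge + regen

res = minimize(total_cost, x0=[20.0, 0.5], bounds=[(0.0, 50.0), (0.0, 0.95)])
print(res.x, res.fun)    # optimal (flowrate, efficiency) and minimum total cost
```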

  10. Computer-aided diagnostic strategy selection.

    PubMed

    Greenes, R A

    1986-03-01

    Determination of the optimal diagnostic work-up strategy for the patient is becoming a major concern for the practicing physician. Overlap of the indications for various diagnostic procedures, differences in their invasiveness or risk, and high costs have made physicians aware of the need to consider the choice of procedure carefully, as well as its relation to management actions available. In this article, the author discusses research approaches that aim toward development of formal decision analytic methods to allow the physician to determine optimal strategy; clinical algorithms or rules as guides to physician decisions; improved measures for characterizing the performance of diagnostic tests; educational tools for increasing the familiarity of physicians with the concepts underlying these measures and analytic procedures; and computer-based aids for facilitating the employment of these resources in actual clinical practice.

  11. Shape optimization of the modular press body

    NASA Astrophysics Data System (ADS)

    Pabiszczak, Stanisław

    2016-12-01

    The paper presents an algorithm for optimizing the cross-sectional dimensions of a modular press body under a minimum mass criterion. The wall thicknesses and their angles of inclination relative to the base of the section are taken as decision variables, while the overall dimensions are treated as constants. The optimal parameter values were calculated numerically with the Solver tool in Microsoft Excel. The optimization procedure helped reduce the body mass by 27% while maintaining the required rigidity of the body.

  12. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods that try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on the modeling of nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, in many cases with extremely effective exploration capabilities that outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the specific modeling of real phenomena, and their novelty in comparison with existing alternative optimization algorithms. We first review important concepts regarding optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for tackling hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to a detailed review of the most important meta-heuristics based on them. A discussion of the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks that can ease the implementation of these algorithms.
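
    As a concrete member of this family, the sketch below implements simulated annealing, the classical meta-heuristic derived from thermal physics; it illustrates the Metropolis acceptance rule rather than any of the newer nonlinear-physics algorithms reviewed in the paper.

```python
import math
import random

# Simulated annealing: accept an improving move always, and a worsening move
# with Boltzmann probability exp(-delta/T); the "temperature" T is cooled
# geometrically. The objective is a hypothetical multimodal test function.

def objective(x):
    return x * x + 10 * math.sin(3 * x)

random.seed(0)
x, fx = 4.0, objective(4.0)
T, cooling = 5.0, 0.995

while T > 1e-3:
    cand = x + random.uniform(-0.5, 0.5)
    delta = objective(cand) - fx
    if delta < 0 or random.random() < math.exp(-delta / T):
        x, fx = cand, objective(cand)      # Metropolis acceptance
    T *= cooling                           # cooling schedule

print(x, fx)
```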

  13. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry into the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, are difficult to handle with existing experimental procedures, for two reasons. First, existing procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment; for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle; moreover, existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization selects experimental designs on the fly during an experiment based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.

  14. Diagnostic procedures for non-small-cell lung cancer (NSCLC): recommendations of the European Expert Group

    PubMed Central

    Dietel, Manfred; Bubendorf, Lukas; Dingemans, Anne-Marie C; Dooms, Christophe; Elmberger, Göran; García, Rosa Calero; Kerr, Keith M; Lim, Eric; López-Ríos, Fernando; Thunnissen, Erik; Van Schil, Paul E; von Laffert, Maximilian

    2016-01-01

    Background There is currently no Europe-wide consensus on the appropriate preanalytical measures and workflow to optimise procedures for tissue-based molecular testing of non-small-cell lung cancer (NSCLC). To address this, a group of lung cancer experts (see list of authors) convened to discuss and propose standard operating procedures (SOPs) for NSCLC. Methods Based on earlier meetings and scientific expertise on lung cancer, a multidisciplinary group meeting was arranged. The aim was to include all relevant aspects concerning NSCLC diagnosis. After careful consideration, the following topics were selected and each was reviewed by the experts: surgical resection and sampling; biopsy procedures for analysis; preanalytical and other variables affecting quality of tissue; tissue conservation; testing procedures for epidermal growth factor receptor, anaplastic lymphoma kinase and ROS proto-oncogene 1, receptor tyrosine kinase (ROS1) in lung tissue and cytological specimens; as well as standardised reporting and quality control (QC). Finally, an optimal workflow was described. Results Suggested optimal procedures and workflows are discussed in detail. The broad consensus was that the complex workflow presented can only be executed effectively by an interdisciplinary approach using a well-trained team. Conclusions To optimise diagnosis and treatment of patients with NSCLC, it is essential to establish SOPs that are adaptable to the local situation. In addition, a continuous QC system and a local multidisciplinary tumour-type-oriented board are essential. PMID:26530085

  15. Heuristic query optimization for query multiple table and multiple clausa on mobile finance application

    NASA Astrophysics Data System (ADS)

    Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG

    2018-01-01

    Mobile applications allow many users to access data without being limited by space and time. Over time, the data population of such an application increases, and access times become problematic once tables reach tens of thousands to millions of records. The objective of this research is to maintain query performance as the number of records grows. One way to maintain access time performance is to apply a query optimization method; the optimization used in this research is the heuristic query optimization method. The application studied is a mobile financial application backed by a MySQL database with stored procedures. The application is used by more than one business entity within a single database, which enables rapid data growth. The stored procedures contain a query optimized with the heuristic method; optimization is performed on a SELECT query that involves more than one table and multiple clauses. Evaluation is done by comparing the average access times of the optimized and unoptimized queries, and the measurements are repeated as the data population in the database increases. The evaluation results show that execution with heuristic query optimization is consistently faster than execution without it.
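
    The evaluation idea can be reproduced engine-agnostically: time two equivalent formulations of a multi-table SELECT, one filtering after the join and one pushing selections down into subqueries (a classic heuristic rewrite). The sketch uses sqlite3 for self-containment instead of the paper's MySQL stored procedures, and modern engines often apply such rewrites internally, so measured gains vary.

```python
import sqlite3, time

# Compare a join that filters late against an equivalent query that pushes the
# selections into subqueries first. sqlite3 stands in for the paper's MySQL
# setup; real engines may already perform this rewrite, so the gap varies.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, entity INTEGER)")
db.execute("CREATE TABLE txn (id INTEGER PRIMARY KEY, acc INTEGER, amount REAL)")
db.executemany("INSERT INTO account VALUES (?, ?)",
               [(i, i % 50) for i in range(20_000)])
db.executemany("INSERT INTO txn VALUES (?, ?, ?)",
               [(i, i % 20_000, i * 0.01) for i in range(200_000)])

late_filter = """SELECT SUM(t.amount) FROM txn t JOIN account a ON t.acc = a.id
                 WHERE a.entity = 7 AND t.amount > 100"""
pushed_down = """SELECT SUM(t.amount) FROM (SELECT * FROM txn WHERE amount > 100) t
                 JOIN (SELECT * FROM account WHERE entity = 7) a ON t.acc = a.id"""

for label, sql in [("late filter", late_filter), ("pushed down", pushed_down)]:
    start = time.perf_counter()
    for _ in range(20):                       # average over repeated executions
        result = db.execute(sql).fetchone()[0]
    print(label, result, f"{(time.perf_counter() - start) / 20 * 1e3:.2f} ms")
```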

  16. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
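
    The tabu-search component that replaces BBO's mutation operator can be sketched compactly: swap-neighborhood search on the QAP objective with a short-term tabu list and an aspiration criterion. The flow and distance matrices below are random stand-ins for QAPLIB instances, and the tenure and iteration budget are illustrative.

```python
import numpy as np

# Swap-neighborhood tabu search on the QAP: cost(p) = sum_ij F[i,j]*D[p[i],p[j]].
# F and D are random stand-ins for QAPLIB data.

rng = np.random.default_rng(0)
n = 12
F = rng.integers(0, 10, (n, n))            # flows between facilities
D = rng.integers(0, 10, (n, n))            # distances between locations

def qap_cost(p):
    return int((F * D[np.ix_(p, p)]).sum())

p = rng.permutation(n)
best_cost = qap_cost(p)
tabu = {}                                   # (i, j) -> iteration until which swap is tabu

for it in range(300):
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):
            q = p.copy()
            q[i], q[j] = q[j], q[i]
            c = qap_cost(q)
            # aspiration criterion: allow a tabu swap if it beats the best cost
            if tabu.get((i, j), -1) < it or c < best_cost:
                candidates.append((c, i, j, q))
    c, i, j, p = min(candidates, key=lambda t: t[0])
    tabu[(i, j)] = it + 8                   # tabu tenure of 8 iterations
    best_cost = min(best_cost, c)

print("best QAP cost found:", best_cost)
```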

  17. Neural-Network-Based Robust Optimal Tracking Control for MIMO Discrete-Time Systems With Unknown Uncertainty Using Adaptive Critic Design.

    PubMed

    Liu, Lei; Wang, Zhanshan; Zhang, Huaguang

    2018-04-01

    This paper is concerned with a robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via an adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method such that the cost function in the procedure of dealing with uncertainty is minimized and the closed-loop system is stable. Based on the neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty, and 2) a novel cost function, which is not in quadratic form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.

  18. Genetic algorithm dynamics on a rugged landscape

    NASA Astrophysics Data System (ADS)

    Bornholdt, Stefan

    1998-04-01

    The genetic algorithm is an optimization procedure motivated by biological evolution and is successfully applied to optimization problems in different areas. A statistical mechanics model for its dynamics is proposed based on the parent-child fitness correlation of the genetic operators, making it applicable to general fitness landscapes. It is compared to a recent model based on a maximum entropy ansatz. Finally it is applied to modeling the dynamics of a genetic algorithm on the rugged fitness landscape of the NK model.

  19. Numerical Procedures in the Optimal Grouping of Students for Instructional Purposes. Technical Report No. 399 (Parts 1 and 2).

    ERIC Educational Resources Information Center

    Lawrence, Brian F.

    The study was concerned with the formation of groups of students and specifically addressed the problem: Can a computerized procedure be developed which assigns students to instructional groups, which maximizes the homogeneity of these groups when this homogeneity is based on relevant student learning characteristics, and which takes account of…

  1. Fundamental principles in periodontal plastic surgery and mucosal augmentation--a narrative review.

    PubMed

    Burkhardt, Rino; Lang, Niklaus P

    2014-04-01

    To provide a narrative review of the current literature elaborating on fundamental principles of periodontal plastic surgical procedures. Based on a presumptive outline of the narrative review, MeSH terms were used to search the relevant literature electronically in the PubMed and Cochrane Collaboration databases. Where possible, systematic reviews were included. The review is divided into three phases associated with periodontal plastic surgery: a) the pre-operative phase, b) surgical procedures and c) post-surgical care. The surgical procedures were discussed in the light of a) flap design and preparation, b) flap mobilization and c) flap adaptation and stabilization. Pre-operative paradigms include optimal plaque control and smoking counselling. Fundamental principles in surgical procedures draw on basic knowledge of anatomy and vascularity, leading to appropriate novel flap designs with papilla preservation. Flap mobilization based on releasing incisions can be performed up to 5 mm. Flap adaptation and stabilization depend on appropriate wound bed characteristics, undisturbed blood clot formation, revascularization and wound stability through adequate suturing. Delicate tissue handling and tension-free wound closure represent prerequisites for optimal healing outcomes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks.

    PubMed

    Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-07-19

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute the network traffic load unevenly across sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm proceeds as follows: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it applies the maximum-minimum criterion to obtain the optimal route to the base station. Simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR).

  3. Optimization of fuels from waste composition with application of genetic algorithm.

    PubMed

    Małgorzata, Wzorek

    2014-05-01

    The objective of this article is to elaborate a method to optimize the composition of fuels from sewage sludge (PBS fuel - fuel based on sewage sludge and coal slime; PBM fuel - fuel based on sewage sludge and meat and bone meal; PBT fuel - fuel based on sewage sludge and sawdust). As a tool for the optimization procedure, the use of a genetic algorithm is proposed. The optimization task involves maximizing the mass fraction of sewage sludge in a fuel, subject to quality-based criteria for use as an alternative fuel in the cement industry. The selection criteria for the fuel composition concerned such parameters as calorific value and the content of chlorine, sulphur and heavy metals. Mathematical descriptions of the fuel compositions and the general form of the genetic algorithm, as well as the obtained optimization results, are presented. The results of this study indicate that the proposed genetic algorithm offers an optimization tool that could be useful in determining the composition of fuels produced from waste.
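
    A compact sketch of the formulation, assuming linear mixing rules and hypothetical property values and limits: a simplified, mutation-only genetic algorithm maximizes the sludge fraction of a sludge/coal-slime/sawdust blend under penalties for violating the quality criteria.

```python
import numpy as np

# Penalized GA sketch: maximize the sewage sludge mass fraction of a
# sludge/coal-slime/sawdust blend under linear mixing constraints.
# All property values and limits below are hypothetical, not the paper's data.

rng = np.random.default_rng(2)
cv = np.array([11.0, 18.0, 16.5])        # calorific value [MJ/kg] per component
cl = np.array([0.30, 0.40, 0.02])        # chlorine content [%] per component
s  = np.array([1.20, 0.80, 0.05])        # sulphur content [%] per component
CV_MIN, CL_MAX, S_MAX = 14.0, 0.25, 0.90 # fuel quality limits (hypothetical)

def fitness(x):                          # x: mass fractions summing to 1
    pen = max(CV_MIN - cv @ x, 0) + max(cl @ x - CL_MAX, 0) + max(s @ x - S_MAX, 0)
    return x[0] - 10.0 * pen             # sludge fraction minus constraint penalty

pop = rng.dirichlet(np.ones(3), size=60) # random valid compositions
for _ in range(200):
    fit = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(fit)[-30:]]                  # truncation selection
    kids = parents[rng.integers(0, 30, 60)]
    kids = np.abs(kids + rng.normal(0, 0.02, kids.shape)) # Gaussian mutation
    pop = kids / kids.sum(1, keepdims=True)               # renormalize fractions

best = max(pop, key=fitness)
print(best, cv @ best, cl @ best, s @ best)
```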

  4. A derived heuristics based multi-objective optimization procedure for micro-grid scheduling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Deb, Kalyanmoy; Fang, Yanjun

    2017-06-01

    With the availability of different types of power generators in an electric micro-grid system, scheduling their operation as the load demand changes with time becomes an important task. Besides load balance constraints and the generators' rated power limits, several other practicalities, such as the limited availability of grid power and restricted ramping of generator output, must all be considered during the operation scheduling process, which makes it difficult to judge whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with suitable heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first step deciphers important heuristics about the problem and the second step utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling is considered from a two-objective (cost and emission) point of view. The first step develops basic and advanced knowledge bases offline from a series of prior demand-wise optimization runs; the second step utilizes them to modify optimized solutions in an application scenario. Results for island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.

  5. Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.

    PubMed

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-09-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.

  6. Finite element design procedure for correcting the coining die profiles

    NASA Astrophysics Data System (ADS)

    Alexandrino, Paulo; Leitão, Paulo J.; Alves, Luis M.; Martins, Paulo A. F.

    2018-05-01

    This paper presents a new finite element based design procedure for correcting the coining die profiles in order to optimize the distribution of pressure and the alignment of the resultant vertical force at the end of the die stroke. The procedure avoids time consuming and costly try-outs, does not interfere with the creative process of the sculptors and extends the service life of the coining dies by significantly decreasing the applied pressure and bending moments. The numerical simulations were carried out in a computer program based on the finite element flow formulation that is currently being developed by the authors in collaboration with the Portuguese Mint. A new experimental procedure based on the stack compression test is also proposed for determining the stress-strain curve of the materials directly from the coin blanks.

  7. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of simultaneously minimizing the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem in which a first-order-plus-dead-time process model is considered, subject to a robustness constraint based on maximum sensitivity. A set of Pareto-optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then found by choosing the Nash solution among the Pareto-optimal ones. A curve-fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
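
    The Nash-selection step can be sketched in a few lines: among Pareto-optimal (set-point IAE, load-disturbance IAE) pairs, choose the point maximizing the product of improvements over a disagreement point. The IAE pairs below are hypothetical, not the paper's Pareto data.

```python
# Choosing the Nash solution among Pareto-optimal points: maximize the product
# of gains over a disagreement point d (here the worst value of each objective
# on the front). The IAE pairs are hypothetical.

pareto = [(1.10, 3.90), (1.30, 3.10), (1.60, 2.60), (2.10, 2.30), (2.90, 2.15)]
d = (max(p[0] for p in pareto), max(p[1] for p in pareto))  # disagreement point

nash = max(pareto, key=lambda p: (d[0] - p[0]) * (d[1] - p[1]))
print("Nash solution (IAE_sp, IAE_ld):", nash)
```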

  8. A multi-material topology optimization approach for wrinkle-free design of cable-suspended membrane structures

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-06-01

    In order to eliminate stress-related wrinkles in cable-suspended membrane structures and to provide simple and reliable deployment, this study presents a multi-material topology optimization model and an effective solution procedure for generating optimal connected layouts for membranes and cables. On the basis of the principal stress criterion of membrane wrinkling behavior and the density-based interpolation of multi-phase materials, the optimization objective is to maximize the total structural stiffness while satisfying principal stress constraints and specified material volume requirements. By adopting the cosine-type relaxation scheme to avoid the stress singularity phenomenon, the optimization model is successfully solved through a standard gradient-based algorithm. Four-corner tensioned membrane structures with different loading cases were investigated to demonstrate the effectiveness of the proposed method in automatically finding the optimal design composed of curved boundary cables and wrinkle-free membranes.

  9. Data Transfer Advisor with Transport Profiling Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Yun, Daqing

    The network infrastructures have been rapidly upgraded in many high-performance networks (HPNs). However, such infrastructure investment has not led to corresponding performance improvement in big data transfer, especially at the application layer, largely due to the complexity of optimizing transport control on end hosts. We design and implement ProbData, a PRofiling Optimization Based DAta Transfer Advisor, to help users determine the most effective data transfer method with the most appropriate control parameter values to achieve the best data transfer performance. ProbData employs a profiling optimization based approach to exploit the optimal operational zone of various data transfer methods in support of big data transfer in extreme scale scientific applications. We present a theoretical framework of the optimized profiling approach employed in ProbData as well as its detailed design and implementation. The advising procedure and performance benefits of ProbData are illustrated and evaluated by proof-of-concept experiments in real-life networks.

  10. Optimization of life support systems and their systems reliability

    NASA Technical Reports Server (NTRS)

    Fan, L. T.; Hwang, C. L.; Erickson, L. E.

    1971-01-01

    The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem that has been considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of system reliability of life support systems and sub-systems; (7) modeling, simulation and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.

  11. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining.

    PubMed

    Salehi, Mojtaba; Bahreininejad, Ardeshir

    2011-08-01

    Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and tool access direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, using an intelligent search strategy. Then, in the detailed planning stage, a genetic algorithm that prunes the initial feasible sequences yields the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation, based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the operation sequence and of the machine, cutting tool and TAD selection for each operation, using intelligent search and a genetic algorithm.

  12. Aerodynamic shape optimization of a HSCT type configuration with improved surface definition

    NASA Technical Reports Server (NTRS)

    Thomas, Almuttil M.; Tiwari, Surendra N.

    1994-01-01

    Two distinct parametrization procedures for generating free-form surfaces to represent aerospace vehicles are presented. The first is representation using spline functions such as nonuniform rational B-splines (NURBS); the second is a novel geometric parametrization using solutions to a suitably chosen partial differential equation. The main idea is to develop a surface representation that is more versatile and can be used within an optimization process. An unstructured volume grid is generated by an advancing front algorithm and solutions are obtained using an Euler solver. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using an automatic differentiation precompiler software tool. Aerodynamic shape optimization of a complete aircraft with twenty-four design variables is performed. High speed civil transport (HSCT) aircraft configurations are targeted to demonstrate the process.

  13. Statistically optimal perception and learning: from behavior to neural representations

    PubMed Central

    Fiser, József; Berkes, Pietro; Orbán, Gergő; Lengyel, Máté

    2010-01-01

    Human perception has recently been characterized as statistical inference based on noisy and ambiguous sensory inputs. Moreover, suitable neural representations of uncertainty have been identified that could underlie such probabilistic computations. In this review, we argue that learning an internal model of the sensory environment is another key aspect of the same statistical inference procedure and thus perception and learning need to be treated jointly. We review evidence for statistically optimal learning in humans and animals, and reevaluate possible neural representations of uncertainty based on their potential to support statistically optimal learning. We propose that spontaneous activity can have a functional role in such representations leading to a new, sampling-based, framework of how the cortex represents information and uncertainty. PMID:20153683

  14. Implied alignment: a synapomorphy-based multiple-sequence alignment method and its use in cladogram search

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendent states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may be different from that of other alignment procedures. As with all alignment methods, results are dependent on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable, since the homologies derived from the implied alignment depend on its natal cladogram and on any variance between DO and IA + Search due to the heuristic approach. The utility of this procedure in heuristic cladogram searches using DO and the improvement of heuristic cladogram cost calculations are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  15. Interactive 3d Landscapes on Line

    NASA Astrophysics Data System (ADS)

    Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.

    2011-09-01

    The paper describes the challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and workflow, and how recent developments in browser technologies could affect them. All the data, even after processing with optimization and decimation tools, result in very large databases that require paging, streaming and level-of-detail techniques to allow remote, web-based, real-time use. Our approach has been to select an open-source, scene-graph-based visual simulation library with sufficient performance and flexibility and to adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole town of Montegrotto has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly well suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry techniques, whereas for the 3D reconstructions of the main historical buildings we adopted computer-graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools were developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used to populate the virtual scene and enhance the realism perceived by users during the navigation experience. After the description of the 3D modelling and optimization techniques, the paper discusses its results and expectations.

  16. Optimizing winter/snow removal operations in MoDOT St. Louis district : includes outcome based evaluation of operations.

    DOT National Transportation Integrated Search

    2011-10-01

    The objective of this project was to develop fleet location, route decision, material selection, and treatment procedures for winter snow removal operations to improve MoDOT's services and lower costs. This work uses a systematic, heuristic-based o...

  17. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and for DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies and define optimized manual and automated procedures for the extraction of high quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated to traditional phenol/chloroform extraction, a modified manual DNA IQ or automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  18. Mould routine identification in the clinical laboratory by matrix-assisted laser desorption ionization time-of-flight mass spectrometry.

    PubMed

    Cassagne, Carole; Ranque, Stéphane; Normand, Anne-Cécile; Fourquet, Patrick; Thiebault, Sandrine; Planard, Chantal; Hendrickx, Marijke; Piarroux, Renaud

    2011-01-01

    MALDI-TOF MS recently emerged as a valuable identification tool for bacteria and yeasts and has revolutionized daily clinical laboratory routine, but it has not been established for routine mould identification. This study aimed to validate a standardized procedure for MALDI-TOF MS-based mould identification in the clinical laboratory. First, pre-extraction and extraction procedures were optimized. With this standardized procedure, a reference spectra library of 143 mould strains was built. Mould isolates cultured from sequential clinical samples were then prospectively subjected to the MALDI-TOF MS-based identification assay. MALDI-TOF MS-based identification was considered correct if it was concordant with the phenotypic identification; otherwise, DNA sequence comparison-based identification was the gold standard. The optimized procedure comprised culture on Sabouraud-gentamicin-chloramphenicol agar followed by chemical extraction of the fungal colonies with formic acid and acetonitrile. Identification was performed against a reference database built with spectra from at least four culture replicates per strain. Over five months, 197 clinical isolates were analyzed; 20 were excluded because they were not identified at the species level. The MALDI-TOF MS-based approach correctly identified 87% (154/177) of the isolates analyzed during routine clinical laboratory activity. It failed in 12% (21/177), whose species were not represented in the reference library. Among the remaining 156 isolates, MALDI-TOF MS-based identification was correct for 154: one Beauveria bassiana was not identified and one Rhizopus oryzae was misidentified as Mucor circinelloides. This work's seminal finding is that a standardized procedure can be used for MALDI-TOF MS-based identification of a wide array of clinically relevant mould species. It thus makes it possible to identify moulds in the routine clinical laboratory setting and opens new avenues for the development of an integrated MALDI-TOF MS-based solution for the identification of any clinically relevant microorganism.

  19. Model-based local density sharpening of cryo-EM maps

    PubMed Central

    Jakobi, Arjen J; Wilmanns, Matthias

    2017-01-01

    Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure using six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, ribosome-EF-Tu complex, 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase interpretability of density maps in particular in cases of resolution variation and facilitates model building and atomic model refinement. PMID:29058676
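
    The amplitude-scaling idea can be sketched in a global (non-windowed) form: rescale the radially averaged Fourier amplitudes of the experimental map to match those of a map computed from the reference model. The paper applies this locally in moving windows; here both maps are synthetic random arrays, purely to show the mechanics.

```python
import numpy as np

# Global sketch of sharpening by amplitude scaling: match the radially averaged
# Fourier amplitudes of a target map to a reference map, shell by shell.
# The paper's method does this per local window; both maps here are synthetic.

def radial_mean(amps, shell, nbins):
    return np.bincount(shell.ravel(), amps.ravel(), minlength=nbins) / \
           np.maximum(np.bincount(shell.ravel(), minlength=nbins), 1)

def scale_amplitudes(target, reference):
    n = target.shape[0]
    f_t, f_r = np.fft.fftn(target), np.fft.fftn(reference)
    freq = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(kx**2 + ky**2 + kz**2)
    shell = np.minimum((r * n).astype(int), n // 2)          # radial shell index
    nbins = n // 2 + 1
    prof_t = radial_mean(np.abs(f_t), shell, nbins)
    prof_r = radial_mean(np.abs(f_r), shell, nbins)
    scale = np.where(prof_t > 0, prof_r / np.maximum(prof_t, 1e-12), 1.0)
    return np.real(np.fft.ifftn(f_t * scale[shell]))         # rescaled map

rng = np.random.default_rng(0)
experimental = rng.normal(size=(32, 32, 32))                 # synthetic "map"
model_map = rng.normal(size=(32, 32, 32))                    # synthetic reference
print(scale_amplitudes(experimental, model_map).shape)
```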

  1. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
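
    One simple member of the affine-preconditioning family can be written down directly: map each sample set to zero mean and identity covariance before solving the transport problem, then compose the resulting map with the affine transforms. This whitening pair is an illustrative assumption, not necessarily the cost-minimizing pair derived in the thesis.

```python
import numpy as np

# Affine preconditioning sketch for sample-based optimal transport: whiten each
# sample set so the remaining transport problem is between "closer" measures.
# This particular affine pair is illustrative, not the thesis's optimal pair.

def whitening(X):
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T     # inverse matrix square root
    return mu, W

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[4.0, 1.0], [1.0, 1.0]], size=500)
Y = rng.multivariate_normal([5, 2], [[1.0, -0.5], [-0.5, 2.0]], size=500)

mx, Wx = whitening(X)
my, Wy = whitening(Y)
Xp, Yp = (X - mx) @ Wx.T, (Y - my) @ Wy.T         # preconditioned samples

# A transport map T' found between Xp and Yp lifts back to the original problem
# as T(x) = Wy^{-1} T'(Wx (x - mx)) + my.
print(np.cov(Xp, rowvar=False).round(2))          # ~ identity covariance
print(np.cov(Yp, rowvar=False).round(2))          # ~ identity covariance
```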

  2. Architecture and settings optimization procedure of a TES frequency domain multiplexed readout firmware

    NASA Astrophysics Data System (ADS)

    Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.

    2014-11-01

    IRAP is developing the readout electronics of SPICA-SAFARI's TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals that voltage-bias the detectors, demodulates the data, and computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e., several μs) and with fast signals (i.e., carrier frequencies of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency from the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find the optimal settings.

  3. Inverse optimal design of the radiant heating in materials processing and manufacturing

    NASA Astrophysics Data System (ADS)

    Fedorov, A. G.; Lee, K. H.; Viskanta, R.

    1998-12-01

    Combined convective, conductive, and radiative heat transfer is analyzed during heating of a continuously moving load in the industrial radiant oven. A transient, quasi-three-dimensional model of heat transfer between a continuous load of parts moving inside an oven on a conveyor belt at a constant speed and an array of radiant heaters/burners placed inside the furnace enclosure is developed. The model accounts for radiative exchange between the heaters and the load, heat conduction in the load, and convective heat transfer between the moving load and oven environment. The thermal model developed has been used to construct a general framework for an inverse optimal design of an industrial oven as an example. In particular, the procedure based on the Levenberg-Marquardt nonlinear least squares optimization algorithm has been developed to obtain the optimal temperatures of the heaters/burners that need to be specified to achieve a prescribed temperature distribution of the surface of a load. The results of calculations for several sample cases are reported to illustrate the capabilities of the procedure developed for the optimal inverse design of an industrial radiant oven.
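
    The inverse step can be sketched with SciPy's Levenberg-Marquardt driver: find heater temperatures such that a toy gray-body radiation model reproduces a prescribed load temperature profile. The view-factor matrix and target profile are hypothetical; the paper's forward model is the full transient oven heat transfer model.

```python
import numpy as np
from scipy.optimize import least_squares

# Inverse design sketch: Levenberg-Marquardt least squares on heater
# temperatures so that a toy radiative forward model hits a prescribed load
# temperature profile. View factors and the target are hypothetical.

F = np.array([[0.50, 0.30, 0.20],        # hypothetical view factors: heater -> load zone
              [0.25, 0.50, 0.25],
              [0.20, 0.30, 0.50]])

def load_temperature(t_heaters):
    # Gray-body balance: load emissive power = view-factor-weighted heater power.
    return (F @ t_heaters**4) ** 0.25

t_target = np.array([650.0, 700.0, 660.0])          # prescribed load profile [K]

sol = least_squares(lambda t: load_temperature(t) - t_target,
                    x0=np.full(3, 600.0), method="lm")
print(sol.x)                                        # optimal heater temperatures [K]
print(load_temperature(sol.x))                      # ~ t_target
```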

  4. Method of optimization onboard communication network

    NASA Astrophysics Data System (ADS)

    Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.

    2018-02-01

    In this article, optimization levels for an onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for modeling the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure. This technique is based on the principles and ideas of binary programming. It is shown that the binary programming technique yields an inherently optimal solution for avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.
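
    A hedged illustration of the binary-programming view of device assignment: minimize the total cost sum_ij c_ij x_ij with x_ij in {0, 1}, each device on exactly one node and each node hosting one device. For this structure the LP relaxation is integral, so the Hungarian-method solver in SciPy returns the exact binary optimum. The cost matrix is invented.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # hypothetical cost of attaching device i to network node j
    # (e.g., cable mass plus expected latency) -- illustrative numbers
    cost = np.array([
        [4.0, 2.0, 8.0],
        [4.0, 3.0, 7.0],
        [3.0, 1.0, 6.0],
    ])

    # exact solution of the binary assignment program
    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
    ```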

  5. Development of multidisciplinary design optimization procedures for smart composite wings and turbomachinery blades

    NASA Astrophysics Data System (ADS)

    Jha, Ratneshwar

    Multidisciplinary design optimization (MDO) procedures have been developed for smart composite wings and turbomachinery blades. The analysis and optimization methods used are computationally efficient and sufficiently rigorous; therefore, the developed MDO procedures are well suited for actual design applications. The optimization procedure for the conceptual design of composite aircraft wings with surface-bonded piezoelectric actuators involves the coupling of structural mechanics, aeroelasticity, aerodynamics, and controls. The load-carrying member of the wing is represented as a single-celled composite box beam. Each wall of the box beam is analyzed as a composite laminate using a refined higher-order displacement field to account for the variations in transverse shear stresses through the thickness. Therefore, the model is applicable to the analysis of composite wings of arbitrary thickness. Detailed structural modeling issues associated with piezoelectric actuation of composite structures are considered. The governing equations of motion are solved using the finite element method to analyze practical wing geometries. Three-dimensional aerodynamic computations are performed using a panel code based on the constant-pressure lifting surface method to obtain steady and unsteady forces. The Laplace domain method of aeroelastic analysis produces root loci of the system, which give insight into the physical phenomena leading to flutter/divergence and can be efficiently integrated within an optimization procedure. The significance of the refined higher-order displacement field for the aeroelastic stability of composite wings has been established. The effect of composite ply orientations on flutter and divergence speeds has been studied. The Kreisselmeier-Steinhauser (K-S) function approach is used to efficiently integrate the objective functions and constraints into a single envelope function. The resulting unconstrained optimization problem is solved using the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimization problem is formulated with the objective of simultaneously minimizing wing weight and maximizing aerodynamic efficiency. Design variables include composite ply orientations, ply thicknesses, wing sweep, piezoelectric actuator thickness, and actuator voltage. Constraints are placed on the flutter/divergence dynamic pressure, wing root stresses, and the maximum electric field applied to the actuators. Numerical results are presented showing significant improvements, after optimization, compared to reference designs. The multidisciplinary optimization procedure for the design of turbomachinery blades integrates aerodynamic and heat transfer design objectives along with various mechanical and geometric constraints on the blade geometry. The airfoil shape is represented by Bezier-Bernstein polynomials, which results in a relatively small number of design variables for the optimization. A thin-shear-layer approximation of the Navier-Stokes equations is used for the viscous flow calculations. Grid generation is accomplished by solving Poisson equations. The maximum and average blade temperatures are obtained through a finite element analysis. Total pressure and exit kinetic energy losses are minimized, with constraints on blade temperatures and geometry. The constrained multiobjective optimization problem is solved using the K-S function approach. The results for the numerical example show significant improvements after optimization.
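
    A minimal sketch of the K-S aggregation idea on a toy two-objective, one-constraint problem (standing in for the wing model, which is an assumption of this illustration): the K-S envelope folds objectives and constraints into one smooth function that a BFGS-type quasi-Newton method can minimize without explicit constraint handling.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def ks(values, rho=50.0):
        """Kreisselmeier-Steinhauser envelope: a smooth, conservative
        approximation of max(values) that sharpens as rho grows."""
        m = np.max(values)                   # shift for numerical stability
        return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

    def f1(x): return (x[0] - 1.0)**2 + x[1]**2      # toy "weight"
    def f2(x): return x[0]**2 + (x[1] - 2.0)**2      # toy "- efficiency"
    def g(x):  return 1.0 - x[0] - x[1]              # feasibility: g <= 0

    def envelope(x):
        # fold both objectives and the constraint into a single function
        return ks(np.array([f1(x), f2(x), g(x)]))

    res = minimize(envelope, x0=np.zeros(2), method="BFGS")
    print(res.x, res.fun)
    ```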

  6. An experimental strategy validated to design cost-effective culture media based on response surface methodology.

    PubMed

    Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H

    2017-07-03

    For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered-trademark culture medium and a base culture medium obtained from a screening analysis of different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and found a minimum number of culture medium ingredients that did not limit process efficiency. The resulting culture medium showed an efficiency that compares favorably with the registered-trademark medium at a 95% lower cost, and it reduced the number of ingredients in the base culture medium by 60% without limiting process efficiency. These results demonstrate that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Studying the process variables for the optimized culture medium and scaling up production at the optimal values are desirable next steps.
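
    A hedged sketch of the response-surface step: fit a full second-order model to (invented) two-component screening data and locate the stationary point within the region studied. A real application would use a proper central composite design with lack-of-fit checks.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # hypothetical screening data: two medium components (g/L) vs. yield
    X = np.array([[1.0, 1.0], [1.0, 3.0], [3.0, 1.0], [3.0, 3.0],
                  [2.0, 2.0], [0.6, 2.0], [3.4, 2.0], [2.0, 0.6], [2.0, 3.4]])
    y = np.array([4.1, 5.0, 5.2, 4.9, 6.3, 3.9, 4.8, 4.4, 5.1])

    def design_matrix(X):
        x1, x2 = X[:, 0], X[:, 1]
        # full second-order model: intercept, linear, interaction, quadratic
        return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

    def predicted_yield(x):
        return design_matrix(np.atleast_2d(x))[0] @ beta

    # maximize the fitted surface inside the region actually explored
    res = minimize(lambda x: -predicted_yield(x), x0=[2.0, 2.0],
                   bounds=[(0.6, 3.4), (0.6, 3.4)])
    print("optimal composition (g/L):", res.x)
    ```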

  7. Taboo Search: An Approach to the Multiple Minima Problem

    NASA Astrophysics Data System (ADS)

    Cvijovic, Djurdje; Klinowski, Jacek

    1995-02-01

    Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
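
    A minimal continuous tabu-search sketch in the spirit described above (not the authors' exact algorithm): recently visited regions are forbidden, which is what forces the search out of local basins even when every allowed move is uphill.

    ```python
    import numpy as np

    def tabu_search(f, x0, step=0.5, radius=0.3, iters=200, n_cand=20, seed=0):
        """Move to the best candidate not within `radius` of any recently
        visited point, even if it is worse than the current point."""
        rng = np.random.default_rng(seed)
        x, tabu = np.array(x0, float), []
        best_x, best_f = x.copy(), f(x)
        for _ in range(iters):
            cands = x + rng.normal(scale=step, size=(n_cand, x.size))
            allowed = [c for c in cands
                       if all(np.linalg.norm(c - t) > radius for t in tabu)]
            if not allowed:
                continue
            x = min(allowed, key=f)
            tabu.append(x.copy())
            tabu = tabu[-25:]                  # finite tabu tenure
            if f(x) < best_f:
                best_x, best_f = x.copy(), f(x)
        return best_x, best_f

    # classic multiple-minima test function
    rastrigin = lambda x: 10 * len(x) + sum(x**2 - 10 * np.cos(2 * np.pi * x))
    print(tabu_search(rastrigin, x0=[3.0, -2.5]))
    ```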

  8. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
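
    A small sketch of the first-order moment-matching step, with finite-difference stand-ins for the sensitivity derivatives and a Monte Carlo cross-check; the input function and uncertainties below are invented, not the paper's Euler code.

    ```python
    import numpy as np

    def propagate_moments(f, mu, sigma, h=1e-6):
        """First-order moment matching for independent normal inputs:
        mean(f) ~ f(mu), var(f) ~ sum_i (df/dx_i)^2 sigma_i^2,
        with derivatives from central finite differences."""
        mu = np.asarray(mu, float)
        grad = np.empty_like(mu)
        for i in range(mu.size):
            e = np.zeros_like(mu); e[i] = h
            grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)
        var = np.sum((grad * np.asarray(sigma)) ** 2)
        return f(mu), var

    # toy "CFD output": a stagnation-like quantity of two uncertain inputs
    f = lambda x: x[0] * (1.0 + 0.2 * x[1] ** 2) ** 3.5
    mean, var = propagate_moments(f, mu=[1.0, 0.8], sigma=[0.02, 0.05])

    # Monte Carlo comparison, mirroring the paper's validation step
    rng = np.random.default_rng(0)
    samples = rng.normal([1.0, 0.8], [0.02, 0.05], size=(100_000, 2))
    mc = np.array([f(s) for s in samples])
    print(mean, var, "vs MC:", mc.mean(), mc.var())
    ```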

  9. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
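
    A hedged numerical illustration of the allocation problem (a two-stage simplification of the paper's three-stage model): precision of the group mean under a nested variance model, power-function costs, and a budget constraint, optimized here by exhaustive search. All parameter values are made up.

    ```python
    import numpy as np

    sb2, sw2 = 1.0, 2.5                 # between- / within-subject variance
    c_subj, c_occ = 40.0, 5.0           # unit costs
    a_subj, a_occ = 1.0, 1.3            # cost-function exponents
    budget = 2000.0

    def variance(n_s, n_o):
        # precision of the mean: n_s subjects, n_o occasions per subject
        return sb2 / n_s + sw2 / (n_s * n_o)

    def cost(n_s, n_o):
        # non-linear (power-function) cost model
        return c_subj * n_s**a_subj + c_occ * (n_s * n_o)**a_occ

    best = min(((n_s, n_o) for n_s in range(1, 60) for n_o in range(1, 20)
                if cost(n_s, n_o) <= budget),
               key=lambda p: variance(*p))
    print("subjects, occasions per subject:", best, "var:", variance(*best))
    ```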

  10. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.

  11. Procedures for shape optimization of gas turbine disks

    NASA Technical Reports Server (NTRS)

    Cheu, Tsu-Chien

    1989-01-01

    Two procedures, the feasible direction method and sequential linear programming, for the shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks subject to geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stresses, and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.

  12. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    PubMed

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.

  13. Application of a derivative-free global optimization algorithm to the derivation of a new time integration scheme for the simulation of incompressible turbulence

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.

    2016-11-01

    This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for computational fluid dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme must satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed using turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.

  14. Design enhancement tools in MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Wallerstein, D. V.

    1984-01-01

    Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is taken to be MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in user-written code, or as part of the MSC/EOS (Engineering Operating System), currently under development. Since several excellent optimization codes exist and require highly specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet specific needs and preferences. The one exception is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by the various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).

  15. Optimal sensors placement and spillover suppression

    NASA Astrophysics Data System (ADS)

    Hanis, Tomas; Hromcik, Martin

    2012-04-01

    A new approach to the optimal placement of sensors (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek a trade-off between the presence of desirable modes in the captured measurements and the elimination of the influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on analysis of the Fisher information matrix. We consider two requirements in the optimal sensor placement procedure. On top of the classical EFI approach, the sensor configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independence method (EFI), and modify the underlying criterion to meet both requirements: to maximize useful signals and minimize spillover of unwanted modes at the same time. The performance of our approach is demonstrated by means of examples and a flexible Blended Wing Body (BWB) aircraft case study related to a running European-level FP7 research project, 'ACFA 2020: Active Control for Flexible Aircraft'.
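
    For reference, a compact numpy sketch of the classical EFI step that the modified criterion builds on (the spillover term is the paper's contribution and is not reproduced here; the helper name efi_placement is hypothetical): candidate sensors contributing least to the Fisher information of the target modes are discarded one at a time.

    ```python
    import numpy as np

    def efi_placement(phi, n_keep):
        """Effective independence (EFI): iteratively drop the candidate
        sensor with the smallest contribution to det(phi^T phi)."""
        phi = np.array(phi, float)
        keep = list(range(phi.shape[0]))
        while len(keep) > n_keep:
            p = phi[keep]
            # leverage of each sensor: diag(P (P^T P)^-1 P^T)
            ed = np.einsum('ij,ji->i', p, np.linalg.solve(p.T @ p, p.T))
            keep.pop(int(np.argmin(ed)))
        return keep

    # toy mode-shape matrix: 30 candidate locations x 4 target modes
    x = np.linspace(0, 1, 30)
    phi = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(4)])
    print("retained sensor locations:", efi_placement(phi, n_keep=8))
    ```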

  16. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations were performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, while the length of the carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, was developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
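
    A short sketch of the inverse transform step for the negative-exponential cave-length model (the mean length used below is an assumption); the carbonatite lengths, which fit no standard distribution, would instead be reproduced with the acceptance-rejection method against an empirical density.

    ```python
    import numpy as np

    def sample_cave_lengths(mean_length, n, seed=0):
        """Inverse transform sampling for the negative exponential model:
        F(x) = 1 - exp(-x / mean), so x = -mean * ln(1 - u), u ~ U(0, 1)."""
        u = np.random.default_rng(seed).uniform(size=n)
        return -mean_length * np.log1p(-u)

    lengths = sample_cave_lengths(mean_length=2.4, n=10_000)  # metres, assumed
    print(lengths.mean())   # ~2.4, consistent with the fitted distribution
    ```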

  17. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.

  18. Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems.

    PubMed

    Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G

    2016-01-01

    This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to solving other combinatorial optimization problems.

  19. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably, and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.

  20. An optimized protocol for DNA extraction in plants with a high content of secondary metabolites, based on leaves of Mimosa tenuiflora (Willd.) Poir. (Leguminosae).

    PubMed

    Arruda, S R; Pereira, D G; Silva-Castro, M M; Brito, M G; Waldschmidt, A M

    2017-07-06

    Some species are characterized by a high content of tannins, alkaloids, and phenols in their leaves. These secondary metabolites are released during DNA extraction and might hinder molecular studies based on PCR (polymerase chain reaction). To provide an efficient method to extract DNA, Mimosa tenuiflora, an important leguminous plant from the Brazilian semiarid region used in popular medicine and as a source of fuelwood and forage, was used. Eight procedures previously reported for plants were tested and adapted using leaf tissues of M. tenuiflora stored at -20°C. The optimized procedure in this study encompassed the utilization of phenol during deproteinization, increased concentrations of cetyltrimethylammonium bromide and sodium chloride, and a shorter period and lower temperature of incubation compared with the other methods. The extracted DNA showed no degradation, and amplification via PCR was successful using ISSR, trnL, ITS, and ETS primers. Besides M. tenuiflora, this procedure was also tested and proved to be efficient in genetic studies of other plant species.

  1. Teaching and assessing procedural skills using simulation: metrics and methodology.

    PubMed

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  2. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters, and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient- and non-gradient-based optimization algorithms. Results show significant improvements in model predictions after the parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.

  3. A model for the value of a business, some optimization problems in its operating procedures and the valuation of its debt

    NASA Astrophysics Data System (ADS)

    1997-12-01

    In this paper we present a model for the value of a firm based on observable variables and parameters: annual turnover, expenses, and interest rates. This value is the solution of a parabolic partial differential equation. We show how the value of the company depends on its legal status, such as its liability (that is, whether it is a limited company or a sole trader/partnership). We give examples of how the operating procedures can be optimized (for example, whether the firm should close down, relocate, etc.). Finally, we show how the model can be used to value the debt issued by the firm.

  4. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequence of unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.

  5. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks

    PubMed Central

    Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-01-01

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures. It employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a special hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain the optimal routes to the base station. Various simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms. PMID:28753962

  6. Performance optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1991-01-01

    As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general-purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.

  7. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provides a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated or retrospective clinical EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach. The proposed simulation-based method for finding optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.

  8. Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization

    NASA Astrophysics Data System (ADS)

    Civit Sabate, Carles

    In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility on which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks provide the ability to decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help to approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and it provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied using simulation with historical data.
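
    A hedged toy version of such a linear program, assuming invented hourly prices, demand, and tank limits (not the UC Irvine plant data): choose the chiller output so that the tank stays within capacity while the daily electricity cost is minimized.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    hours = 24
    price = 60 + 40 * np.sin(np.pi * (np.arange(hours) - 6) / 12).clip(0)
    demand = 8 + 4 * np.exp(-0.5 * ((np.arange(hours) - 15) / 3.0) ** 2)

    # decision variables: chiller output c_t >= 0 per hour; the tank state
    # follows s_t = s0 + cumsum(c - demand); cost = sum(price_t * c_t)
    s0, cap = 20.0, 60.0
    A = np.tril(np.ones((hours, hours)))     # cumulative-sum operator
    b1 = cap - s0 + np.cumsum(demand)        # A c <= b1   enforces s_t <= cap
    b2 = s0 - np.cumsum(demand)              # -A c <= b2  enforces s_t >= 0

    res = linprog(c=price,
                  A_ub=np.vstack([A, -A]),
                  b_ub=np.concatenate([b1, b2]),
                  bounds=[(0, 15)] * hours)  # chiller capacity per hour
    print("optimal daily cost:", res.fun)
    ```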

  9. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    A general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14- as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  10. Optimal ancilla-free Pauli+V circuits for axial rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blass, Andreas; Bocharov, Alex; Gurevich, Yuri

    We address the problem of optimal representation of single-qubit rotations in a certain unitary basis consisting of the so-called V gates and Pauli matrices. The V matrices were proposed by Lubotzky, Phillips, and Sarnak [Commun. Pure Appl. Math. 40, 401–420 (1987)] as a purely geometric construct in 1987 and recently found applications in quantum computation. They allow for exceptionally simple quantum circuit synthesis algorithms based on quaternionic factorization. We adapt the deterministic-search technique initially proposed by Ross and Selinger to synthesize approximating Pauli+V circuits of optimal depth for single-qubit axial rotations. Our synthesis procedure, based on simple SL_2(ℤ) geometry, is almost elementary.

  11. Hormesis and the salk polio vaccine.

    PubMed

    Calabrese, Edward J

    2012-01-01

    The production of the Salk vaccine polio virus by monkey kidney cells was generated using the synthetic tissue culture medium, Mixture 199. In this paper's retrospective assessment of this process, it was discovered that Mixture 199 was modified by the addition of ethanol to optimize animal cell survival based on experimentation that revealed a hormetic-like biphasic response relationship. This hormesis-based optimization procedure was then applied to all uses of Mixture 199 and modifications of it, including its application to the Salk polio vaccine during preliminary testing and in its subsequent major societal treatment programs.

  12. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.

  13. Duct wall impedance control as an advanced concept for acoustic suppression enhancement. [engine noise reduction

    NASA Technical Reports Server (NTRS)

    Dean, P. D.

    1978-01-01

    A systems concept procedure is described for the optimization of acoustic duct liner design for both uniform and multisegment types. The concept was implemented by the use of a double reverberant chamber flow duct facility coupled with sophisticated computer control and acoustic analysis systems. The optimization procedure for liner insertion loss was based on the concept of variable liner impedance produced by bias air flow through a multilayer, resonant-cavity liner. A multiple-microphone technique for in situ wall impedance measurements was used and successfully adapted to produce automated measurements for all liner configurations tested. Complete validation of the systems concept was prevented by the inability to optimize the insertion loss using bias-flow-induced wall impedance changes. This inability appeared to be a direct function of the presence of higher-order energy-carrying modes which were not influenced significantly by the wall impedance changes.

  14. Optimization of Composite Structures with Curved Fiber Trajectories

    NASA Astrophysics Data System (ADS)

    Lemaire, Etienne; Zein, Samih; Bruyneel, Michael

    2014-06-01

    This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach for generating equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, numerical applications to maximum-stiffness optimization are presented. The shape of the design space is discussed with regard to local and global optimal solutions.

  15. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels. PMID:25932646

  16. Multi-objective community detection based on memetic algorithm.

    PubMed

    Wu, Peng; Pan, Li

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels.

  17. An approach of ionic liquids/lithium salts based microwave irradiation pretreatment followed by ultrasound-microwave synergistic extraction for two coumarins preparation from Cortex fraxini.

    PubMed

    Liu, Zaizhi; Gu, Huiyan; Yang, Lei

    2015-10-23

    An ionic liquids/lithium salts solvent system was successfully introduced into the separation technique for the preparation of two coumarins (aesculin and aesculetin) from Cortex fraxini. An ionic liquids/lithium salts based microwave irradiation pretreatment followed by ultrasound-microwave synergistic extraction (ILSMP-UMSE) procedure was developed and optimized for the efficient extraction of these two analytes. Several variables that can potentially influence the extraction yields, including pretreatment time and temperature, [C4mim]Br concentration, LiAc content, ultrasound-microwave synergistic extraction (UMSE) time, liquid-solid ratio, and UMSE power, were screened using a Plackett-Burman design. Among the seven variables, UMSE time, liquid-solid ratio, and UMSE power were statistically significant, and these three factors were further optimized by a Box-Behnken design to predict the optimal extraction conditions and find operability ranges with maximum extraction yields. Under the optimum operating conditions, ILSMP-UMSE showed higher extraction yields of the two target compounds than those obtained with reference extraction solvents. Method validation studies also showed that ILSMP-UMSE is reliable for the preparation of the two coumarins from Cortex fraxini. This study indicates that the proposed procedure has broad application prospects for the preparation of natural products from plant materials. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. An Empirical Bayes Approach to Item Banking. Project Psychometric Aspects of Item Banking No. 6. Research Report 86-6.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Eggen, Theo J. H. M.

    A procedure for the sequential optimization of the calibration of an item bank is given. The procedure is based on an empirical Bayes approach to a reformulation of the Rasch model as a model for paired comparisons between the difficulties of test items in which ties are allowed to occur. First, it is indicated how a paired-comparisons design…

  19. Clinical feasibility of exercise-based A-V interval optimization for cardiac resynchronization: a pilot study.

    PubMed

    Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran

    2014-11-01

    One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to "optimize" the atrio-ventricular (A-V) interval are performed at rest, which may limit its efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure to guide determination of the optimal A-V interval. We assessed relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) during low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Assessment of cardiopulmonary stability and potential influence on the CPX-based method were assessed. CPX and determination of a physiological optimal A-V interval was successfully completed in 94.1% of patients, slightly higher than the resting echo-based approach (88.2%). There was a wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability or impact of the implant procedure that affected determination of the CPX-based optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. Proposed mechanisms explaining this finding and long-term impact require further study. ©2014 Wiley Periodicals, Inc.

  20. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach to the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.

  1. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to demonstrate a rapid nonlinear perturbation method that minimizes the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple-parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver and the COPES-CONMIN optimization procedure into a user code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  2. High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained on a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
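
    The surrogate-plus-optimizer loop described above can be sketched with standard tools. In the sketch below, a small scikit-learn network stands in for the trained networks and a synthetic lift function stands in for the Navier-Stokes data set, so all variables and numbers are illustrative assumptions only.

```python
# Minimal surrogate-based design loop: train a small neural network on
# precomputed aerodynamic data, then search it with a gradient-based
# optimizer. The "lift" function is a synthetic stand-in for CFD data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# design variables: flap deflection, gap, overlap, angle of attack (scaled 0-1)
X = rng.uniform(0.0, 1.0, size=(500, 4))
cl = 2.0 + X @ [0.8, -0.3, 0.2, 1.1] - 1.5 * (X[:, 3] - 0.7) ** 2  # stand-in lift

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                   random_state=0).fit(X, cl)

# maximize predicted lift = minimize its negative within the design box
res = minimize(lambda x: -net.predict(x.reshape(1, -1))[0],
               x0=np.full(4, 0.5), bounds=[(0, 1)] * 4)
print("predicted optimum:", res.x, "CL approx.", -res.fun)
```

    Once the network is trained, each optimization restart only queries the cheap surrogate, which is the source of the computational savings the abstract reports.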

  3. Toxicity Minimized Cryoprotectant Addition and Removal Procedures for Adherent Endothelial Cells

    PubMed Central

    Davidson, Allyson Fry; Glasscock, Cameron; McClanahan, Danielle R.; Benson, James D.; Higgins, Adam Z.

    2015-01-01

    Ice-free cryopreservation, known as vitrification, is an appealing approach for banking of adherent cells and tissues because it prevents dissociation and morphological damage that may result from ice crystal formation. However, current vitrification methods are often limited by the cytotoxicity of the concentrated cryoprotective agent (CPA) solutions that are required to suppress ice formation. Recently, we described a mathematical strategy for identifying minimally toxic CPA equilibration procedures based on the minimization of a toxicity cost function. Here we provide direct experimental support for the feasibility of these methods when applied to adherent endothelial cells. We first developed a concentration- and temperature-dependent toxicity cost function by exposing the cells to a range of glycerol concentrations at 21°C and 37°C, and fitting the resulting viability data to a first order cell death model. This cost function was then numerically minimized in our state constrained optimization routine to determine addition and removal procedures for 17 molal (mol/kg water) glycerol solutions. Using these predicted optimal procedures, we obtained 81% recovery after exposure to vitrification solutions, as well as successful vitrification with the relatively slow cooling and warming rates of 50°C/min and 130°C/min. In comparison, conventional multistep CPA equilibration procedures resulted in much lower cell yields of about 10%. Our results demonstrate the potential for rational design of minimally toxic vitrification procedures and pave the way for extension of our optimization approach to other adherent cell types as well as more complex systems such as tissues and organs. PMID:26605546
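
    As a rough illustration of the cost-function construction, the sketch below fits a first-order cell death model of the assumed form V = exp(-k0 c^alpha t) to hypothetical viability data and evaluates the resulting toxicity cost of a stepwise equilibration schedule; the functional form and all numbers are placeholders, not the paper's fitted model.

```python
# Fit a first-order cell death model to (concentration, time, viability)
# data, then score candidate equilibration schedules by the accumulated
# toxicity integral of k(c(t)) dt (piecewise-constant concentration).
import numpy as np
from scipy.optimize import curve_fit

def viability(ct, k0, alpha):
    c, t = ct
    return np.exp(-k0 * c**alpha * t)

# hypothetical viability measurements at several glycerol molalities/times
c = np.array([2.0, 4.0, 8.0, 8.0, 17.0, 17.0])    # mol/kg water
t = np.array([30.0, 30.0, 10.0, 30.0, 5.0, 10.0])  # minutes
v = np.array([0.98, 0.93, 0.90, 0.72, 0.70, 0.48])

(k0, alpha), _ = curve_fit(viability, (c, t), v, p0=(1e-3, 1.5))

def toxicity_cost(conc_steps, durations):
    """Toxicity cost of a piecewise-constant CPA addition schedule."""
    return float(np.sum(k0 * np.asarray(conc_steps)**alpha * durations))

print("fitted k0=%.2e alpha=%.2f" % (k0, alpha))
print("cost of 3-step schedule:", toxicity_cost([4, 8, 17], [10, 8, 4]))
```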

  4. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls a large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  5. Optimized mirror shape tuning using beam weightings based on distance, angle of incidence, reflectivity, and power.

    PubMed

    Goldberg, Kenneth A; Yashchuk, Valeriy V

    2016-05-01

    For glancing-incidence optical systems, such as short-wavelength optics used for nano-focusing, incorporating physical factors in the calculations used for shape optimization can improve performance. Wavefront metrology, including the measurement of a mirror's shape or slope, is routinely used as input for mirror figure optimization on mirrors that can be bent, actuated, positioned, or aligned. Modeling shows that when the incident power distribution, distance from focus, angle of incidence, and the spatially varying reflectivity are included in the optimization, higher Strehl ratios can be achieved. Following the works of Maréchal and Mahajan, optimization of the Strehl ratio (for peak intensity with a coherently illuminated system) occurs when the expectation value of the phase error's variance is minimized. We describe an optimization procedure based on regression analysis that incorporates these physical parameters. This approach is suitable for coherently illuminated systems of nearly diffraction-limited quality. Mathematically, this work is an enhancement of the methods commonly applied for ex situ alignment based on uniform weighting of all points on the surface (or a sub-region of the surface). It follows a similar approach to the optimization of apodized and non-uniformly illuminated optical systems. Significantly, it reaches a different conclusion than a more recent approach based on minimization of focal plane ray errors.
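
    The weighted-regression idea can be sketched as follows. The actuator influence functions, measured height error, and intensity/reflectivity weights below are toy stand-ins; the point is that the fit is done by weighted least squares, so the intensity-weighted residual variance (which sets the Strehl ratio via the Maréchal approximation) is what gets minimized.

```python
# Weighted least-squares mirror-shape optimization: choose actuator
# settings so that the residual height error, weighted by the local
# beam intensity and reflectivity, has minimum variance.
import numpy as np

n_pts, n_act = 400, 6
x = np.linspace(0.0, 1.0, n_pts)                 # mirror coordinate

# Gaussian influence functions for the actuator degrees of freedom
centers = np.linspace(0.1, 0.9, n_act)
A = np.exp(-0.5 * ((x[:, None] - centers) / 0.12) ** 2)

target_error = 5e-9 * np.sin(6 * np.pi * x)      # measured height error (m)

# physical weights: incident power density * reflectivity (assumed profiles)
w = np.exp(-0.5 * ((x - 0.5) / 0.25) ** 2) * (0.9 - 0.1 * x)

# weighted least squares: scale rows by sqrt(w)
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], target_error * sw, rcond=None)
residual = target_error - A @ coef
var_w = np.average(residual**2, weights=w) - np.average(residual, weights=w)**2
print("weighted RMS residual: %.2e m" % np.sqrt(var_w))
```

    Setting all weights equal recovers the uniform-weighting ex situ alignment approach the abstract contrasts against.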

  6. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  7. New Method of Calibrating IRT Models.

    ERIC Educational Resources Information Center

    Jiang, Hai; Tang, K. Linda

    This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that this kind of procedure is not easily affected by local optima and…

  8. Genetic algorithm-based multi-objective optimal absorber system for three-dimensional seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng

    2009-03-01

    The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers to minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for the multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better-performing individual wins its competition, applies a slight selection pressure, and maintains diversity in the population. Moreover, because the evaluation of individuals in each generation is finished in one run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains the necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake loading.
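
    A minimal sketch of the Pareto-dominance test on which such a selection operator is built (the two-branch tournament and penalty-function machinery of the paper are omitted, and the objective vectors are invented):

```python
# Pareto dominance for a two-objective minimization problem
# (e.g., translation-torsion coupling metric vs. absorber cost).
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def dominance_rank(objs):
    """Count, for each individual, how many others dominate it;
    rank 0 identifies the current non-dominated front."""
    n = len(objs)
    return [sum(dominates(objs[j], objs[i]) for j in range(n) if j != i)
            for i in range(n)]

# objective vectors (coupling metric, cost) for a toy population
pop = [(0.8, 3.0), (0.5, 5.0), (0.9, 2.0), (0.4, 6.0), (0.7, 4.5)]
ranks = dominance_rank(pop)
print([p for p, r in zip(pop, ranks) if r == 0])   # non-dominated designs
```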

  9. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize the handling capabilities of large-scale networks for large system inventories and to implement strategies for reducing capital expenses. The models in this paper use either computational algorithms or procedural implementations, developed in Matlab, to simulate agent-based behavior; the simulations are run in parallel on high-performance computing clusters. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  10. Determining which phenotypes underlie a pleiotropic signal

    PubMed Central

    Majumdar, Arunabha; Haldar, Tanushree; Witte, John S.

    2016-01-01

    Discovering pleiotropic loci is important to understand the biological basis of seemingly distinct phenotypes. Most methods for assessing pleiotropy only test for the overall association between genetic variants and multiple phenotypes. To determine which specific traits are pleiotropic, we evaluate via simulation and application three different strategies. The first is model selection techniques based on the inverse regression of genotype on phenotypes. The second is a subset-based meta-analysis, ASSET [Bhattacharjee et al., 2012], which provides an optimal subset of non-null traits. The third is a modified Benjamini-Hochberg (B-H) procedure for controlling the expected false discovery rate [Benjamini and Hochberg, 1995] in the framework of a phenome-wide association study. From our simulations we see that an inverse-regression-based approach, MultiPhen [O’Reilly et al., 2012], is more powerful than ASSET for detecting overall pleiotropic association, except when all the phenotypes are associated and have genetic effects in the same direction. For determining which specific traits are pleiotropic, the modified B-H procedure performs consistently better than the other two methods. The inverse-regression-based selection methods perform competitively with the modified B-H procedure only when the phenotypes are weakly correlated. The efficiency of ASSET lies below and in between the efficiency of the other two methods when the traits are weakly and strongly correlated, respectively. In our application to a large GWAS, we find that the modified B-H procedure also performs well, indicating that this may be an optimal approach for determining the traits underlying a pleiotropic signal. PMID:27238845
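
    For reference, the classic Benjamini-Hochberg step-up procedure cited above takes only a few lines; the paper uses a modified variant, and this sketch (with invented p-values) is the standard 1995 version.

```python
# Benjamini-Hochberg step-up procedure: sort p-values, find the largest
# k with p_(k) <= q*k/m, and reject the k smallest hypotheses.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling the FDR at level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest here
```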

  11. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  12. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    NASA Astrophysics Data System (ADS)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.

  13. Optimizing wind farm layout via LES-calibrated geometric models inclusive of wind direction and atmospheric stability effects

    NASA Astrophysics Data System (ADS)

    Archer, Cristina; Ghaisas, Niranjan

    2015-04-01

    The energy generation at a wind farm is controlled primarily by the average wind speed at hub height. However, two other factors impact wind farm performance: (1) the layout of the wind turbines (spacing between turbines along and across the prevailing wind direction; staggering or aligning of consecutive rows; angles between rows, columns, and the prevailing wind direction); and (2) atmospheric stability, a measure of whether vertical motion is enhanced (unstable), suppressed (stable), or neither (neutral). Studying both factors and their complex interplay with Large-Eddy Simulation (LES) is a valid approach because it produces high-resolution, 3D, turbulent fields, such as wind velocity, temperature, and momentum and heat fluxes, and it properly accounts for the interactions between wind turbine blades and the surrounding atmospheric and near-surface properties. However, LES is computationally expensive, and simulating all the possible combinations of wind directions, atmospheric stabilities, and turbine layouts to identify the optimal wind farm configuration is practically infeasible today. A new, geometry-based method is proposed that is computationally inexpensive and that combines simple geometric quantities with a minimal number of LES runs to identify the optimal wind turbine layout, taking into account not only the actual frequency distribution of wind directions (i.e., the wind rose) at the site of interest, but also atmospheric stability. The geometry-based method is calibrated with LES of the Lillgrund wind farm conducted with the Simulator fOr Wind Farm Applications (SOWFA), based on the open-source OpenFOAM libraries. The geometric quantities that offer the best correlations (>0.93) with the LES results are the blockage ratio, defined as the fraction of the swept area of a wind turbine that is blocked by an upstream turbine, and the blockage distance, the weighted distance from a given turbine to all upstream turbines that can potentially block it. Based on blockage ratio and distance, an optimization procedure is proposed that explores many different layout variables and identifies, given actual wind direction and stability distributions, the optimal wind farm layout, i.e., the one with the highest wind energy production. The optimization procedure is applied to both the calibration wind farm (Lillgrund) and a test wind farm (Horns Rev), and a number of layouts more efficient than the existing ones are identified. The optimization procedure based on geometric models proposed here can be applied very quickly (within a few hours) to any proposed wind farm, once enough information on wind direction frequency and, if available, atmospheric stability frequency has been gathered and once the number of turbines and/or the areal extent of the wind farm has been identified.
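
    A toy version of the blockage-ratio scoring idea might look as follows; the wake-cone blockage model, turbine diameter, and wind rose below are simplified assumptions for illustration, not the LES-calibrated model of the paper.

```python
# Frequency-weighted blockage score for a candidate layout: for each
# wind direction, approximate how much of each rotor is shadowed by
# upwind turbines inside a fixed 10-degree wake cone.
import numpy as np

D = 93.0  # rotor diameter, m (assumed, Lillgrund-class turbine)

def blockage_ratio(xy, wind_dir_deg):
    """Per-turbine fraction of rotor blocked by any upwind turbine."""
    wd = np.deg2rad(wind_dir_deg)
    downwind = np.array([np.cos(wd), np.sin(wd)])
    br = np.zeros(len(xy))
    for i, ti in enumerate(xy):
        for j, tj in enumerate(xy):
            if i == j:
                continue
            r = ti - tj                                  # blocker -> turbine
            dist = r @ downwind                          # along-wind separation
            lateral = abs(downwind[0] * r[1] - downwind[1] * r[0])
            if dist > 0 and lateral < D and lateral < dist * np.tan(np.deg2rad(10)):
                br[i] = max(br[i], 1.0 - lateral / D)
    return br

def layout_score(xy, wind_rose):
    """Frequency-weighted mean blockage; lower is better."""
    return sum(f * blockage_ratio(xy, wd).mean() for wd, f in wind_rose)

grid = np.array([(i * 4 * D, j * 4 * D) for i in range(3) for j in range(3)])
rose = [(270.0, 0.6), (225.0, 0.4)]      # assumed direction frequencies
print("mean blockage:", layout_score(grid, rose))
```

    Because each evaluation is purely geometric, thousands of candidate layouts can be scored in the "few hours" regime the abstract mentions, with LES reserved for calibration.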

  14. LC-MS metabolic profiling of Arabidopsis thaliana plant leaves and cell cultures: optimization of pre-LC-MS procedure parameters.

    PubMed

    t'Kindt, Ruben; De Veylder, Lieven; Storme, Michael; Deforce, Dieter; Van Bocxlaer, Jan

    2008-08-01

    This study addresses the optimization of methods for homogenizing Arabidopsis thaliana plant leaves as well as cell cultures, and for extracting their metabolites for metabolomics analysis by conventional liquid chromatography electrospray ionization mass spectrometry (LC-ESI/MS). Absolute recovery, process efficiency, and procedure repeatability were compared between different pre-LC-MS homogenization/extraction procedures through the use of samples fortified before extraction with a range of representative metabolites. Hereby, the magnitude of the matrix effect observed in the ensuing LC-MS based metabolomics analysis was evaluated. Based on the relative recovery and repeatability of key metabolites, the comprehensiveness of extraction (number of m/z-retention time pairs), and the clean-up potential of the approach (minimum matrix effects), the most appropriate sample pre-treatment was adopted. It combines liquid nitrogen homogenization for plant leaves with thermomixer-based extraction using MeOH/H₂O 80/20. As such, an efficient and highly reproducible LC-MS plant metabolomics set-up is achieved, as illustrated by the obtained results for both LC-MS variability (8.88% ± 5.16 versus 7.05% ± 4.45) and technical variability (12.53% ± 11.21 versus 9.31% ± 6.65) in a comparative investigation of A. thaliana plant leaves and cell cultures, respectively.

  15. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  17. Information Fusion for High Level Situation Assessment and Prediction

    DTIC Science & Technology

    2007-03-01

    procedure includes deciding a sensor set that achieves the optimal trade-off between its cost and benefit, activating the identified sensors, integrating... and effective decision can be made by dynamic inference based on selecting a subset of sensors with the optimal trade-off between their cost and... first step is achieved by designing a sensor selection criterion that represents the trade-off between the sensor benefit and sensor cost. This is then

  18. Quantitative analysis of crystalline pharmaceuticals in powders and tablets by a pattern-fitting procedure using X-ray powder diffraction data.

    PubMed

    Yamamura, S; Momose, Y

    2001-01-16

    A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement: observed X-ray scattering intensities were fitted to analytical expressions including several fitting parameters, i.e., scale factor, peak positions, peak widths, and degree of preferred orientation of the crystallites. All fitting parameters were optimized by a non-linear least-squares procedure, and the weight fraction of each component was then determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% for both powders and tablets. In analysis of the SA-BA system, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% for both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. The pattern-fitting procedure also has the advantage that the preferred orientation of the crystallites in solid dosage forms can be determined in the course of the quantitative analysis.
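
    The scale-factor logic behind the quantitation can be sketched as follows, with synthetic Gaussian peak patterns standing in for measured diffraction data; the peak-shape, peak-position, and preferred-orientation refinement of the full method is omitted.

```python
# Model the observed pattern of a binary mixture as a weighted sum of
# the two pure-component patterns (measured per unit mass); the fitted
# scale factors then yield the weight fractions.
import numpy as np

two_theta = np.linspace(10, 60, 1000)

def pattern(centers, heights, width=0.25):
    """Synthetic pure-component pattern as a sum of Gaussian peaks."""
    x = two_theta[:, None]
    return (heights * np.exp(-0.5 * ((x - centers) / width) ** 2)).sum(axis=1)

pure_a = pattern(np.array([15.0, 28.0, 43.0]), np.array([1.0, 0.6, 0.3]))
pure_b = pattern(np.array([18.0, 31.0, 47.0]), np.array([0.8, 1.0, 0.4]))

true_w = 0.35
rng = np.random.default_rng(2)
observed = (true_w * pure_a + (1 - true_w) * pure_b
            + rng.normal(0, 0.01, two_theta.size))

# linear least squares for the two scale factors
A = np.column_stack([pure_a, pure_b])
(s_a, s_b), *_ = np.linalg.lstsq(A, observed, rcond=None)
print("estimated weight fraction of A: %.3f" % (s_a / (s_a + s_b)))
```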

  19. Preparation of alpha-emitting nuclides by electrodeposition

    NASA Astrophysics Data System (ADS)

    Lee, M. H.; Lee, C. W.

    2000-06-01

    A method is described for electrodepositing alpha-emitting nuclides. To determine the optimum conditions for plating plutonium, the effects of electrolyte concentration, chelating reagent, current, electrolyte pH, and plating time on the electrodeposition were investigated on the basis of an ammonium oxalate-ammonium sulfate electrolyte containing diethylenetriaminepentaacetic acid. The optimized electrodeposition procedure for the determination of plutonium was validated by application to environmental samples. The chemical yield of the optimized electrodeposition step in the environmental samples was slightly higher than that of Talvitie's method. The electrodeposition procedure developed in this study was also applied to the determination of radionuclides such as thorium, uranium and americium, for which the electrodeposition yields were again slightly higher than those of the conventional method.

  20. A comparison of design variables for control theory based airfoil optimization

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work in the area it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using either the potential flow or the Euler equations with either a conformal mapping or a general coordinate system. We have also explored three-dimensional extensions of these formulations recently. The goal of our present work is to demonstrate the versatility of the control theory approach by designing airfoils using both Hicks-Henne functions and B-spline control points as design variables. The research also demonstrates that the parameterization of the design space is an open question in aerodynamic design.
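
    The Hicks-Henne bump functions mentioned above have a widely used closed form, b(x) = A sin(pi x^m)^t with m = ln(0.5)/ln(x_peak), which places the peak of each bump at x_peak and makes it vanish at both endpoints; a short sketch (with invented amplitudes and a toy baseline shape) follows.

```python
# Perturb a baseline airfoil surface with a sum of Hicks-Henne bumps
# whose amplitudes serve as the design variables.
import numpy as np

def hicks_henne(x, amplitude, x_peak, t=2.0):
    """One Hicks-Henne bump on the chordwise coordinate x in [0, 1]."""
    m = np.log(0.5) / np.log(x_peak)
    return amplitude * np.sin(np.pi * x**m) ** t

x = np.linspace(0.0, 1.0, 201)
baseline = 0.12 * (1 - x) * np.sqrt(x)                  # toy thickness shape
bumps = [(0.004, 0.25), (-0.002, 0.55), (0.003, 0.80)]  # (amplitude, peak)
perturbed = baseline + sum(hicks_henne(x, a, xp) for a, xp in bumps)
print("max shape change: %.4f chord" % np.abs(perturbed - baseline).max())
```

    With B-spline control points, the design variables would instead be the control-net ordinates, which is precisely the parameterization comparison the abstract describes.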

  1. A level-set procedure for the design of electromagnetic metamaterials.

    PubMed

    Zhou, Shiwei; Li, Wei; Sun, Guangyong; Li, Qing

    2010-03-29

    Achieving negative permittivity and negative permeability signifies a key topic of research in the design of metamaterials. This paper introduces a level-set based topology optimization method, in which the interface between the vacuum and metal phases is implicitly expressed by the zero-level contour of a higher-dimensional level-set function. Following a sensitivity analysis, the optimization maximizes the objective based on the normal direction of the level-set function and the induced current flow, thereby generating the desirable patterns of current flow on the metal surface. As a benchmark example, the U-shaped structure and its variations are obtained from the level-set topology optimization. Numerical examples demonstrate that both negative permittivity and negative permeability can be attained.

  2. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
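
    A toy illustration of the expected-cost trade-off described above: the destruction and in-service failure models and all cost figures below are invented for illustration, not the paper's models, but the structure (minimize total expected cost subject to a reliability floor) is the same.

```python
# Grid search over (number of proof tests, proof load level), minimizing
# total expected cost subject to a reliability constraint.
import numpy as np

c_test, c_destroyed, c_failure = 1.0, 50.0, 1000.0   # assumed relative costs
R_min = 0.99                                          # reliability constraint

def expected_cost(n_tests, proof_level):
    p_destroy = 1 - np.exp(-3.0 * proof_level)            # per-test destruction
    p_fail = 0.05 * np.exp(-5.0 * proof_level * n_tests)  # in-service failure
    reliability = 1 - p_fail
    cost = (n_tests * c_test + n_tests * p_destroy * c_destroyed
            + p_fail * c_failure)
    return cost, reliability

candidates = [(n, lvl) for n in range(1, 6)
              for lvl in np.linspace(0.05, 0.5, 10)]
feasible = [(expected_cost(n, l)[0], n, l) for n, l in candidates
            if expected_cost(n, l)[1] >= R_min]
cost, n_opt, lvl_opt = min(feasible)
print("optimal: %d proof tests at level %.2f, expected cost %.2f"
      % (n_opt, lvl_opt, cost))
```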

  3. A multiplexed microfluidic toolbox for the rapid optimization of affinity-driven partition in aqueous two phase systems.

    PubMed

    Bras, Eduardo J S; Soares, Ruben R G; Azevedo, Ana M; Fernandes, Pedro; Arévalo-Rodríguez, Miguel; Chu, Virginia; Conde, João P; Aires-Barros, M Raquel

    2017-09-15

    Antibodies and other protein products such as interferons and cytokines are biopharmaceuticals of critical importance which, in order to be safely administered, have to be thoroughly purified in a cost-effective and efficient manner. The use of aqueous two-phase extraction (ATPE) is a viable option for this purification, but these systems are difficult to model, and optimization procedures require lengthy and expensive screening processes. Here, a methodology for the rapid screening of antibody extraction conditions using a microfluidic channel-based toolbox is presented. A first microfluidic structure allows a simple, negative-pressure-driven, rapid screening of up to 8 extraction conditions simultaneously, using less than 20 μL of each phase-forming solution per experiment, while a second microfluidic structure allows the integration of multi-step extraction protocols based on the results obtained with the first device. In this paper, this microfluidic toolbox was used to demonstrate the potential of LYTAG fusion proteins used as affinity tags to optimize the partitioning of antibodies in ATPE processes, where a maximum partition coefficient (K) of 9.2 in a PEG 3350/phosphate system was obtained for the antibody extraction in the presence of the LYTAG-Z dual ligand. This represents an increase of approximately 3.7-fold when compared with the same conditions without the affinity molecule (K=2.5). Overall, this miniaturized and versatile approach allowed the rapid optimization of molecule partition, followed by a proof-of-concept demonstration of an integrated back-extraction procedure, both of which are critical steps towards obtaining high-purity biopharmaceuticals using ATPE. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Optimization of a gene electrotransfer procedure for efficient intradermal immunization with an hTERT-based DNA vaccine in mice

    PubMed Central

    Calvet, Christophe Y; Thalmensi, Jessie; Liard, Christelle; Pliquet, Elodie; Bestetti, Thomas; Huet, Thierry; Langlade-Demoyen, Pierre; Mir, Lluis M

    2014-01-01

    DNA vaccination consists of administering an antigen-encoding plasmid in order to trigger a specific immune response. This vaccine strategy is of particular interest in the fight against various infectious diseases and cancer. Gene electrotransfer is the most efficient and safest non-viral gene transfer procedure, and specific electrical parameters have been developed for several target tissues. Here, a gene electrotransfer protocol into the skin has been optimized in mice for efficient intradermal immunization against the well-known telomerase tumor antigen. First, the luciferase reporter gene was used to evaluate gene electrotransfer efficiency in the skin as a function of the electrical parameters and electrodes, either non-invasive or invasive. These parameters were then tested for their potency to generate specific cellular CD8 immune responses against telomerase epitopes. The CD8 T-cells were fully functional, as they secreted IFNγ and were endowed with specific cytotoxic activity towards target cells. This simple and optimized procedure for efficient gene electrotransfer into the skin using the telomerase antigen is to be used in cancer patients for the phase 1 clinical evaluation of a therapeutic cancer DNA vaccine called INVAC-1. PMID:26015983

  5. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, and its results provide useful information that helps to understand model behaviors and improve model simulations.
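
    The adaptive surrogate loop described above follows a generic pattern: fit a cheap surrogate, search it, evaluate the expensive model at the proposed point, and augment the data set. The sketch below uses a one-parameter polynomial surrogate and random search as stand-ins for MARS and SCE-UA, with a toy 1 - NSE objective.

```python
# Adaptive surrogate-based calibration loop (schematic stand-in).
import numpy as np

def expensive_model(theta):
    """Stand-in for a CREST run returning 1 - NSE (lower is better)."""
    return (theta - 0.63) ** 2 + 0.02

rng = np.random.default_rng(5)
X = list(rng.uniform(0, 1, 8))            # initial design points
y = [expensive_model(t) for t in X]

for it in range(5):
    coeffs = np.polyfit(X, y, deg=2)      # fit the cheap surrogate
    cand = rng.uniform(0, 1, 2000)        # global search on the surrogate
    theta_new = cand[np.argmin(np.polyval(coeffs, cand))]
    X.append(theta_new)                   # evaluate truth, augment data set
    y.append(expensive_model(theta_new))

best = X[int(np.argmin(y))]
print("best parameter %.3f, objective %.4f" % (best, min(y)))
```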

  6. On-The-Fly Data Processing with Scanamorphos: Application To ArTéMiS

    NASA Astrophysics Data System (ADS)

    Roussel, Hélène

    2018-03-01

    Scanamorphos is a suite of IDL-based routines for optimally subtracting low-frequency noise by making maximal use of the redundancy in the data. The procedures were adapted to be applicable to ArTéMiS data.

  7. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in considerably more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. ©2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  8. Strain-Based Damage Determination Using Finite Element Analysis for Structural Health Management

    NASA Technical Reports Server (NTRS)

    Hochhalter, Jacob D.; Krishnamurthy, Thiagaraja; Aguilo, Miguel A.

    2016-01-01

    A damage determination method is presented that relies on in-service strain sensor measurements. The method employs a gradient-based optimization procedure combined with the finite element method for solution to the forward problem. It is demonstrated that strains, measured at a limited number of sensors, can be used to accurately determine the location, size, and orientation of damage. Numerical examples are presented to demonstrate the general procedure. This work is motivated by the need to provide structural health management systems with a real-time damage characterization. The damage cases investigated herein are characteristic of point-source damage, which can attain critical size during flight. The procedure described can be used to provide prognosis tools with the current damage configuration.

  9. A Robust Kalman Framework with Resampling and Optimal Smoothing

    PubMed Central

    Kautz, Thomas; Eskofier, Bjoern M.

    2015-01-01

    The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of applications for the presented analysis procedure range from movement analysis over medical imaging, brain-computer interfaces to robot navigation or meteorological studies. PMID:25734647
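
    For orientation, the core predict/update recursion that such a framework extends is compact. The sketch below is a plain constant-velocity Kalman filter with invented noise levels; the paper's robustness, resampling, and smoothing extensions are omitted.

```python
# One-dimensional tracking with a constant-velocity Kalman filter.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman predict/update cycle for state x, covariance P."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1, dt], [0, 1]])           # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                # position-only measurements
Q = 1e-3 * np.eye(2)
R = np.array([[0.05]])

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(3)
for k in range(50):
    truth = 0.5 * k * dt                  # object moving at 0.5 units/s
    z = np.array([truth + rng.normal(0, 0.2)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print("estimated position %.3f, velocity %.3f" % (x[0], x[1]))
```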

  10. Air-Gapped Structures as Magnetic Elements for Use in Power Processing Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.

    1977-01-01

    Methodical approaches to the design of inductors for use in LC filters and dc-to-dc converters using air-gapped magnetic structures are presented. Methods for the analysis and design of full-wave rectifier LC filter circuits operating with the inductor current in both the continuous conduction and the discontinuous conduction modes are also described. In the continuous conduction mode, linear circuit analysis techniques are employed, while in the case of the discontinuous mode, the method of analysis requires computer solutions of the piecewise linear differential equations which describe the filter in the time domain. Procedures for designing filter inductors using air-gapped cores are presented. The first procedure requires digital computation to yield a design which is optimized in the sense of minimum core volume and minimum number of turns. The second procedure does not yield an optimized design as defined above, but the design can be obtained by hand calculations or with a small calculator. The third procedure is based on the use of specially prepared magnetic core data and provides an easy way to quickly reach a workable design.

  11. Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M; Lloyd, S AM; Townson, R

    2014-08-15

    This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining the conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure. Included is the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), that serves as a framework for optimization and provides streamlined access to underlying particle transport packages. Comparison of the toolkit's dose calculations to those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is illustrated between the Eclipse TPS and AMMO for a 6 MV photon field. The results of a simple optimization show the functioning of the optimization framework, while significant research remains to characterize appropriate constraints.

  12. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, resulting in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented an effective optimization framework that incorporates a direct-CAD interface. In this work, we enhance the capabilities of this framework with efficient gradient computations using the discrete adjoint method. We present details of the adjoint numerical implementation, which reuses the domain decomposition, multigrid, and time-marching schemes of the flow solver. Furthermore, we explain and demonstrate the use of CAD in conjunction with the Cartesian adjoint approach. The final paper will contain a number of complex-geometry, industrially relevant examples with many design variables to demonstrate the effectiveness of the adjoint method on Cartesian meshes.

  13. Investigations of the pushability behavior of cardiovascular angiographic catheters.

    PubMed

    Bloss, Peter; Rothe, Wolfgang; Wünsche, Peter; Werner, Christian; Rothe, Alexander; Kneissl, Georg Dieter; Burger, Wolfram; Rehberg, Elisabeth

    2003-01-01

    The placement of angiographic catheters into the vascular system is a routine procedure in modern clinical practice. The definition of objective, but not yet available, evaluation protocols based on measurable physical quantities correlated with empirical clinical findings is of utmost importance to catheter manufacturers for in-house product screening and optimization. In this context, we present an assessment of multiple mechanical and surface catheter properties, such as static and kinetic friction, bending stiffness, microscopic surface topology, surface roughness, and surface free energy, and their interrelation. The theoretical framework, a description of the experimental methods, and extensive data measured on several different catheters are provided, and in conclusion a testing procedure is defined. Although this procedure is based on the measurement of several physical quantities, it can be easily implemented by commercial laboratories testing catheters, as it relies on relatively low-cost standard methods.

  14. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... periodic optimization of detector response. Prior to introduction into service and at least annually... nitrogen. (2) One of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of Flame Ionization...

  15. Hormesis and the Salk Polio Vaccine

    PubMed Central

    Calabrese, Edward J.

    2011-01-01

    The production of the Salk vaccine polio virus by monkey kidney cells was generated using the synthetic tissue culture medium, Mixture 199. In this paper’s retrospective assessment of this process, it was discovered that Mixture 199 was modified by the addition of ethanol to optimize animal cell survival based on experimentation that revealed a hormetic-like biphasic response relationship. This hormesis-based optimization procedure was then applied to all uses of Mixture 199 and modifications of it, including its application to the Salk polio vaccine during preliminary testing and in its subsequent major societal treatment programs. PMID:22423232

  16. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1976-01-01

    Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.

  17. Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.

    PubMed

    Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A

    2012-01-03

    Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) has rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge to the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage particularly across AT and GC rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low quantity starting material and tolerant to extremely high AT content sequences. We have used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of sequence data generated, we show that our optimized conditions that involve a PCR additive (TMAC), produces amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain the complexity of either extremes of base composition. This development will greatly benefit sequencing clinical samples that often require amplification due to low mass of DNA starting material.

  18. How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.

    DTIC Science & Technology

    1980-05-01

    or near-optimal solutions are the only practical solutions available. This paper identifies and compares some procedures which use independent near-optimal solutions... The objective of this paper is to indicate some relatively new statistical procedures for obtaining an upper confidence limit on G. Each of these

  19. Optimization of Residual Stresses in MMC's Using Compensating/Compliant Interfacial Layers. Part 2: OPTCOMP User's Guide

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.

    1994-01-01

    A user's guide for the computer program OPTCOMP is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in uni-directional metal matrix composites subjected to combined thermo-mechanical axisymmetric loading using compensating or compliant layers at the fiber/matrix interface. The user specifies the architecture and the initial material parameters of the interfacial region, which can be either elastic or elastoplastic, and defines the design variables, together with the objective function, the associated constraints and the loading history through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the elastoplastic response of an arbitrarily layered multiple concentric cylinder model that is coupled to the commercial optimization package DOT. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.

  20. Resolvent analysis of shear flows using One-Way Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Rigas, Georgios; Schmidt, Oliver; Towne, Aaron; Colonius, Tim

    2017-11-01

    For three-dimensional flows, questions of stability, receptivity, secondary flows, and coherent structures require the solution of large partial-derivative eigenvalue problems. Reduced-order approximations are thus required for engineering prediction since these problems are often computationally intractable or prohibitively expensive. For spatially slowly evolving flows, such as jets and boundary layers, the One-Way Navier-Stokes (OWNS) equations permit a fast spatial marching procedure that results in a huge reduction in computational cost. Here, an adjoint-based optimization framework is proposed and demonstrated for calculating optimal boundary conditions and optimal volumetric forcing. The corresponding optimal response modes are validated against modes obtained in terms of global resolvent analysis. For laminar base flows, the optimal modes reveal modal and non-modal transition mechanisms. For turbulent base flows, they predict the evolution of coherent structures in a statistical sense. Results from the application of the method to three-dimensional laminar wall-bounded flows and turbulent jets will be presented. This research was supported by the Office of Naval Research (N00014-16-1-2445) and Boeing Company (CT-BA-GTA-1).

  1. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information, as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship matrix was performed by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be made either by modifying existing code or by using the efficient automatic optimizations provided by open-source or third-party libraries. © 2011 Blackwell Verlag GmbH.
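
    The genomic relationship matrix itself is straightforward to form; the sketch below uses VanRaden's common formulation, G = ZZ'/(2 Σ p_j(1-p_j)) with Z the genotype matrix centered by twice the allele frequencies, on simulated genotypes. The paper's contribution is doing this efficiently at scale (optimized loops, tuned libraries, parallel processing), which the dense matrix routines of numpy loosely mirror here.

```python
# Build a genomic relationship matrix from simulated 0/1/2 SNP codes.
import numpy as np

rng = np.random.default_rng(4)
n_animals, n_snps = 500, 4000
p = rng.uniform(0.05, 0.95, n_snps)            # allele frequencies
M = rng.binomial(2, p, size=(n_animals, n_snps)).astype(float)

Z = M - 2.0 * p                                # center by 2p per marker
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))  # genomic relationship matrix

# a small ridge keeps G invertible when it is near-singular
G_inv = np.linalg.inv(G + 0.01 * np.eye(n_animals))
print("mean diagonal of G: %.3f" % np.mean(np.diag(G)))
```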

  2. Constructing diabatic representations using adiabatic and approximate diabatic data--Coping with diabolical singularities.

    PubMed

    Zhu, Xiaolei; Yarkony, David R

    2016-01-28

    We have recently introduced a diabatization scheme, which simultaneously fits and diabatizes adiabatic ab initio electronic wave functions, Zhu and Yarkony J. Chem. Phys. 140, 024112 (2014). The algorithm uses derivative couplings in the defining equations for the diabatic Hamiltonian, H(d), and fits all its matrix elements simultaneously to adiabatic state data. This procedure ultimately provides an accurate, quantifiably diabatic, representation of the adiabatic electronic structure data. However, optimizing the large number of nonlinear parameters in the basis functions and adjusting the number and kind of basis functions from which the fit is built, which provide the essential flexibility, has proved challenging. In this work, we introduce a procedure that combines adiabatic state and diabatic state data to efficiently optimize the nonlinear parameters and basis function expansion. Further, we consider using direct properties based diabatizations to initialize the fitting procedure. To address this issue, we introduce a systematic method for eliminating the debilitating (diabolical) singularities in the defining equations of properties based diabatizations. We exploit the observation that if approximate diabatic data are available, the commonly used approach of fitting each matrix element of H(d) individually provides a starting point (seed) from which convergence of the full H(d) construction algorithm is rapid. The optimization of nonlinear parameters and basis functions and the elimination of debilitating singularities are, respectively, illustrated using the 1,2,3,4(1)A states of phenol and the 1,2(1)A states of NH3, states which are coupled by conical intersections.

  3. An approach to design controllers for MIMO fractional-order plants based on parameter optimization algorithm.

    PubMed

    Xue, Dingyü; Li, Tingxue

    2017-04-27

    The parameter optimization method for multivariable systems is extended to the controller design problems for multiple input multiple output (MIMO) square fractional-order plants. The algorithm can be applied to search for the optimal parameters of integer-order controllers for fractional-order plants with or without time delays. Two examples are given to present the controller design procedures for MIMO fractional-order systems. Simulation studies show that the integer-order controllers designed are robust to plant gain variations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. A thermal vacuum test optimization procedure

    NASA Technical Reports Server (NTRS)

    Kruger, R.; Norris, H. P.

    1979-01-01

    An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. This model takes the form of a computer program that interacts with the user to obtain certain input parameters. The program provides the user with a list of pertinent information regarding an optimized test program and graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. It appears useful as a general guide and provides a way of extrapolating past performance to future missions.

  5. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
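
    A stripped-down sketch of the pheromone idea behind such searches is given below: each "ant" samples a candidate model specification as a binary inclusion vector, and pheromone levels are evaporated and then reinforced along the best specification found so far. The fitness function is a toy stand-in for a structural-equation-model fit index, and all constants are illustrative, not the article's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_params, n_ants, n_iter, rho = 10, 20, 50, 0.1

# Hypothetical fitness: reward matching a "true" specification
# (in the article this would be a model fit index from SEM software).
true_spec = np.array([1] * 5 + [0] * 5)
def fitness(spec):
    return -np.sum(spec != true_spec)

tau = np.full(n_params, 0.5)        # pheromone = inclusion probability
best_spec, best_fit = None, -np.inf
for _ in range(n_iter):
    for _ in range(n_ants):
        spec = (rng.random(n_params) < tau).astype(int)  # ant samples a model
        f = fitness(spec)
        if f > best_fit:
            best_spec, best_fit = spec, f
    # evaporate, then reinforce along the best specification found so far
    tau = (1 - rho) * tau + rho * best_spec

print(best_spec, best_fit)
```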

  6. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
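
    The Monte Carlo side of such an analysis is easy to sketch: sample the random variables, evaluate a stand-in for the response-surface estimate of buckling strength, and convert the failure fraction into a reliability index. The distributions, units, and the deliberately simplistic surrogate below are all assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical random variables (arbitrary units): the report found the
# applied load and fiber-direction modulus to dominate the reliability index.
P  = rng.normal(1.00, 0.15, n)    # applied axial load
E1 = rng.normal(20.0, 1.00, n)    # elastic modulus in the fiber direction

# Stand-in for the second-order response surface of buckling strength
buckling_strength = 0.08 * E1

pf = np.mean(P > buckling_strength)   # Monte Carlo failure probability
beta = norm.isf(pf)                   # reliability index: beta = -Phi^{-1}(pf)
print(pf, beta)
```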

  7. A Machine-Learning and Filtering Based Data Assimilation Framework for Geologic Carbon Sequestration Monitoring Optimization

    NASA Astrophysics Data System (ADS)

    Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.

    2017-12-01

    Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) over all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Springs Uplift carbon storage site in southwestern Wyoming.
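
    The ROM idea is straightforward to sketch with one of the two named learners: fit an SVR to a handful of expensive simulator runs, then call the cheap surrogate inside the assimilation loop. The inputs, outputs, and the stand-in "simulator" below are fabricated for illustration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Hypothetical training set: inputs might be (permeability, porosity,
# injection rate); the output stands in for cumulative CO2 leak from a
# full reservoir simulation.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = X[:, 0] * np.exp(X[:, 2]) + 0.1 * rng.normal(size=200)  # fake simulator

rom = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# The ROM now replaces the expensive simulator inside the filtering loop.
x_new = np.array([[0.5, 0.3, 0.8]])
print(rom.predict(x_new))
```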

  8. CFD-based optimization in plastics extrusion

    NASA Astrophysics Data System (ADS)

    Eusterholz, Sebastian; Elgeti, Stefanie

    2018-05-01

    This paper presents novel ideas in numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient initial design. Thereby automatic optimization can be incorporated and the design process is advanced, beyond the simulation-supported, but still experience-based approach. This paper proposes concepts to extend a method which has been developed and validated for die design to the design of mixing-elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward-simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the products' quality based on the forward simulation. This paper covers two aspects: (1) It reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.

  9. Optimizing chirped laser pulse parameters for electron acceleration in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza

    2015-11-14

    Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.

  10. Optimized Non-Obstructive Particle Damping (NOPD) Treatment for Composite Honeycomb Structures

    NASA Technical Reports Server (NTRS)

    Panossian, H.

    2008-01-01

    Non-Obstructive Particle Damping (NOPD) technology is a passive vibration damping approach whereby metallic or non-metallic particles in spherical or irregular shapes, of heavy or light consistency, and even liquid particles, are placed inside cavities or attached to structures by appropriate means at strategic locations to absorb vibration energy. The objective of the work described herein is the development of a design optimization procedure, and discussion of test results, for such a NOPD treatment on honeycomb (HC) composite structures, based on finite element modeling (FEM) analyses, optimization, and tests. Modeling and predictions were performed, and tests were carried out to correlate the test data with the FEM. The optimization procedure consisted of defining a global objective function and using finite difference methods to determine the optimal values of the design variables through quadratic linear programming. The optimization process targeted the highest dynamic displacements of several vibration modes of the structure and sought an optimal treatment configuration that would minimize them. An optimal design was thus derived, and laboratory tests were conducted to evaluate its performance under different vibration environments. Three honeycomb composite beams with Nomex core and aluminum face sheets -- one empty (untreated), one uniformly treated with NOPD, and one optimally treated with NOPD according to the analytically predicted optimal design configuration -- were tested in the laboratory. It is shown that the beam with the optimal treatment has the lowest response amplitude. Results are presented from modal vibration tests and FEM predictions of the modal characteristics of the honeycomb beams under no treatment, 50% uniform treatment, and the optimal NOPD treatment design configuration, together with verification against test data.

  11. A structural topological optimization method for multi-displacement constraints and any initial topology configuration

    NASA Astrophysics Data System (ADS)

    Rong, J. H.; Yi, J. H.

    2010-10-01

    In density-based topological design, one expects that the final result consists of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase transferring step. Firstly, an optimization model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and its element topology variable. Secondly, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, and the convergence of the proposed method is not affected. The final topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and make the designed structural topology black/white in both the phase transferring step and the second optimization adjustment phase. The optimum topology can finally be obtained by the second-phase optimization adjustments. Two examples are presented to show that the topologies obtained by the proposed method exhibit a very good black/white (0/1) design distribution, and that the computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two optimization adjustment phases. The examples also show that the method is robust and practicable.

  12. Vulnerable Atherosclerotic Plaque Elasticity Reconstruction Based on a Segmentation-Driven Optimization Procedure Using Strain Measurements: Theoretical Framework

    PubMed Central

    Le Floc’h, Simon; Tracqui, Philippe; Finet, Gérard; Gharib, Ahmed M.; Maurice, Roch L.; Cloutier, Guy; Pettigrew, Roderic I.

    2016-01-01

    It is now recognized that prediction of the vulnerable coronary plaque rupture requires not only an accurate quantification of fibrous cap thickness and necrotic core morphology but also a precise knowledge of the mechanical properties of plaque components. Indeed, such knowledge would allow a precise evaluation of the peak cap-stress amplitude, which is known to be a good biomechanical predictor of plaque rupture. Several studies have been performed to reconstruct a Young’s modulus map from strain elastograms. It seems that the main issue for improving such methods does not rely on the optimization algorithm itself, but rather on preconditioning requiring the best estimation of the plaque components’ contours. The present theoretical study was therefore designed to develop: 1) a preconditioning model to extract the plaque morphology in order to initiate the optimization process, and 2) an approach combining a dynamic segmentation method with an optimization procedure to highlight the modulogram of the atherosclerotic plaque. This methodology, based on the continuum mechanics theory prescribing the strain field, was successfully applied to seven intravascular ultrasound coronary lesion morphologies. The reconstructed cap thickness, necrotic core area, calcium area, and the Young’s moduli of the calcium, necrotic core, and fibrosis were obtained with mean relative errors of 12%, 4% and 1%, 43%, 32%, and 2%, respectively. PMID:19164080

  13. Optimal Design of a Resonance-Based Voltage Boosting Rectifier for Wireless Power Transmission.

    PubMed

    Lim, Jaemyung; Lee, Byunghun; Ghovanloo, Maysam

    2018-02-01

    This paper presents the design procedure for a new multi-cycle resonance-based voltage boosting rectifier (MCRR) capable of delivering a desired amount of power to the load (PDL) at a designated high voltage (HV) through a loosely-coupled inductive link. This is achieved by shorting the receiver (Rx) LC-tank for several cycles to harvest and accumulate the wireless energy in the Rx inductor before boosting the voltage by breaking the loop and transferring the energy to the load in a quarter cycle. By optimizing the geometries of the transmitter (Tx) and Rx coils and the number of cycles, N, for energy harvesting through an iterative design procedure, the MCRR can achieve the highest PDL under a given set of design constraints. Governing equations for the MCRR operation are derived to identify key specifications and the design guidelines. Using an exemplary set of specs, the optimized MCRR was able to generate 20.9 V DC across a 100 kΩ load from a 1.8 V peak, 6.78 MHz sinusoidal input in the ISM band at a Tx/Rx coil separation of 1.3 cm, a power transfer efficiency (PTE) of 2.2%, and N = 9 cycles. At the same coil distance and loading, coils optimized for a conventional half-wave rectifier (CHWR) reached only 13.6 V DC from the same source.

  14. A trust region-based approach to optimize triple response systems

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen

    2014-05-01

    This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.

  15. Prediction of sonic boom from experimental near-field overpressure data. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Reiners, S. J.

    1975-01-01

    A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.

  16. Anesthesiology and gastroenterology.

    PubMed

    de Villiers, Willem J S

    2009-03-01

    A successful population-based colorectal cancer screening requires efficient colonoscopy practices that incorporate high throughput, safety, and patient satisfaction. There are several different modalities of nonanesthesiologist-administered sedation currently available and in development that may fulfill these requirements. Modern-day gastroenterology endoscopic procedures are complex and demand the full attention of the attending gastroenterologist and the complete cooperation of the patient. Many of these procedures will also require the anesthesiologist's knowledge, skills, abilities, and experience to ensure optimal procedure results and good patient outcomes. The goal of this review is (1) to provide a gastroenterology perspective on the use of propofol in gastroenterology endoscopic practice, and (2) to describe newer GI endoscopy procedures that gastroenterologists perform that might involve anesthesiologists.

  17. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Initial and periodic optimization of detector response. Prior to initial use and at least annually... nitrogen. (2) Use of one of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of a Flame...

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  19. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results with much faster solving times than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing real supply chain networks. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
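
    A toy version of such an iterative hybrid loop is sketched below: an analytical LP plans shipments on two routes given an estimated effective capacity, a stochastic "simulation" stage then observes realized capacity, and the loop repeats until successive plans agree within a tolerance. All numbers, route names, and the update rule are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
cost = np.array([4.0, 6.0])                 # unit shipping cost per route
demand, nominal = 100.0, np.array([80.0, 80.0])
capacity_est = nominal.copy()

plan_prev = np.full(2, np.inf)
for it in range(100):
    # Analytical stage: minimize cost subject to meeting demand.
    res = linprog(cost, A_eq=[[1.0, 1.0]], b_eq=[demand],
                  bounds=list(zip([0.0, 0.0], capacity_est)))
    plan = res.x
    if np.linalg.norm(plan - plan_prev) < 0.5:   # termination criterion
        break
    plan_prev = plan
    # Simulation stage: realized capacity fluctuates around nominal;
    # the estimate is updated as a running average over observations.
    realized = nominal * rng.uniform(0.9, 1.1, size=2)
    capacity_est = (it * capacity_est + realized) / (it + 1)

print(it, plan)
```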

  20. CFD-Based Design Optimization Tool Developed for Subsonic Inlet

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The traditional approach to the design of engine inlets for commercial transport aircraft is a tedious process that ends with a less-than-optimum design. With the advent of high-speed computers and the availability of more accurate and reliable computational fluid dynamics (CFD) solvers, numerical optimization processes can effectively be used to design an aerodynamic inlet lip that enhances engine performance. The designers' experience at Boeing Corporation showed that for a peak Mach number on the inlet surface beyond some upper limit, the performance of the engine degrades excessively. Thus, our objective was to optimize efficiency (minimize the peak Mach number) at maximum cruise without compromising performance at other operating conditions. Using the CFD code NPARC and the numerical optimization code ADS, the NASA Lewis Research Center, in collaboration with Boeing, developed an integrated procedure at Lewis to find the optimum shape of a subsonic inlet lip. We used a GRAPE-based three-dimensional grid generator to help automate the optimization procedure. The inlet lip shape at the crown and the keel was described as a superellipse, and the superellipse exponents and radii ratios were taken as design variables. Three operating conditions, cruise, takeoff, and rolling takeoff, were considered in this study. Three-dimensional Euler computations were carried out to obtain the flow field. At the initial design, the peak Mach numbers for maximum cruise, takeoff, and rolling takeoff conditions were 0.88, 1.772, and 1.61, respectively. The acceptable upper limits on the takeoff and rolling takeoff Mach numbers were 1.55 and 1.45. Since the initial design provided by Boeing was found to be optimum with respect to the maximum cruise condition, the sum of the peak Mach numbers at takeoff and rolling takeoff was minimized in the current study while the maximum cruise Mach number was constrained to remain close to that of the existing design. With this objective, the optimum design satisfied the upper limits at takeoff and rolling takeoff while retaining the desirable cruise performance. Further studies are being conducted to include static and cross-wind operating conditions in the design optimization procedure. This work was carried out in collaboration with Dr. E.S. Reddy of NYMA, Inc.
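
    The superellipse lip parameterization lends itself to a compact sketch. Below, the exponent n and the radii a and b are the kind of design variables described above; the function, values, and sampling are illustrative, not the Boeing/Lewis geometry.

```python
import numpy as np

def superellipse(a, b, n, num=200):
    """Quarter superellipse (x/a)^n + (y/b)^n = 1, e.g. from crown to highlight."""
    t = np.linspace(0.0, np.pi / 2.0, num)
    x = a * np.cos(t) ** (2.0 / n)
    y = b * np.sin(t) ** (2.0 / n)
    return x, y

# e.g. a sharper lip (n = 2 is an ordinary ellipse) versus a blunter one
x_sharp, y_sharp = superellipse(a=1.0, b=0.4, n=2.0)
x_blunt, y_blunt = superellipse(a=1.0, b=0.4, n=3.0)
```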

  1. Real-time terminal area trajectory planning for runway independent aircraft

    NASA Astrophysics Data System (ADS)

    Xue, Min

    The increasing demand for commercial air transportation results in delays due to traffic queues that form bottlenecks along final approach and departure corridors. In urban areas it is often infeasible to build new runways, and regardless of automation upgrades, traffic must remain separated to avoid the wakes of preceding aircraft. Vertical or short takeoff and landing aircraft, serving as Runway Independent Aircraft (RIA), can increase passenger throughput at major urban airports via the use of vertiports or stub runways. The concept of simultaneous non-interfering (SNI) operations has been proposed to reduce traffic delays by creating approach and departure corridors that do not intersect existing fixed-wing routes. However, SNI trajectories open new routes that may overfly noise-sensitive areas, and RIA may generate more noise than traditional jet aircraft, particularly on approach. In this dissertation, we develop efficient SNI noise abatement procedures applicable to RIA. First, we introduce a methodology based on modified approximate cell decomposition and Dijkstra's search algorithm to optimize longitudinal-plane (2-D) RIA trajectories over a cost function that minimizes noise, time, and fuel use. Then, we extend the trajectory optimization model to 3-D with a k-ary tree as the discrete search space. We incorporate geographic information system (GIS) data, specifically population, into our objective function, and focus on a practical case study: the design of SNI RIA approach procedures to Baltimore-Washington International airport. Because solutions were represented as trim state sequences, we incorporated smooth transitions between segments to enable more realistic cost estimates. Due to the significant computational complexity, we investigated alternative, more efficient optimization techniques applicable to our nonlinear, non-convex, heavily constrained, and discontinuous objective function. Comparing a genetic algorithm (GA) and adaptive simulated annealing (ASA) with our original Dijkstra's algorithm, ASA was identified as the most efficient algorithm for terminal area trajectory optimization. The effects of design parameter discretization are analyzed, with results indicating that an SNI procedure with 3-4 segments effectively balances simplicity with cost minimization. Finally, pilot control commands were implemented and generated via optimization-based inverse simulation to validate execution of the optimal approach trajectories.
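
    A minimal sketch of the Dijkstra-based search is given below: grid cells stand in for the cell decomposition, and each move is charged a made-up composite cost combining population-weighted noise, time, and fuel. Real costs would come from rotorcraft noise and performance models, not this toy grid.

```python
import heapq

def dijkstra(cost_grid, start, goal):
    """Cheapest path on a 4-connected grid; cell values are entry costs."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    dist = {start: 0.0}
    queue, parent = [(0.0, start)], {}
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                nd = d + cost_grid[nb[0]][nb[1]]
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    parent[nb] = node
                    heapq.heappush(queue, (nd, nb))
    path, node = [goal], goal             # walk back to recover the path
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1], dist[goal]

# composite cell cost = w_noise*population + w_time + w_fuel (illustrative)
grid = [[1, 4, 4], [1, 9, 1], [1, 1, 1]]
print(dijkstra(grid, (0, 0), (2, 2)))
```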

  2. 'It's a logistical nightmare!' Recommendations for optimising human papillomavirus school-based vaccination experience.

    PubMed

    Robbins, Spring Chenoa Cooper; Bernard, Diana; McCaffery, Kirsten; Skinner, S Rachel

    2010-09-01

    To date, no published studies have examined procedural factors of the school-based human papillomavirus (HPV) vaccination program from the perspective of those involved. This study examines the factors that were perceived to impact the optimal vaccination experience. Schools across Sydney were selected to reflect a range of vaccination coverage at the school level and different school types, to ensure a range of experiences. Semi-structured focus groups were conducted with girls, and one-on-one interviews were undertaken with parents, teachers, and nurses until saturation of data in all emergent themes was reached. Focus groups and interviews explored participants' experiences of school-based HPV vaccination. Transcripts were analysed, letting themes emerge. Themes related to participants' experience of the organisational, logistical, and procedural aspects of the vaccination program and their perceptions of an optimal process were organised into two categories: (1) preparation for the vaccination program and (2) vaccination day strategies. In (1), themes emerged regarding commitment to the process from those involved, planning time and space for vaccinations, communication within and between agencies, and flexibility. In (2), themes included vaccinating the most anxious girls first, facilitating peer support, use of distraction techniques, minimising waiting time for girls, and support staff. A range of views exists on what constitutes an optimal school-based program. Several findings were identified that should be considered in the development of guidelines for implementing school-based programs. Future research should evaluate how different approaches to acquiring parental consent, and the use of anxiety and fear reduction strategies, impact experience and uptake in the school-based setting.

  3. Optimization of β-cyclodextrin-based flavonol extraction from apple pomace using response surface methodology.

    PubMed

    Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha

    2015-04-01

    The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized β-CD-based extraction of total flavonols using response surface methodology. A 2^3 central composite design with β-CD concentration (0-5 g 100 mL^-1), extraction temperature (20-72 °C), and extraction time (6-48 h), and a second-order quadratic model for the total flavonol yield (mg 100 g^-1 DM), was selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL^-1; extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g^-1 DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g^-1 DM obtained under the optimal β-CD-based parameters, giving a low absolute error and confirming the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD-based flavonol extraction, which presents an environmentally safe method for value addition to under-utilized bioresources.
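
    The response-surface step generalizes to a short sketch: fit a second-order quadratic model to (concentration, temperature, time) versus yield data and locate its optimum within the experimental region. The data points below are fabricated purely to make the example run; they are not the study's measurements.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
X = rng.uniform([0, 20, 6], [5, 72, 48], size=(20, 3))     # design points
y = 160 - ((X[:, 0] - 2.8) ** 2 + 0.01 * (X[:, 1] - 45) ** 2
           + 0.05 * (X[:, 2] - 25.6) ** 2) + rng.normal(0, 1, 20)

def features(x):
    """Full second-order model: intercept, linear, interaction, quadratic."""
    x1, x2, x3 = x
    return np.array([1, x1, x2, x3, x1*x2, x1*x3, x2*x3,
                     x1**2, x2**2, x3**2])

A = np.array([features(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)               # least-squares fit

# maximize the fitted surface inside the experimental region
res = minimize(lambda x: -features(x) @ beta, x0=[2.5, 45, 24],
               bounds=[(0, 5), (20, 72), (6, 48)])
print(res.x)   # predicted optimal (concentration, temperature, time)
```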

  4. Optimization of flexible wing structures subject to strength and induced drag constraints

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1977-01-01

    An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
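
    The penalty-function conversion is simple to sketch: each constrained problem becomes a sequence of unconstrained minimizations with a growing penalty multiplier. The toy objective and constraint below stand in for structural weight and a stress constraint, and a quasi-Newton solve stands in for the paper's Newton iteration.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + 2.0 * x[1]            # stand-in for structural weight
g = lambda x: 1.0 - x[0] * x[1]            # constraint g(x) <= 0 ("stress")

x = np.array([2.0, 2.0])
for r in [1.0, 10.0, 100.0, 1000.0]:       # exterior penalty sequence
    penalized = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x = minimize(penalized, x, method="BFGS").x   # unconstrained solve

print(x, g(x))   # approaches the constrained optimum as r grows
```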

  5. Integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades using multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1995-01-01

    This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.

  6. Multilevel decomposition approach to integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1994-01-01

    This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level, with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.

  7. Reducing infection risk in implant-based breast-reconstruction surgery: challenges and solutions

    PubMed Central

    Ooi, Adrian SH; Song, David H

    2016-01-01

    Implant-based procedures are the most commonly performed method for postmastectomy breast reconstruction. While donor-site morbidity is low, these procedures are associated with a higher risk of reconstructive loss. Many such losses are related to infection of the implant, which can lead to prolonged antibiotic treatment, undesired additional surgical procedures, and unsatisfactory results. This review summarizes the recent literature on implant-related breast-reconstruction infections and combines this with a practical approach to the patient and the surgery aimed at reducing this risk. Prevention of infection begins with the appropriate reconstructive choice, based on an assessment and optimization of risk factors. These include patient and disease characteristics, such as smoking, obesity, large breast size, and immediate reconstructive procedures, as well as adjuvant therapy, such as radiotherapy and chemotherapy. For implant-based breast reconstruction, preoperative planning and organization are key to reducing infection. A logical and consistent intraoperative and postoperative surgical protocol, including appropriate antibiotic choice, mastectomy-pocket creation, implant handling, and considered use of acellular dermal matrix, contributes toward the reduction of breast-implant infections. PMID:27621667

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsend, D.W.; Linnhoff, B.

    In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the 'temperature interval' (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.

  9. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
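
    An illustrative greedy variant of sensor-suite selection is sketched below: from a pool of candidate sensors, repeatedly add the one that most improves a merit metric (here, a made-up fault-coverage score minus a cost term) until the budget is reached. S4 itself uses a more sophisticated model-based optimization; this only shows the flavor of ranking suites by a merit metric, and all names are hypothetical.

```python
# Which faults each candidate sensor helps detect (hypothetical data)
COVERAGE = {
    "p_chamber": {"pump_cavitation", "valve_stuck"},
    "t_turbine": {"overtemp"},
    "flow_fuel": {"valve_stuck", "seal_leak"},
    "vib_pump":  {"pump_cavitation"},
}

def merit(suite):
    """Diagnostic coverage minus a small per-sensor cost penalty."""
    covered = set().union(*(COVERAGE[s] for s in suite)) if suite else set()
    return len(covered) - 0.1 * len(suite)

suite, budget = [], 3
while len(suite) < budget:
    best = max((s for s in COVERAGE if s not in suite),
               key=lambda s: merit(suite + [s]))
    suite.append(best)

print(suite, merit(suite))
```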

  10. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as the criterion is considered. The parametric uncertainty investigated is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty, and consequently the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem in itself. Moreover, a lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and the middle-interval approaches. The computational experiments conducted showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
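
    The quantity at the core of all these procedures is the permutation flow-shop makespan, computed by the standard completion-time recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j]. The sketch below evaluates it for fixed processing times (e.g., interval midpoints); the numbers are illustrative.

```python
def makespan(p):
    """Makespan for job order p, where p[i][j] is job i's time on machine j."""
    n, m = len(p), len(p[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prev_job = C[i - 1][j] if i > 0 else 0.0   # machine j frees up
            prev_mach = C[i][j - 1] if j > 0 else 0.0  # job i leaves machine j-1
            C[i][j] = max(prev_job, prev_mach) + p[i][j]
    return C[-1][-1]

# e.g. three jobs on two machines, nominal (mid-interval) processing times
print(makespan([[3, 2], [1, 4], [2, 2]]))   # -> 11
```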

  11. Nash equilibrium and multi criterion aerodynamic optimization

    NASA Astrophysics Data System (ADS)

    Tang, Zhili; Zhang, Lianhe

    2016-06-01

    Game theory, and in particular its Nash Equilibrium (NE) concept, has been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys/proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed-point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single-objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical test functions, illustrating the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
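
    A minimal best-response sketch of such a Nash game is given below: each "player" controls one variable and minimizes its own criterion, and iterating the best responses converges to the equilibrium fixed point. The two coupled quadratics are toy criteria, not an aerodynamic model.

```python
from scipy.optimize import minimize_scalar

f1 = lambda x, y: (x - 1.0) ** 2 + 0.5 * x * y   # player 1 controls x
f2 = lambda x, y: (y + 2.0) ** 2 + 0.5 * x * y   # player 2 controls y

x, y = 0.0, 0.0
for _ in range(50):
    x = minimize_scalar(lambda x: f1(x, y)).x    # best response of player 1
    y = minimize_scalar(lambda y: f2(x, y)).x    # best response of player 2

print(x, y)   # the fixed point (1.6, -2.4) is the Nash equilibrium
```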

  12. Standardization of Solar Mirror Reflectance Measurements - Round Robin Test: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyen, S.; Lupfert, E.; Fernandez-Garcia, A.

    2010-10-01

    Within the SolarPaces Task III standardization activities, DLR, CIEMAT, and NREL have concentrated on optimizing the procedure to measure the reflectance of solar mirrors. From this work, the laboratories have developed a clear definition of the method and requirements needed of commercial instruments for reliable reflectance results. A round robin test was performed between the three laboratories with samples that represent all of the commercial solar mirrors currently available for concentrating solar power (CSP) applications. The results show surprisingly large differences in hemispherical reflectance (sh) of 0.007 and specular reflectance (ss) of 0.004 between the laboratories. These differences indicate the importance of minimum instrument requirements and standardized procedures. Based on these results, the optimal procedure will be formulated and validated with a new round robin test in which a better accuracy is expected. Improved instruments and reference standards are needed to reach the necessary accuracy for cost and efficiency calculations.

  13. Accelerated forgetting? An evaluation on the use of long-term forgetting rates in patients with memory problems

    PubMed Central

    Geurts, Sofie; van der Werf, Sieberen P.; Kessels, Roy P. C.

    2015-01-01

    The main focus of this review was to evaluate whether long-term forgetting rates (delayed tests, days to weeks after initial learning) are more sensitive measures than standard delayed recall measures for detecting memory problems in various patient groups. It has been suggested that accelerated forgetting might be characteristic of epilepsy patients, but little research has been performed in other populations. Here, we identified eleven studies in a wide range of brain-injured patient groups whose long-term forgetting patterns were compared to those of healthy controls. Signs of accelerated forgetting were found in three studies. The results of eight studies showed normal forgetting over time for the patient groups. However, most of the studies used only a recognition procedure, after optimizing initial learning. Based on these results, we recommend the use of a combined recall and recognition procedure to examine accelerated forgetting, and we discuss the relevance of standard and optimized learning procedures in clinical practice. PMID:26106343

  14. Maze Procedures for Atrial Fibrillation, From History to Practice.

    PubMed

    Kik, Charles; Bogers, Ad J J C

    2011-10-01

    Atrial fibrillation may result in significant symptoms, (systemic) thrombo-embolism, and tachycardia-induced cardiomyopathy with cardiac failure, and may consequently be associated with significant morbidity and mortality. Nowadays, symptomatic atrial fibrillation can be treated with catheter-based ablation, surgical ablation, or hybrid approaches. In this setting, a fairly large number of surgical approaches and procedures have been described and are being practised. It should be clear that the Cox-maze procedure resulted from building up evidence and experience in successive steps, whereas some of the present surgical approaches and techniques are based only on technical feasibility and limited experience, rather than on a methodical development process. Some of the issues still under debate are whether the maze procedure can be limited to the left atrium or even to isolation of the pulmonary veins or whether bi-atrial procedures are indicated, whether cardiopulmonary bypass is to be applied, and which route of exposure facilitates an optimal result. In addition, maze procedures are not guided by electrophysiological mapping; at least in theory, not all lesions of the maze procedure are necessary in all patients. A history and aspects of current practice in the surgical treatment of atrial fibrillation are presented.

  15. Maze Procedures for Atrial Fibrillation, From History to Practice

    PubMed Central

    Kik, Charles; Bogers, Ad J.J.C.

    2011-01-01

    Atrial fibrillation may result in significant symptoms, (systemic) thrombo-embolism, and tachycardia-induced cardiomyopathy with cardiac failure, and may consequently be associated with significant morbidity and mortality. Nowadays, symptomatic atrial fibrillation can be treated with catheter-based ablation, surgical ablation, or hybrid approaches. In this setting, a fairly large number of surgical approaches and procedures have been described and are being practised. It should be clear that the Cox-maze procedure resulted from building up evidence and experience in successive steps, whereas some of the present surgical approaches and techniques are based only on technical feasibility and limited experience, rather than on a methodical development process. Some of the issues still under debate are whether the maze procedure can be limited to the left atrium or even to isolation of the pulmonary veins or whether bi-atrial procedures are indicated, whether cardiopulmonary bypass is to be applied, and which route of exposure facilitates an optimal result. In addition, maze procedures are not guided by electrophysiological mapping; at least in theory, not all lesions of the maze procedure are necessary in all patients. A history and aspects of current practice in the surgical treatment of atrial fibrillation are presented. PMID:28357007

  16. A robust active control system for shimmy damping in the presence of free play and uncertainties

    NASA Astrophysics Data System (ADS)

    Orlando, Calogero; Alaimo, Andrea

    2017-02-01

    Shimmy vibration is the oscillatory motion of the fork-wheel assembly about the steering axis. It represents one of the major problems of aircraft landing gear because it can lead to excessive wear and discomfort as well as safety concerns. Based on a nonlinear model of the mechanics of a single-wheel nose landing gear (NLG), an electromechanical actuator, and tire elasticity, a robust active controller capable of damping shimmy vibration is designed and investigated in this study. A novel Population Decline Swarm Optimization (PDSO) procedure is introduced and used to select the optimal parameters for the controller. The PDSO procedure is based on a decline demographic model and shows high global search capability with reduced computational costs. The open- and closed-loop system behavior is analyzed for different case studies of aeronautical interest, and the effects of torsional free play on the nose landing gear response are also studied. Probabilistic uncertainties in the plant parameters are then taken into account to assess the robustness of the active controller using a stochastic approach.
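
    A hedged sketch of a particle swarm optimizer with a declining population, in the spirit of the decline demographic idea, is given below: the worst particles are dropped on a schedule, cutting function evaluations as the swarm converges. The objective and all constants are illustrative stand-ins, not the paper's controller-tuning cost.

```python
import numpy as np

rng = np.random.default_rng(7)
f = lambda X: np.sum((X - 1.5) ** 2, axis=1)   # toy controller-cost surrogate

n, dim, w, c1, c2 = 40, 3, 0.7, 1.5, 1.5
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))
pbest, pbest_f = X.copy(), f(X)

for it in range(100):
    gbest = pbest[np.argmin(pbest_f)]
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    fx = f(X)
    better = fx < pbest_f
    pbest[better], pbest_f[better] = X[better], fx[better]
    if it % 20 == 19 and len(X) > 10:          # population decline schedule
        keep = np.argsort(pbest_f)[: int(0.7 * len(X))]
        X, V, pbest, pbest_f = X[keep], V[keep], pbest[keep], pbest_f[keep]

print(pbest[np.argmin(pbest_f)])
```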

  17. Design of Quiet Rotorcraft Approach Trajectories

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Burley, Casey L.; Boyd, D. Douglas, Jr.; Marcolini, Michael A.

    2009-01-01

    An optimization procedure for identifying quiet rotorcraft approach trajectories is proposed and demonstrated. The procedure employs a multi-objective genetic algorithm in order to reduce noise and create approach paths that will be acceptable to pilots and passengers. The concept is demonstrated by application to two different helicopters. The optimized paths are compared with one another and with a standard 6-deg approach path. The two demonstration cases validate the optimization procedure but highlight the need for improved noise prediction techniques and for additional rotorcraft acoustic data sets.

  18. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make the defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable if each component of the detachable phase mask is moved asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  19. Model based LV-reconstruction in bi-plane x-ray angiography

    NASA Astrophysics Data System (ADS)

    Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz

    2005-04-01

    Interventional x-ray angiography is the state of the art in the diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast-enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric superellipses are deformed until their projection profiles optimally fit the measured ventricular projections. Deformation is controlled by a simplex optimization procedure, and the resulting optimized parameter set provides the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from the stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. The results show conformance with the provided projection data, and the high convergence speed makes the algorithm useful for clinical application. Fully three-dimensional reconstruction of the left ventricle has high potential for improving clinical findings in interventional cardiology.
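
    A sketch of the model-based fit is given below: superellipse parameters are adjusted with the Nelder-Mead simplex method until the shape's projection widths match "measured" profiles. The target data are synthesized from a known shape, standing in for bi-plane angiogram slices; several view directions stand in for the full projection profiles so that all three parameters are identifiable.

```python
import numpy as np
from scipy.optimize import minimize

def projection_widths(params, angles):
    """Shadow width of the superellipse |x/a|^n + |y/b|^n = 1 per view angle."""
    a, b, n = params
    t = np.linspace(0.0, 2.0 * np.pi, 720)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    widths = []
    for ang in angles:
        proj = x * np.cos(ang) + y * np.sin(ang)
        widths.append(proj.max() - proj.min())
    return np.array(widths)

angles = np.linspace(0.0, np.pi, 6, endpoint=False)        # view directions
measured = projection_widths([30.0, 20.0, 2.5], angles)    # synthetic "truth"

loss = lambda p: np.sum((projection_widths(p, angles) - measured) ** 2)
fit = minimize(loss, x0=[25.0, 25.0, 2.0], method="Nelder-Mead")
print(fit.x)   # recovered (a, b, n)
```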

  20. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
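
    The (computationally expensive) nested approach the paper improves upon is easy to sketch in the single-objective case: for every candidate upper-level decision x, the lower-level problem is solved to optimality for y before the upper level is evaluated. The toy objectives below are illustrative stand-ins.

```python
from scipy.optimize import minimize, minimize_scalar

def lower_optimal_y(x):
    # Follower: min_y (y - x)^2 + y, which has the closed form y*(x) = x - 0.5
    return minimize_scalar(lambda y: (y - x) ** 2 + y).x

def upper_objective(xv):
    x = float(xv[0])
    y = lower_optimal_y(x)          # inner solve for every outer evaluation
    return (x - 2.0) ** 2 + (y - 1.0) ** 2

res = minimize(upper_objective, x0=[0.0], method="Nelder-Mead")
x_opt = float(res.x[0])
print(x_opt, lower_optimal_y(x_opt))   # approx (1.75, 1.25)
```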

  1. Optimization of contoured hypersonic scramjet inlets with a least-squares parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Auslender, A. H.

    1993-01-01

    A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a nonlinear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar, two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search over multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, a 23% improvement over the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained, yielding a mass-weighted total-pressure recovery of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, it is a practical technique applicable to optimal inlet design procedures.

  2. Aerodynamic shape optimization of wing and wing-body configurations using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.

  3. Texture mapping via optimal mass transport.

    PubMed

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.

  4. Semidefinite Relaxation-Based Optimization of Multiple-Input Wireless Power Transfer Systems

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-11-01

    An optimization procedure for multi-transmitter (MISO) wireless power transfer (WPT) systems based on tight semidefinite relaxation (SDR) is presented. This method ensures the physical realizability of MISO WPT systems designed via convex optimization -- a robust, semi-analytical and intuitive route to optimizing such systems. To that end, the nonconvex constraints requiring that power is fed into rather than drawn from the system via all transmitter ports are incorporated in a convex semidefinite relaxation, which is efficiently and reliably solvable by dedicated algorithms. A test of the solution then confirms that this modified problem is equivalent (tight relaxation) to the original (nonconvex) one and that the true global optimum has been found. This is a clear advantage over global optimization methods (e.g., genetic algorithms), where convergence to the true global optimum can be neither ensured nor tested. Discussions of numerical results yielded by both the closed-form expressions and the refined technique illustrate the importance and practicability of the new method. It is shown that this technique offers a rigorous optimization framework for a broad range of current and emerging WPT applications.
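
    A generic illustration of the SDR idea follows: the nonconvex problem max x^T A x subject to x^T B x = 1 is lifted to X = x x^T and relaxed by dropping the rank-one condition. If the SDP solution comes back (numerically) rank one, the relaxation is tight and the optimizer is recovered from the top eigenvector. The matrices are random stand-ins, not a WPT port model.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.normal(size=(n, n))
A = A + A.T                      # symmetric objective matrix
B = np.eye(n)                    # normalization constraint matrix

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)),
                  [X >> 0, cp.trace(B @ X) == 1])
prob.solve()

w, V = np.linalg.eigh(X.value)
print("rank-1 tightness check:", w[-1] / w.sum())  # ~1 means tight
x_opt = np.sqrt(w[-1]) * V[:, -1]                  # recovered optimizer
```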

  5. Quantum mechanical energy-based screening of combinatorially generated library of tautomers. TauTGen: a tautomer generator program.

    PubMed

    Harańczyk, Maciej; Gutowski, Maciej

    2007-01-01

    We describe a procedure for finding low-energy tautomers of a molecule. The procedure consists of (i) combinatorial generation of a library of tautomers, (ii) screening based on the results of geometry optimization of initial structures performed at the density functional level of theory, and (iii) final refinement of geometry for the top hits at the second-order Møller-Plesset level of theory, followed by single-point energy calculations at the coupled cluster level of theory with single, double, and perturbative triple excitations. The library of initial structures of various tautomers is generated with TauTGen, a tautomer generator program. The procedure proved successful for molecular systems for which common chemical knowledge is insufficient to predict the most stable structures.
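
    The combinatorial generation step can be illustrated with a toy enumerator that distributes mobile protons over candidate sites and ranks the resulting tautomers; the site labels and scoring function below are hypothetical placeholders for the quantum-chemical energies used in the actual screen.

```python
# Toy version of combinatorial tautomer generation: place n_protons over
# candidate heavy-atom sites, then rank candidates with a stand-in
# energy function (the paper uses DFT/MP2/CCSD(T) instead).
from itertools import combinations

sites = ["N1", "N3", "N7", "N9", "O2", "O4"]   # hypothetical labile sites
n_protons = 2

def energy(tautomer):
    """Placeholder scoring function; a real screen calls a QM code here."""
    penalty = {"N1": 0.0, "N3": 0.1, "N7": 0.3,
               "N9": 0.2, "O2": 0.5, "O4": 0.6}
    return sum(penalty[s] for s in tautomer)

library = [frozenset(c) for c in combinations(sites, n_protons)]
ranked = sorted(library, key=energy)
for t in ranked[:3]:
    print(sorted(t), round(energy(t), 2))
```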

  6. The Social Construction of "Evidence-Based" Drug Prevention Programs: A Reanalysis of Data from the Drug Abuse Resistance Education (DARE) Program

    ERIC Educational Resources Information Center

    Gorman, Dennis M.; Huber, J. Charles, Jr.

    2009-01-01

    This study explores the possibility that any drug prevention program might be considered "evidence-based" given the use of data analysis procedures that optimize the chance of producing statistically significant results by reanalyzing data from a Drug Abuse Resistance Education (DARE) program evaluation. The analysis produced a number of…

  7. A vibration-based health monitoring program for a large and seismically vulnerable masonry dome

    NASA Astrophysics Data System (ADS)

    Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.

    2017-05-01

    Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. The relatively low excitation energy provided by traffic, wind, and other ambient sources is usually sufficient to detect structural changes, such as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially when complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world's largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on the signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on the dynamic parameters are finally reported.

  8. Optimal Parameters for Intervertebral Disk Resection Using Aqua-Plasma Beams.

    PubMed

    Yoon, Sung-Young; Kim, Gon-Ho; Kim, Yushin; Kim, Nack Hwan; Lee, Sangheon; Kawai, Christina; Hong, Youngki

    2018-06-14

    A minimally invasive procedure for intervertebral disk resection using plasma beams has been developed. Conventional parameters for the plasma procedure, such as voltage and tip speed, rely mainly on the surgeon's personal experience, without adequate experimental evidence. Our objective was to determine the optimal parameters for plasma disk resection. The rate of ablation was measured at different procedural tip speeds and voltages using porcine nuclei pulposi. The amount of heat formation under the experimental conditions was also measured to evaluate the thermal safety of the plasma procedure. The ablation rate increased at slower procedural speeds and higher voltages. However, for thermal safety, the optimal parameters for plasma procedures with minimal tissue damage were an electrical output of 280 volts root-mean-square (Vrms) and a procedural tip speed of 2.5 mm/s. Our findings provide useful information for an effective and safe plasma procedure for disk resection in a clinical setting.

  9. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and by the estimation of the plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of an internal reference line and an optimized estimation of the plasma temperature is proposed. The internal reference line of each species is automatically selected from the analytical lines by a programmable procedure using easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is accounted for during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature, starting from the Boltzmann plot result. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of the calculated Cr, Ni, and Fe concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement over the classical CF-LIBS method and the method's promising potential for in situ and real-time applications.
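
    The Boltzmann-plot estimate that seeds the PSO refinement reduces to a straight-line fit: ln(Iλ/(gA)) plotted against the upper-level energy has slope -1/(k_B T). A self-contained sketch on synthetic line data (all line parameters below are invented for illustration):

```python
# Boltzmann-plot temperature estimate: fit ln(I*lambda/(g*A)) against
# upper-level energy E; the slope is -1/(k_B * T). Data are synthetic,
# generated at T = 10,000 K, so the fit should recover that value.
import numpy as np

kB_eV = 8.617e-5                      # Boltzmann constant, eV/K
T_true = 10_000.0
E = np.array([2.0, 2.5, 3.1, 3.9, 4.4])          # upper-level energies, eV
gA = np.array([1e8, 5e7, 2e8, 8e7, 3e8])         # g_k * A_ki, 1/s
lam = np.array([400., 420., 450., 480., 510.])   # wavelengths, nm

I = gA / lam * np.exp(-E / (kB_eV * T_true))     # synthetic intensities
y = np.log(I * lam / gA)
slope, _ = np.polyfit(E, y, 1)
print(f"estimated T = {-1.0 / (kB_eV * slope):.0f} K")
```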

  10. Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach

    NASA Astrophysics Data System (ADS)

    Wang, Li; Lu, Zhong-Rong

    2017-05-01

    This paper aims to identify the parameters of the Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure: the identification of model parameters is treated as an optimization problem with a nonlinear least-squares objective function. The enhanced response sensitivity approach, which has been shown to be convergent and well suited to such problems, is then adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
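
    A minimal version of such an inverse identification loop, assuming a single-degree-of-freedom Bouc-Wen oscillator and synthetic "measured" data (the model constants and excitation below are illustrative choices, not the paper's test cases):

```python
# Identification sketch: simulate "measured" displacements from a SDOF
# Bouc-Wen oscillator, then recover (alpha, beta, gamma) by nonlinear
# least squares, mirroring the inverse procedure in the abstract.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

m, c, k, A_bw, n_bw = 1.0, 0.1, 1.0, 1.0, 1.0
t = np.linspace(0, 20, 400)
force = lambda s: np.sin(1.2 * s)

def simulate(alpha, beta, gamma):
    def rhs(s, y):
        x, v, z = y
        zdot = A_bw * v - beta * abs(v) * abs(z) ** (n_bw - 1) * z \
               - gamma * v * abs(z) ** n_bw
        a = (force(s) - c * v - alpha * k * x - (1 - alpha) * k * z) / m
        return [v, a, zdot]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0, 0, 0], t_eval=t, rtol=1e-8)
    return sol.y[0]

x_meas = simulate(0.5, 0.5, 0.5)            # synthetic measurement

res = least_squares(lambda p: simulate(*p) - x_meas,
                    x0=[0.3, 0.8, 0.2], bounds=([0, 0, 0], [1, 2, 2]))
print("identified (alpha, beta, gamma):", np.round(res.x, 3))
```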

  11. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid method combines PSO with adaptive processing and support vector regression (SVR). The optimization technique involves setting the parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is grounded in statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism for the kernel parameter setting of the SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
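
    The hybrid structure can be sketched with a hand-rolled PSO loop tuning scikit-learn's SVR hyperparameters on a synthetic calibration curve; the swarm coefficients, data, and search ranges are assumptions for illustration.

```python
# Minimal PSO wrapper around scikit-learn's SVR, echoing the hybrid
# PSO-ASVR idea: the swarm searches (C, gamma, epsilon) that minimize
# cross-validated fitting error on a synthetic calibration curve.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 80)[:, None]                        # source radiance (a.u.)
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(80)  # radiometer response

def fitness(p):
    C, gamma, eps = np.exp(p)                 # search in log space
    svr = SVR(C=C, gamma=gamma, epsilon=eps)
    return -cross_val_score(svr, X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()

n_particles, dim, iters = 12, 3, 30
pos = rng.uniform(-3, 3, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (C, gamma, epsilon):", np.round(np.exp(gbest), 4))
```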

  12. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG) and an extensively validated CFD code. Then, the sensitivities computed with the present method are compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked-arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.

  13. A comparative study of coarse-graining methods for polymeric fluids: Mori-Zwanzig vs. iterative Boltzmann inversion vs. stochastic parametric optimization

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em

    2016-07-01

    We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.

  14. A comparative study of coarse-graining methods for polymeric fluids: Mori-Zwanzig vs. iterative Boltzmann inversion vs. stochastic parametric optimization.

    PubMed

    Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em

    2016-07-28

    We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.

  15. A study of enhancing critical current densities (J(sub c)) and critical temperature (T(sub c)) of high-temperature superconductors

    NASA Technical Reports Server (NTRS)

    Vlasse, Marcus

    1992-01-01

    The development of pure phase 123 and Bi-based 2223 superconductors has been optimized. The pre-heat processing appears to be a very important parameter in achieving optimal physical properties. The synthesis of pure phases in the Bi-based system involves effects due to oxygen partial pressure, time, and temperature. Orientation/melt-sintering effects include the extreme c-axis orientation of Yttrium 123 and Bismuth 2223, 2212, and 2201 phases. This orientation is conducive to increasing critical currents. A procedure was established to substitute Sr for Ba in Y-123 single crystals.

  16. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest and thus a closure rank-deficiency problem does not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, based on the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based, and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and the optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
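
    A concentration-based fit of this kind can be sketched by integrating the kinetic model and regressing the absorbance residual; the rate constants, absorptivities, and noise level below are synthetic stand-ins for the measured data.

```python
# Sketch of the concentration-based fitting route: integrate the kinetic
# model A + B -> I -> P, predict absorbance from the absorbing species
# (A is non-absorbing, as for o-ABA in the paper), and fit (k1, k2) by
# nonlinear least squares. All numbers are synthetic.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t = np.linspace(0, 60, 120)
A0, B0 = 1.0e-3, 1.2e-3                  # initial concentrations, mol/L
eps_I, eps_P = 800.0, 1500.0             # molar absorptivities, one wavelength

def absorbance(k1, k2):
    def rhs(s, y):
        a, b, i, p = y
        r1 = k1 * a * b                  # second-order first step
        return [-r1, -r1, r1 - k2 * i, k2 * i]
    sol = solve_ivp(rhs, (t[0], t[-1]), [A0, B0, 0, 0], t_eval=t, rtol=1e-9)
    return eps_I * sol.y[2] + eps_P * sol.y[3]

rng = np.random.default_rng(2)
y_meas = absorbance(150.0, 0.08) + 1e-4 * rng.standard_normal(t.size)

fit = least_squares(lambda p: absorbance(*p) - y_meas, x0=[50.0, 0.02],
                    bounds=([0, 0], [1e4, 10]))
print("estimated k1, k2:", np.round(fit.x, 4))
```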

  17. A Complete Procedure for Predicting and Improving the Performance of HAWT's

    NASA Astrophysics Data System (ADS)

    Al-Abadi, Ali; Ertunç, Özgür; Sittig, Florian; Delgado, Antonio

    2014-06-01

    A complete procedure for predicting and improving the performance of horizontal axis wind turbines (HAWTs) has been developed. The first process is predicting the power extracted by the turbine and the resulting rotor torque, which should be identical to that of the drive unit. The BEM method and a developed post-stall treatment for resolving stall-regulated HAWTs are incorporated in the prediction. For that purpose, a modified stall-regulated prediction model, which can predict HAWT performance over the operating range of oncoming wind velocity, is derived from existing models. The model involves radius and chord, which makes it more general for predicting the performance of HAWTs of different scales and rotor shapes. The second process is modifying the rotor shape by an optimization process, which can be applied to any existing HAWT, to improve its performance. A gradient-based optimization is used to adjust the chord and twist angle distributions of the rotor blade to increase the power extraction while keeping the drive torque constant, so that the same drive unit can be kept. The final process is testing the modified turbine to predict its enhanced performance. The procedure is applied to the NREL Phase VI 10 kW turbine as a baseline. The study has proven the applicability of the developed model in predicting the performance of the baseline as well as the optimized turbine. In addition, the optimization method has shown that the power coefficient can be increased while keeping the same design rotational speed.

  18. Energy-Efficient Next-Generation Passive Optical Networks Based on Sleep Mode and Heuristic Optimization

    NASA Astrophysics Data System (ADS)

    Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik

    2015-05-01

    In this article, an energy-efficiency mechanism for next-generation passive optical networks (NG-PONs) is investigated through heuristic particle swarm optimization. The NG-PON architecture considered here combines a legacy 10-gigabit Ethernet PON with wavelength division multiplexing and optical code division multiplexing (OCDM), with the advantage of using only a single en/decoder pair of OCDM technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism combines the sleep-mode power-saving scheme of the 10-gigabit Ethernet PON with a power control procedure that adjusts the transmitted power of the active optical network units to maximize overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power at each optical network unit according to the network's pre-defined quality-of-service requirements. The objective is to control the power consumption of each optical network unit according to traffic demand by adjusting its transmitter power, maximizing the number of transmitted bits with minimum energy consumption and thereby achieving maximal system energy efficiency. Numerical results reveal that the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism saves 75% of energy consumption, compared to 55% energy savings when only a sleep-mode-based mechanism is deployed.

  19. Reconstruction after complex facial trauma: achieving optimal outcome through multiple contemporary surgeries.

    PubMed

    Jaiswal, Rohit; Pu, Lee L Q

    2013-04-01

    Major facial trauma often requires complex repair. Traditionally, the reconstruction of such injuries has relied primarily on free tissue transfer. However, the advent of newer, contemporary procedures may allow reconstructive improvement through the use of complementary procedures after free flap reconstruction. An 18-year-old male patient suffered a major left facial degloving injury resulting in a soft-tissue defect with exposed zygoma and parietal bone. Multiple operations were undertaken in a staged manner for reconstruction. A state-of-the-art free anterolateral thigh (ALT) perforator flap and Medpor implant reconstruction of the midface were initially performed, followed by flap debulking, lateral canthopexy, midface lift with redo canthopexy, scalp tissue expansion for hairline reconstruction, and epidermal skin grafting for optimal skin color matching. Over a follow-up period of 2 years, a good and impressive reconstructive result was achieved through the use of multiple contemporary reconstructive procedures following an excellent free ALT flap reconstruction. Multiple staged reconstructions were essential in producing an optimal outcome in this complex facial injury, an outcome that would likely not have been achieved through a single-stage traditional free flap reconstruction. Utilizing multiple, sequential contemporary surgeries may substantially improve outcomes through the enhancement and refinement of results built on the best possible initial soft-tissue reconstruction.

  20. Design of materials with prescribed nonlinear properties

    NASA Astrophysics Data System (ADS)

    Wang, F.; Sigmund, O.; Jensen, J. S.

    2014-09-01

    We systematically design materials using topology optimization to achieve prescribed nonlinear properties under finite deformation. Instead of a formal homogenization procedure, a numerical experiment is proposed to evaluate the material performance in longitudinal and transverse tensile tests under finite deformation, i.e. stress-strain relations and Poisson's ratio. By minimizing errors between actual and prescribed properties, materials are tailored to achieve the target. Both two-dimensional (2D) truss-based and continuum materials are designed with various prescribed nonlinear properties. The numerical examples illustrate optimized materials with rubber-like behavior and also optimized materials with extreme strain-independent Poisson's ratio for axial strain intervals of ε_i ∈ [0.00, 0.30].

  1. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
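
    The core loop (build a metamodel, add its current minimizer as a new sample, rebuild) can be sketched with SciPy's RBF interpolator; the test function below is a stand-in for the expensive simulation, and the density-function points of the full method are omitted for brevity.

```python
# Bare-bones sequential sampling: fit an RBF metamodel, locate its
# minimum by multi-start local search, and add that point as a new
# expensive sample before refitting.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive(x):                       # stand-in for a costly simulation
    return (x[0] - 0.3) ** 2 + np.sin(5 * x[0])

X = np.linspace(0.0, 1.0, 4)[:, None]   # initial design, 1D for clarity
y = np.array([expensive(x) for x in X])

for _ in range(8):
    model = RBFInterpolator(X, y)
    # Multi-start search for the metamodel minimum (an extremum point)
    starts = np.linspace(0, 1, 10)
    best = min((minimize(lambda z: model(z[None, :])[0], np.array([s]),
                         bounds=[(0, 1)]) for s in starts),
               key=lambda r: r.fun)
    x_new = best.x
    if np.linalg.norm(X - x_new, axis=1).min() < 1e-8:
        break                           # new point duplicates an old sample
    X = np.vstack([X, x_new])
    y = np.append(y, expensive(x_new))

print("best sampled point:", X[y.argmin()].round(3), "value:", round(y.min(), 3))
```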

  2. Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms

    PubMed Central

    Yang, Fan; Xiao, Deyun; Shah, Sirish L.

    2009-01-01

    To improve fault detection reliability, sensor locations should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires minimizing the probabilities of missed detections and false alarms due to random factors in sensor readings, which are related not only to the sensor readings themselves but also to fault propagation. This paper introduces reliability criteria expressed in terms of the missed/false alarm probability of each sensor and the system topology or connectivity derived from the directed graph. A heuristic procedure is presented as the algorithm for the optimization problem. Finally, the proposed method is illustrated on a boiler system. PMID:22291524

  3. Automated optimization of water-water interaction parameters for a coarse-grained model.

    PubMed

    Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A

    2014-02-13

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions, incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, dielectric permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observations. We found very good performance of the optimization procedure and good agreement of the model with experiment.
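
    The overall shape of such a framework is a simplex search wrapped around a property evaluator; in the sketch below, observables() is a toy closed-form stand-in for the MD simulation and analysis that a framework like ParOpt would actually drive, and the targets and weights are illustrative.

```python
# Nelder-Mead parameter optimization against a weighted multi-property
# target function; replace observables() with an MD run plus analysis.
import numpy as np
from scipy.optimize import minimize

targets = {"density": 997.0, "surface_tension": 71.97,
           "permittivity": 78.4, "diffusion": 2.3e-9}
weights = {"density": 1.0, "surface_tension": 1.0,
           "permittivity": 0.5, "diffusion": 1.0}

def observables(params):
    """Toy closed-form mapping from force-field parameters to predicted
    properties (a stand-in for simulation output)."""
    a, b = params
    return {"density": 990 + 10 * a, "surface_tension": 70 + 3 * b,
            "permittivity": 75 + 4 * a * b, "diffusion": 2.0e-9 + 4e-10 * b}

def target_function(params):
    obs = observables(params)
    return sum(weights[k] * ((obs[k] - targets[k]) / targets[k]) ** 2
               for k in targets)

res = minimize(target_function, x0=[0.5, 0.5], method="Nelder-Mead")
print("optimized parameters:", np.round(res.x, 3))
```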

  4. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational effort required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA 0012) airfoil in inviscid transonic flow at zero angle of attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
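
    The benefit of preconditioning a conjugate gradient iteration is easy to demonstrate on a generic sparse system; the sketch below compares iteration counts on a 2D Laplacian with and without an incomplete-LU preconditioner (a stand-in linear-algebra example, not the aerodynamic solver itself).

```python
# Why preconditioning pays off: CG iteration counts on a 2D Poisson
# system, plain versus ILU-preconditioned.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 40
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()    # 2D Laplacian
b = np.ones(A.shape[0])

def count_iters(M=None):
    it = {"n": 0}
    def cb(xk):
        it["n"] += 1
    spla.cg(A, b, M=M, callback=cb)
    return it["n"]

ilu = spla.spilu(A, drop_tol=1e-3)             # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)
print("plain CG iterations:         ", count_iters())
print("preconditioned CG iterations:", count_iters(M))
```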

  5. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for the reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating reliability probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, and is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for the RBDO of reinforced concrete structures.

  6. Robust active noise control in the loadmaster area of a military transport aircraft.

    PubMed

    Kochan, Kay; Sachau, Delf; Breitbach, Harald

    2011-05-01

    The active noise control (ANC) method is based on the superposition of a disturbance noise field with a second anti-noise field using loudspeakers and error microphones. This method can be used to reduce the noise level inside the cabin of a propeller aircraft. However, during the design process of the ANC system, extensive measurements of transfer functions are necessary to optimize the loudspeaker and microphone positions. Sometimes, the transducer positions have to be tailored according to the optimization results to achieve sufficient noise reduction. The purpose of this paper is to introduce a controller design method for such narrow-band ANC systems. The method can be seen as an extension of common transducer placement optimization procedures. In the presented method, individual weighting parameters for the loudspeakers and microphones are used. With this procedure, the tailoring of the transducer positions is replaced by the adjustment of controller parameters. Moreover, the ANC system is robust because the uncertainties are considered during the optimization of the controller parameters. The paper describes the necessary theoretical background for the method and demonstrates its efficiency in an acoustical mock-up of a military transport aircraft.

  7. Constructing diabatic representations using adiabatic and approximate diabatic data – Coping with diabolical singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiaolei, E-mail: virtualzx@gmail.com; Yarkony, David R., E-mail: yarkony@jhu.edu

    2016-01-28

    We have recently introduced a diabatization scheme, which simultaneously fits and diabatizes adiabatic ab initio electronic wave functions, Zhu and Yarkony J. Chem. Phys. 140, 024112 (2014). The algorithm uses derivative couplings in the defining equations for the diabatic Hamiltonian, H^d, and fits all its matrix elements simultaneously to adiabatic state data. This procedure ultimately provides an accurate, quantifiably diabatic, representation of the adiabatic electronic structure data. However, optimizing the large number of nonlinear parameters in the basis functions and adjusting the number and kind of basis functions from which the fit is built, which provide the essential flexibility, has proved challenging. In this work, we introduce a procedure that combines adiabatic state and diabatic state data to efficiently optimize the nonlinear parameters and basis function expansion. Further, we consider using direct properties based diabatizations to initialize the fitting procedure. To address this issue, we introduce a systematic method for eliminating the debilitating (diabolical) singularities in the defining equations of properties based diabatizations. We exploit the observation that if approximate diabatic data are available, the commonly used approach of fitting each matrix element of H^d individually provides a starting point (seed) from which convergence of the full H^d construction algorithm is rapid. The optimization of nonlinear parameters and basis functions and the elimination of debilitating singularities are, respectively, illustrated using the 1,2,3,4 ^1A states of phenol and the 1,2 ^1A states of NH_3, states which are coupled by conical intersections.

  8. Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Mantay, Wayne R.

    1989-01-01

    Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.

  9. Catheter-based closure of the patent ductus arteriosus in lower weight infants.

    PubMed

    Pavlek, Leeann R; Slaughter, Jonathan L; Berman, Darren P; Backes, Carl H

    2018-06-13

    Risks associated with drug therapy and surgical ligation have led health care providers to consider alternative strategies for patent ductus arteriosus (PDA) closure. Catheter-based PDA closure is the procedure of choice for ductal closure in adults, children, and infants ≥6 kg. Given evidence among older counterparts, interest in catheter-based closure of the PDA in lower weight (<6 kg) infants is growing. Among these smaller infants, the goals of this review are to: (1) provide an overview of the procedure; (2) review the types of PDA closure devices; (3) review the technical success (feasibility); (4) review the risks (safety profile); (5) discuss the quality of evidence on procedural efficacy; and (6) consider areas for future research. The review provided herein suggests that catheter-based PDA closure is technically feasible, but the lack of comparative trials precludes determination of the optimal strategy for ductal closure in this subgroup of infants.

  10. Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained on a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
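
    The train-then-optimize pattern can be sketched with a small scikit-learn MLP as the surrogate and a gradient-based search over the rigging parameters; the synthetic lift response and parameter scaling below are assumptions for illustration.

```python
# Neural-net-in-the-loop sketch: train a small MLP on precomputed
# (rigging parameters -> lift) data, then hand the cheap surrogate to a
# gradient-based optimizer. Data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (300, 3))         # flap deflection, gap, overlap (scaled)
y = 1.0 - (X[:, 0] - 0.2) ** 2 - 0.5 * (X[:, 1] + 0.1) ** 2 \
      - 0.3 * X[:, 2] ** 2               # synthetic lift response

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, y)

# Maximize predicted lift over the scaled rigging parameters.
res = minimize(lambda p: -net.predict(p[None, :])[0],
               x0=np.zeros(3), bounds=[(-1, 1)] * 3, method="L-BFGS-B")
print("surrogate-optimal rigging:", np.round(res.x, 2))
```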

  11. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

    This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.

  12. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient, and reliable structural designs rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the fields of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of expert system modules for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software so developed, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies and used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.

  13. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Astrophysics Data System (ADS)

    1992-04-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient, and reliable structural designs rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the fields of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of expert system modules for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software so developed, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies and used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.

  14. Parametric geometric model and hydrodynamic shape optimization of a flying-wing structure underwater glider

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao

    2017-12-01

    Combining high-precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time required for a high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation, and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiments (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The particle swarm optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing-structure underwater glider increases by 9.1%.

  15. Optimization of end-pumped, actively Q-switched quasi-III-level lasers.

    PubMed

    Jabczynski, Jan K; Gorajek, Lukasz; Kwiatkowski, Jacek; Kaskow, Mateusz; Zendzian, Waldemar

    2011-08-15

    A new model of an end-pumped quasi-three-level laser, accounting for transient pumping processes, ground-state depletion, and up-conversion effects, was developed. The model consists of two parts, a pumping stage and a Q-switched stage, which can be separated in the case of the active Q-switching regime. For the pumping stage, a semi-analytical model was developed, enabling calculation of the final occupation of the upper laser level for a given pump power and duration, spatial profile of the pump beam, and length and dopant level of the gain medium. For quasi-stationary inversion, an optimization procedure for the Q-switching regime based on the Lagrange multiplier technique was developed. A new approach for the optimization of the CW regime of quasi-three-level lasers was developed to optimize Q-switched lasers operating at high repetition rates. Both optimization methods enable calculation of the optimal absorbance of the gain medium and the output losses for a given pump rate.

  16. Virtually optimized insoles for offloading the diabetic foot: A randomized crossover study.

    PubMed

    Telfer, S; Woodburn, J; Collier, A; Cavanagh, P R

    2017-07-26

    Integration of objective biomechanical measures of foot function into the design process for insoles has been shown to provide enhanced plantar tissue protection for individuals at risk of plantar ulceration. The use of virtual simulations utilizing numerical modeling techniques offers a potential approach to further optimize these devices. In a patient population at risk of foot ulceration, we aimed to compare the pressure offloading performance of insoles that were optimized via numerical simulation techniques against shape-based devices. Twenty participants with diabetes and at-risk feet were enrolled in this study. Three pairs of personalized insoles were provided: one based on shape data and manufactured via direct milling, and two based on a design derived from shape, pressure, and ultrasound data that underwent a finite element analysis-based virtual optimization procedure. Of the latter set of insole designs, one pair was manufactured via direct milling and the second pair through 3D printing. The offloading performance of the insoles was analyzed for forefoot regions identified as having elevated plantar pressures. In 88% of the regions of interest, the use of virtually optimized insoles resulted in lower peak plantar pressures compared to the shape-based devices. Overall, the virtually optimized insoles significantly reduced peak pressures by a mean of 41.3 kPa (p<0.001, 95% CI [31.1, 51.5]) for milled and 40.5 kPa (p<0.001, 95% CI [26.4, 54.5]) for printed devices compared to shape-based insoles. The integration of virtual optimization into the insole design process resulted in improved offloading performance compared to standard, shape-based devices. Trial registration: ISRCTN19805071, www.ISRCTN.org.

  17. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.

  18. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method

    PubMed Central

    Korvink, Jan G.

    2016-01-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable. PMID:27279766

  19. Optimization of Residual Stresses in MMC's through Process Parameter Control and the use of Heterogeneous Compensating/Compliant Interfacial Layers. OPTCOMP2 User's Guide

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.

    1996-01-01

    A user's guide for the computer program OPTCOMP2 is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in unidirectional metal matrix composites subjected to combined thermomechanical axisymmetric loading by altering the processing history, as well as through the microstructural design of interfacial fiber coatings. The user specifies the initial architecture of the composite and the load history, with the constituent materials being elastic, plastic, viscoplastic, or as defined by the 'user-defined' constitutive model, in addition to the objective function and constraints, through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the inelastic response of a fiber/interface layer(s)/matrix concentric cylinder model where the interface layers can be either homogeneous or heterogeneous. The response of heterogeneous layers is modeled using Aboudi's three-dimensional method of cells micromechanics model. The commercial optimization package DOT is used for the nonlinear optimization problem. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.

  20. Optimization-Based Image Reconstruction with Artifact Reduction in C-Arm CBCT

    PubMed Central

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-01-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g., data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility. PMID:27694700
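
    The Chambolle-Pock primal-dual iteration at the heart of the reconstruction can be shown on the simplest member of this problem family, TV-regularized denoising; the image, noise level, and step sizes below are illustrative, and the full CBCT program adds the data-divergence and derivative-fidelity terms described in the abstract.

```python
# Chambolle-Pock for min_x 0.5*||x - f||^2 + lam*TV(x): alternate a
# projected dual ascent on p with a proximal primal descent on x.
import numpy as np

def grad(u):                               # forward differences
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):                           # negative adjoint of grad
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]; d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]; d[1:, :] -= py[:-1, :]
    return d

rng = np.random.default_rng(4)
f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
f += 0.1 * rng.standard_normal(f.shape)    # noisy "measurement"

lam, tau, sigma = 0.15, 0.25, 0.5          # sigma*tau*L^2 <= 1, L^2 = 8
x = f.copy(); x_bar = x.copy()
px = np.zeros_like(f); py = np.zeros_like(f)

for _ in range(200):
    gx, gy = grad(x_bar)
    px, py = px + sigma * gx, py + sigma * gy
    norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)  # project |p| <= lam
    px, py = px / norm, py / norm
    x_old = x
    x = (x + tau * div(px, py) + tau * f) / (1.0 + tau)   # prox of 0.5||x-f||^2
    x_bar = 2 * x - x_old

print("std of removed residual:", round(float((x - f).std()), 3))
```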

  1. Optimization-based image reconstruction with artifact reduction in C-arm CBCT

    NASA Astrophysics Data System (ADS)

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-10-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g. data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility.

  2. Paired-Associate and Feedback-Based Weather Prediction Tasks Support Multiple Category Learning Systems.

    PubMed

    Li, Kaiyun; Fu, Qiufang; Sun, Xunwei; Zhou, Xiaoyan; Fu, Xiaolan

    2016-01-01

    It remains unclear whether probabilistic category learning in the feedback-based weather prediction task (FB-WPT) can be mediated by a non-declarative or procedural learning system. To address this issue, we compared the effects of training time and verbal working memory, which influence the declarative learning system but not the non-declarative learning system, in the FB and paired-associate (PA) WPTs, as the PA task recruits a declarative learning system. The results of Experiment 1 showed that the optimal accuracy in the PA condition was significantly decreased when the training time was reduced from 7 to 3 s, but this did not occur in the FB condition, although shortened training time impaired the acquisition of explicit knowledge in both conditions. The results of Experiment 2 showed that the concurrent working memory task impaired the optimal accuracy and the acquisition of explicit knowledge in the PA condition but did not influence the optimal accuracy or the acquisition of self-insight knowledge in the FB condition. The apparent dissociation results between the FB and PA conditions suggested that a non-declarative or procedural learning system is involved in the FB-WPT and provided new evidence for the multiple-systems theory of human category learning.

  3. Paired-Associate and Feedback-Based Weather Prediction Tasks Support Multiple Category Learning Systems

    PubMed Central

    Li, Kaiyun; Fu, Qiufang; Sun, Xunwei; Zhou, Xiaoyan; Fu, Xiaolan

    2016-01-01

    It remains unclear whether probabilistic category learning in the feedback-based weather prediction task (FB-WPT) can be mediated by a non-declarative or procedural learning system. To address this issue, we compared the effects of training time and verbal working memory, which influence the declarative learning system but not the non-declarative learning system, in the FB and paired-associate (PA) WPTs, as the PA task recruits a declarative learning system. The results of Experiment 1 showed that the optimal accuracy in the PA condition was significantly decreased when the training time was reduced from 7 to 3 s, but this did not occur in the FB condition, although shortened training time impaired the acquisition of explicit knowledge in both conditions. The results of Experiment 2 showed that the concurrent working memory task impaired the optimal accuracy and the acquisition of explicit knowledge in the PA condition but did not influence the optimal accuracy or the acquisition of self-insight knowledge in the FB condition. The apparent dissociation results between the FB and PA conditions suggested that a non-declarative or procedural learning system is involved in the FB-WPT and provided new evidence for the multiple-systems theory of human category learning. PMID:27445958

  4. Extraction-spectrophotometric determination of tris(2-chloroethyl)amine using phthaleins.

    PubMed

    Rozsypal, Tomas; Halamek, Emil

    2017-06-01

    Procedures for the extraction-spectrophotometric determination of tris(2-chloroethyl)amine, an alkylating agent known as a drug as well as a chemical warfare agent (nitrogen mustard HN-3), with 7 acid-base indicators of the triphenylmethane lactone type, phthaleins, were developed. Representatives of phthaleins without an oxygen bridge (thymolphthalein, o-cresolphthalein, naphtholphthalein) and with an oxygen bridge (fluorescein, 2',7'-dichlorofluorescein, eosin B and eosin Y) were used. The methods were based on the formation of ion pair complexes. Chloroform was used as a non-polar solvent for extraction. The determination conditions were optimized with respect to the buffer pH and the concentration of the phthalein reagent. The dependence on the reaction time in the water phase and the stoichiometry of the extraction products were studied. The detection and determination limits of the separate procedures, as well as the conditional extraction constants, were determined. A comparison was made with the spectrophotometric method for the group determination of alkyl halides and acyl halides using an alkaline ethanol-water solution of thymolphthalein, the so-called T-135 agent. In studying selectivity, the possible interference of bis(2-chloroethyl)sulphide and 3 nitrogen mustards with the proposed procedures was examined. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 (kW-based). Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and exhibits good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
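
    As a hedged illustration of the weighted-factor scalarization described above (not the paper's Q3D-based models), the toy sketch below combines two analytic stand-in objectives into one comprehensive objective and shows how shifting the weights steers the optimum:

```python
from scipy.optimize import minimize

# Analytic stand-ins for the two criteria (hypothetical, not the
# paper's Q3D-based hydraulic loss or cavitation models).
def hydraulic_loss(x):
    return (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2

def cavitation_coeff(x):
    return (x[1] - 0.8) ** 2 + 0.1 * x[0] ** 2

def comprehensive_objective(x, w):
    # Weighted-sum scalarization: the weight factors trade one
    # criterion against the other, as the abstract describes.
    return w[0] * hydraulic_loss(x) + w[1] * cavitation_coeff(x)

for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    res = minimize(comprehensive_objective, x0=[0.0, 0.0], args=(w,))
    print(w, res.x.round(3), round(res.fun, 4))
```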

  6. An enhanced performance through agent-based secure approach for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Bisen, Dhananjay; Sharma, Sanjeev

    2018-01-01

    This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc networks. In this approach, agent nodes are selected on the basis of an optimal node reliability factor. This factor is calculated from node performance features such as degree difference, normalised distance value, energy level, mobility and the optimal hello interval of the node. After selection of agent nodes, malicious behaviour detection is performed using a fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, comparative analysis is done with conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.

  7. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    PubMed

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

    Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing team processes, but its impact on behavioural outcomes may be moderated by other factors, such as task duration. The study proposed and evaluated a team-level integrated computer-based procedure, which presents team members' operation status and execution histories to one another. The experimental results show that compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  8. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess noteworthy shortcomings, such as entrapment of solutions in local extrema, limits on the number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for optimal design of MRBs, this paper proposes an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA). The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of the MRBs are treated as discrete variables in the optimization process. The obtained optimal designs are compared with available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
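
    A minimal sketch of the DE-plus-simulation pattern, assuming an analytic stand-in for the finite element evaluation (scipy's differential_evolution is used; the torque model, variable names and bounds are hypothetical):

```python
from scipy.optimize import differential_evolution

# Hypothetical analytic surrogate for the braking-torque FEA run that
# scores each candidate geometry in the actual study.
def braking_torque(radius, n_turns, gap):
    return 40.0 * radius ** 2 * n_turns / (1.0 + 2500.0 * gap)

def objective(x):
    radius, n_turns, gap = x[0], round(x[1]), x[2]
    # Rounding the coil-turn variable makes DE search a discrete grid,
    # mirroring the discrete design variables of a real-life MRB design.
    return -braking_torque(radius, n_turns, gap)   # maximize torque

bounds = [(0.02, 0.08),      # disc radius, m
          (50, 400),         # number of coil turns (discrete)
          (0.5e-3, 2e-3)]    # MR fluid gap, m

result = differential_evolution(objective, bounds, seed=1)
print(result.x, -result.fun)
```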

  9. Optimized separation procedures for the simultaneous assay of three plant hormones in liquid biofertilizers

    USDA-ARS?s Scientific Manuscript database

    The overuse of petrochemical-based synthetic fertilizers has caused detrimental effects to soil, water supplies, foods, and animal health. This, in addition to increased awareness of organic farming, has generated considerable interest in the evaluation of renewable biofertilizers. In order to stu...

  10. [Basic research on digital logistic management of hospital].

    PubMed

    Cao, Hui

    2010-05-01

    This paper analyzes and explores how the equipment department, general services department, supply room and other material flow departments in different hospitals can realize digital, information-based management, in order to optimize the procedures of information-based asset management. Various analytical methods for medical supplies business models are discussed, providing analytical data to support correct decisions by hospital departments, hospital leaders and the governing authorities.

  11. 78 FR 53237 - Establishment of Area Navigation (RNAV) Routes; Washington, DC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... ``Optimization of Airspace and Procedures in a Metroplex (OAPM)'' effort in that this rule did not include T.... The new routes support the Washington, DC Optimization of Airspace and Procedures in a Metroplex (OAPM...

  12. Optimizing experimental procedures for quantitative evaluation of crop plant performance in high throughput phenotyping systems

    PubMed Central

    Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas

    2015-01-01

    Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite to conduct reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify performance dynamics of several hundreds of plants at a time. Compared to small scale plant cultivations, HT systems have much higher demands, from a conceptual and a logistic point of view, on experimental design, as well as the actual plant cultivation conditions, and the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed that elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested, and optimized with regard to growth substrate, soil coverage, watering regime, experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system did match well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. It thereby provides guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data of phenotypic variation for a broad range of applications. PMID:25653655

  13. A predictive control framework for optimal energy extraction of wind farms

    NASA Astrophysics Data System (ADS)

    Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.

    2016-09-01

    This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost-effective way, and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time, providing feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two-turbine test case through simulations.
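
    The receding-horizon logic can be sketched in a few lines. The toy wake model and numbers below are assumptions (the paper uses a medium-fidelity farm model and adjoint gradients, whereas scipy here differences the cost numerically):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed two-turbine toy: the upstream induction a[0] creates a crude
# wake deficit in the inflow seen by the downstream machine.
def farm_power(a, u_inf):
    cp = lambda ax: 4.0 * ax * (1.0 - ax) ** 2     # actuator-disc coefficient
    u_wake = u_inf * (1.0 - 0.5 * a[0])
    return cp(a[0]) * u_inf ** 3 + cp(a[1]) * u_wake ** 3

H = 5                                              # prediction horizon (steps)
inflow = [8.0, 8.5, 9.0, 9.5, 10.0, 9.0, 8.0]      # assumed inflow forecast

applied = []
for k in range(len(inflow) - H):
    # Negative farm energy over the horizon; the paper obtains this
    # gradient with an adjoint, scipy differences it numerically here.
    cost = lambda x: -sum(farm_power(x[2 * i:2 * i + 2], inflow[k + i])
                          for i in range(H))
    res = minimize(cost, x0=np.full(2 * H, 0.25), bounds=[(0.0, 0.4)] * (2 * H))
    applied.append(res.x[:2])   # implement only the first step (receding horizon)
print(np.round(applied, 3))
```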

  14. Multiparameter optimization of mammography: an update

    NASA Astrophysics Data System (ADS)

    Jafroudi, Hamid; Muntz, E. P.; Jennings, Robert J.

    1994-05-01

    Previously in this forum we have reported the application of multiparameter optimization techniques to the design of a minimum-dose mammography system. The approach used a reference system to define the required physical imaging performance and the dose benchmark against which the optimized system's dose should be compared. During the course of implementing the resulting design in hardware suitable for laboratory testing, the state of the art in mammographic imaging changed, so that the original reference system, which did not have a grid, was no longer appropriate. A reference system with a grid was selected in response to this change, and at the same time the optimization procedure was modified to make it more general and to facilitate study of the optimized design under a variety of conditions. We report the changes in the procedure, and the results obtained using the revised procedure and the up-to-date reference system. Our results, which are supported by laboratory measurements, indicate that the optimized design can image small objects as well as the reference system while using only about 30% of the dose required by the reference system. Hardware meeting the specification produced by the optimization procedure and suitable for clinical use is currently under evaluation in the Diagnostic Radiology Department at the Clinical Center, NIH.

  15. Shortened OR time and decreased patient risk through use of a modular surgical instrument with artificial intelligence.

    PubMed

    Miller, David J; Nelson, Carl A; Oleynikov, Dmitry

    2009-05-01

    With a limited number of access ports, minimally invasive surgery (MIS) often requires the complete removal of one tool and reinsertion of another. Modular or multifunctional tools can be used to avoid this step. In this study, soft computing techniques are used to optimally arrange a modular tool's functional tips, allowing surgeons to deliver treatment of improved quality in less time, decreasing overall cost. The investigators watched University Medical Center surgeons perform MIS procedures (e.g., cholecystectomy and Nissen fundoplication) and recorded the procedures to digital video. The video was then used to analyze the types of instruments used, the duration of each use, and the function of each instrument. These data were aggregated with fuzzy logic techniques using four membership functions to quantify the overall usefulness of each tool. This allowed subsequent optimization of the arrangement of functional tips within the modular tool to decrease overall time spent changing instruments during simulated surgical procedures based on the video recordings. Based on a prototype and a virtual model of a multifunction laparoscopic tool designed by the investigators that can interchange six different instrument tips through the tool's shaft, the range of tool change times is approximately 11-13 s. Using this figure, estimated time savings for the procedures analyzed ranged from 2.5 to over 32 min, and on average, total surgery time can be reduced by almost 17% by using the multifunction tool.
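
    The fuzzy aggregation step can be sketched as follows; the membership shapes, weights and per-tip statistics are invented stand-ins for the four membership functions the investigators derived from the video data:

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function, a standard fuzzy-logic primitive.
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

# Invented per-tip statistics from the recordings: (uses per procedure,
# mean duration of use in minutes).
tips = {'grasper': (14, 3.0), 'hook': (9, 2.2),
        'scissors': (6, 1.0), 'dissector': (4, 0.8)}

def usefulness(uses, duration):
    # Two of the four membership functions, as an illustration: degree of
    # "frequently used" and degree of "used for long stretches".
    mu_freq = tri(uses, 0, 15, 30)
    mu_long = tri(duration, 0, 3, 6)
    return 0.6 * mu_freq + 0.4 * mu_long    # weighted aggregation (assumed)

ranked = sorted(tips, key=lambda t: -usefulness(*tips[t]))
print(ranked)   # most useful tips go in the quickest-access slots
```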

  16. Reconciling quality and cost: A case study in interventional radiology.

    PubMed

    Zhang, Li; Domröse, Sascha; Mahnken, Andreas

    2015-10-01

    To provide a method to calculate delay cost and examine the relationship between quality and total cost. The total cost, including capacity, supply and delay cost, for running an interventional radiology suite was calculated. The capacity cost, consisting of labour, lease and overhead costs, was derived based on expenses per unit time. The supply cost was calculated according to actual procedural material use. The delay cost and marginal delay cost, derived from queueing models, were calculated based on the waiting times of inpatients for their procedures. Quality improvement increased patient safety and maintained the outcome. The average daily delay costs were reduced from 1275 € to 294 €, and marginal delay costs from approximately 2000 € to 500 €, respectively. The one-time annual cost saved from the transfer of surgical to radiological procedures was approximately 130,500 €. The yearly delay cost saved was approximately 150,000 €. With increased revenue of 10,000 € in project phase 2, the yearly total cost saved was approximately 290,000 €. An optimal daily capacity of 4.2 procedures was determined. An approach for calculating delay cost toward optimal capacity allocation was presented. An overall quality improvement was achieved at reduced costs. • Improving quality in terms of safety, outcome, efficiency and timeliness reduces cost. • Mismatch of demand and capacity is detrimental to quality and cost. • Full system utilization with random demand results in long waiting periods and increased cost.
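
    The delay-cost idea can be illustrated with the simplest queueing model. The sketch below assumes an M/M/1 queue and invented cost rates (the paper's model and figures may differ), scanning capacity to expose the trade-off behind an optimal daily capacity:

```python
import numpy as np

def daily_delay_cost(lam, mu, wait_cost):
    """Expected daily delay cost from an M/M/1 queue (a simplifying
    assumption; the paper's queueing model may differ)."""
    if lam >= mu:
        return np.inf                      # unstable queue, unbounded waits
    wq = lam / (mu * (mu - lam))           # mean wait in queue, in days
    return lam * wq * wait_cost            # waiting patients x cost rate

lam, wait_cost, capacity_cost = 3.5, 400.0, 900.0   # assumed figures, EUR
for mu in np.arange(3.6, 6.01, 0.2):
    total = daily_delay_cost(lam, mu, wait_cost) + capacity_cost * mu
    print(f"capacity {mu:4.1f}/day -> total {total:8.1f} EUR")
```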

  17. On the effect of response transformations in sequential parameter optimization.

    PubMed

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and the Box-Cox transformations are able to improve the properties of the resulting distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
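
    A minimal sketch of the rank-transformation step on synthetic data, assuming a simple quadratic response surface in place of SPO's usual surrogate model:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 2))                   # algorithm parameters
y = np.exp(6.0 * X[:, 0]) + rng.lognormal(0.0, 1.0, 30)   # skewed responses

# Rank transformation: model the ranks instead of the raw, heavy-tailed
# responses, which symmetrizes the residuals before surrogate fitting.
y_rank = rankdata(y)

# Quadratic response surface as a simple stand-in for SPO's surrogate.
A = np.column_stack([np.ones(30), X, X ** 2, X[:, :1] * X[:, 1:]])
coef, *_ = np.linalg.lstsq(A, y_rank, rcond=None)
print(coef.round(2))
```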

  18. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into a genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.
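
    The check-and-amend step can be sketched generically. Here `is_safe` and `advance` stand in for the Petri-net-based one-step look-ahead policy and marking update, and the single-resource toy at the bottom is purely illustrative (a real repair would also handle the case where no safe gene remains):

```python
def amend(chromosome, is_safe, state, advance):
    """One-step look-ahead repair: whenever the next gene would violate
    the deadlock avoidance policy, swap in the earliest later safe gene.
    `is_safe` and `advance` are placeholders for the Petri-net policy
    and marking update."""
    seq = list(chromosome)
    for i in range(len(seq)):
        j = i
        while not is_safe(state, seq[j]):
            j += 1                         # look further down the sequence
        seq[i], seq[j] = seq[j], seq[i]
        state = advance(state, seq[i])
    return seq

# Single-shared-resource toy: part B consumes one free unit, part A
# releases one; the state is just the number of free units.
safe = lambda free, part: part == 'A' or free >= 1
step = lambda free, part: free + 1 if part == 'A' else free - 1
print(amend(['B', 'B', 'A', 'A'], safe, 1, step))   # -> ['B', 'A', 'B', 'A']
```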

  19. Research on theoretical optimization and experimental verification of minimum resistance hull form based on Rankine source method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-Ji; Zhang, Zhu-Xin

    2015-09-01

    To obtain a low-resistance, high-efficiency, energy-saving ship, a minimum-total-resistance hull form design method is studied based on the potential flow theory of wave-making resistance, with the effects of stern viscous separation taken into account. With the sum of wave resistance and viscous resistance as the objective function and the parameters of a B-spline function as design variables, mathematical models are built using the Nonlinear Programming Method (NLP), enforcing the basic displacement constraint and accounting for stern viscous separation. We developed proprietary ship-lines optimization procedures. Series 60 is used as the parent ship in the optimization design to obtain a theoretically improved ship (Series60-1). Drag tests of the improved ship (Series60-1) are then conducted to establish the actual minimum-total-resistance hull form.

  20. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic sequence initializes the fruit fly swarm location and replaces the distance expression used by the fruit flies to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
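
    A sketch of the chaotic-initialization ingredient, assuming the fully chaotic logistic map (function and parameter names are ours, not the paper's):

```python
import numpy as np

def chaotic_swarm_init(n_flies, dim, lower, upper, seed=0.37):
    """Initialize fruit fly positions from a logistic chaotic sequence,
    x_{k+1} = 4 x_k (1 - x_k), scaled into the search bounds."""
    x, samples = seed, np.empty(n_flies * dim)
    for i in range(samples.size):
        x = 4.0 * x * (1.0 - x)            # fully chaotic regime (r = 4)
        samples[i] = x
    return lower + samples.reshape(n_flies, dim) * (upper - lower)

swarm = chaotic_swarm_init(n_flies=20, dim=2, lower=-5.0, upper=5.0)
print(swarm[:3].round(3))
```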

  2. Fast Beam-Based BPM Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertsche, K.; Loos, H.; Nuhn, H.-D.

    2012-10-15

    The Alignment Diagnostic System (ADS) of the LCLS undulator system indicates that the 33 undulator quadrupoles have extremely high position stability over many weeks. However, beam trajectory straightness and lasing efficiency degrade more quickly than this. A lengthy Beam Based Alignment (BBA) procedure must be executed every two to four weeks to re-optimize the X-ray beam parameters. The undulator system includes RF cavity Beam Position Monitors (RFBPMs), several of which are utilized by an automatic feedback system to align the incoming electron-beam trajectory to the undulator axis. The beam trajectory straightness degradation has been traced to electronic drifts of the gain and offset of the BPMs used in the beam feedback system. To quickly recover the trajectory straightness, we have developed a fast beam-based procedure to recalibrate the BPMs. This procedure takes advantage of the high-precision monitoring capability of the ADS, which allows highly repeatable positioning of undulator quadrupoles. This report describes the ADS, the position stability of the LCLS undulator quadrupoles, and some results of the new recovery procedure.

  3. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform

    PubMed Central

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-01-01

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model, taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and variety of reference distances can be created without the need to use a physical gauge, thereby optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform. PMID:27869722

  4. Label-free offline versus online activity methods for nucleoside diphosphate kinase b using high performance liquid chromatography.

    PubMed

    Lima, Juliana Maria; Salmazo Vieira, Plínio; Cavalcante de Oliveira, Arthur Henrique; Cardoso, Carmen Lúcia

    2016-08-07

    Nucleoside diphosphate kinase from Leishmania spp. (LmNDKb) has recently been described as a potential drug target to treat leishmaniasis disease. Therefore, screening of LmNDKb ligands requires methodologies that mimic the conditions under which LmNDKb acts in biological systems. Here, we compare two label-free methodologies that could help screen LmNDKb ligands and measure NDKb activity: an offline LC-UV assay for soluble LmNDKb and an online two-dimensional LC-UV system based on LmNDKb immobilised on a silica capillary. The target enzyme was immobilised on the silica capillary via Schiff base formation (to give LmNDKb-ICER-Schiff) or affinity attachment (to give LmNDKb-ICER-His). Several aspects of the ICERs resulting from these procedures were compared, namely kinetic parameters, stability, and procedure steps. Both the LmNDKb immobilisation routes minimised the conformational changes and preserved the substrate binding sites. However, considering the number of steps involved in the immobilisation procedure, the cost of reagents, and the stability of the immobilised enzyme, immobilisation via Schiff base formation proved to be the optimal procedure.

  6. TU-D-201-07: Severity Indication in High Dose Rate Brachytherapy Emergency Response Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, K; Rustad, F

    Purpose: Understanding the corresponding dose to different staff during the High Dose Rate (HDR) brachytherapy emergency response procedure could help to develop an efficient and effective action strategy. In this study, a variation and risk analysis methodology was developed to simulate the HDR emergency response procedure based on severity indicators. Methods: A GammaMedplus iX HDR unit from Varian Medical Systems was used for this simulation. The emergency response procedure was decomposed based on risk management methods. Severity indexes were used to identify the impact of a risk occurrence at each step, including dose to patient and dose to operating staff, by varying the time, HDR source activity, distance from the source to patient and staff, and the actions taken. The actions in the 7 steps were to press the interrupt button, press the emergency shutoff switch, press the emergency button on the afterloader keypad, turn the emergency hand-crank, remove the applicator from the patient, disconnect the transfer tube and move the afterloader from the patient, and execute emergency surgical recovery. Results: Given accumulated times (in seconds) at the assumed 7 steps of 15, 5, 30, 15, 180, 120 and 1800, and an HDR source activity of 10 Ci, the accumulated doses in cGy to the patient at 1 cm distance were 188, 250, 625, 813, 3063, 4563 and 27063, and the accumulated exposures in rem to the operator outside the vault, at 1 m and at 10 cm distance were 0.0, 0.0, 0.1, 0.1, 22.6, 37.6 and 262.6. The variation was determined by the operators' actions at different times and distances from the HDR source. Conclusion: The time and dose were estimated for an HDR unit emergency response procedure, providing information for making optimal decisions during the emergency procedure. Further investigation would optimize and standardize the responses for other emergency procedures using a time-spatial-dose severity function.

  7. Helicopter Flight Procedures for Community Noise Reduction

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2017-01-01

    A computationally efficient, semiempirical noise model suitable for maneuvering flight noise prediction is used to evaluate the community noise impact of practical variations on several helicopter flight procedures typical of normal operations. Turns, "quick-stops," approaches, climbs, and combinations of these maneuvers are assessed. Relatively small variations in flight procedures are shown to cause significant changes to Sound Exposure Levels over a wide area. Guidelines are developed for helicopter pilots intended to provide effective strategies for reducing the negative effects of helicopter noise on the community. Finally, direct optimization of flight trajectories is conducted to identify low noise optimal flight procedures and quantify the magnitude of community noise reductions that can be obtained through tailored helicopter flight procedures. Physically realizable optimal turns and approaches are identified that achieve global noise reductions of as much as 10 dBA Sound Exposure Level.

  8. FOCuS: a metaheuristic algorithm for computing knockouts from genome-scale models for strain optimization.

    PubMed

    Mutturi, Sarma

    2017-06-27

    Although a handful of tools are available for constraint-based flux analysis to generate knockout strains, most of these are based on bilevel MIP or its modifications. However, metaheuristic approaches, which are known for their flexibility and scalability, have been less studied. Moreover, in the existing tools, sectioning of the search space to find optimal knockouts has not been considered. Herein, a novel computational procedure, termed FOCuS (Flower-pOllination coupled Clonal Selection algorithm), was developed to find the optimal reaction knockouts from a metabolic network that maximize the production of specific metabolites. FOCuS derives its benefits from the nature-inspired flower pollination algorithm and the artificial immune system-inspired clonal selection algorithm to converge to an optimal solution. To evaluate the performance of FOCuS, results were compared in selected case studies with those reported for both MIP-based and other metaheuristic-based tools. The results demonstrated the robustness of FOCuS irrespective of the size of the metabolic network and the number of knockouts. Moreover, sectioning of the search space, coupled with pooling of priority reactions based on their contribution to the objective function to generate a smaller search space, significantly reduced the computational time.

  9. Experiences at Langley Research Center in the application of optimization techniques to helicopter airframes for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta; Kvaternik, Raymond G.

    1991-01-01

    A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.

  10. A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.

    PubMed

    Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F

    2016-01-01

    Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non-informative normal priors on the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with mixtures of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement from using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs without stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials while addressing ethical concerns for patients enrolled in the trial.
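
    The conjugate normal-normal update at the heart of the procedure is easy to sketch; the allocation rule below is only a stand-in (a square-root tilt toward the better arm), not the paper's derived optimal target:

```python
import numpy as np

def posterior_log_hr(prior_mean, prior_var, obs_log_hr, obs_var):
    """Conjugate normal-normal update for the log hazard ratio."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_log_hr / obs_var)
    return post_mean, post_var

# Interim look: prior N(0, 1), observed log HR of -0.3 with variance 0.2.
m, v = posterior_log_hr(0.0, 1.0, obs_log_hr=-0.3, obs_var=0.2)

# Illustrative allocation: tilt randomisation toward the arm with the
# lower estimated hazard (square-root rule, an assumption).
p_treatment = 1.0 / (1.0 + np.sqrt(np.exp(m)))
print(round(m, 3), round(v, 3), round(p_treatment, 3))
```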

  11. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. The motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in a form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper introduces novel objectives for robustness and performance assessment for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. A regional pole placement method is presented with the aims of simplifying the controllers' structures and allowing their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with an additional admissible region for the optimized pole locations. The polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of unrelated criteria such as robust stability, the controllers' stability, and time-performance indexes of the closed loops. The design of the controllers and the multiobjective optimization procedure involve a set of objectives that are optimized simultaneously with a genetic algorithm, namely differential evolution. PMID:24987749

  12. Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process

    NASA Astrophysics Data System (ADS)

    Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.

    2015-08-01

    An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model, which was developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library, Scipy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to process productivity and product quality improvements.
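
    The constrained-optimization skeleton can be sketched with scipy in place of the ABAQUS-based pipeline; the reduced solidification model, numbers and variable names below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Invented reduced model: cooling-channel switch-on times x (s) set the
# local solidification times along the casting path; the study evaluates
# these with its ABAQUS process model instead.
def solidification_times(x):
    return np.array([20.0 + 0.8 * x[0], 32.0 + 0.6 * x[1], 45.0 + 0.5 * x[2]])

def objective(x, target_rate=2.0):
    rates = 60.0 / solidification_times(x)       # proxy cooling rates, K/s
    return np.sum((rates - target_rate) ** 2)    # target a specific rate

# Directional solidification: each section must freeze before the one
# behind it, expressed as inequality constraints t_{i+1} - t_i >= 1 s.
cons = [{'type': 'ineq',
         'fun': lambda x, i=i: solidification_times(x)[i + 1]
                               - solidification_times(x)[i] - 1.0}
        for i in range(2)]

res = minimize(objective, x0=[10.0, 10.0, 10.0],
               bounds=[(0.0, 60.0)] * 3, constraints=cons)
print(res.x.round(2), round(res.fun, 4))
```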

  13. Optimal design of disc-type magneto-rheological brake for mid-sized motorcycle: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-08-01

    In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rate. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, mechanical performances regarding the response time, braking torque and cruising temperature are experimentally evaluated.

  14. Aerodynamic design of axisymmetric hypersonic wind-tunnel nozzles using least-squares/parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1992-01-01

    A new procedure unifying the best of present classical design practices, CFD, and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind tunnel nozzles. This procedure can be employed to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been demonstrated to break down. The procedure allows full utilization of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, may be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure.

  15. Optimization of a Tube Hydroforming Process

    NASA Astrophysics Data System (ADS)

    Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny

    2004-06-01

    An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.

  16. High-throughput process development of an alternative platform for the production of virus-like particles in Escherichia coli.

    PubMed

    Ladd Effio, Christopher; Baumann, Pascal; Weigel, Claudia; Vormittag, Philipp; Middelberg, Anton; Hubbuch, Jürgen

    2016-02-10

    The production of safe vaccines against untreatable or new diseases has pushed the research in the field of virus-like particles (VLPs). Currently, a large number of commercial VLP-based human vaccines and vaccine candidates are available or under development. A promising VLP production route is the controlled in vitro assembly of virus proteins into capsids. In the study reported here, a high-throughput screening (HTS) procedure was implemented for the upstream process development of a VLP platform in bacterial cell systems. Miniaturized cultivations were carried out in 48-well format in the BioLector system (m2p-Labs, Germany) using an Escherichia coli strain with a tac promoter producing the murine polyomavirus capsid protein (VP1). The screening procedure incorporated micro-scale cultivations, HTS cell disruption by sonication and HTS-compatible analytics by capillary gel electrophoresis. Cultivation temperatures, shaking speeds, induction and medium conditions were varied to optimize product expression in E. coli. The most efficient system was selected based on an evaluation of soluble and insoluble product concentrations as well as on the percentage of product in the total soluble protein fraction. The optimized system was scaled up to 2.5 L shaker-flask cultivation scale and purified using an anion exchange chromatography membrane adsorber, followed by a size exclusion chromatography polishing procedure. For proof of concept, purified VP1 capsomeres were assembled under defined buffer conditions into empty capsids and characterized using transmission electron microscopy (TEM). The presented HTS procedure allowed for fast development of an efficient production process for VLPs in E. coli. Under optimized cultivation conditions, the VP1 product totalled up to 43% of the total soluble protein fraction, yielding 1.63 mg VP1 per mL of applied cultivation medium. The developed production process strongly promotes the murine polyoma-VLP platform, moving towards an industrially feasible technology for new chimeric vaccines. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization, which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  18. [Orthogonal experiment using SFE-CO2 in extraction of essential oil from fresh Houttuynia cordata and analysis of essential oil by GC-MS].

    PubMed

    Meng, Jiang; Dong, Xiao-ping; Zhou, Yi-sheng; Jiang, Zhi-hong; Leung, Kelvin Sze-Yin; Zhao, Zhong-zhen

    2007-02-01

    To optimize the extraction procedure for essential oil from H. cordata using SFE-CO2 and to analyze the chemical composition of the essential oil. The extraction procedure for essential oil from fresh H. cordata was optimized with an orthogonal experiment. The essential oil of fresh H. cordata was analysed by GC-MS. The optimized preparative procedure was as follows: essential oil of H. cordata was extracted at a temperature of 35 degrees C and a pressure of 15,000 kPa for 20 min. 38 chemical components were identified and their relative contents were quantified. The optimized preparative procedure is reliable and can guarantee the quality of the essential oil.

  19. Design and Optimization of a Composite Canard Control Surface of an Advanced Fighter Aircraft under Static Loading

    NASA Astrophysics Data System (ADS)

    Shrivastava, Sachin; Mohite, P. M.

    2015-01-01

    The minimization of weight and maximization of payload constitute an ever-challenging design objective for air vehicles. The present study was carried out with the objective of redesigning a control surface of an advanced all-metallic fighter aircraft. In this study, an attempt is made to replace the structure, made up of high-strength aluminum, titanium and ferrous alloys, with carbon fiber composite (CFC) skin, ribs and stiffeners. This study presents an approach towards developing a methodology for minimization of the first-ply failure index (FI) in unidirectional fibrous laminates using Genetic Algorithms (GA) under quasi-static loading. The GA, through the application of operators such as reproduction, cross-over, mutation and an elitist strategy, optimizes the ply orientations in laminates so as to minimize the FI of the Tsai-Wu first-ply failure criterion. The GA optimization procedure has been implemented in MATLAB and interfaced with the commercial software ABAQUS using Python scripting. FI calculations have been carried out in ABAQUS with a user material subroutine (UMAT). The GA's application gave reasonably well-optimized ply-orientation combinations at a fast convergence rate. However, the final optimized sequence of ply orientations was obtained by tweaking the sequences given by the GA based on industrial practices and experience, whenever needed. The present study of conversion of an all-metallic structure to a partial CFC structure has led to a 12% weight reduction. The approach proposed here therefore motivates designers to use CFC with confidence.
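
    A compact sketch of the GA loop over ply orientations; the failure-index function here is a smooth invented stand-in for the Tsai-Wu FI that the study computes in ABAQUS via a UMAT:

```python
import random

PLIES, ANGLES = 8, [0, 45, -45, 90]

def failure_index(layup):
    # Smooth invented stand-in for the Tsai-Wu first-ply FI computed in
    # ABAQUS via the UMAT: rewards a balanced mix with a 0-degree outer ply.
    fractions = [layup.count(a) / PLIES for a in ANGLES]
    return sum((f - 0.25) ** 2 for f in fractions) + 0.01 * abs(layup[0]) / 90.0

def ga(pop_size=40, gens=60, p_mut=0.1):
    rng = random.Random(7)
    pop = [[rng.choice(ANGLES) for _ in range(PLIES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=failure_index)
        children = pop[:2]                                # elitist strategy
        while len(children) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)   # fitter half breeds
            cut = rng.randrange(1, PLIES)                 # one-point cross-over
            child = p1[:cut] + p2[cut:]
            if rng.random() < p_mut:                      # mutation
                child[rng.randrange(PLIES)] = rng.choice(ANGLES)
            children.append(child)
        pop = children
    return min(pop, key=failure_index)

best = ga()
print(best, round(failure_index(best), 4))
```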

  20. Power system modeling and optimization methods vis-a-vis integrated resource planning (IRP)

    NASA Astrophysics Data System (ADS)

    Arsali, Mohammad H.

    1998-12-01

    The state-of-the-art restructuring of power industries is changing the fundamental nature of retail electricity business. As a result, the so-called Integrated Resource Planning (IRP) strategies implemented on electric utilities are also undergoing modifications. Such modifications evolve from the imminent considerations to minimize the revenue requirements and maximize electrical system reliability vis-a-vis capacity-additions (viewed as potential investments). IRP modifications also provide service-design bases to meet the customer needs towards profitability. The purpose of this research as deliberated in this dissertation is to propose procedures for optimal IRP intended to expand generation facilities of a power system over a stretched period of time. Relevant topics addressed in this research towards IRP optimization are as follows: (1) Historical prospective and evolutionary aspects of power system production-costing models and optimization techniques; (2) A survey of major U.S. electric utilities adopting IRP under changing socioeconomic environment; (3) A new technique designated as the Segmentation Method for production-costing via IRP optimization; (4) Construction of a fuzzy relational database of a typical electric power utility system for IRP purposes; (5) A genetic algorithm based approach for IRP optimization using the fuzzy relational database.

  1. Polarimetric SAR Interferometry to Monitor Land Subsidence in Tehran

    NASA Astrophysics Data System (ADS)

    Sadeghi, Zahra; Valadan Zoej, Mohammad Javad; Muller, Jan-Peter

    2016-08-01

    This letter uses a combination of ADInSAR with a coherence optimization method. Polarimetric DInSAR is able to enhance pixel phase quality and thus coherent pixel density. The coherence optimization method is a search-based approach to find the optimized scattering mechanism, introduced by Navarro-Sanchez [1]. The case study is the southwest of the Tehran basin, located in the north of Iran. It suffers from a high rate of land subsidence and is covered by agricultural fields. Such an area would usually decorrelate significantly, but by applying polarimetric ADInSAR it is possible to obtain more coherent pixel coverage. A set of dual-pol TerraSAR-X images was ordered for the polarimetric ADInSAR procedure. The coherence optimization method is shown to have increased the density and phase quality of coherent pixels significantly.

  2. Finite-dimensional compensators for infinite-dimensional systems via Galerkin-type approximation

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1990-01-01

    In this paper, existence and construction of stabilizing compensators for linear time-invariant systems defined on Hilbert spaces are discussed. An existence result is established using Galerkin-type approximations in which independent basis elements are used instead of the complete set of eigenvectors. A design procedure based on approximate solutions of the optimal regulator and optimal observer via Galerkin-type approximation is given, and the Schumacher approach is used to reduce the dimension of the compensators. A detailed discussion for parabolic and hereditary differential systems is included.

  3. Application of modern control theory to the design of optimum aircraft controllers

    NASA Technical Reports Server (NTRS)

    Power, L. J.

    1973-01-01

    The synthesis procedure presented is based on the solution of the output regulator problem of linear optimal control theory for time-invariant systems. By this technique, solution of the matrix Riccati equation leads to a constant linear feedback control law for an output regulator which will maintain a plant in a particular equilibrium condition in the presence of impulse disturbances. Two simple algorithms are presented that can be used in an automatic synthesis procedure for the design of maneuverable output regulators requiring only selected state variables for feedback. The first algorithm is for the construction of optimal feedforward control laws that can be superimposed upon a Kalman output regulator and that will drive the output of a plant to a desired constant value on command. The second algorithm is for the construction of optimal Luenberger observers that can be used to obtain feedback control laws for the output regulator requiring measurement of only part of the state vector. This algorithm constructs observers which have minimum response time under the constraint that the magnitude of the gains in the observer filter be less than some arbitrary limit.

  5. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training neural networks remains a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima and converge prematurely, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular because of their robustness in overcoming these drawbacks. Accordingly, this paper proposes a hybrid training procedure in which a differential search (DS) algorithm is functionally integrated with particle swarm optimization (PSO). To avoid local trapping of the search procedure, a new population initialization scheme based on a logistic chaotic sequence is proposed, which enhances population diversity and aids the search capability. To demonstrate the effectiveness of the proposed hybrid RBF training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case of wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy.
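
    The logistic chaotic initialization mentioned above is easy to sketch. Below is a minimal Python illustration, assuming the standard logistic map with control parameter r = 4 and an arbitrary seed value; the population size, dimensionality, and bounds are placeholders, not the paper's settings.

        import numpy as np

        def logistic_chaotic_population(pop_size, dim, lower, upper, x0=0.7, r=4.0):
            """Fill a population matrix from a logistic chaotic sequence.

            The logistic map x_{k+1} = r * x_k * (1 - x_k) is fully chaotic
            for r = 4 and yields values in (0, 1), which are scaled into the
            search bounds. Defaults and dimensions are illustrative only.
            """
            x = x0
            pop = np.empty((pop_size, dim))
            for i in range(pop_size):
                for j in range(dim):
                    x = r * x * (1.0 - x)                   # logistic map iteration
                    pop[i, j] = lower + x * (upper - lower)
            return pop

        # Example: 30 candidate RBF parameter vectors in [-1, 1]^10.
        population = logistic_chaotic_population(30, 10, -1.0, 1.0)
        print(population.shape, population.min(), population.max())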

  6. An optimized two-step derivatization method for analyzing diethylene glycol ozonation products using gas chromatography and mass spectrometry.

    PubMed

    Yu, Ran; Duan, Lei; Jiang, Jingkun; Hao, Jiming

    2017-03-01

    The ozonation of hydroxyl compounds (e.g., sugars and alcohols) gives a broad range of products such as alcohols, aldehydes, ketones, and carboxylic acids. This study developed and optimized a two-step derivatization procedure for analyzing polar aldehyde and carboxylic acid products from the ozonation of diethylene glycol (DEG) in a non-aqueous environment using gas chromatography-mass spectrometry. Experiments based on a Central Composite Design with response surface methodology were carried out to evaluate the effects of the derivatization variables and their interactions on the analysis. The most desirable derivatization conditions were reported, i.e., oximation performed at room temperature overnight with an o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine-to-analyte molar ratio of 6, a silylation reaction temperature of 70°C, a reaction duration of 70 min, and an N,O-bis(trimethylsilyl)-trifluoroacetamide volume of 12.5 μL. The applicability of this optimized procedure was verified by analyzing DEG ozonation products in an ultrafine condensation particle counter simulation system.

  7. Analysis of pesticides in soy milk combining solid-phase extraction and capillary electrophoresis-mass spectrometry.

    PubMed

    Hernández-Borges, Javier; Rodriguez-Delgado, Miguel Angel; García-Montelongo, Francisco J; Cifuentes, Alejandro

    2005-06-01

    In this work, the determination of a group of triazolopyrimidine sulfonanilide herbicides (cloransulam-methyl, metosulam, flumetsulam, florasulam, and diclosulam) in soy milk by capillary electrophoresis-mass spectrometry (CE-MS) is presented. The main electrospray ionization (ESI) interface parameters (nebulizer pressure, dry gas flow rate, dry gas temperature, and composition of the sheath liquid) are optimized using a central composite design. To increase the sensitivity of the CE-MS method, an off-line sample preconcentration procedure based on solid-phase extraction (SPE) is combined with an on-line stacking procedure (i.e., normal stacking mode, NSM). Samples could be injected for up to 100 s, providing limits of detection (LODs) down to 74 microg/L, i.e., at the low-ppb level, with relative standard deviation (RSD, %) values between 3.8% and 6.4% for peak areas on the same day, and between 6.5% and 8.1% on three different days. The usefulness of the optimized SPE-NSM-CE-MS procedure is demonstrated through the sensitive quantification of the selected pesticides in soy milk samples.

  8. Quantification of Gear Tooth Damage by Optimal Tracking of Vibration Signatures

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Veillette, R. J.; Polyshchuk, V.; Braun, M. J.; Hendricks, R. C.

    1996-01-01

    This paper presents a technique for quantifying the wear or damage of gear teeth in a transmission system. The procedure developed in this study can be applied as a part of either an onboard machine health-monitoring system or a health diagnostic system used during regular maintenance. As the developed methodology is based on analysis of gearbox vibration under normal operating conditions, no shutdown or special modification of operating parameters is required during the diagnostic process. The process of quantifying the wear or damage of gear teeth requires a set of measured vibration data and a model of the gear mesh dynamics. An optimization problem is formulated to determine the profile of a time-varying mesh stiffness parameter for which the model output approximates the measured data. The resulting stiffness profile is then related to the level of gear tooth wear or damage. The procedure was applied to a data set generated artificially and to another obtained experimentally from a spiral bevel gear test rig. The results demonstrate the utility of the procedure as part of an overall health-monitoring system.

  9. Optimizing disinfection by-product monitoring points in a distribution system using cluster analysis.

    PubMed

    Delpla, Ianis; Florea, Mihai; Pelletier, Geneviève; Rodriguez, Manuel J

    2018-06-04

    Trihalomethanes (THMs) and haloacetic acids (HAAs) are the main groups of disinfection by-products (DBPs) detected in drinking water and are consequently strictly regulated. However, the increasing quantity of DBP data produced by research projects and regulatory programs remains largely unexploited, despite its great potential for optimizing drinking water quality monitoring to meet specific objectives. In this work, we developed a procedure to optimize locations and periods for DBP monitoring based on a set of monitoring scenarios using the cluster analysis technique. The optimization procedure used a robust set of spatio-temporal monitoring results on DBPs (THMs and HAAs) generated from intensive sampling campaigns conducted in a residential sector of a water distribution system. Results show that cluster analysis allows for the classification of water quality into different groups of THMs and HAAs according to their similarities, and for the identification of locations presenting water quality concerns. By using cluster analysis with different monitoring objectives, this work provides a set of monitoring solutions and a comparison between various monitoring scenarios for decision-making purposes. Finally, it was demonstrated that data from intensive monitoring of free chlorine residual and water temperature as DBP proxy parameters, when processed using cluster analysis, could also help identify the optimal sampling points and periods for regulatory THM and HAA monitoring.
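
    To make the clustering step concrete, here is a minimal sketch in Python, assuming k-means as the clustering technique (the record does not name the specific algorithm) and synthetic concentration data; sites that share a cluster label are candidates to be represented by a single monitoring point.

        import numpy as np
        from scipy.cluster.vq import kmeans2, whiten

        # Synthetic concentrations: rows are sampling sites, columns are
        # THM/HAA measurements across campaigns.
        rng = np.random.default_rng(0)
        concentrations = rng.gamma(shape=2.0, scale=20.0, size=(12, 8))

        features = whiten(concentrations)        # scale columns to unit variance
        centroids, labels = kmeans2(features, k=3, minit="++", seed=0)

        # Sites sharing a label behave similarly; one representative site per
        # cluster can serve as a monitoring point.
        for cluster in range(3):
            sites = np.flatnonzero(labels == cluster)
            print(f"cluster {cluster}: sites {sites.tolist()}")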

  10. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched-filter-theory-based schemes are described and illustrated for obtaining maximized and time-correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  11. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.

    1991-01-01

    This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  12. Electrotransformation of Lactobacillus delbrueckii subsp. bulgaricus and L. delbrueckii subsp. lactis with Various Plasmids

    PubMed Central

    Serror, Pascale; Sasaki, Takashi; Ehrlich, S. Dusko; Maguin, Emmanuelle

    2002-01-01

    We describe, for the first time, a detailed electroporation procedure for Lactobacillus delbrueckii. Three L. delbrueckii strains were successfully transformed. Under optimal conditions, the transformation efficiency was 10^4 transformants per μg of DNA. Using this procedure, we identified several plasmids able to replicate in L. delbrueckii and integrated a vector based on phage integration elements into the L. delbrueckii subsp. bulgaricus chromosome. These vectors provide a good basis for developing molecular tools for L. delbrueckii and open the field of genetic studies in this species. PMID:11772607

  13. Establishment and optimization of NMR-based cell metabonomics study protocols for neonatal Sprague-Dawley rat cardiomyocytes.

    PubMed

    Zhang, Ming; Sun, Bo; Zhang, Qi; Gao, Rong; Liu, Qiao; Dong, Fangting; Fang, Haiqin; Peng, Shuangqing; Li, Famei; Yan, Xianzhong

    2017-01-15

    A quenching, harvesting, and extraction protocol was optimized for NMR metabonomics analysis of cardiomyocytes in this study. Trypsin treatment and direct scraping of cells in acetonitrile were compared for sample harvesting. The results showed that trypsin treatment caused an increase in the normalized concentration of phosphocholine and leakage of metabolites, owing to trypsin-induced membrane damage and the lengthy harvesting procedure. The intracellular metabolite extraction efficiencies of methanol and acetonitrile were then compared. As a result, washing twice with phosphate buffer, scraping cells directly, and extracting with acetonitrile were chosen to prepare cardiomyocyte extract samples for metabonomics studies. This optimized protocol is rapid and effective, and exhibits greater metabolite retention.

  14. Choosing Sensor Configuration for a Flexible Structure Using Full Control Synthesis

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Nalbantoglu, Volkan; Balas, Gary

    1997-01-01

    Optimal locations and types for feedback sensors which meet design constraints and control requirements are difficult to determine. This paper introduces an approach to choosing a sensor configuration based on Full Control synthesis. A globally optimal Full Control compensator is computed for each member of a set of sensor configurations which are feasible for the plant. The sensor configuration associated with the Full Control system achieving the best closed-loop performance is chosen for feedback measurements to an output feedback controller. A flexible structure is used as an example to demonstrate this procedure. Experimental results show that sensor configurations chosen to optimize the Full Control performance are effective for output feedback controllers.

  15. Optimum aerodynamic design via boundary control

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    These lectures describe the implementation of optimization techniques based on control theory for airfoil and wing design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. Recently the method has been implemented in an alternative formulation which does not depend on conformal mapping, so that it can more easily be extended to treat general configurations. The method has also been extended to treat the Euler equations, and results are presented for both two- and three-dimensional cases, including the optimization of a swept wing.

  16. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  17. Optimization of Milling Parameters Employing Desirability Functions

    NASA Astrophysics Data System (ADS)

    Ribeiro, J. L. S.; Rubio, J. C. Campos; Abrão, A. M.

    2011-01-01

    The principal aim of this paper is to investigate the influence of tool material (one cermet and two coated carbide grades), cutting speed and feed rate on the machinability of hardened AISI H13 hot work steel, in order to identify the cutting conditions which lead to optimal performance. A multiple response optimization procedure based on tool life, surface roughness, milling forces and the machining time (required to produce a sample cavity) was employed. The results indicated that the TiCN-TiN coated carbide and cermet presented similar results concerning the global optimum values for cutting speed and feed rate per tooth, outperforming the TiN-TiCN-Al2O3 coated carbide tool.
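
    The desirability-function approach used here combines several responses into one score. Below is a minimal sketch, assuming a Derringer-type transformation with made-up targets, limits, and response values; the paper's actual transformations and weights are not given in the record.

        import numpy as np

        def desirability_smaller_is_better(y, target, upper, weight=1.0):
            """Derringer-type desirability for a response to be minimized.

            d = 1 at or below the target, 0 at or above the upper limit,
            and a power-law ramp in between. A generic sketch only.
            """
            d = (upper - y) / (upper - target)
            return np.clip(d, 0.0, 1.0) ** weight

        # Hypothetical responses for one cutting condition: surface roughness
        # (um), resultant milling force (N), and machining time (min); tool
        # life would use the mirrored, larger-the-better transformation.
        d_rough = desirability_smaller_is_better(0.8, target=0.4, upper=1.6)
        d_force = desirability_smaller_is_better(350.0, target=200.0, upper=600.0)
        d_time = desirability_smaller_is_better(14.0, target=10.0, upper=25.0)

        # Overall desirability: geometric mean of the individual scores.
        D = (d_rough * d_force * d_time) ** (1.0 / 3.0)
        print(f"overall desirability D = {D:.3f}")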

  18. Modeling surgical tool selection patterns as a "traveling salesman problem" for optimizing a modular surgical tool system.

    PubMed

    Nelson, Carl A; Miller, David J; Oleynikov, Dmitry

    2008-01-01

    As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
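
    The graph model described above can be illustrated in a few lines. Below is a minimal sketch with invented tool names and change frequencies; brute-force enumeration stands in for a proper TSP solver, which would be needed for larger tool sets.

        import itertools

        # Tools are graph nodes; the observed tool-change frequency between
        # two tools is the edge weight. Arranging the magazine so frequently
        # alternated tools sit next to each other is a maximum-weight
        # Hamiltonian cycle, i.e., a Traveling Salesman Problem.
        change_freq = {
            frozenset(("grasper", "scissors")): 9,
            frozenset(("grasper", "cautery")): 4,
            frozenset(("grasper", "needle_driver")): 2,
            frozenset(("scissors", "cautery")): 7,
            frozenset(("scissors", "needle_driver")): 1,
            frozenset(("cautery", "needle_driver")): 5,
        }
        tools = ["grasper", "scissors", "cautery", "needle_driver"]

        def cycle_weight(order):
            # Sum of change frequencies between adjacent slots (cycle closes).
            return sum(change_freq[frozenset((order[i], order[(i + 1) % len(order)]))]
                       for i in range(len(order)))

        best = max(itertools.permutations(tools), key=cycle_weight)
        print(best, cycle_weight(best))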

  19. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
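
    The q-method referred to above has a compact standard form. The sketch below follows the textbook construction (Davenport's K matrix built from weighted vector observations, with the optimal quaternion given by the dominant eigenvector); it covers the general method only, not the paper's extended parameter-estimation algorithm.

        import numpy as np

        def q_method(body_vecs, ref_vecs, weights):
            """Davenport q-method for Wahba's problem.

            Builds the 4x4 K matrix from weighted vector observations; the
            eigenvector of the largest eigenvalue is the optimal attitude
            quaternion (vector part first, scalar part last).
            """
            B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
            S = B + B.T
            sigma = np.trace(B)
            z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
            K = np.zeros((4, 4))
            K[:3, :3] = S - sigma * np.eye(3)
            K[:3, 3] = z
            K[3, :3] = z
            K[3, 3] = sigma
            vals, vecs = np.linalg.eigh(K)
            return vecs[:, np.argmax(vals)]

        # Two reference directions observed with an identity attitude:
        r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
        q = q_method([r1, r2], [r1, r2], [1.0, 1.0])
        print(q)   # ~ [0, 0, 0, 1] up to an overall sign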

  20. Chapter 8. Medical procedures. Recommendations and standard operating procedures for intensive care unit and hospital preparations for an influenza epidemic or mass disaster.

    PubMed

    Zimmerman, Janice L; Sprung, Charles L

    2010-04-01

    To provide recommendations and standard operating procedures for intensive care unit and hospital preparations for an influenza pandemic or mass disaster with a specific focus on ensuring that adequate resources are available and appropriate protocols are developed to safely perform procedures in patients with and without influenza illness. Based on a literature review and expert opinion, a Delphi process was used to define the essential topics including performing medical procedures. Key recommendations include: (1) specify high-risk procedures (aerosol generating-procedures); (2) determine if certain procedures will not be performed during a pandemic; (3) develop protocols for safe performance of high-risk procedures that include appropriateness, qualifications of personnel, site, personal protection equipment, safe technique and equipment needs; (4) ensure adequate training of personnel in high-risk procedures; (5) procedures should be performed at the bedside whenever possible; (6) ensure safe respiratory therapy practices to avoid aerosols; (7) provide safe respiratory equipment; and (8) determine criteria for cancelling and/or altering elective procedures. Judicious planning and adoption of protocols for safe performance of medical procedures are necessary to optimize outcomes during a pandemic.

  1. Documentation for a Structural Optimization Procedure Developed Using the Engineering Analysis Language (EAL)

    NASA Technical Reports Server (NTRS)

    Martin, Carl J., Jr.

    1996-01-01

    This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors which are written in FORTRAN generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. There is also an approximate sensitivity update method included which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.

  2. Application of controller partitioning optimization procedure to integrated flight/propulsion control design for a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Schmidt, Phillip H.

    1993-01-01

    A parameter optimization framework has earlier been developed to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing various 'user' selected parameters in the optimization cost function such that the resulting optimized subcontrollers will meet the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the desired subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.

  3. PROCRU: A model for analyzing crew procedures in approach to landing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.

    1980-01-01

    A model for analyzing crew procedures in approach to landing is developed. The model employs the information-processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multitask environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.

  4. Glass sealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brow, R.K.; Kovacic, L.; Chambers, R.S.

    1996-04-01

    Hermetic glass sealing technologies developed for weapons component applications can be utilized for the design and manufacture of fuel cells. Design and processing of a seal are optimized through an integrated approach based on glass composition research, finite element analysis, and sealing process definition. Glass sealing procedures are selected to accommodate the limits imposed by the glass composition and by the predictive calculations.

  5. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the ability of ICan to detect small changes. In this study, E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by the IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined to be significantly altered, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and demonstrate the competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan is broadly applicable to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707

  6. Design of an integrated thermoelectric generator power converter for ultra-low power and low voltage body energy harvesters aimed at ExG active electrodes

    NASA Astrophysics Data System (ADS)

    Ataei, Milad; Robert, Christian; Boegli, Alexis; Farine, Pierre-André

    2015-10-01

    This paper describes a detailed design procedure for an efficient thermal body energy harvesting integrated power converter. The procedure is based on the examination of power loss and power transfer in a converter for a self-powered medical device. The efficiency limit for the system is derived, and the converter is optimized for the worst-case scenario. All optimum system parameters are calculated respecting the transducer constraints and the application form factor. Circuit blocks, including pulse generators, are implemented based on the system specifications and the optimized converter working frequency. At this working condition, it has been demonstrated that the wide-area capacitor of the voltage doubler, which provides high-voltage switch gating, can be eliminated at the expense of wider switches. With this method, measurements show that 54% efficiency is achieved for just a 20 mV transducer output voltage and 30% of the chip area is saved. The entire electronic board can fit in one EEG or ECG electrode, and the electronic system can convert the electrode into an active electrode.

  7. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    PubMed

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective of this work is to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos, with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD technique. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.
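
    The second optimization stage pairs a genetic algorithm with a thermal model. The following sketch shows the genetic-algorithm skeleton only, with a made-up surrogate objective in place of the paper's reported thermal analysis; pulse counts, bounds, and the penalty term are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Surrogate fitness: peak heating grows with pulse duration and
        # decays with distance from the nearest blastomere, with a penalty
        # if the total ablation is too small to open the ZP. All invented.
        N_PULSES = 4
        LOW = np.array([5.0] * N_PULSES + [0.1] * N_PULSES)    # um, ms
        HIGH = np.array([30.0] * N_PULSES + [2.0] * N_PULSES)

        def fitness(genome):
            dist, dur = genome[:N_PULSES], genome[N_PULSES:]
            peak = np.sum(dur / dist**2)                       # surrogate heating
            penalty = max(0.0, 3.0 - dur.sum())                # required ablation
            return peak + 10.0 * penalty                       # minimize

        def evolve(pop_size=40, generations=60, mut_sigma=0.1):
            pop = rng.uniform(LOW, HIGH, size=(pop_size, 2 * N_PULSES))
            for _ in range(generations):
                scores = np.array([fitness(g) for g in pop])
                # Tournament selection: the better of two random parents wins.
                i, j = rng.integers(pop_size, size=(2, pop_size))
                parents = np.where((scores[i] < scores[j])[:, None], pop[i], pop[j])
                # Uniform crossover between consecutive parents.
                mask = rng.random(parents.shape) < 0.5
                children = np.where(mask, parents, np.roll(parents, 1, axis=0))
                # Gaussian mutation, clipped back into bounds.
                children += rng.normal(0.0, mut_sigma, children.shape) * (HIGH - LOW)
                pop = np.clip(children, LOW, HIGH)
            return pop[np.argmin([fitness(g) for g in pop])]

        best = evolve()
        print("distances (um):", best[:N_PULSES], "durations (ms):", best[N_PULSES:])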

  8. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the dhea solver, a branch-and-cut integer linear programming approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances, both in running time and in quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  9. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in the time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.

  10. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  11. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  12. Shape optimization of road tunnel cross-section by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej; Pachnicz, Michał

    2016-06-01

    The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca Flac software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the clearance gauge.
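
    For readers unfamiliar with the optimization procedure used here, the following is a generic simulated-annealing loop of the kind the study builds on; the toy cost function stands in for the energetic optimality condition and the feasibility test for the fixed clearance gauge, neither of which is specified in the record.

        import math
        import random

        random.seed(42)

        def cost(x):
            # Placeholder objective; the paper's cost derives from an
            # energetic optimality condition instead.
            return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + math.sin(5 * x[0])

        def feasible(x):
            # Placeholder constraint standing in for the clearance gauge.
            return x[0] ** 2 + x[1] ** 2 <= 25.0

        def anneal(x, temp=10.0, cooling=0.995, steps=5000):
            best, best_c = list(x), cost(x)
            current_c = best_c
            for _ in range(steps):
                cand = [xi + random.gauss(0.0, 0.3) for xi in x]
                if feasible(cand):                  # reject infeasible shapes
                    c = cost(cand)
                    # Metropolis rule: accept improvements always, and worse
                    # moves with probability exp(-dC / T).
                    if c < current_c or random.random() < math.exp((current_c - c) / temp):
                        x, current_c = cand, c
                        if c < best_c:
                            best, best_c = list(cand), c
                temp *= cooling                     # geometric cooling schedule
            return best, best_c

        print(anneal([0.0, 0.0]))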

  13. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with entropy defined on the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, those of manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).
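
    The entropy-minimization idea translates almost directly into code. Below is a minimal sketch, assuming a penalty on negative intensities in addition to the derivative-based entropy (a common choice in ACME-style implementations, stated here as an assumption) and using Nelder-Mead in place of whatever optimizer the original program employs.

        import numpy as np
        from scipy.optimize import minimize

        def phase(spectrum, phi0, phi1):
            """Apply zero- and first-order phase to a complex spectrum."""
            n = len(spectrum)
            return spectrum * np.exp(1j * (phi0 + phi1 * np.arange(n) / n))

        def acme_entropy(params, spectrum):
            # Shannon-type entropy of the normalized first derivative of the
            # real part, plus a penalty on negative intensity.
            real = phase(spectrum, *params).real
            deriv = np.abs(np.diff(real))
            h = deriv / (deriv.sum() + 1e-12)
            entropy = -np.sum(h * np.log(h + 1e-12))
            penalty = np.sum(real[real < 0.0] ** 2)
            return entropy + 1e-4 * penalty

        # Synthetic two-line spectrum (complex Lorentzians), then misphase it.
        freq = np.linspace(-1.0, 1.0, 1024)
        spec = (1.0 / (1.0 + 1j * 200 * (freq - 0.3))
                + 1.0 / (1.0 + 1j * 200 * (freq + 0.4)))
        misphased = phase(spec, 0.7, -1.2)

        res = minimize(acme_entropy, x0=[0.0, 0.0], args=(misphased,),
                       method="Nelder-Mead")
        print(res.x)   # should land near (-0.7, 1.2), undoing the misphasing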

  14. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations

    PubMed Central

    Calfee, M. Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-01-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration. PMID:27362274

  15. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations.

    PubMed

    Calfee, M Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-12-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration.

  16. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical gradient-based optimization method. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter in the calculation of the vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors are effective on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods.

  17. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method

    NASA Astrophysics Data System (ADS)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical gradient-based optimization method. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter in the calculation of the vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors are effective on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods.
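
    The calibration idea can be sketched compactly. In the following illustration, SciPy's SLSQP solver stands in for the generalized-reduced-gradient method, and the DRASTIC factor ratings and nitrate observations are synthetic; only the objective (maximizing the correlation between the index and observed contamination) follows the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic DRASTIC ratings: 50 wells x 7 factors, ratings 1-10.
        rng = np.random.default_rng(3)
        ratings = rng.integers(1, 11, size=(50, 7)).astype(float)
        true_w = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)
        nitrate = ratings @ true_w + rng.normal(0.0, 5.0, 50)   # observations

        def neg_correlation(w):
            # Vulnerability index is the weighted sum of factor ratings;
            # minimize the negative Pearson correlation with nitrate.
            index = ratings @ w
            return -np.corrcoef(index, nitrate)[0, 1]

        w0 = np.full(7, 3.0)
        bounds = [(1.0, 5.0)] * 7            # keep weights in DRASTIC's range
        res = minimize(neg_correlation, w0, method="SLSQP", bounds=bounds)
        print("calibrated weights:", np.round(res.x, 2), "r =", -round(res.fun, 3))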

  18. Enriching peptide libraries for binding affinity and specificity through computationally directed library design

    PubMed Central

    Foight, Glenna Wink; Chen, T. Scott; Richman, Daniel; Keating, Amy E.

    2017-01-01

    Peptide reagents with high affinity or specificity for their target protein interaction partner are of utility for many important applications. Optimization of peptide binding by screening large libraries is a proven and powerful approach. Libraries designed to be enriched in peptide sequences that are predicted to have desired affinity or specificity characteristics are more likely to yield success than random mutagenesis. We present a library optimization method in which the choice of amino acids to encode at each peptide position can be guided by available experimental data or structure-based predictions. We discuss how to use analysis of predicted library performance to inform rounds of library design. Finally, we include protocols for more complex library design procedures that consider the chemical diversity of the amino acids at each peptide position and optimize a library score based on a user-specified input model. PMID:28236241

  19. Control theory based airfoil design for potential flow and a finite volume discretization

    NASA Technical Reports Server (NTRS)

    Reuther, J.; Jameson, A.

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.

  20. Enriching Peptide Libraries for Binding Affinity and Specificity Through Computationally Directed Library Design.

    PubMed

    Foight, Glenna Wink; Chen, T Scott; Richman, Daniel; Keating, Amy E

    2017-01-01

    Peptide reagents with high affinity or specificity for their target protein interaction partner are of utility for many important applications. Optimization of peptide binding by screening large libraries is a proven and powerful approach. Libraries designed to be enriched in peptide sequences that are predicted to have desired affinity or specificity characteristics are more likely to yield success than random mutagenesis. We present a library optimization method in which the choice of amino acids to encode at each peptide position can be guided by available experimental data or structure-based predictions. We discuss how to use analysis of predicted library performance to inform rounds of library design. Finally, we include protocols for more complex library design procedures that consider the chemical diversity of the amino acids at each peptide position and optimize a library score based on a user-specified input model.

  1. Systematic Sensor Selection Strategy (S4) User Guide

    NASA Technical Reports Server (NTRS)

    Sowers, T. Shane

    2012-01-01

    This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight, and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open-source turbofan engine simulation to demonstrate its application.
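
    The following sketch illustrates the combinatorial selection idea at the heart of S4; the candidate sensors, their attributes, and the merit weighting are invented for the example, and exhaustive enumeration stands in for whatever search strategy a real application would use.

        import itertools

        # Candidate sensors with a crude diagnostic coverage score and
        # cost/weight attributes (all values hypothetical).
        candidates = {
            "N1_speed": {"coverage": 0.30, "cost": 1.0, "weight": 0.5},
            "N2_speed": {"coverage": 0.25, "cost": 1.0, "weight": 0.5},
            "T25": {"coverage": 0.20, "cost": 2.0, "weight": 0.8},
            "P25": {"coverage": 0.15, "cost": 2.5, "weight": 0.7},
            "T45": {"coverage": 0.35, "cost": 3.0, "weight": 1.0},
        }

        def merit(suite):
            # User-defined trade-off: reward fault coverage, penalize
            # cost and weight.
            cov = min(1.0, sum(candidates[s]["coverage"] for s in suite))
            cost = sum(candidates[s]["cost"] for s in suite)
            wt = sum(candidates[s]["weight"] for s in suite)
            return cov - 0.05 * cost - 0.08 * wt

        best = max(
            (suite for k in range(1, len(candidates) + 1)
                   for suite in itertools.combinations(candidates, k)),
            key=merit,
        )
        print("selected suite:", best, "merit:", round(merit(best), 3))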

  2. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  3. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    NASA Astrophysics Data System (ADS)

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology to realize miniaturization and complanation of imaging systems. In this paper, the principle of SPIDER has been numerically demonstrated based on the partially coherent light theory, and a novel concept of adjustable baseline pairing SPIDER system has further been proposed. Based on the results of simulation, it is verified that the imaging quality could be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method and increasing the spectral channel of demultiplexer. Therefore, an adjustable baseline pairing algorithm is established for further enhancing the image quality, and the optimal design procedure in SPIDER for arbitrary targets is also summarized. The SPIDER system with adjustable baseline pairing method can broaden its application and reduce cost under the same imaging quality.

  4. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum values of the EDF statistics. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
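
    A minimal sketch of the estimation idea follows, using the Kolmogorov-Smirnov distance and SciPy's Powell method (matching the optimizer named in the abstract); the failure data are synthetic and the three-parameter Weibull bookkeeping is a standard construction, not the paper's exact code.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        # Synthetic failure data from a known three-parameter Weibull law.
        data = np.sort(weibull_min.rvs(c=2.5, loc=100.0, scale=300.0, size=60,
                                       random_state=7))
        n = len(data)

        def ks_statistic(params):
            shape, loc, scale = params
            if shape <= 0.0 or scale <= 0.0 or loc >= data[0]:
                return 1e6                        # reject infeasible parameters
            cdf = weibull_min.cdf(data, c=shape, loc=loc, scale=scale)
            # Largest discrepancy between the step EDF and the model CDF.
            d_plus = np.max(np.arange(1, n + 1) / n - cdf)
            d_minus = np.max(cdf - np.arange(n) / n)
            return max(d_plus, d_minus)

        res = minimize(ks_statistic, x0=[1.5, 50.0, 200.0], method="Powell")
        print("shape, location, scale:", np.round(res.x, 2),
              "D =", round(res.fun, 4))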

  5. Optimization and automation of quantitative NMR data extraction.

    PubMed

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and to determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and its applicability to NMR data acquired for very large sample sets.

  6. Optimally robust redundancy relations for failure detection in uncertain systems

    NASA Technical Reports Server (NTRS)

    Lou, X.-C.; Willsky, A. S.; Verghese, G. C.

    1986-01-01

    All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
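
    A static special case shows the construction. With measurements y = Cx, vectors in the left null space of C define redundancy (parity) relations, and the SVD supplies an orthonormal basis for them; the matrix and perturbation below are invented, and the full robustness ordering of the paper is only hinted at, not reproduced.

        import numpy as np

        # Four sensors observing two states: y = C x. Left-null-space vectors
        # of C are redundancy (parity) relations w with w^T y = 0 on clean data.
        C = np.array([[1.0,  0.0],
                      [0.0,  1.0],
                      [1.0,  1.0],
                      [2.0, -1.0]])

        U, s, Vt = np.linalg.svd(C)
        null_dim = C.shape[0] - np.linalg.matrix_rank(C)
        parity = U[:, -null_dim:].T            # each row is one relation w^T

        x = np.array([0.3, -1.2])
        print("clean residuals:", parity @ (C @ x))          # ~ 0

        # Model uncertainty degrades the relations; comparing residual growth
        # under perturbed models is the seed of the robustness ordering.
        C_pert = C + 0.05 * np.random.default_rng(5).normal(size=C.shape)
        print("perturbed residuals:", parity @ (C_pert @ x))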

  7. Uncovering the community structure in signed social networks based on greedy optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yan, Jiaqi; Yang, Yu; Chen, Junhua

    2017-05-01

    The formalism of signed relationships has recently been adopted in many complicated systems. The relations among the entities in such systems are complicated and multifarious, and cannot be described by positive links alone; signed networks are therefore becoming more and more common in the study of social networks as community structure gains significance. In this paper, to identify communities in signed networks, we exploit a new greedy algorithm that takes both the signs and the density of links into account. The core of the algorithm is an initialization procedure based on signed modularity together with the corresponding update rules. In particular, we employ the “Asymmetric and Constrained Belief Evolution” procedure to evaluate the optimal number of communities. The experimental results show that the algorithm performs well. More specifically, the proposed algorithm is very efficient for networks of medium size, both dense and sparse.

  8. Fumed silica nanoparticle mediated biomimicry for optimal cell-material interactions for artificial organ development.

    PubMed

    de Mel, Achala; Ramesh, Bala; Scurr, David J; Alexander, Morgan R; Hamilton, George; Birchall, Martin; Seifalian, Alexander M

    2014-03-01

    Replacement of organs irreversibly damaged by chronic disease with suitable tissue-engineered implants is now a familiar area of interest to clinicians and multidisciplinary scientists. Ideal tissue engineering approaches require scaffolds to be tailor-made to mimic the physiological environments of interest, with specific surface topographical and biological properties for optimal cell-material interactions. This study demonstrates a single-step procedure for inducing biomimicry in a novel nanocomposite base scaffold material, to re-create the extracellular matrix required for stem cell integration and differentiation into mature cells. The fumed silica nanoparticle mediated procedure of scaffold functionalization can potentially be adapted with multiple bioactive molecules to induce cellular biomimicry in the development of human organs. The proposed nanocomposite materials are already in patients in a number of implants, including the world's first synthetic trachea, tear ducts, and a vascular bypass graft.

  9. Solution-mediated cladding doping of commercial polymer optical fibers

    NASA Astrophysics Data System (ADS)

    Stajanca, Pavol; Topolniak, Ievgeniia; Pötschke, Samuel; Krebber, Katerina

    2018-03-01

    Solution doping of commercial polymethyl methacrylate (PMMA) polymer optical fibers (POFs) is presented as a novel approach for the preparation of custom cladding-doped POFs (CD-POFs). The presented method is based on solution-mediated diffusion of dopant molecules into the fiber cladding upon soaking of POFs in a methanol-dopant solution. The method was tested on three different commercial POFs using Rhodamine B as a fluorescent dopant. The dynamics of the diffusion process was studied in order to optimize the doping procedure in terms of the most suitable POF, doping time and conditions. Using the optimized procedure, a longer segment of fluorescent CD-POF was prepared and its performance was characterized. The fiber's potential for sensing and illumination applications was demonstrated and discussed. The proposed method represents a simple and inexpensive way to fabricate custom, short to medium length CD-POFs with various dopants.

  10. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models, optimizing their designs for minimum weight subject to multiple loading conditions and displacement and stress constraints. The process is generalized, however, so that an engineer could design other types of elements by adding to or modifying parts of the code.
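
    The structure of such an integration can be illustrated with a small stand-in: a closed-form two-bar truss replaces the NASTRAN analysis, and SciPy's SLSQP plays the role of the optimization code, minimizing weight subject to stress and displacement constraints over two load cases. All numbers below are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    E, rho = 70e9, 2700.0            # aluminium: modulus [Pa], density [kg/m^3]
    L, theta = 1.0, np.radians(45)   # bar length [m] and inclination
    loads = [10e3, 25e3]             # two loading conditions [N]
    sigma_max, delta_max = 120e6, 2e-3

    def analyze(area, P):
        """Stand-in 'FE analysis': bar stress and nodal displacement."""
        N = P / (2 * np.sin(theta))                  # bar axial force
        sigma = N / area
        delta = P * L / (2 * E * area * np.sin(theta) ** 2)
        return sigma, delta

    weight = lambda a: 2 * rho * L * a[0]            # objective: total mass

    cons = []
    for P in loads:                                   # g(x) >= 0 form
        cons.append({'type': 'ineq',
                     'fun': lambda a, P=P: sigma_max - analyze(a[0], P)[0]})
        cons.append({'type': 'ineq',
                     'fun': lambda a, P=P: delta_max - analyze(a[0], P)[1]})

    res = minimize(weight, x0=[1e-3], bounds=[(1e-6, 1e-2)],
                   constraints=cons, method='SLSQP')
    print(f"optimal area {res.x[0]*1e6:.1f} mm^2, mass {res.fun:.3f} kg")
    ```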

  11. Tracking and Reporting Outcomes Of Procedural Sedation (TROOPS): Standardized Quality Improvement and Research Tools from the International Committee for the Advancement of Procedural Sedation.

    PubMed

    Roback, M G; Green, S M; Andolfatto, G; Leroy, P L; Mason, K P

    2018-01-01

    Many hospitals, medical and dental clinics, and offices routinely monitor their procedural sedation practices, tracking adverse events, outcomes and efficacy in order to optimize sedation delivery and practice. Currently, there are substantial differences between settings in the content, collection, definition and interpretation of such sedation outcomes, with resulting widespread variation in reporting. With the objective of reducing such disparities, the International Committee for the Advancement of Procedural Sedation has herein developed a multidisciplinary, consensus-based, standardized tool intended to be applicable to all types of sedation providers in all locations worldwide. This tool is amenable to inclusion in either a paper or an electronic medical record. An additional, parallel research tool is presented to promote consistency and standardized data collection for procedural sedation investigations. Copyright © 2017. Published by Elsevier Ltd.

  12. Carotid Artery Stenting – Strategies to Improve Procedural Performance and Reduce the Learning Curve

    PubMed Central

    Van Herzeele, Isabelle

    2013-01-01

    Carotid artery stenting (CAS) remains an appealing intervention to reduce stroke risk because of its minimally invasive nature. Nevertheless, landmark randomised controlled trials have not been able to resolve the controversies surrounding this complex procedure, as the peri-operative stroke risk in a non-selected patient population still seems to be higher after CAS than after carotid endarterectomy. What is more, these trials have highlighted that patient outcome after CAS is influenced by patient- and operator-dependent factors. The CAS procedure exhibits a definite learning curve, resulting in higher complication rates if the procedure is performed by inexperienced interventionists or in low-volume centres. This article outlines strategies to improve the performance of physicians carrying out the CAS procedure by means of proficiency-based training, credentialing, virtual reality rehearsal and optimal patient selection. PMID:29588751

  13. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.

    2013-12-01

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two empirical values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the exit hub radius R6h, with the aim of maximizing the rotor total-to-static isentropic efficiency. This method is based on experience and on test results obtained with air as the working fluid, so it neither provides a mathematical optimum to guide the optimization of the geometric parameters nor accounts for the real-gas effects of the organic working fluid, which must be taken into consideration in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established with the aim of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematical optimal solution that minimizes the exit kinetic energy is deduced and can be used to design and optimize the exit shroud/hub radii and the exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. The method was used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates its effectiveness. However, the internal passage loss increases from 7.9% to 9.4%, so the only practical way to account for the influence of the geometric parameters on the internal passage loss is to prescribe empirical ranges for them, such as the recommended ranges of 0.3 to 0.4 for γ and 0.5 to 0.6 for τ.
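
    A heavily simplified sketch of the underlying idea is given below: the exit kinetic energy fraction is written as a function of (R6t, R6h, β6) under a constant-exit-density, mean-radius model and minimized numerically. The thermodynamic numbers are illustrative placeholders, not the paper's R123 design values, and the paper derives an analytical optimum rather than calling a numerical optimizer.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    mdot, rho6 = 1.5, 9.0           # mass flow [kg/s], exit density [kg/m^3]
    omega = 2 * np.pi * 400.0       # rotor speed [rad/s]
    dh_ideal = 25e3                 # ideal total-to-static enthalpy drop [J/kg]

    def exit_ke_fraction(x):
        R6t, R6h, beta6 = x                       # beta6 in radians (< 0)
        A6 = np.pi * (R6t**2 - R6h**2)            # exit annulus area
        Cm = mdot / (rho6 * A6)                   # meridional velocity
        U6 = omega * 0.5 * (R6t + R6h)            # blade speed, mean radius
        C_theta = U6 + Cm * np.tan(beta6)         # absolute swirl component
        return 0.5 * (Cm**2 + C_theta**2) / dh_ideal

    res = minimize(exit_ke_fraction, x0=[0.06, 0.02, np.radians(-50)],
                   bounds=[(0.04, 0.08), (0.015, 0.035),
                           (np.radians(-75), 0.0)],
                   constraints=[{'type': 'ineq',
                                 'fun': lambda x: x[0] - x[1] - 0.01}])
    print(res.x, f"exit KE loss = {100 * res.fun:.2f}% of ideal work")
    ```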

  14. Utilization of group theory in studies of molecular clusters

    NASA Astrophysics Data System (ADS)

    Ocak, Mahir E.

    The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure already known for direct product groups was extended to the case of semidirect product groups. Using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, is developed. In the MBR method, the calculation starts with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of primitive basis functions. An optimized basis for each identical monomer is then generated from the optimized basis of this monomer. From the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained using this basis. Since each monomer is described by an optimized basis that is much smaller than the primitive basis from which it is generated, the MBR method leads to an exponential reduction in the size of the basis required for the calculations. Application of the MBR method is illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis shows that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers; it can thus be used for studying many-body terms and for deriving accurate potential surfaces.
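
    The contraction idea behind MBR can be demonstrated numerically on a stand-in system, two identical coupled anharmonic oscillators: the monomer problem is solved once, its lowest eigenfunctions serve as the optimized basis for both monomers, and the dimer Hamiltonian is assembled in the small product basis.

    ```python
    import numpy as np

    n_grid, L = 201, 10.0                    # primitive basis: grid points
    x = np.linspace(-L / 2, L / 2, n_grid)
    dx = x[1] - x[0]

    # 1D monomer Hamiltonian by finite differences (hbar = m = 1).
    T = (-0.5 / dx**2) * (np.diag(np.ones(n_grid - 1), 1)
                          + np.diag(np.ones(n_grid - 1), -1)
                          - 2.0 * np.eye(n_grid))
    V = np.diag(0.5 * x**2 + 0.05 * x**4)    # anharmonic monomer potential
    h = T + V

    # Step 1: optimize a small basis for one monomer ...
    eps, phi = np.linalg.eigh(h)
    n_opt = 12
    phi = phi[:, :n_opt]                      # contracted monomer basis

    # Step 2: ... reuse it for the identical second monomer and build the
    # dimer Hamiltonian in the product basis (coupling lambda * x1 * x2).
    lam = 0.10
    h_c = np.diag(eps[:n_opt])                # monomer H, contracted basis
    X_c = phi.T @ np.diag(x) @ phi            # position operator, contracted
    I = np.eye(n_opt)
    H_dimer = np.kron(h_c, I) + np.kron(I, h_c) + lam * np.kron(X_c, X_c)

    E = np.linalg.eigvalsh(H_dimer)[:5]
    print("lowest dimer levels (144-function contracted basis):", E)
    # The 201 x 201 primitive product basis (40401 functions) should give
    # essentially the same low-lying levels at far greater cost.
    ```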

  15. Inviscid Design of Hypersonic Wind Tunnel Nozzles for a Real Gas

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A straightforward procedure has been developed to quickly determine an inviscid design of a hypersonic wind tunnel nozzle when the test gas is both calorically and thermally imperfect. This real gas procedure divides the nozzle into four distinct parts: subsonic, throat to conical, conical, and turning flow regions. The design process is greatly simplified by treating the imperfect gas effects only in the source flow region. This simplification can be justified for a large class of hypersonic wind tunnel nozzle design problems. The final nozzle design is obtained either by applying a classical boundary layer correction or by using the inviscid design as the starting point for a viscous design optimization based on computational fluid dynamics. An example of a real gas nozzle design is used to illustrate the method. The accuracy of the real gas design procedure is shown to compare favorably with an ideal gas design based on computed flow field solutions.

  16. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1993-01-01

    Thermal stress analyses are an important aspect of the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures, which in turn necessitates thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of the thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented using a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine the critical experimental parameters that optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.

  17. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Scott, Elaine P.

    1993-12-01

    Thermal stress analyses are an important aspect of the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures, which in turn necessitates thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of the thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented using a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine the critical experimental parameters that optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.

  18. Evidence-Based Design of Fixed-Dose Combinations: Principles and Application to Pediatric Anti-Tuberculosis Therapy.

    PubMed

    Svensson, Elin M; Yngman, Gunnar; Denti, Paolo; McIlleron, Helen; Kjellsson, Maria C; Karlsson, Mats O

    2018-05-01

    Fixed-dose combination formulations, in which several drugs are included in one tablet, are important for the implementation of many long-term multidrug therapies. The selection of optimal dose ratios and tablet content of a fixed-dose combination and the design of individualized dosing regimens is a complex task requiring multiple simultaneous considerations. In this work, a methodology for the rational design of a fixed-dose combination was developed and applied to the case of a three-drug pediatric anti-tuberculosis formulation individualized by body weight. The optimization methodology synthesizes information about the intended-use population, the pharmacokinetic properties of the drugs, therapeutic targets, and practical constraints. A utility function is included to penalize deviations from the targets, and a sequential estimation procedure was developed for stable estimation of the break-points for individualized dosing. The suggested optimized pediatric anti-tuberculosis fixed-dose combination was compared with the recently launched World Health Organization-endorsed formulation. The optimized fixed-dose combination included 15%, 36%, and 16% higher amounts of rifampicin, isoniazid, and pyrazinamide, respectively. The optimized fixed-dose combination is expected to result in overall less deviation from the therapeutic targets based on adult exposure and substantially fewer children with underexposure (below half the target). This design tool can aid the implementation of evidence-based formulations, integrating available knowledge and practical considerations to optimize drug exposures and thereby treatment outcomes.
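
    A toy version of the break-point part of this design problem is sketched below: the weight thresholds at which children move from one to two to three tablets are chosen to minimize an asymmetric penalty on deviation from a mg/kg target, with underexposure penalized more heavily. Tablet content, target, weight distribution, and the one-shot (rather than sequential) estimation are all illustrative simplifications.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    weights = rng.gamma(shape=9.0, scale=1.8, size=5000)  # child weights [kg]
    tablet_mg = 75.0                                      # drug per tablet
    target_mg_per_kg = 15.0

    def penalty(breakpoints):
        b1, b2 = np.sort(breakpoints)
        n_tab = 1 + (weights >= b1).astype(int) + (weights >= b2).astype(int)
        dose_per_kg = n_tab * tablet_mg / weights
        dev = np.log(dose_per_kg / target_mg_per_kg)
        # Asymmetric utility: underexposure penalized twice as heavily.
        return np.mean(np.where(dev < 0, 2.0 * dev**2, dev**2))

    res = minimize(penalty, x0=[8.0, 16.0], method='Nelder-Mead')
    print("optimized break-points [kg]:", np.sort(res.x))
    print("mean penalized deviation:", res.fun)
    ```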

  19. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced global memory access, and texture memory optimization. In particular, the IDWT is significantly accelerated by rewriting the serial 2D (two-dimensional) IDWT as parallel 1D IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part runs more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speedup compared to the serial CPU method.
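
    The key IDWT restructuring relies on separability: a 2D inverse transform factors into independent 1D transforms along rows and then columns, each of which maps naturally onto a CUDA thread block. The NumPy sketch below demonstrates the separability itself for a single-level Haar transform (illustrative; the actual codec's wavelet and GPU kernels are not reproduced).

    ```python
    import numpy as np

    def idwt1d_haar(approx, detail):
        """Inverse single-level 1D Haar step along the last axis."""
        out = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],))
        out[..., 0::2] = (approx + detail) / np.sqrt(2)
        out[..., 1::2] = (approx - detail) / np.sqrt(2)
        return out

    def idwt2d_haar(LL, LH, HL, HH):
        """2D inverse as two separable 1D passes; on a GPU every row or
        column is an independent 1D problem for one thread block."""
        a = idwt1d_haar(LL, LH)            # undo row transform, low band
        d = idwt1d_haar(HL, HH)            # undo row transform, high band
        return idwt1d_haar(a.T, d.T).T     # undo column transform

    # Round-trip check against a hand-rolled forward transform.
    img = np.random.default_rng(0).random((8, 8))
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)       # columns, low
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)       # columns, high
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    assert np.allclose(idwt2d_haar(LL, LH, HL, HH), img)
    print("2D IDWT == two 1D passes: OK")
    ```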

  20. Development of a novel naphthoic acid ionic liquid and its application in "no-organic solvent microextraction" for determination of triclosan and methyltriclosan in human fluids and the method optimization by central composite design.

    PubMed

    Wang, Hui; Gao, Jiajia; Yu, Nana; Qu, Jingang; Fang, Fang; Wang, Huili; Wang, Mei; Wang, Xuedong

    2016-07-01

    In traditional ionic liquid (IL)-based microextraction, hydrophobic and hydrophilic ILs are often used as the extractant and disperser, respectively, but the functional possibilities of ILs are not exploited in the microextraction procedure itself. Herein, we introduced 1-naphthoic acid into the imidazolium ring to synthesize a novel ionic liquid, 1-butyl-3-methylimidazolium naphthoic acid salt ([C4MIM][NPA]), whose structure was characterized by IR, (1)H NMR and MS. On the basis of its acidic character and lower solubility than common [CnMIM][BF4], it was used together with [C4MIM][BF4] as a mixed dispersive solvent in "functionalized ionic liquid-based no organic solvent microextraction (FIL-NOSM)". The use of [C4MIM][NPA] in the FIL-NOSM procedure has two clear advantages: (1) it promotes a non-polar environment and increases the volume of the sedimented phase, and thus enhances the extraction recoveries of triclosan (TCS) and methyltriclosan (MTCS) by more than 10%; and (2) because of its acidic character, it acts as a pH modifier, avoiding an extra pH adjustment step. By combining single-factor optimization and central composite design, the main factors of the FIL-NOSM method were optimized. Under the optimal conditions, the relative recoveries of TCS and MTCS reached 98.60-106.09%, and their LODs were as low as 0.12-0.15 µg L(-1) in plasma and urine samples. In total, this [C4MIM][NPA]-based FIL-NOSM method provided high extraction efficiency, required less pretreatment time, and used no organic solvent. To the best of our knowledge, this is the first application of a [C4MIM][NPA]-based microextraction method for the simultaneous quantification of trace TCS and MTCS in human fluids. Copyright © 2016 Elsevier B.V. All rights reserved.
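
    For reference, the sketch below generates the kind of central composite design used for such an optimization: a generic rotatable two-factor CCD in coded units, decoded to hypothetical physical ranges. The factor names and ranges are placeholders, not the paper's actual settings.

    ```python
    import numpy as np
    from itertools import product

    def ccd(k, n_center=3):
        """Rotatable CCD for k factors: 2^k factorial + 2k axial + centers."""
        factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
        alpha = (2.0 ** k) ** 0.25                 # rotatability condition
        axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
        center = np.zeros((n_center, k))
        return np.vstack([factorial, axial, center])

    design = ccd(2)
    # Decode to physical units, e.g. extraction time [min] and pH
    # (hypothetical ranges, coded -1/+1 at the range ends):
    lows, highs = np.array([2.0, 4.0]), np.array([10.0, 8.0])
    mid, half = (lows + highs) / 2, (highs - lows) / 2
    print(np.column_stack([design, mid + design * half]))
    ```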

  1. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, including loss of life and property. Predicting locations susceptible to shallow landslides is a complex task involving many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical models or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis, integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification, which computes eight goodness-of-fit (GOF) indices by comparing model results and measured data pixel by pixel. Moreover, the package integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of the models and model parameters according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) evaluation of GOF robustness by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for this test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.

  2. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiometric pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation, and point cloud filtering. The image radiometric pre-processing is used to reduce the effects of inherent radiometric problems and to optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure, and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales over different land-cover types. The accuracy evaluation is based on a comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.

  3. Two-dimensional solid-phase extraction strategy for the selective enrichment of aminoglycosides in milk.

    PubMed

    Shen, Aijin; Wei, Jie; Yan, Jingyu; Jin, Gaowa; Ding, Junjie; Yang, Bingcheng; Guo, Zhimou; Zhang, Feifang; Liang, Xinmiao

    2017-03-01

    An orthogonal two-dimensional solid-phase extraction strategy was established for the selective enrichment of three aminoglycosides, spectinomycin, streptomycin, and dihydrostreptomycin, in milk. A reversed-phase liquid chromatography material (C18) and a weak cation-exchange material (TGA) were integrated in a single solid-phase extraction cartridge. The feasibility of the two-dimensional clean-up procedure, comprising two-step adsorption, two-step rinsing, and two-step elution, was systematically investigated. Based on the orthogonality of the reversed-phase and weak cation-exchange procedures, the two-dimensional solid-phase extraction strategy could minimize interference from the hydrophobic matrix encountered in traditional reversed-phase solid-phase extraction. In addition, the high ionic strength of the extracts could be effectively removed before the second, weak cation-exchange dimension of solid-phase extraction. Combined with liquid chromatography and tandem mass spectrometry, the optimized procedure was validated according to European Union Commission Decision 2002/657/EC. Good performance was achieved in terms of linearity, recovery, precision, decision limit, and detection capability in milk. Finally, the optimized two-dimensional clean-up procedure, combined with liquid chromatography and tandem mass spectrometry, was successfully applied to the rapid monitoring of aminoglycoside residues in milk. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Optimal Roux-en-Y reconstruction after distal gastrectomy for early gastric cancer as assessed using the newly developed PGSAS-45 scale.

    PubMed

    Kawahira, Hiroshi; Kodera, Yasuhiro; Hiki, Naoki; Takahashi, Masazumi; Itoh, Seiji; Mitsumori, Norio; Kawashima, Yoshiyuki; Namikawa, Tsutomu; Inada, Takao; Nakada, Koji

    2015-10-01

    The optimal surgical procedure for distal gastrectomy with Roux-en-Y reconstruction (DGRY) remains to be determined. Recently, a self-report assessment instrument, the Postgastrectomy Syndrome Assessment Scale-45 (PGSAS-45), was compiled to evaluate symptoms, the living status and the quality of life of patients who have undergone gastrectomy. We used this scale to evaluate procedures used for DGRY. The subjects included 475 patients who underwent DGRY for stage IA/IB gastric cancer. We evaluated whether the size of the remnant stomach, length of the Roux limb, reconstruction route and anastomotic procedure affected the patients' symptoms, living status and quality of life assessed using the PGSAS-45. Patients with a residual stomach of more than half had significantly worse esophageal reflux scores than the patients with a smaller residual stomach (P = 0.0462); a residual stomach of one-third or one-fourth was favorable. A shorter length of the Roux limb was shown to be preferable to a longer Roux limb based on the results of the PGSAS-45. In addition, antecolic reconstruction and the anastomotic procedure using a linear stapler were found to be more favorable. The size of the remnant stomach and the length and route of the Roux limb significantly influence the patient-reported DGRY outcomes.

  5. Geometrical Optimization Approach to Isomerization: Models and Limitations.

    PubMed

    Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R

    2017-11-02

    We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations in the optimization procedure, such that the method rediscovers the pump-dump mechanism.

  6. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure for selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) that is applicable to all kinds of features (point, line, and areal features); however, linear features, due to their unique characteristics, are of particular interest in this investigation. The method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized: ones indicate the presence of a pair of conjugate lines as a GCL and zeros their absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in that chromosome). A limited number of check points (CPs) are then used to evaluate the root mean square error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and positions of the ones in the best chromosome indicate the GCLs selected from among all conjugate lines. To evaluate the proposed method, a GeoEye image and an Ikonos image over different areas of Iran are used. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a five-fold accuracy improvement (to pixel-level accuracy) and demonstrates the strength of the proposed method. To prevent over-parametrization errors in a traditional RFM due to the selection of a large number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of combining the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, improving reliability, and reducing systematic errors. These results also demonstrate the high potential of linear features as reliable control features for reaching sub-pixel accuracy in registration applications.
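
    A compact GA of the type described can be sketched on a simplified stand-in problem, with bit-string chromosomes selecting control points, an affine transform in place of the line-based RFM, and check-point RMSE as the fitness; the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_ctrl, n_check = 40, 15
    src = rng.uniform(0, 1000, (n_ctrl + n_check, 2))
    A_true = np.array([[1.01, 0.02], [-0.015, 0.99]])
    dst = src @ A_true.T + [5.0, -3.0]
    dst[:n_ctrl] += rng.normal(0, 0.3, (n_ctrl, 2)) \
        + (rng.random((n_ctrl, 1)) < 0.2) * rng.normal(0, 8, (n_ctrl, 2))

    def fitness(mask):
        """Fit an affine model on the selected GCPs; score on check points."""
        if mask.sum() < 4:
            return 1e9
        Xs = np.hstack([src[:n_ctrl][mask], np.ones((mask.sum(), 1))])
        coef, *_ = np.linalg.lstsq(Xs, dst[:n_ctrl][mask], rcond=None)
        Xc = np.hstack([src[n_ctrl:], np.ones((n_check, 1))])
        return np.sqrt(np.mean((Xc @ coef - dst[n_ctrl:]) ** 2))

    pop = rng.random((60, n_ctrl)) < 0.5          # binary chromosomes
    for gen in range(100):
        fit = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(fit)[:30]]       # truncation selection
        cut = rng.integers(1, n_ctrl, 30)         # one-point crossover
        kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 30][c:]]
                         for i, c in enumerate(cut)])
        kids ^= rng.random(kids.shape) < 0.01     # bit-flip mutation
        pop = np.vstack([parents, kids])
    best = pop[np.argmin([fitness(c) for c in pop])]
    print("selected", best.sum(), "GCPs, check-point RMSE:",
          round(fitness(best), 3))
    ```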

  7. Goals and Objectives to Optimize the Value of an Acute Pain Service in Perioperative Pain Management.

    PubMed

    Le-Wendling, Linda; Glick, Wesley; Tighe, Patrick

    2017-12-01

    As newer pharmacologic and procedural interventions, technologies, and outcome data in pain management become available, effective acute pain management will require a dedicated Acute Pain Service (APS) to help determine the optimal pain management plan for each patient. Goals for pain management must take into consideration the side effect profiles of drugs and the potential complications of procedural interventions. Multiple-objective optimization is the combination of multiple different objectives for acute pain management: the simple use of opioids, for example, can reduce all pain to minimal levels, but at what cost to the patient, the medical system, and public health as a whole? Many models for an APS exist based on personnel skills, knowledge, and experience, but effective use of an APS will also require the allocation of time, space, financial, and personnel resources, with clear objectives and a feedback mechanism to guide changes to acute pain medicine practices as the medical field constantly evolves. Physician-based practices have the advantage of developing protocols for the management of low-variability, high-occurrence scenarios in addition to tailoring care to individual patients in high-variability, low-occurrence scenarios. Frequent feedback and data collection/assessment of patient outcomes are essential in evaluating the efficacy of the APS's interventions in improving patient outcomes in the acute and perioperative setting.

  8. Structural design of composite rotor blades with consideration of manufacturability, durability, and manufacturing uncertainties

    NASA Astrophysics Data System (ADS)

    Li, Leihong

    A modular structural design methodology for composite blades is developed. This design method can be used to design composite rotor blades with sophisticated cross-sectional geometries. It hierarchically decomposes the highly coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted based on multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used to obtain the equivalent one-dimensional beam properties. Compared with traditional design methodologies, the proposed method is more efficient and accurate. The proposed method is then used to study three design problems that have not been investigated before. The first adds manufacturing constraints to the design optimization. The introduction of manufacturing constraints complicates the optimization process, but a design that satisfies them benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines fatigue analysis with the optimization process; the durability or fatigue analysis employs a strength-based model, and the design is subject to stiffness, frequency, and durability constraints. Finally, the impact of manufacturing uncertainty on rotor blade aeroelastic behavior is investigated, and a probabilistic design method is proposed to control the impact of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.

  9. Retrospective review of Contura HDR breast cases to improve our standardized procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iftimia, Ileana, E-mail: Ileana.n.iftimia@lahey.org; Cirino, Eileen T.; Ladd, Ron

    2013-07-01

    To retrospectively review our first 20 Contura high dose rate breast cases to improve and refine our standardized procedure and checklists. We prepared in advance checklists for all steps, developed an in-house Excel spreadsheet for second checking the plan, and generated a procedure for efficient contouring and a set of optimization constraints to meet the dose volume histogram criteria. Templates were created in our treatment planning system for structures, isodose levels, optimization constraints, and plan report. This study reviews our first 20 high dose rate Contura breast treatment plans. We followed our standardized procedure for contouring, planning, and second checking. The established dose volume histogram criteria were successfully met for all plans. For the cases studied here, the balloon-skin and balloon-ribs distances ranged between 5 and 43 mm and 1 and 33 mm, respectively; the air/seroma volume to PTV_Eval volume ratio was ≤ 5.5% (allowed ≤ 10%); asymmetry < 1.2 mm (goal ≤ 2 mm); PTV_Eval V90% ≥ 97.6%; PTV_Eval V95% ≥ 94.9%; skin maximum dose ≤ 98% Rx; ribs maximum dose ≤ 137% Rx; V150% ≤ 29.8 cc; V200% ≤ 7.8 cc; the total dwell time ranged from 225.4 to 401.9 seconds; and the second check agreement was within 3%. Based on this analysis, more appropriate ranges for the total dwell time and balloon diameter tolerance were found. Three major problems were encountered: balloon migration toward the skin for small balloon-to-skin distances, lumen obstruction, and length change for the flexible balloon. Solutions were found for these issues, and our standardized procedure and checklists were updated accordingly. Based on our review of these cases, the use of checklists resulted in consistent results, indicating good coverage of the target without sacrificing the critical structures. This review helped us to refine our standardized procedure and update our checklists.

  10. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem with two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and to automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization. PMID:27330230
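
    A much-simplified sequential (Fedorov-type) relative of this two-level problem is easy to sketch: the inner problem is a weighted least-squares fit of the rival model, and the outer step moves design mass toward the worst-discriminated point. The full SIP machinery of the paper is not reproduced; the example discriminates a known quadratic from a rival straight line on [-1, 1], a classic test case.

    ```python
    import numpy as np

    grid = np.linspace(-1, 1, 401)
    eta1 = grid ** 2                                 # 'true' model response
    F = np.column_stack([np.ones_like(grid), grid])  # rival model basis

    w = np.zeros(grid.size)
    w[[50, 250, 350]] = 1 / 3                        # non-optimal start
    for k in range(2000):
        # inner problem: best rival fit under the current design measure
        W = np.diag(w)
        theta = np.linalg.solve(F.T @ W @ F, F.T @ W @ eta1)
        sq_err = (eta1 - F @ theta) ** 2
        # outer step: shift mass toward the worst-discriminated point
        x_star = np.argmax(sq_err)
        alpha = 1.0 / (k + 3)
        w *= 1 - alpha
        w[x_star] += alpha

    support = np.flatnonzero(w > 1e-3)
    print("support points:", grid[support])
    print("weights:", np.round(w[support], 3))
    # Converges toward the classical T-optimal design supported on
    # {-1, ~0, 1} with weights near 1/4, 1/2, 1/4.
    ```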

  11. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem with two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and to automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization.

  12. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. In line with this idea, the identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which solving a coupled system is unavoidable at each iteration, we generate these incompatible fields via two linear solves. A simple yet effective penalty-based approach is followed to incorporate the measured data. The penalization parameter not only allows corrupted measurement data to be incorporated weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic materials. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  13. Manufacturing of dental pulp cell-based products from human third molars: current strategies and future investigations

    PubMed Central

    Ducret, Maxime; Fabre, Hugo; Degoul, Olivier; Atzeni, Gianluigi; McGuckin, Colin; Forraz, Nico; Alliot-Licht, Brigitte; Mallein-Gerin, Frédéric; Perrier-Groult, Emeline; Farges, Jean-Christophe

    2015-01-01

    In recent years, mesenchymal cell-based products have been developed to improve surgical therapies aimed at repairing human tissues. In this context, the tooth has recently emerged as a valuable source of stem/progenitor cells for regenerating orofacial tissues, given the easy access to pulp tissue and the high differentiation potential of dental pulp mesenchymal cells. International guidelines now recommend the use of standardized procedures for cell isolation, storage, and expansion in culture to ensure optimal reproducibility, efficacy, and safety when cells are used for clinical application. However, most manufacturing procedures for dental pulp cell-based medicinal products may not be fully satisfactory, since they could alter the cells' biological properties and the quality of the derived products. Cell isolation, enrichment, and cryopreservation procedures, combined with long-term expansion in culture media containing xenogeneic and allogeneic components, are known to affect cell phenotype, viability, proliferation, and differentiation capacities. This article focuses on current manufacturing strategies for dental pulp cell-based medicinal products and proposes a new protocol to improve the efficiency, reproducibility, and safety of these strategies. PMID:26300779

  14. Optimum Design of High-Speed Prop-Rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; McCarthy, Thomas Robert

    1993-01-01

    An integrated multidisciplinary optimization procedure is developed for application to rotary wing aircraft design. The necessary disciplines such as dynamics, aerodynamics, aeroelasticity, and structures are coupled within a closed-loop optimization process. The procedure developed is applied to address two different problems. The first problem considers the optimization of a helicopter rotor blade and the second problem addresses the optimum design of a high-speed tilting proprotor. In the helicopter blade problem, the objective is to reduce the critical vibratory shear forces and moments at the blade root, without degrading rotor aerodynamic performance and aeroelastic stability. In the case of the high-speed proprotor, the goal is to maximize the propulsive efficiency in high-speed cruise without deteriorating the aeroelastic stability in cruise and the aerodynamic performance in hover. The problems studied involve multiple design objectives; therefore, the optimization problems are formulated using multiobjective design procedures. A comprehensive helicopter analysis code is used for the rotary wing aerodynamic, dynamic and aeroelastic stability analyses and an algorithm developed specifically for these purposes is used for the structural analysis. A nonlinear programming technique coupled with an approximate analysis procedure is used to perform the optimization. The optimum blade designs obtained in each case are compared to corresponding reference designs.

  15. Patient-specific dosimetric endpoints based treatment plan quality control in radiotherapy.

    PubMed

    Song, Ting; Staub, David; Chen, Mingli; Lu, Weiguo; Tian, Zhen; Jia, Xun; Li, Yongbao; Zhou, Linghong; Jiang, Steve B; Gu, Xuejun

    2015-11-07

    In intensity modulated radiotherapy (IMRT), the optimal plan for each patient is specific to that patient's unique anatomy. To achieve such a plan, patient-specific dosimetric goals reflecting each patient's anatomy should be defined and adopted in the treatment planning procedure for plan quality control. This study develops such a personalized treatment plan quality control tool by predicting patient-specific dosimetric endpoints (DEs). The incorporation of patient-specific DEs is realized by a multi-OAR geometry-dosimetry model capable of predicting optimal DEs based on the individual patient's geometry. The overall quality of a treatment plan is then judged with a numerical treatment plan quality indicator and characterized as optimal or suboptimal. Taking advantage of clinically available prostate volumetric modulated arc therapy (VMAT) treatment plans, we built and evaluated the proposed plan quality control tool. Using the developed tool, six of twenty evaluated plans were identified as suboptimal. After re-optimization, these suboptimal plans achieved better OAR dose sparing without sacrificing PTV coverage, and the dosimetric endpoints of the re-optimized plans agreed well with the model-predicted values, which validates the predictive ability of the proposed tool. In conclusion, the developed tool is able to accurately predict the optimally achievable DEs of multiple OARs, identify suboptimal plans, and guide plan optimization. It is a useful tool for achieving patient-specific treatment plan quality control.

  16. Bypass Ratio: The US Air Force and Light-Attack Aviation

    DTIC Science & Technology

    2013-06-01

    for making recommendations which optimize base activity and its impact on the environment. Local and state politics can keep a base open even if it is...for the region, and this conduct can affect global commerce. Such disruption and destabilization in turn can have large impacts on the US diplomatic... IFR ) operations, emergency procedures, low-level flight and two-ship formation flight by this stage. Once track selection occurs, the light-attack

  17. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful; it gives an indication of the image of a point object. In this paper, the spot size radius is used as the optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed by simulation results. The proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
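
    A minimal firefly loop of this kind is sketched below, tuning two SVR hyperparameters (log10 C and log10 gamma) by cross-validated error on synthetic data with scikit-learn; the FFA constants are illustrative, and the paper optimizes a lens-design criterion rather than a regression benchmark.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, (120, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(120)

    lo, hi = np.array([-2.0, -3.0]), np.array([3.0, 1.0])  # log10 box

    def brightness(p):
        """Higher is better: negative CV mean-squared error of SVR(p)."""
        model = SVR(C=10.0 ** p[0], gamma=10.0 ** p[1])
        return cross_val_score(model, X, y, cv=3,
                               scoring='neg_mean_squared_error').mean()

    n, beta0, gam, alpha = 8, 1.0, 1.0, 0.2
    pos = rng.uniform(lo, hi, (n, 2))
    bright = np.array([brightness(p) for p in pos])

    for _ in range(10):
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:      # firefly i moves toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    step = (beta0 * np.exp(-gam * r2) * (pos[j] - pos[i])
                            + alpha * rng.uniform(-0.5, 0.5, 2))
                    pos[i] = np.clip(pos[i] + step, lo, hi)
                    bright[i] = brightness(pos[i])
        alpha *= 0.9                            # cool the random walk

    best = pos[np.argmax(bright)]
    print(f"best C = {10 ** best[0]:.3g}, gamma = {10 ** best[1]:.3g}, "
          f"CV MSE = {-bright.max():.4f}")
    ```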

  18. The key role of extinction learning in anxiety disorders: behavioral strategies to enhance exposure-based treatments.

    PubMed

    Pittig, Andre; van den Berg, Linda; Vervliet, Bram

    2016-01-01

    Extinction learning is a major mechanism for fear reduction by means of exposure. Current research targets innovative strategies to enhance fear extinction and thereby optimize exposure-based treatments for anxiety disorders. This selective review updates novel behavioral strategies that may provide cutting-edge clinical implications. Recent studies provide further support for two types of enhancement strategies. Procedural enhancement strategies implemented during extinction training translate to how exposure exercises may be conducted to optimize fear extinction. These strategies mostly focus on a maximized violation of dysfunctional threat expectancies and on reducing context and stimulus specificity of extinction learning. Flanking enhancement strategies target periods before and after extinction training and inform optimal preparation and post-processing of exposure exercises. These flanking strategies focus on the enhancement of learning in general, memory (re-)consolidation, and memory retrieval. Behavioral strategies to enhance fear extinction may provide powerful clinical applications to further maximize the efficacy of exposure-based interventions. However, future replications, mechanistic examinations, and translational studies are warranted to verify long-term effects and naturalistic utility. Future directions also comprise the interplay of optimized fear extinction with (avoidance) behavior and motivational antecedents of exposure.

  19. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
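
    The economy of the adjoint-variable method is easy to demonstrate on a generic linear "flow solver": one extra linear solve yields the gradient of the objective with respect to all design variables at once, as the finite-difference check below confirms. The system here is an arbitrary stand-in, not the paper's nozzle-flow residual.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 8
    K = rng.standard_normal((n, n)) + n * np.eye(n)   # fixed, well-conditioned
    b = rng.standard_normal(n)
    a = rng.random(n)                                  # design variables

    def solve_state(a):
        """Residual R(u, a) = A(a) u - b = 0 with A(a) = K + diag(a)."""
        return np.linalg.solve(K + np.diag(a), b)

    u = solve_state(a)
    J = u.sum()                                        # objective J(u)

    # Adjoint solve: A(a)^T lam = dJ/du; then dJ/da_i = -lam^T (dA/da_i) u,
    # and dA/da_i selects the single diagonal entry (i, i).
    lam = np.linalg.solve((K + np.diag(a)).T, np.ones(n))
    grad_adjoint = -lam * u

    # Finite-difference check: n extra solves vs. one adjoint solve.
    eps = 1e-6
    grad_fd = np.array([(solve_state(a + eps * np.eye(n)[i]).sum() - J) / eps
                        for i in range(n)])
    print(np.max(np.abs(grad_adjoint - grad_fd)))      # small
    ```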

  20. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
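
    A runnable sketch of the procedure is given below for a univariate two-component mixture: step-size 1 recovers the standard successive-approximations (EM) update, and, per the result above, step-sizes between 0 and 2 also converge locally and can do so faster. Data and starting values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(2, 1, 400)])

    def em_step(params):
        """One standard EM update for (pi, mu1, mu2, sigma1, sigma2)."""
        pi, mu1, mu2, s1, s2 = params
        phi = lambda t, m, s: np.exp(-0.5 * ((t - m) / s) ** 2) / s
        r1 = pi * phi(x, mu1, s1)
        r1 = r1 / (r1 + (1 - pi) * phi(x, mu2, s2))    # responsibilities
        r2 = 1 - r1
        m1 = (r1 * x).sum() / r1.sum()
        m2 = (r2 * x).sum() / r2.sum()
        return np.array([r1.mean(), m1, m2,
                         np.sqrt((r1 * (x - m1) ** 2).sum() / r1.sum()),
                         np.sqrt((r2 * (x - m2) ** 2).sum() / r2.sum())])

    def iterate(omega, params, tol=1e-8, max_iter=5000):
        for k in range(max_iter):
            new = params + omega * (em_step(params) - params)  # relaxed step
            if np.max(np.abs(new - params)) < tol:
                return new, k
            params = new
        return params, max_iter

    start = np.array([0.5, -1.0, 1.0, 2.0, 2.0])
    for omega in (1.0, 1.5):
        est, iters = iterate(omega, start)
        print(f"omega={omega}: {iters} iterations, "
              f"estimates {np.round(est, 3)}")
    ```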

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  2. Optimization of proximity ligation assay (PLA) for detection of protein interactions and fusion proteins in non-adherent cells: application to pre-B lymphocytes.

    PubMed

    Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère

    2017-01-01

    Genetic abnormalities, including chromosomal translocations, have been described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and, consequently, major achievements in targeted therapy. We describe the optimization of the proximity ligation assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative method for detecting protein-protein colocalization that combines the advantages of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure in which the difficulty of retaining non-adherent hematological cells is overcome by traditional cytocentrifugation together with optimized buffers, changed incubation times, and modified washing steps. Further, we provide convincing negative and positive controls and demonstrate that the optimized PLA procedure is sensitive to the total protein level. The optimized procedure allows the detection of fusion proteins and their subcellular expression, as well as protein interactions, in non-adherent cells, and it can be readily applied to a wide range of non-adherent hematological cells, from cell lines to patients' cells. It therefore provides a new tool that can be adopted in a wide range of applications in the biological field.

  3. Detonation energies of explosives by optimized JCZ3 procedures

    NASA Astrophysics Data System (ADS)

    Stiel, Leonard I.; Baker, Ernest L.

    1998-07-01

    Procedures for computing the detonation properties of explosives have been extended to the calculation of detonation energies at adiabatic expansion conditions. The use of the JCZ3 equation of state with optimized Exp-6 potential parameters leads to smaller errors relative to JWL detonation energies than the other methods tested.

  4. Implementation and application of moving average as continuous analytical quality control instrument demonstrated for 24 routine chemistry assays.

    PubMed

    Rossum, Huub H van; Kemperman, Hans

    2017-07-26

    General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize MAs for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure, with the optimization graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days, and the MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During the evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated; the causes were ion-selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (a significant difference from the other analyzer) (10), non-human materials analyzed (2), extreme result(s) from a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, when an MA alarm required follow-up, a manageable number of MA alarms was generated, and these alarms proved valuable. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
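
    A bare-bones illustration of the underlying MA mechanism follows: patient results are averaged over a sliding window, alarm limits are set from the bias-free MA distribution, and a simulated analyzer bias triggers an alarm. Window size and limits would in practice be tuned per assay with the bias-detection simulation described above; all numbers here are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    baseline = rng.normal(140.0, 3.0, 2000)   # e.g. sodium results [mmol/L]
    window = 50

    # Set alarm limits from bias-free data (extreme percentiles of the MA).
    ma = np.convolve(baseline, np.ones(window) / window, mode='valid')
    lo, hi = np.quantile(ma, [0.0005, 0.9995])

    # New run with a +2 mmol/L analyzer bias starting at sample 600.
    run = rng.normal(140.0, 3.0, 1200)
    run[600:] += 2.0
    ma_run = np.convolve(run, np.ones(window) / window, mode='valid')
    alarms = np.flatnonzero((ma_run < lo) | (ma_run > hi))
    # ma_run[i] covers samples i .. i+window-1, hence the offset below.
    print(f"limits [{lo:.2f}, {hi:.2f}]; first alarm at sample "
          f"{alarms[0] + window if alarms.size else 'none'}")
    ```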

  5. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
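
    A numerical counterpart of the design problem is sketched below: Kp and Ki are tuned for an open-loop unstable first-order process dy/dt = a*y + b*u under constraints on the process variable, the input, and its rate of change. The paper's analytical design relations are replaced by brute-force constrained search, and the plant numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    a, b, dt, T = 0.5, 1.0, 0.01, 20.0         # unstable pole at +0.5
    y_max, u_max, du_max = 1.2, 5.0, 20.0      # operational constraints

    def simulate(gains):
        """ITAE tracking cost for a unit setpoint step, plus penalties
        for violating the three operational constraints."""
        Kp, Ki = gains
        y = ui = u_prev = 0.0
        cost = penalty = 0.0
        for k in range(int(T / dt)):
            e = 1.0 - y                         # setpoint error
            ui += Ki * e * dt                   # integral action
            u = Kp * e + ui
            penalty += max(0.0, abs(u) - u_max)
            penalty += max(0.0, abs(u - u_prev) / dt - du_max)
            penalty += max(0.0, y - y_max)
            y += dt * (a * y + b * u)           # Euler step of the plant
            cost += dt * (k * dt) * abs(e)      # ITAE
            u_prev = u
        return cost + 100.0 * penalty * dt

    res = differential_evolution(simulate, bounds=[(0.1, 20), (0.01, 20)],
                                 seed=0, maxiter=40, tol=1e-6)
    print("Kp, Ki =", np.round(res.x, 3), " cost =", round(res.fun, 4))
    ```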

  6. Riparian buffer design guidelines for water quality and wildlife habitat functions on agricultural landscapes in the Intermountain West

    Treesearch

    Craig W. Johnson; Susan Buffler

    2008-01-01

    Intermountain West planners, designers, and resource managers are looking for science-based procedures for determining buffer widths and management techniques that will optimize the benefits riparian ecosystems provide. This study reviewed the riparian buffer literature, including protocols used to determine optimum buffer widths for water quality and wildlife habitat...

  7. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data

    PubMed Central

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-01

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing improvements to the milling machine. Finally, the conclusions of this study are presented. PMID:28787882

  8. Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data.

    PubMed

    García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio

    2016-01-28

    Milling cutters are important cutting tools used in milling machines to perform milling operations, which are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC-MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC-MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear with a view to proposing improvements to the milling machine. Finally, the conclusions of this study are presented.
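
    A minimal sketch of the ABC search idea applied to hyperparameter tuning follows; the two tuned parameters and the cv_error objective are hypothetical stand-ins for the MARS training settings, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        lo, hi = np.array([2.0, 0.0]), np.array([40.0, 5.0])  # e.g. (max_terms, penalty)

        def cv_error(x):
            """Hypothetical cross-validation error surface for a MARS-like model."""
            max_terms, penalty = x
            return (max_terms - 18.0) ** 2 / 100.0 + (penalty - 2.0) ** 2

        def abc_search(n_sources=10, iters=60, limit=8):
            X = rng.uniform(lo, hi, size=(n_sources, 2))      # food sources
            f = np.array([cv_error(x) for x in X])
            stale = np.zeros(n_sources, dtype=int)
            for _ in range(iters):
                for phase in ("employed", "onlooker"):
                    if phase == "employed":
                        idx = np.arange(n_sources)
                    else:                                     # fitness-proportional picks
                        p = 1.0 / (1.0 + f)
                        idx = rng.choice(n_sources, size=n_sources, p=p / p.sum())
                    for i in idx:
                        k = (i + 1 + rng.integers(n_sources - 1)) % n_sources
                        phi = rng.uniform(-1, 1, size=2)
                        cand = np.clip(X[i] + phi * (X[i] - X[k]), lo, hi)
                        fc = cv_error(cand)
                        if fc < f[i]:
                            X[i], f[i], stale[i] = cand, fc, 0
                        else:
                            stale[i] += 1
                for i in np.flatnonzero(stale > limit):       # scouts re-initialize
                    X[i] = rng.uniform(lo, hi)
                    f[i], stale[i] = cv_error(X[i]), 0
            return X[f.argmin()], f.min()

        print(abc_search())   # should approach (18, 2) on this toy surface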

  9. Recent advances in integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.

    1992-01-01

    A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.

  10. Recent advances in multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.

    1992-01-01

    A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.

  11. Optimal control of CPR procedure using hemodynamic circulation model

    DOEpatents

    Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok

    2007-12-25

    A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
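
    A heavily simplified sketch of this framing is given below: a piecewise-constant pressure profile is optimized against a toy linear difference-equation "circulation". The dynamics matrix, input coupling and effort penalty are invented and not physiological; they only illustrate profile optimization over a difference-equation model.

        import numpy as np
        from scipy.optimize import minimize

        def mean_flow(profile):
            """Mean forward 'flow' of a toy two-state difference-equation model
            driven by external chest pressure (made-up dynamics)."""
            x = np.zeros(2)                       # [chamber pressure, flow]
            A = np.array([[0.90, -0.10],
                          [0.15,  0.85]])
            b = np.array([0.20, 0.05])
            flows = []
            for p in profile:
                x = A @ x + b * p
                flows.append(x[1])
            return np.mean(flows)

        n = 40                                    # one cycle, 40 time steps
        cost = lambda p: -mean_flow(p) + 0.2 * np.mean(p ** 2)  # effort penalty
        res = minimize(cost, x0=np.full(n, 0.5),
                       bounds=[(0.0, 1.0)] * n, method="L-BFGS-B")
        print(res.x.round(2))                     # optimized normalized profile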

  12. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The data bases obtained from the rotorcraft flight simulation program C81 and Myklestad mode shape program are analytically determined as a function of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain the effect of a particular cost function which is composed of several objective functions with different weighting factors for various mission requirements without any additional effort.

  13. Design and Optimization of Ultrasonic Wireless Power Transmission Links for Millimeter-Sized Biomedical Implants.

    PubMed

    Meng, Miao; Kiani, Mehdi

    2017-02-01

    Ultrasound has been recently proposed as an alternative modality for efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. This paper presents the theory and design methodology of ultrasonic WPT links that involve mm-sized receivers (Rx). For a given load (R_L) and powering distance (d), the optimal geometries of the transmitter (Tx) and Rx ultrasonic transducers, including their diameter and thickness, as well as the optimal operation frequency (f_c), are found through a recursive design procedure that maximizes the power transmission efficiency (PTE). First, a range of realistic f_c values is found based on the Rx thickness constraint. For a chosen f_c within the range, the diameter and thickness of the Rx transducer are then swept together to maximize PTE. Then, the diameter and thickness of the Tx transducer are optimized to maximize PTE. Finally, this procedure is repeated for different f_c values to find the optimal f_c and its corresponding transducer geometries that maximize PTE. A design example of an ultrasonic link is presented and optimized for WPT to a 1 mm³ implant, including a disk-shaped piezoelectric transducer on a silicon die. In simulations, a PTE of 2.11% at an f_c of 1.8 MHz was achieved for an R_L of 2.5 [Formula: see text] at [Formula: see text]. In order to validate our simulations, an ultrasonic link was optimized for a 1 mm³ piezoelectric transducer mounted on a printed circuit board (PCB), which led to simulated and measured PTEs of 0.65% and 0.66%, respectively, at an f_c of 1.1 MHz for an R_L of 2.5 [Formula: see text] at [Formula: see text].
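
    The recursive sweep can be sketched as nested one-dimensional searches. Here simulate_pte is a hypothetical placeholder for the field-simulation call, and all ranges and units are illustrative.

        import itertools
        import numpy as np

        def simulate_pte(fc, rx_diam, rx_thick, tx_diam, tx_thick):
            """Hypothetical stand-in for a simulation returning link PTE."""
            return -((fc - 1.8) ** 2 + (rx_diam - 1.0) ** 2 + (rx_thick - 0.2) ** 2
                     + (tx_diam - 8.0) ** 2 / 64 + (tx_thick - 1.0) ** 2)

        best = (-np.inf, None)
        for fc in np.linspace(1.0, 3.0, 9):       # realistic f_c range first
            # Sweep the Rx diameter and thickness together at this frequency...
            rx = max(itertools.product(np.linspace(0.5, 1.5, 6),
                                       np.linspace(0.1, 0.4, 4)),
                     key=lambda g: simulate_pte(fc, *g, 8.0, 1.0))
            # ...then optimize the Tx geometry for the chosen Rx
            tx = max(itertools.product(np.linspace(4.0, 12.0, 5),
                                       np.linspace(0.5, 2.0, 4)),
                     key=lambda g: simulate_pte(fc, *rx, *g))
            pte = simulate_pte(fc, *rx, *tx)
            if pte > best[0]:
                best = (pte, (fc, rx, tx))
        print(best[1])                            # f_c and geometries maximizing PTE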

  14. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
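
    A bare-bones PSO loop in the spirit of the search stage is sketched below; val_error stands in for training a DNN briefly and returning its validation error, and the two-dimensional configuration encoding is hypothetical.

        import numpy as np

        rng = np.random.default_rng(7)

        def val_error(x):
            """Hypothetical validation error for config x = [log10(lr), hidden units]."""
            return (x[0] + 3.0) ** 2 + (x[1] - 128.0) ** 2 / 1e4

        n, dim, iters = 20, 2, 50
        lo, hi = np.array([-6.0, 16.0]), np.array([-1.0, 512.0])
        X = rng.uniform(lo, hi, (n, dim))         # particle positions
        V = np.zeros((n, dim))                    # velocities
        P = X.copy()                              # personal bests
        pbest = np.array([val_error(x) for x in X])
        g = P[pbest.argmin()].copy()              # global best

        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)  # inertia + pulls
            X = np.clip(X + V, lo, hi)
            f = np.array([val_error(x) for x in X])
            better = f < pbest
            P[better], pbest[better] = X[better], f[better]
            g = P[pbest.argmin()].copy()

        print(g)    # gbest config: candidate for the final, longer training run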

  15. Design optimization of the sensor spatial arrangement in a direct magnetic field-based localization system for medical applications.

    PubMed

    Marechal, Luc; Shaohui Foong; Zhenglong Sun; Wood, Kristin L

    2015-08-01

    Motivated by the need for developing a neuronavigation system to improve the efficacy of intracranial surgical procedures, a localization system using passive magnetic fields for real-time monitoring of the insertion process of an external ventricular drain (EVD) catheter is conceived and developed. This system operates on the principle of measuring the static magnetic field of a magnetic marker using an array of magnetic sensors. An artificial neural network (ANN) is directly used for solving the inverse problem of magnetic dipole localization for improved efficiency and precision. As the accuracy of the localization system is highly dependent on the spatial locations of the sensors, an optimization framework for the design of such sensing assemblies, based on understanding and classification of experimental sensor characteristics as well as prior knowledge of the general trajectory of the localization pathway, is described and investigated in this paper. Both optimized and non-optimized sensor configurations were experimentally evaluated, and results show superior performance from the optimized configuration. While the approach presented here utilizes ventriculostomy as an illustrative platform, it can be extended to other medical applications that require localization inside the body.

  16. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants

    NASA Astrophysics Data System (ADS)

    Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.

    2015-10-01

    The geometry of a permanent prostate implant varies over time. Seeds can migrate and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian Algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.
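
    Step (I) can be sketched with an off-the-shelf Hungarian solver, pairing seed positions across two datasets by minimal total distance; the data below are synthetic.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_seeds(seeds_a, seeds_b):
            """Match each seed in dataset A to one in dataset B (e.g. CBCT to
            TRUS) by minimizing the summed pairwise distance."""
            cost = np.linalg.norm(seeds_a[:, None, :] - seeds_b[None, :, :], axis=2)
            rows, cols = linear_sum_assignment(cost)
            return list(zip(rows, cols)), cost[rows, cols]

        rng = np.random.default_rng(3)
        a = rng.uniform(0, 50, (60, 3))           # 60 seed positions (mm)
        b = rng.permutation(a + rng.normal(0, 1.0, a.shape))  # moved and reordered
        pairs, dists = link_seeds(a, b)
        print(len(pairs), dists.mean())           # 60 links, ~mm-scale distances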

  17. Outcomes from the Delphi process of the Thoracic Robotic Curriculum Development Committee.

    PubMed

    Veronesi, Giulia; Dorn, Patrick; Dunning, Joel; Cardillo, Giuseppe; Schmid, Ralph A; Collins, Justin; Baste, Jean-Marc; Limmer, Stefan; Shahin, Ghada M M; Egberts, Jan-Hendrik; Pardolesi, Alessandro; Meacci, Elisa; Stamenkovic, Sasha; Casali, Gianluca; Rueckert, Jens C; Taurchini, Mauro; Santelmo, Nicola; Melfi, Franca; Toker, Alper

    2018-06-01

    As the adoption of robotic procedures becomes more widespread, additional risk related to the learning curve can be expected. This article reports the results of a Delphi process to define procedures to optimize robotic training of thoracic surgeons and to promote safe performance of established robotic interventions as, for example, lung cancer and thymoma surgery. In June 2016, a working panel was spontaneously created by members of the European Society of Thoracic Surgeons (ESTS) and European Association for Cardio-Thoracic Surgery (EACTS) with a specialist interest in robotic thoracic surgery and/or surgical training. An e-consensus-finding exercise using the Delphi methodology was applied requiring 80% agreement to reach consensus on each question. Repeated iterations of anonymous voting continued over 3 rounds. Agreement was reached on many points: a standardized robotic training curriculum for robotic thoracic surgery should be divided into clearly defined sections as a staged learning pathway; the basic robotic curriculum should include a baseline evaluation, an e-learning module, a simulation-based training (including virtual reality simulation, Dry lab and Wet lab) and a robotic theatre (bedside) observation. Advanced robotic training should include e-learning on index procedures (right upper lobe) with video demonstration, access to video library of robotic procedures, simulation training, modular console training to index procedure, transition to full-procedure training with a proctor and final evaluation of the submitted video to certified independent examiners. Agreement was reached on a large number of questions to optimize and standardize training and education of thoracic surgeons in robotic activity. The production of the content of the learning material is ongoing.

  18. 'It is Time to Prepare the Next patient' Real-Time Prediction of Procedure Duration in Laparoscopic Cholecystectomies.

    PubMed

    Guédon, Annetje C P; Paalvast, M; Meeuwsen, F C; Tax, D M J; van Dijke, A P; Wauben, L S G L; van der Elst, M; Dankelman, J; van den Dobbelsteen, J J

    2016-12-01

    Operating Room (OR) scheduling is crucial to allow efficient use of ORs. Currently, the predicted durations of surgical procedures are unreliable and the OR schedulers have to follow the progress of the procedures in order to update the daily planning accordingly. The OR schedulers often acquire the needed information through verbal communication with the OR staff, which causes undesired interruptions of the surgical process. The aim of this study was to develop a system that predicts in real-time the remaining procedure duration and to test this prediction system for reliability and usability in an OR. The prediction system was based on the activation pattern of one single piece of equipment, the electrosurgical device. The prediction system was tested during 21 laparoscopic cholecystectomies, in which the activation of the electrosurgical device was recorded and processed in real-time using pattern recognition methods. The remaining surgical procedure duration was estimated and the optimal timing to prepare the next patient for surgery was communicated to the OR staff. The mean absolute error was smaller for the prediction system (14 min) than for the OR staff (19 min). The OR staff doubted whether the prediction system could take all relevant factors into account but were positive about its potential to shorten waiting times for patients. The prediction system is a promising tool to automatically and objectively predict the remaining procedure duration, and thereby achieve optimal OR scheduling and streamline the patient flow from the nursing department to the OR.

  19. Multimodal Analgesia in Breast Surgical Procedures: Technical and Pharmacological Considerations for Liposomal Bupivacaine Use

    PubMed Central

    Newman, Martin I.; Seeley, Neil; Hutchins, Jacob; Smith, Kevin L.; Mena, Gabriel; Selber, Jesse C.; Saint-Cyr, Michel H.; Gadsden, Jeffrey C.

    2017-01-01

    Enhanced recovery after surgery is a multidisciplinary perioperative clinical pathway that uses evidence-based interventions to improve the patient experience as well as increase satisfaction, reduce costs, mitigate the surgical stress response, accelerate functional recovery, and decrease perioperative complications. One of the most important elements of enhanced recovery pathways is multimodal pain management. Herein, aspects relating to multimodal analgesia following breast surgical procedures are discussed with the understanding that treatment decisions should be individualized and guided by sound clinical judgment. A review of liposomal bupivacaine, a prolonged-release formulation of bupivacaine, in the management of postoperative pain following breast surgical procedures is presented, and technical guidance regarding optimal administration of liposomal bupivacaine is provided. PMID:29062649

  20. An efficient constraint to account for mistuning effects in the optimal design of engine rotors

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Pierre, Christophe; Ottarsson, Gisli

    1992-01-01

    Blade-to-blade differences in structural properties, unavoidable in practice due to manufacturing tolerances, can have a significant influence on the vibratory response of engine rotor blades. Accounting for these differences, also known as mistuning, in design and optimization procedures is generally not possible. This note presents an easily calculated constraint that can be used in design and optimization procedures to control the sensitivity of final designs to mistuning.

  1. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    NASA Astrophysics Data System (ADS)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  2. Numerical simulation for horizontal subsurface flow constructed wetlands: A short review including geothermal effects and solution bounding in biodegradation procedures

    NASA Astrophysics Data System (ADS)

    Liolios, K.; Tsihrintzis, V.; Angelidis, P.; Georgiev, K.; Georgiev, I.

    2016-10-01

    Current developments in the modeling of groundwater flow and of contaminant transport and removal in the porous media of Horizontal Subsurface Flow Constructed Wetlands (HSF CWs) are first briefly reviewed. The two usual environmental engineering approaches, the black-box and the process-based one, are presented. Next, recent research results obtained using these two approaches are discussed as application examples, with emphasis on the evaluation of the optimal design and operation parameters of HSF CWs. For the black-box approach, the use of Artificial Neural Networks to formulate models that predict the removal performance of HSF CWs is discussed. A novel mathematical proof is presented concerning the dependence of the first-order removal coefficient on the Temperature and the Hydraulic Residence Time. For the process-based approach, an application example is first discussed which concerns procedures to evaluate the optimal range of values for the removal coefficient, dependent on either the Temperature or the Hydraulic Residence Time. This evaluation is based on simulating available experimental results of pilot-scale units operated at Democritus University of Thrace, Xanthi, Greece. Further, in a second example, a novel enlargement of the system of Partial Differential Equations is presented in order to include geothermal effects. Finally, in a third example, the case of parameter uncertainty in biodegradation procedures is considered, and a novel approach is presented which provides upper and lower solution bounds for the practical draft design of HSF CWs.

  3. The Need for Integrated Approaches in Metabolic Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.

    This review highlights state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. We emphasize that a combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the "system" that is being manipulated: transcriptome, translatome, proteome, or reactome. By bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.

  4. Determining animal drug combinations based on efficacy and safety.

    PubMed

    Kratzer, D D; Geng, S

    1986-08-01

    A procedure for deriving drug combinations for animal health is used to derive an optimal combination of 200 mg of novobiocin and 650,000 IU of penicillin for nonlactating cow mastitis treatment. The procedure starts with an estimated second order polynomial response surface equation. That surface is translated into a probability surface with contours called isoprobs. The isoprobs show drug amounts that have equal probability to produce maximal efficacy. Safety factors are incorporated into the probability surface via a noncentrality parameter that causes the isoprobs to expand as safety decreases, resulting in lower amounts of drug being used.

  5. The Need for Integrated Approaches in Metabolic Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.

    Highlights include state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. A combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the “system” that is being manipulated: transcriptome, translatome, proteome, or reactome. Here, by bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.

  6. The Need for Integrated Approaches in Metabolic Engineering

    DOE PAGES

    Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.

    2016-08-15

    Highlights include state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. A combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the “system” that is being manipulated: transcriptome, translatome, proteome, or reactome. Here, by bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.

  7. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
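
    The core recurrence of such an explicit Chebyshev scheme can be sketched as follows on a model symmetric positive definite system; the step parameters come from the spectral bounds alone, not from convergence monitoring, and the sketch omits the paper's parabolic time-stepping context.

        import numpy as np

        def chebyshev_solve(A, b, lmin, lmax, iters=120):
            """Explicit Chebyshev iteration for SPD A with spectrum in [lmin, lmax]."""
            theta = (lmax + lmin) / 2.0           # center of the spectral interval
            delta = (lmax - lmin) / 2.0           # half-width
            sigma = theta / delta
            x = np.zeros_like(b)
            r = b - A @ x
            rho = 1.0 / sigma
            d = r / theta
            for _ in range(iters):
                x = x + d
                r = r - A @ d
                rho_new = 1.0 / (2.0 * sigma - rho)
                d = rho_new * rho * d + (2.0 * rho_new / delta) * r
                rho = rho_new
            return x

        # Model problem: 1D Laplacian with known spectral bounds
        n = 30
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        lmin = 4 * np.sin(np.pi / (2 * (n + 1))) ** 2
        print(np.linalg.norm(A @ chebyshev_solve(A, b, lmin, 4.0) - b))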

  8. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization which involve multiple formations and inversions of the global stiffness matrix and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  9. Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints

    PubMed Central

    Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.

    2015-01-01

    Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575

  10. An Improved Artificial Bee Colony-Based Approach for Zoning Protected Ecological Areas

    PubMed Central

    Shao, Jing; Yang, Lina; Peng, Ling; Chi, Tianhe; Wang, Xiaomeng

    2015-01-01

    China is facing ecological and environmental challenges as its urban growth rate continues to rise, and zoning protected ecological areas is recognized as an effective response measure. Zoning inherently involves both site attributes and aggregation attributes, and the combination of mathematical models and heuristic algorithms has proven advantageous. In this article, an improved artificial bee colony (IABC)-based approach is proposed for zoning protected ecological areas at a regional scale. Three main improvements were made: the first is the use of multiple strategies to generate the initial bee population of a specific quality and diversity, the second is an exploitation search procedure to generate neighbor solutions combining “replace” and “alter” operations, and the third is a “swap” strategy to enable a local search for the iterative optimal solution. The IABC algorithm was verified using simulated data. Then it was applied to define an optimum scheme of protected ecological areas of Sanya (in the Hainan province of China), and a reasonable solution was obtained. Finally, a comparison experiment with other methods (agent-based land allocation model, ant colony optimization, and density slicing) was conducted and demonstrated that the IABC algorithm was more effective and efficient than the other methods. Through this study, we aimed to provide a scientifically sound, practical approach for zoning procedures. PMID:26394148

  11. Derived heuristics-based consistent optimization of material flow in a gold processing plant

    NASA Astrophysics Data System (ADS)

    Myburgh, Christie; Deb, Kalyanmoy

    2018-01-01

    Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which, when modelled in a programming format, involve if-then-else statements. Therefore, the formulation of an optimization problem for such processes becomes complicated, with nonlinear and non-differentiable objective and constraint functions. In handling such problems using classical point-based approaches, users often have to resort to modifications and indirect ways of representing the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure capable of handling all the complexities of the problem is developed. Although the evolutionary approach produced results with comparatively low variance over multiple runs, performance was further enhanced by introducing derived heuristics associated with the problem. In this article, the development and usage of derived heuristics in a practical problem are presented and their importance for quick convergence of the overall algorithm is demonstrated.

  12. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  13. Transoesophageal echocardiography in the dog.

    PubMed

    Domenech, Oriol; Oliveira, Pedro

    2013-11-01

    Transoesophageal echocardiography (TEE) allows imaging of the heart through the oesophagus using a special transducer mounted on a modified endoscope. The proximity to the heart and minimal intervening structures enables the acquisition of high-resolution images that are consistently superior to routine transthoracic echocardiography and optimal imaging of the heart base anatomy and related structures. TEE provides high-quality real-time imaging free of ionizing radiation, making it an ideal instrument not only for diagnostic purposes, but also for monitoring surgical or minimally invasive cardiac procedures, non-cardiac procedures and critical cases in the intensive care unit. In human medicine, TEE is routinely used in these settings. In veterinary medicine, TEE is increasingly used in referral centres, especially for perioperative assessment and guidance of catheter-based cardiovascular procedures, such as patent ductus arteriosus, balloon valvuloplasty, and atrial and ventricular septal defect occlusion with vascular devices. TEE can also aid in heartworm retrieval procedures. The purpose of this paper is to review the current uses of TEE in veterinary medicine, focusing on technique, indications and complications. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Solar energy system economic evaluation for IBM System 3, Glendo, Wyoming

    NASA Technical Reports Server (NTRS)

    1980-01-01

    This analysis was based on the technical and economic models in f-chart design procedures, with inputs based on the characteristics of each site. The parameters reported are: present worth of system cost over a projected twenty-year life, life cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables was also investigated.

  15. Optimal configuration of microstructure in ferroelectric materials by stochastic optimization

    NASA Astrophysics Data System (ADS)

    Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.

    2010-07-01

    An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) whose crystallographic axes are imperfectly aligned. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of variable combinations (the solution space) that dictates the texture of a ceramic is unbounded, and hence the choice of the optimal solution that maximizes piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by their orientations, is generated at each iteration using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. An apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of the results with published data on single crystals, we proceed to apply the methodology to polycrystals. A configuration of crystallites that would multiply the overall piezoelectricity in ceramic BaTiO3 is also identified by simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to be nearly three times that of the single crystal. Our optimization model provides designs for materials with enhanced piezoelectric performance, which should stimulate further studies involving materials possessing higher spontaneous polarization.
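
    For illustration, a stochastic (annealing-style) search over two texture parameters, the mean and spread of the c-axis tilt, might look like the sketch below; effective_d33 is a hypothetical stand-in for the homogenization step, chosen so the optimum sits near a 45° tilt with a narrow spread.

        import numpy as np

        rng = np.random.default_rng(5)

        def effective_d33(mu, sigma):
            """Hypothetical homogenized piezoelectric response for a polycrystal
            whose c-axis tilt follows N(mu, sigma), in degrees."""
            return np.exp(-((mu - 45.0) ** 2) / 200.0) * np.exp(-sigma / 20.0)

        state = np.array([20.0, 30.0])            # initial (mean, spread) guess
        f = effective_d33(*state)
        T = 1.0                                   # annealing temperature
        for _ in range(2000):
            cand = state + rng.normal(0.0, 5.0, 2)
            cand[1] = abs(cand[1])                # spread must stay non-negative
            fc = effective_d33(*cand)
            if fc > f or rng.random() < np.exp((fc - f) / T):
                state, f = cand, fc               # accept uphill, sometimes downhill
            T *= 0.998                            # cooling schedule
        print(state.round(1), round(f, 3))        # expect mean near 45, small spread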

  16. An optimized and validated SPE-LC-MS/MS method for the determination of caffeine and paraxanthine in hair.

    PubMed

    De Kesel, Pieter M M; Lambert, Willy E; Stove, Christophe P

    2015-11-01

    Caffeine is the probe drug of choice to assess the phenotype of the drug metabolizing enzyme CYP1A2. Typically, molar concentration ratios of paraxanthine, caffeine's major metabolite, to its precursor are determined in plasma following administration of a caffeine test dose. The aim of this study was to develop and validate an LC-MS/MS method for the determination of caffeine and paraxanthine in hair. The different steps of a hair extraction procedure were thoroughly optimized. Following a three-step decontamination procedure, caffeine and paraxanthine were extracted from 20 mg of ground hair using a solution of protease type VIII in Tris buffer (pH 7.5). Resulting hair extracts were cleaned up on Strata-X™ SPE cartridges. All samples were analyzed on a Waters Acquity UPLC® system coupled to an AB SCIEX API 4000™ triple quadrupole mass spectrometer. The final method was fully validated based on international guidelines. Linear calibration lines for caffeine and paraxanthine ranged from 20 to 500 pg/mg. Precision (%RSD) and accuracy (%bias) were below 12% and 7%, respectively. The isotopically labeled internal standards compensated for the ion suppression observed for both compounds. Relative matrix effects were below 15%RSD. The recovery of the sample preparation procedure was high (>85%) and reproducible. Caffeine and paraxanthine were stable in hair for at least 644 days. The effect of the hair decontamination procedure was evaluated as well. Finally, the applicability of the developed procedure was demonstrated by determining caffeine and paraxanthine concentrations in hair samples of ten healthy volunteers. The optimized and validated method for determination of caffeine and paraxanthine in hair proved to be reliable and may serve to evaluate the potential of hair analysis for CYP1A2 phenotyping. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Robust resolution enhancement optimization methods to process variations based on vector imaging model

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong

    2012-03-01

    Optical proximity correction (OPC) and phase shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; these are designed only for nominal imaging parameters, without sufficient attention to process variations due to aberrations, defocus and dose variation. However, the effects of process variations existing in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations, under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
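
    As a sketch of the gradient-based formulation, the loop below runs steepest descent on a pixelated mask using a scalar Gaussian blur as a stand-in for the vector imaging model and a sigmoid resist; kernel width, sigmoid steepness and step size are arbitrary, and no process-variation terms are included.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sigmoid(x, a=25.0, t=0.5):
            return 1.0 / (1.0 + np.exp(-a * (x - t)))

        target = np.zeros((64, 64))
        target[28:36, 8:56] = 1.0                 # desired printed line feature
        mask, a, t, step = target.copy(), 25.0, 0.5, 0.5

        for _ in range(300):
            aerial = gaussian_filter(mask, sigma=2.0)        # blurred "image"
            resist = sigmoid(aerial, a, t)                   # resist model
            err = resist - target
            # Chain rule for sum(err**2); the Gaussian kernel is symmetric, so
            # its adjoint is another gaussian_filter call
            grad = gaussian_filter(2.0 * err * a * resist * (1.0 - resist), sigma=2.0)
            mask = np.clip(mask - step * grad, 0.0, 1.0)     # projected descent

        print(np.abs(sigmoid(gaussian_filter(mask, 2.0), a, t) - target).mean())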

  18. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-10-15

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.

  19. Additive manufacturing of reflective optics: evaluating finishing methods

    NASA Astrophysics Data System (ADS)

    Leuteritz, G.; Lachmayer, R.

    2018-02-01

    Individually shaped light distributions are becoming more and more important in lighting technology, and thus the importance of additively manufactured reflectors is increasing significantly. The vast field of applications, ranging from automotive lighting to medical imaging, bolsters this point. However, the surfaces of additively manufactured reflectors suffer from insufficient optical properties, even when manufactured using optimized process parameters for the Selective Laser Melting (SLM) process. Therefore, post-process treatments of reflectors are necessary in order to further enhance their optical quality. This work concentrates on the effectiveness of post-process procedures for reflective optics. Based on already optimized aluminum reflectors manufactured with an SLM machine, the parts are machined differently after the SLM process. Selected finishing methods like laser polishing, sputtering or sand blasting are applied and their effects quantified and compared. The post-process procedures are investigated for their impact on surface roughness and reflectance as well as geometrical precision. For each finishing method a demonstrator is created and compared to a fully milled sample and to the other demonstrators. Ultimately, guidelines are developed to determine the optimal treatment of additively manufactured reflectors regarding their optical and geometrical properties. Simulations of the light distributions will be validated with the developed demonstrators.

  20. Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization

    PubMed Central

    Ling, Teresa Wai Ching; Yeung, Wing Kwan

    2017-01-01

    This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources. PMID:29104748

  1. Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization.

    PubMed

    Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan

    2017-01-01

    This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources.

  2. Optimisation of solid-phase microextraction coupled to HPLC-UV for the determination of organochlorine pesticides and their metabolites in environmental liquid samples.

    PubMed

    Torres Padrón, M E; Sosa Ferrera, Z; Santana Rodríguez, J J

    2006-09-01

    A solid-phase microextraction (SPME) procedure using two commercial fibers coupled with high-performance liquid chromatography (HPLC) is presented for the extraction and determination of organochlorine pesticides in water samples. We have evaluated the extraction efficiency of this kind of compound using two different fibers: 60-μm polydimethylsiloxane-divinylbenzene (PDMS-DVB) and Carbowax/TPR-100 (CW/TPR). Parameters involved in the extraction and desorption procedures (e.g. extraction time, ionic strength, extraction temperature, desorption and soaking time) were studied and optimized to achieve the maximum efficiency. Results indicate that both PDMS-DVB and CW/TPR fibers are suitable for the extraction of this type of compound, and a simple calibration curve method based on simple aqueous standards can be used. All the correlation coefficients were better than 0.9950, and the RSDs ranged from 7% to 13% for 60-μm PDMS-DVB fiber and from 3% to 10% for CW/TPR fiber. Optimized procedures were applied to the determination of a mixture of six organochlorine pesticides in environmental liquid samples (sea, sewage and ground waters), employing HPLC with UV-diode array detector.

  3. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development of a heuristic method for the solution of pure integer linear programming problems is documented. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it offers significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing the computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC-compatible Pascal, is also presented and discussed.
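
    The flavor of the procedure, rounding the continuous optimum and then exploring a +/-1 neighborhood, can be sketched as below on a tiny example; this greedy variant omits the documented method's full exploratory and pattern-move logic.

        import itertools
        import numpy as np
        from scipy.optimize import linprog

        # Toy problem: max 4x + 3y  s.t.  2x + y <= 9,  x + 2y <= 8,  x, y >= 0
        c = np.array([-4.0, -3.0])                # linprog minimizes, so negate
        A = np.array([[2.0, 1.0], [1.0, 2.0]])
        b = np.array([9.0, 8.0])

        def feasible(x):
            return np.all(A @ x <= b + 1e-9) and np.all(x >= -1e-9)

        relax = linprog(c, A_ub=A, b_ub=b)        # continuous optimum
        x = np.floor(relax.x)                     # rounded feasible start
        assert feasible(x)

        improved = True
        while improved:                           # greedy +/-1 neighborhood search
            improved = False
            for delta in itertools.product((-1.0, 0.0, 1.0), repeat=len(x)):
                cand = x + np.array(delta)
                if feasible(cand) and c @ cand < c @ x:
                    x, improved = cand, True
                    break

        print(x, -(c @ x))                        # expect [4. 1.] with value 19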

  4. [Complexity level simulation in the German diagnosis-related groups system: the financial effect of coding of comorbidity diagnostics in urology].

    PubMed

    Wenke, A; Gaber, A; Hertle, L; Roeder, N; Pühse, G

    2012-07-01

    Precise and complete coding of diagnoses and procedures is of value for optimizing revenues within the German diagnosis-related groups (G-DRG) system. The implementation of effective structures for coding, however, is cost-intensive. The aim of this study was to determine whether these higher costs can be recouped through complete capture of comorbidities and complications. Calculations were based on DRG data of the Department of Urology, University Hospital of Münster, Germany, covering all patients treated in 2009. The data were regrouped and subjected to a process of simulation (increase and decrease of patient clinical complexity levels, PCCL) with the help of recently developed software. In urology, PCCL and the resulting profits were found to depend strongly on the quantity and quality of secondary diagnosis coding. Departmental budgetary procedures can be optimized when coding is effective, and the new simulation tool can be a valuable aid to improve profits available for distribution. Nevertheless, the time and financial resources required by this procedure depend on specific departmental terms and conditions. Completeness of coding of (secondary) diagnoses must be the ultimate administrative goal of patient case documentation in urology.

  5. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H²-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate the reliability of this method compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
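
    The abstract's key numerical ingredient, a Padé-based matrix exponential that remains reliable for defective systems, is what modern libraries expose directly. As an illustration of the underlying idea (not the report's algorithm), the sketch below evaluates a finite-horizon quadratic cost for xdot = Ax using Van Loan's block-exponential identity; SciPy's `expm` computes the exponential by scaling-and-squaring with Padé approximants, so no diagonalization is involved.

```python
import numpy as np
from scipy.linalg import expm

def quadratic_cost(A, Q, x0, T):
    """J = int_0^T x'(t) Q x(t) dt for xdot = A x, x(0) = x0, via Van
    Loan's identity: exponentiate the block matrix [[-A', Q], [0, A]]."""
    n = A.shape[0]
    M = np.block([[-A.T, Q], [np.zeros((n, n)), A]])
    E = expm(M * T)
    F3 = E[n:, n:]            # equals expm(A*T)
    G2 = E[:n, n:]            # upper-right block
    W = F3.T @ G2             # = int_0^T expm(A't) Q expm(At) dt
    return float(x0 @ W @ x0)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable test system
print(quadratic_cost(A, np.eye(2), np.array([1.0, 0.0]), 50.0))
# for large T this approaches x0' P x0, where A'P + P A + Q = 0
```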

  6. Capacity improvement using simulation optimization approaches: A case study in the thermotechnology industry

    NASA Astrophysics Data System (ADS)

    Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz

    2015-02-01

    In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA) and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. Experiments with benchmark problem instances from the literature and with the real-life problem show that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
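
    To give a flavor of the B-TS idea, the skeleton below runs a tabu search over buffer allocations, moving one buffer slot between stations per iteration, with a fixed-tenure tabu list and an aspiration rule. The analytic `simulated_throughput` stub, the unit-move neighborhood, and all parameter values are illustrative assumptions; in the study, each candidate is evaluated by the discrete-event simulation model of the production line.

```python
import random

def simulated_throughput(alloc, reps=20):
    """Stand-in for the simulation model: noisy concave response with
    diminishing returns per buffer slot. Seeding by allocation gives
    common random numbers across candidate comparisons."""
    rng = random.Random(hash(tuple(alloc)))
    base = sum((b + 1) ** 0.5 for b in alloc)
    return sum(base + rng.gauss(0.0, 0.05) for _ in range(reps)) / reps

def tabu_search(total_slots, n_stations, iters=200, tenure=8):
    cur = [total_slots // n_stations] * n_stations
    cur[0] += total_slots - sum(cur)           # distribute the remainder
    best, best_val = cur[:], simulated_throughput(cur)
    tabu = []
    for _ in range(iters):
        cand, cand_val = None, float("-inf")
        for i in range(n_stations):            # move one slot from i to j
            if cur[i] == 0:
                continue
            for j in range(n_stations):
                if i == j:
                    continue
                nxt = cur[:]
                nxt[i] -= 1
                nxt[j] += 1
                val = simulated_throughput(nxt)
                if tuple(nxt) in tabu and val <= best_val:
                    continue                   # tabu unless aspiration holds
                if val > cand_val:
                    cand, cand_val = nxt, val
        if cand is None:
            break
        cur = cand
        tabu.append(tuple(cur))
        if len(tabu) > tenure:
            tabu.pop(0)                        # FIFO tabu list
        if cand_val > best_val:
            best, best_val = cur[:], cand_val
    return best, best_val

print(tabu_search(total_slots=12, n_stations=4))
```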

  7. Globally optimal superconducting magnets part II: symmetric MSE coil arrangement.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    A globally optimal superconducting magnet coil design procedure based on the Minimum Stored Energy (MSE) current density map is outlined. The method has the ability to arrange coils in a manner that generates a strong and homogeneous axial magnetic field over a predefined region, and ensures the stray field external to the assembly and peak magnetic field at the wires are in acceptable ranges. The outlined strategy of allocating coils within a given domain suggests that coils should be placed around the perimeter of the domain with adjacent coils possessing alternating winding directions for optimum performance. The underlying current density maps from which the coils themselves are derived are unique, and optimized to possess minimal stored energy. Therefore, the method produces magnet designs with the lowest possible overall stored energy. Optimal coil layouts are provided for unshielded and shielded short bore symmetric superconducting magnets.
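
    In its simplest discretized form, the minimum-stored-energy principle is an equality-constrained quadratic program: minimize the magnetic energy 0.5 I' L I of candidate loop currents subject to producing prescribed field values at target points. The KKT sketch below illustrates that principle generically, with synthetic L, A and b; it is not the paper's current-density-map construction.

```python
import numpy as np

def min_energy_currents(L, A, b):
    """Minimize 0.5 * I' L I  subject to  A I = b, via the KKT system.
    L: loop inductance matrix (symmetric positive definite)
    A: maps loop currents to the axial field at target points
    b: desired field values at those points."""
    n, m = L.shape[0], A.shape[0]
    K = np.block([[L, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    return np.linalg.solve(K, rhs)[:n]   # currents; tail holds multipliers

# synthetic check: random SPD "inductance", 3 field constraints on 10 loops
rng = np.random.default_rng(0)
G = rng.normal(size=(10, 10))
L = G @ G.T + 10.0 * np.eye(10)
A = rng.normal(size=(3, 10))
I = min_energy_currents(L, A, b=np.array([1.0, 1.0, 1.0]))
print(np.allclose(A @ I, [1.0, 1.0, 1.0]))   # constraints met -> True
```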

  8. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the classification of motor imagery patterns from EEG signals. It is a process that aims to select an optimal feature subset from the original set, with two significant advantages: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve classification performance. Feature selection is therefore widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
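
    The divergence criterion at the method's core can be illustrated under a univariate Gaussian assumption: score each feature by the symmetrised Kullback-Leibler divergence between its two class-conditional distributions and keep the top-ranked subset. This simplified stand-in omits the paper's time-segment selection and spatial filtering; the helper names and the Gaussian model are assumptions.

```python
import numpy as np

def kl_gauss(m0, v0, m1, v1):
    """KL( N(m0, v0) || N(m1, v1) ) for univariate Gaussians."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

def rank_features(X, y, top_k=10):
    """Rank features by symmetrised KL divergence between the two
    class-conditional distributions; larger = more discriminative."""
    X0, X1 = X[y == 0], X[y == 1]
    scores = []
    for j in range(X.shape[1]):
        m0, v0 = X0[:, j].mean(), X0[:, j].var() + 1e-12
        m1, v1 = X1[:, j].mean(), X1[:, j].var() + 1e-12
        scores.append(kl_gauss(m0, v0, m1, v1) + kl_gauss(m1, v1, m0, v0))
    return np.argsort(scores)[::-1][:top_k]
```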

  9. [The equivalence and interchangeability of medical articles].

    PubMed

    Antonov, V S

    2013-11-01

    Information concerning the interchangeability of medical articles is highly valuable because it makes it possible to match medical articles precisely with medical technologies and medical care standards, and to optimize budget costs in public procurement. The proposed procedure for determining interchangeability is based on criteria of equivalence of prescriptions; of functional, technical and technological characteristics; and of the effectiveness of functioning of the medical articles.

  10. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    PubMed

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal values within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches that treat the model parameters as probability distributions (i.e. Bayesian inference) may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model that considers intracellular storage and biomass growth simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate, with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was then estimated according to both the Bayesian and frequentist inferential procedures, and the results were compared to highlight the strengths and weaknesses of each approach. Since it was demonstrated that Bayesian inference reduces to the frequentist approach under particular hypotheses, the former can be considered the more general methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.
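
    The contrast between the two inferential routes is easy to see on a toy model. Below, a simple exponential decay stands in for an oxygen-uptake-rate profile (it is not the ASM storage/growth model): the frequentist route fits the optimum and reads uncertainty off the covariance matrix, while the Bayesian route samples the posterior with a random-walk Metropolis scheme under flat positive priors. All data, priors, and step sizes are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def our_model(t, k, r0):
    """Toy stand-in for a respirometric profile: r(t) = r0 * exp(-k t)."""
    return r0 * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
y = our_model(t, 0.4, 2.0) + rng.normal(0.0, 0.05, t.size)

# frequentist: optimal values plus covariance-based standard errors
popt, pcov = curve_fit(our_model, t, y, p0=[0.1, 1.0])
se = np.sqrt(np.diag(pcov))

# Bayesian: random-walk Metropolis on the same Gaussian likelihood
def log_post(theta, sigma=0.05):
    k, r0 = theta
    if k <= 0 or r0 <= 0:
        return -np.inf                        # flat priors on (0, inf)
    resid = y - our_model(t, k, r0)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta = np.array([0.1, 1.0])
lp, chain = log_post(theta), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.01, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.array(chain[5000:])                 # discard burn-in

print(popt, se)                   # frequentist estimate and std. errors
print(post.mean(0), post.std(0))  # posterior mean and spread
```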

  11. Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-03-01

    During the last couple of decades, the development of computerized image segmentation has shifted from unsupervised to supervised methods, which has made segmentation results more accurate and robust. The main disadvantage of supervised segmentation, however, is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in landmark-based segmentation, in which newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors that measure the reliability of the detected landmarks. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated on the problem of lung field segmentation from chest radiographs.
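
    In outline, each detected landmark receives a weight in [0, 1] that expresses its reliability. The bound-constrained sketch below balances shape plausibility against fidelity to the detections as a toy stand-in for the paper's objective; the objective itself, the lam trade-off, and the helper names are all assumptions introduced here.

```python
import numpy as np
from scipy.optimize import minimize

def reinforcement_weights(detected, reference, mean_shape, lam=0.5):
    """Per-landmark weights w in [0, 1]: w = 1 trusts the detection,
    w = 0 falls back to the reference landmark (toy reliability proxy)."""
    n = detected.shape[0]

    def objective(w):
        w = np.asarray(w)
        blend = w[:, None] * detected + (1.0 - w[:, None]) * reference
        plausibility = np.sum((blend - mean_shape) ** 2)  # stay near model
        fidelity = np.sum((blend - detected) ** 2)        # keep detections
        return plausibility + lam * fidelity

    res = minimize(objective, x0=np.full(n, 0.5),
                   bounds=[(0.0, 1.0)] * n)               # L-BFGS-B
    return res.x
```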

  12. Procedural instruction in invasive bedside procedures: a systematic review and meta-analysis of effective teaching approaches.

    PubMed

    Huang, Grace C; McSparron, Jakob I; Balk, Ethan M; Richards, Jeremy B; Smith, C Christopher; Whelan, Julia S; Newman, Lori R; Smetana, Gerald W

    2016-04-01

    Optimal approaches to teaching bedside procedures are unknown. To identify effective instructional approaches in procedural training. We searched PubMed, EMBASE, Web of Science and Cochrane Library through December 2014. We included research articles that addressed procedural training among physicians or physician trainees for 12 bedside procedures. Two independent reviewers screened 9312 citations and identified 344 articles for full-text review. Two independent reviewers extracted data from full-text articles. We included measurements as classified by translational science outcomes T1 (testing settings), T2 (patient care practices) and T3 (patient/public health outcomes). Due to incomplete reporting, we post hoc classified study outcomes as 'negative' or 'positive' based on statistical significance. We performed meta-analyses of outcomes on the subset of studies sharing similar outcomes. We found 161 eligible studies (44 randomised controlled trials (RCTs), 34 non-RCTs and 83 uncontrolled trials). Simulation was the most frequently published educational mode (78%). Our post hoc classification showed that studies involving simulation, competency-based approaches and RCTs had higher frequencies of T2/T3 outcomes. Meta-analyses showed that simulation (risk ratio (RR) 1.54 vs 0.55 for studies with vs without simulation, p=0.013) and competency-based approaches (RR 3.17 vs 0.89, p<0.001) were effective forms of training. This systematic review of bedside procedural skills demonstrates that the current literature is heterogeneous and of varying quality and rigour. Evidence is strongest for the use of simulation and competency-based paradigms in teaching procedures, and these approaches should be the mainstay of programmes that train physicians to perform procedures. Further research should clarify differences among instructional methods (eg, forms of hands-on training) rather than among educational modes (eg, lecture vs simulation). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. A spin column-free approach to sodium hydroxide-based glycan permethylation.

    PubMed

    Hu, Yueming; Borges, Chad R

    2017-07-24

    Glycan permethylation was introduced as a tool to facilitate the study of glycans in 1903. Since that time, permethylation procedures have been continually modified to improve permethylation efficiency and qualitative applicability. Typically, however, either laborious preparation steps or cumbersome and uneconomical spin columns have been needed to obtain decent permethylation yields on small glycan samples. Here we describe a spin column-free (SCF) glycan permethylation procedure that is applicable to both O- and N-linked glycans and can be employed upstream of intact glycan analysis by MALDI-MS or ESI-MS, or of glycan linkage analysis by GC-MS. The SCF procedure involves neutralization of NaOH beads by acidified phosphate buffer, which eliminates the risk of glycan oxidative degradation and avoids the use of spin columns. Optimization of the new permethylation procedure provided high permethylation efficiency for both hexose (>98%) and HexNAc (>99%) residues, with yields comparable to (or better than) those of some widely-used spin column-based procedures. A light vs. heavy labelling approach was employed to compare intact glycan yields from a popular spin column-based approach to the SCF approach. Recovery of intact N-glycans was significantly better with the SCF procedure (p < 0.05), but the overall yield of O-glycans was similar or slightly diminished (p < 0.05 for tetrasaccharides or smaller). When the SCF procedure was employed upstream of hydrolysis, reduction and acetylation for glycan linkage analysis of pooled glycans from unfractionated blood plasma, analytical reproducibility was on par with that of previous spin column-based "glycan node" analysis results. When applied to blood plasma samples from stage III-IV breast cancer patients (n = 20) and age-matched controls (n = 20), the SCF procedure facilitated identification of three glycan nodes with significantly different distributions between the cases and controls (ROC c-statistics > 0.75; p < 0.01). In summary, the SCF permethylation procedure expedites and economizes both intact glycan analysis and linkage analysis of glycans from whole biospecimens.

  14. The need for a standardized informed consent procedure in live donor nephrectomy: a systematic review.

    PubMed

    Kortram, Kirsten; Lafranca, Jeffrey A; IJzermans, Jan N M; Dor, Frank J M F

    2014-12-15

    Informed consent in live donor nephrectomy is a topic of great interest. Safety and transparency are key items receiving increasing attention from the media and healthcare inspectorates. Because live donors are not patients but healthy individuals undergoing elective interventions, they justly insist on optimal conditions and guaranteed safety. Although transplant professionals agree that consent should be voluntary, free of coercion, and fully informed, there is no consensus on which information should be provided or on how the donors' comprehension should be ascertained. Comprehensive searches were conducted in Embase, Medline OvidSP, Web of Science, PubMed, CENTRAL (The Cochrane Library 2014, issue 1) and Google Scholar, evaluating the informed consent procedure for live kidney donation. The methodology was in accordance with the Cochrane Handbook for Interventional Systematic Reviews, and the review was written on the basis of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The initial search yielded 1,009 hits, of which 21 articles fell within the scope of this study. Procedures vary greatly between centers, and transplant professionals vary in the information they disclose. Although research has demonstrated that donors often make their decision based on moral reasoning rather than balancing risks and benefits, providing them with accurate, uniform information remains crucial because donors report feeling misinformed about or unprepared for donation. Although a standardized procedure may not provide the ultimate solution, it is vital to minimize differences in live donor education between transplant centers. There is a definite need for a guideline on how to provide information and obtain informed consent from live kidney donors to assist the transplant community in optimally preparing potential donors.

  15. Downtime procedures for the 21st century: using a fully integrated health record for uninterrupted electronic reporting of laboratory results during laboratory information system downtimes.

    PubMed

    Oral, Bulent; Cullen, Regina M; Diaz, Danny L; Hod, Eldad A; Kratz, Alexander

    2015-01-01

    Downtimes of the laboratory information system (LIS) or its interface to the electronic medical record (EMR) disrupt the reporting of laboratory results. Traditionally, laboratories have relied on paper-based or phone-based reporting methods during these events. We developed a novel downtime procedure that combines advance placement of orders by clinicians for planned downtimes, the printing of laboratory results from instruments, and scanning of the instrument printouts into our EMR. The new procedure allows the analysis of samples from planned phlebotomies with no delays, even during LIS downtimes. It also enables the electronic reporting of all clinically urgent results during downtimes, including intensive care and emergency department samples, thereby largely avoiding paper- and phone-based communication of laboratory results. With the capabilities of EMRs and LISs rapidly evolving, information technology (IT) teams, laboratories, and clinicians need to collaborate closely, review their systems' capabilities, and design innovative ways to apply all available IT functions to optimize patient care during downtimes. Copyright© by the American Society for Clinical Pathology.

  16. A comparison of various modes of liquid-liquid based microextraction techniques: determination of picric acid.

    PubMed

    Burdel, Martin; Šandrejová, Jana; Balogh, Ioseph S; Vishnikin, Andriy; Andruch, Vasil

    2013-03-01

    Three modes of liquid-liquid based microextraction techniques, namely auxiliary solvent-assisted dispersive liquid-liquid microextraction, auxiliary solvent-assisted dispersive liquid-liquid microextraction with low solvent consumption, and ultrasound-assisted emulsification microextraction, were compared. Picric acid was used as the model analyte. The determination is based on the reaction of picric acid with Astra Phloxine reagent to produce an ion associate easily extractable by various organic solvents, followed by spectrophotometric detection at 558 nm. Each of the compared procedures has both advantages and disadvantages. The main benefit of ultrasound-assisted emulsification microextraction is that no hazardous chlorinated extraction solvents and no dispersive solvent are necessary. Therefore, this procedure was selected for validation. Under optimized experimental conditions (pH 3, 7 × 10⁻⁵ mol/L of Astra Phloxine, and 100 μL of toluene), the calibration plot was linear in the range of 0.02-0.14 mg/L and the LOD was 7 μg/L of picric acid. The developed procedure was applied to the analysis of spiked water samples. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Design and Optimization of Composite Gyroscope Momentum Wheel Rings

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
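
    The trade at the heart of such a sizing procedure, that specific strength limits rim speed and rim speed sets momentum per unit mass, can be seen with a thin-ring estimate: hoop stress sigma = rho * (omega * r)^2 caps the spin rate, and H = m r^2 omega. The material values below are illustrative handbook-style numbers, and the isotropic thin-ring formula stands in for the anisotropic failure criterion actually used in the paper.

```python
import math

# material: (density [kg/m^3], allowable hoop stress [Pa]); illustrative
materials = {
    "Ti-6Al-4V":      (4430.0, 8.3e8),
    "graphite/epoxy": (1600.0, 1.5e9),
}

r, A = 0.10, 1.0e-4      # ring mean radius [m] and cross-section area [m^2]

for name, (rho, sigma) in materials.items():
    omega_max = math.sqrt(sigma / rho) / r     # from sigma = rho (omega r)^2
    m = rho * 2.0 * math.pi * r * A            # thin-ring mass
    H = m * r ** 2 * omega_max                 # angular momentum at the limit
    print(f"{name:15s} m={m:.3f} kg  H={H:.1f} N*m*s  H/m={H / m:.1f}")
```

    The specific momentum H/m scales as r * sqrt(sigma / rho), which is why the high-specific-strength graphite/epoxy ring wins on mass efficiency.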

  19. A Fast Proceduere for Optimizing Thermal Protection Systems of Re-Entry Vehicles

    NASA Astrophysics Data System (ADS)

    Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.

    The aim of the present work is to introduce a fast procedure for optimizing thermal protection systems for re-entry vehicles subjected to high thermal loads. The first step of the proposed design procedure is a simplified one-dimensional optimization, performed to find the optimum design variables (lengths, sections, etc.). Simultaneously, the most suitable materials able to sustain high temperatures while meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the degrees of freedom to be reduced. These simplified local FEM models are useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize overall weight. Full two- and three-dimensional analyses are performed to validate the simplified models. Thermal-structural analyses and optimizations are executed with the Ansys FEM code.
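
    In a steady-state reading of the one-dimensional first stage, minimizing weight against a back-face temperature limit is a linear program in the layer thicknesses: 1-D conduction gives T_hot - T_back = q * sum(t_i / k_i), so the temperature limit is a linear constraint while the areal mass sum(rho_i * t_i) is the objective. The properties, loads, and gauge limits below are illustrative, and the actual problem is transient with re-radiation, so this only sketches the optimization structure.

```python
from scipy.optimize import linprog

# two layers: insulator and structural face (illustrative properties)
rho = [280.0, 1600.0]      # densities [kg/m^3]
k = [0.08, 1.5]            # conductivities [W/m/K]
q = 2.0e4                  # assumed steady absorbed heat flux [W/m^2]
T_hot, T_back_max = 1200.0, 400.0   # hot-wall temp and back-face limit [K]

# require sum(t_i / k_i) >= (T_hot - T_back_max) / q; linprog uses <=
res = linprog(c=rho,                                   # minimize areal mass
              A_ub=[[-1.0 / k[0], -1.0 / k[1]]],
              b_ub=[-(T_hot - T_back_max) / q],
              bounds=[(0.002, None), (0.001, None)])   # minimum gauges [m]
print(res.x, res.fun)   # optimal thicknesses [m] and areal mass [kg/m^2]
```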

  19. Design of controlled elastic and inelastic structures

    NASA Astrophysics Data System (ADS)

    Reinhorn, A. M.; Lavan, O.; Cimellaro, G. P.

    2009-12-01

    One of the founders of structural control theory and its application in civil engineering, Professor Emeritus Tsu T. Soong, envisioned the development of the integral design of structures protected by active control devices. Most of his disciples and colleagues have continually attempted to develop procedures to achieve such integral control. In his recent papers, published jointly with some of the authors of this paper, Professor Soong developed design procedures for the entire structure using a design-redesign procedure applied to elastic systems, developed as an extension of other work by his disciples. This paper summarizes recent techniques that use traditional active control algorithms to derive the most suitable (optimal, stable) control force, which can then be implemented with a combination of active, passive and semi-active devices through a simple match or through more sophisticated optimal procedures. Alternative designs can address the behavior of structures using Lyapunov stability criteria. A unified procedure is presented that can be applied to both elastic and inelastic structures. Although the implementation does not always preserve the optimality criteria, it is shown that the solutions are effective and practical for the design of supplemental damping, stiffness enhancement or softening, and strengthening or weakening.
