An efficient constraint to account for mistuning effects in the optimal design of engine rotors
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe; Ottarsson, Gisli
1992-01-01
Blade-to-blade differences in structural properties, unavoidable in practice due to manufacturing tolerances, can have a significant influence on the vibratory response of engine rotor blades. Accounting for these differences, also known as mistuning, in design and optimization procedures is generally impractical. This note presents an easily calculated constraint that can be used in design and optimization procedures to control the sensitivity of final designs to mistuning.
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general-purpose optimization program CONMIN and approximate analyses. Sensitivity analyses, consisting of derivatives of the objective function and constraints, are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
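As an illustration of the forward finite-difference sensitivity step described above, the following minimal Python sketch perturbs each design variable in turn; `hover_power` is a hypothetical stand-in for the HOVT/CAMRAD analyses, and the variable values are invented:

```python
import numpy as np

def forward_fd_gradient(f, x, h=1e-6):
    """Approximate df/dx by forward finite differences, as in the
    sensitivity analyses above (one extra evaluation per variable)."""
    f0 = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

# Hypothetical stand-in for hover horsepower as a function of the design
# variables (taper initiation, root chord, taper ratio, maximum twist).
def hover_power(x):
    return x[0]**2 + 2.0 * x[1]**2 + 0.5 * x[2] * x[3]

x0 = np.array([0.5, 1.0, 0.8, 12.0])
print(forward_fd_gradient(hover_power, x0))
```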
Optimized postweld heat treatment procedures for 17-4 PH stainless steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, A.K.; Sujith, S.; Srinivasan, G.
1995-05-01
The postweld heat treatment (PWHT) procedures for 17-4 PH stainless steel weldments of matching chemistry were optimized vis-a-vis the microstructure prior to welding, based on microstructural studies and room-temperature mechanical properties. The 17-4 PH stainless steel was welded in two different prior microstructural conditions (condition A and condition H1150) and then postweld heat treated to condition H900 or condition H1150, using different heat treatment procedures. Microstructural investigations were carried out and room-temperature tensile properties were determined to study the combined effects of the prior microstructure and the PWHT procedures.
Design of Quiet Rotorcraft Approach Trajectories
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Burley, Casey L.; Boyd, D. Douglas, Jr.; Marcolini, Michael A.
2009-01-01
An optimization procedure for identifying quiet rotorcraft approach trajectories is proposed and demonstrated. The procedure employs a multi-objective genetic algorithm in order to reduce noise and create approach paths that will be acceptable to pilots and passengers. The concept is demonstrated by application to two different helicopters. The optimized paths are compared with one another and with a standard 6-deg approach path. The two demonstration cases validate the optimization procedure but highlight the need for improved noise prediction techniques and for additional rotorcraft acoustic data sets.
Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.
Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R
2014-01-01
After a potential biological incident, the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0% to 60% and were significantly higher from nonporous surfaces than from porous surfaces. In conclusion, this study presents a swabbing procedure evaluation and a simple BSA ELISA based on commercial components, both easy to perform in a laboratory with basic facilities. The data indicate that a different swabbing procedure was optimal for each of the tested swab types, and that the preferred swab depends on the surface material to be swabbed.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures used by a population-based stochastic optimization technique to solve scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP; defining this mapping is an important step, so that each particle in PSO can represent a schedule in JSP. Three procedures are examined in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and a random-key encoding scheme. These procedures were tested on the FT06 and FT10 benchmark problems available in the OR-Library, with the objective of minimizing the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
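The following sketch illustrates one common random-key decoding for JSP of the kind compared in the paper: sorting a particle's real-valued position yields a job-repetition sequence, which a semi-active schedule builder turns into a makespan. The two-job instance is a toy, not FT06/FT10, and the decoding details are assumptions rather than the paper's exact OPPS procedure:

```python
import numpy as np

def decode_random_keys(position, n_jobs, n_machines):
    """Map a particle position (real vector of length n_jobs*n_machines) to a
    job-repetition sequence: sort the keys and read each index mod n_jobs.
    The k-th occurrence of job j denotes job j's k-th operation."""
    order = np.argsort(position)
    return [int(i) % n_jobs for i in order]

def makespan(seq, routes, times):
    """Build a semi-active schedule from the decoded sequence.
    routes[j][k] is the machine of job j's k-th operation, times[j][k] its duration."""
    job_ready = [0.0] * len(routes)      # completion time of each job's last op
    mach_ready = [0.0] * (1 + max(m for r in routes for m in r))
    next_op = [0] * len(routes)          # next operation index per job
    for j in seq:
        k = next_op[j]; m = routes[j][k]
        start = max(job_ready[j], mach_ready[m])
        job_ready[j] = mach_ready[m] = start + times[j][k]
        next_op[j] += 1
    return max(job_ready)

# Toy 2-job, 2-machine instance, just to exercise the decoding.
routes = [[0, 1], [1, 0]]
times = [[3.0, 2.0], [2.0, 4.0]]
particle = np.random.rand(4)
print(makespan(decode_random_keys(particle, 2, 2), routes, times))
```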
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Fuel Injector Design Optimization for an Annular Scramjet Geometry
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
2003-01-01
A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
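A minimal sketch of the response surface step: fit a full second-order model to designed "computer experiments" by least squares and search the fitted surface for the optimum. Two factors are shown for brevity (the study used four), and the design points and response values are invented:

```python
import numpy as np
from itertools import combinations

def quadratic_features(X):
    """Full second-order RSM basis: 1, x_i, x_i^2, x_i*x_j."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i]**2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Hypothetical central composite design (factorial + center + axial points)
# with a made-up mixing-efficiency response standing in for the CFD runs.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
              [-1.4, 0], [1.4, 0], [0, -1.4], [0, 1.4]])
y = 0.9 - 0.05 * X[:, 0]**2 - 0.03 * X[:, 1]**2 + 0.01 * X[:, 0] * X[:, 1]

beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

# Locate the surface optimum on a dense grid inside the design space.
g = np.linspace(-1.4, 1.4, 141)
G = np.array([[a, b] for a in g for b in g])
pred = quadratic_features(G) @ beta
print("max predicted efficiency:", pred.max(), "at", G[pred.argmax()])
```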
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom, and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
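For reference, the Kreisselmeier-Steinhauser function mentioned above combines several objectives or constraints into one smooth envelope; a minimal sketch with invented values:

```python
import numpy as np

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative
    approximation of max(values); larger rho tracks the true max more closely."""
    m = np.max(values)  # shift for numerical stability
    return m + np.log(np.sum(np.exp(rho * (np.asarray(values) - m)))) / rho

# Combining two normalized objectives into one composite function,
# as in the multiobjective formulations described above (toy numbers).
f1, f2 = 0.8, 0.6
print(ks([f1, f2]), ">= max =", max(f1, f2))
```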
Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo
2011-06-17
This study proposes a new approach to the optimization of the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on the elution temperature to provide a better understanding of the influence of the extraction parameters on the extraction efficiency, considering compounds with different volatilities and polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering as the response both peak area and the number of identified peaks. The newly proposed optimization approach provides an excellent alternative for extracting analytes with quite different volatilities in a single procedure.
A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles
NASA Technical Reports Server (NTRS)
Korte, John J.
1993-01-01
A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15 nozzles, a Mach 12 nozzle, and a Mach 18 helium nozzle. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles under different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least-squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with that of the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and the uniform core region.
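A toy sketch of the least-squares design loop: a cubic wall contour is adjusted so that a computed centerline Mach distribution matches a target. The `centerline_mach` surrogate stands in for the PNS flow solve and is purely illustrative, as is the target distribution:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 1.0, 50)        # normalized nozzle axis
target_mach = 1.0 + 14.0 * x**0.7    # made-up target centerline distribution

def centerline_mach(c):
    """Stand-in for the PNS flow solve: maps cubic contour coefficients to a
    centerline Mach distribution. A real design loop would call the CFD code."""
    r = 1.0 + c[0]*x + c[1]*x**2 + c[2]*x**3  # wall radius as a cubic in x
    area_ratio = (r / r[0])**2
    return 1.0 + 3.5 * (area_ratio - 1.0)     # crude quasi-1D surrogate

def residuals(c):
    return centerline_mach(c) - target_mach

sol = least_squares(residuals, x0=np.zeros(3))
print("optimized contour coefficients:", sol.x)
```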
Optimal Parameters for Intervertebral Disk Resection Using Aqua-Plasma Beams.
Yoon, Sung-Young; Kim, Gon-Ho; Kim, Yushin; Kim, Nack Hwan; Lee, Sangheon; Kawai, Christina; Hong, Youngki
2018-06-14
A minimally invasive procedure for intervertebral disk resection using plasma beams has been developed. Conventional parameters for the plasma procedure, such as voltage and tip speed, rely mainly on the surgeon's personal experience, without adequate evidence from experiments. Our objective was to determine the optimal parameters for plasma disk resection. The rate of ablation was measured at different procedural tip speeds and voltages using porcine nuclei pulposi. The amount of heat formation under the experimental conditions was also measured to evaluate the thermal safety of the plasma procedure. The ablation rate increased at slower procedural speeds and higher voltages. However, for thermal safety, the optimal parameters for plasma procedures with minimal tissue damage were an electrical output of 280 volts root-mean-square (Vrms) and a procedural tip speed of 2.5 mm/s. Our findings provide useful information for an effective and safe plasma procedure for disk resection in a clinical setting.
Optimum Design of High Speed Prop-Rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
1992-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop optimization process. The procedures involve the consideration of blade aeroelastic, aerodynamic performance, structural and dynamic design requirements. Further, since the design involves consideration of several different objectives, multiobjective function formulation techniques are developed.
Shape optimization of road tunnel cross-section by simulated annealing
NASA Astrophysics Data System (ADS)
Sobótka, Maciej; Pachnicz, Michał
2016-06-01
The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca FLAC software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
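A generic simulated-annealing loop of the kind used above, with the clearance-gauge requirement enforced as a feasibility check on each candidate; the one-dimensional "shape", cost function, and gauge value are invented stand-ins for the FLAC-based evaluation:

```python
import math, random

def simulated_annealing(cost, neighbor, feasible, x0, t0=1.0, cooling=0.995, iters=5000):
    """Generic SA loop: accept worse solutions with Boltzmann probability,
    rejecting any candidate that violates the clearance-gauge check."""
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x)
        if not feasible(y):
            continue  # candidate intrudes into the gauge
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy 1-D stand-in: the "shape" is a single radius that must clear a gauge.
gauge = 2.0
best, f = simulated_annealing(
    cost=lambda r: (r - 1.0)**2 + 0.1 * r,           # hypothetical energy measure
    neighbor=lambda r: r + random.uniform(-0.1, 0.1),
    feasible=lambda r: r >= gauge,                    # preserve the clearance gauge
    x0=2.5)
print(best, f)
```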
Static deflection control of flexible beams by piezo-electric actuators
NASA Technical Reports Server (NTRS)
Baz, A. M.
1986-01-01
This study deals with the utilization of piezo-electric actuators in controlling the static deformation of flexible beams. An optimum design procedure is presented to enable the selection of the optimal location, thickness and excitation voltage of the piezo-electric actuators in a way that would minimize the deflection of the beam to which these actuators are bonded. Numerical examples are presented to illustrate the application of the developed optimization procedure in minimizing the structural deformation of beams of different materials when subjected to different loading and end conditions using ceramic or polymeric piezo-electric actuators. The results obtained emphasize the importance of the devised rational procedure in designing beam-actuator systems with minimal elastic distortions.
Stochastic optimization of broadband reflecting photonic structures.
Estrada-Wiese, D; Del Río-Chanona, E A; Del Río, J A
2018-01-19
Photonic crystals (PCs) are built to control the propagation of light within their structure. They can be used for an assortment of applications where custom-designed devices are of interest. Among them, one-dimensional PCs can be produced to achieve reflection over specific and broad wavelength ranges. However, their design and fabrication are challenging due to the diversity of periodic arrangements and layer configurations that each different PC requires. In this study, we present a framework to design highly reflecting PCs for any desired wavelength range. Our method combines three stochastic optimization algorithms (Random Search, Particle Swarm Optimization and Simulated Annealing) along with a reduced search-space methodology to obtain a custom, optimized PC configuration. The optimization procedure is evaluated through theoretical reflectance spectra calculated using the Equispaced Thickness Method, which improves the simulations by accounting for incoherent light transmission. We prove the viability of our procedure by fabricating different reflecting PCs made of porous silicon and obtain good agreement between experiment and theory using a merit function. With this methodology, diverse reflecting PCs can be designed for a variety of applications and fabricated with different materials.
Teleportation of squeezing: Optimization using non-Gaussian resources
NASA Astrophysics Data System (ADS)
Dell'Anno, Fabio; de Siena, Silvio; Adesso, Gerardo; Illuminati, Fabrizio
2010-12-01
We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, Phys. Rev. A 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.
An integrated optimum design approach for high speed prop-rotors including acoustic constraints
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Wells, Valana; Mccarthy, Thomas; Han, Arris
1993-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop multilevel decomposition optimization process. The procedures involve the consideration of blade aeroelasticity, aerodynamic performance, structural and dynamic design requirements, and acoustics. Further, since the design involves consideration of several different objective functions, multiobjective function formulation techniques are developed.
Optimum Design of High-Speed Prop-Rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; McCarthy, Thomas Robert
1993-01-01
An integrated multidisciplinary optimization procedure is developed for application to rotary wing aircraft design. The necessary disciplines such as dynamics, aerodynamics, aeroelasticity, and structures are coupled within a closed-loop optimization process. The procedure developed is applied to address two different problems. The first problem considers the optimization of a helicopter rotor blade and the second problem addresses the optimum design of a high-speed tilting proprotor. In the helicopter blade problem, the objective is to reduce the critical vibratory shear forces and moments at the blade root, without degrading rotor aerodynamic performance and aeroelastic stability. In the case of the high-speed proprotor, the goal is to maximize the propulsive efficiency in high-speed cruise without deteriorating the aeroelastic stability in cruise and the aerodynamic performance in hover. The problems studied involve multiple design objectives; therefore, the optimization problems are formulated using multiobjective design procedures. A comprehensive helicopter analysis code is used for the rotary wing aerodynamic, dynamic and aeroelastic stability analyses and an algorithm developed specifically for these purposes is used for the structural analysis. A nonlinear programming technique coupled with an approximate analysis procedure is used to perform the optimization. The optimum blade designs obtained in each case are compared to corresponding reference designs.
Structural Tailoring of Advanced Turboprops (STAT)
NASA Technical Reports Server (NTRS)
Brown, Kenneth W.
1988-01-01
This interim report describes the progress achieved in the Structural Tailoring of Advanced Turboprops (STAT) program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or the aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. This report provides a detailed description of the input, optimization procedures, approximate analyses and refined analyses, as well as validation test cases for the STAT program. In addition, conclusions and recommendations are summarized.
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies: noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions, combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial in creating clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, and algorithms used, as well as on the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence: when the step size is enlarged the reliability improves, but the convergence deteriorates.
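A minimal compass-style pattern search with an adaptive step size illustrates the procedure's core loop; the `better` predicate stands in for the subjects' paired listening-comfort comparisons and is replaced here by an invented numeric preference:

```python
import numpy as np

def pattern_search(better, x0, step=1.0, shrink=0.5, grow=1.5, min_step=1e-2, max_iter=200):
    """Compass-style pattern search with an adaptive step size.
    better(a, b) plays the role of the paired comparison: True when
    setting a is preferred to setting b."""
    x = np.asarray(x0, float)
    n = len(x)
    it = 0
    while step > min_step and it < max_iter:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            y = x + step * d
            if better(y, x):
                x, improved = y, True
                break
        step = step * grow if improved else step * shrink  # adaptive step size
        it += 1
    return x

# Stand-in preference: closer to an (unknown) optimum setting is "preferred".
opt = np.array([2.0, -1.0, 0.5])
pref = lambda a, b: np.linalg.norm(a - opt) < np.linalg.norm(b - opt)
print(pattern_search(pref, [0.0, 0.0, 0.0]))
```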
Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.
Wong, Christopher Yee; Mills, James K
2017-03-01
Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective was to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD technique. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods were developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.
Inverse Modelling to Obtain Head Movement Controller Signal
NASA Technical Reports Server (NTRS)
Kim, W. S.; Lee, S. H.; Hannaford, B.; Stark, L.
1984-01-01
Experimentally obtained dynamics of time-optimal, horizontal head rotations have previously been simulated by a sixth order, nonlinear model driven by rectangular control signals. Electromyography (EMG) recordings have aspects which differ in detail from the theoretical rectangular pulsed control signal. Control signals for time-optimal as well as sub-optimal horizontal head rotations were obtained by means of an inverse modelling procedure. With experimentally measured dynamical data serving as the input, this procedure inverts the model to produce the neurological control signals driving muscles and plant. The relationships between these controller signals and EMG records should contribute to the understanding of the neurological control of movements.
Optimization of wearable microwave antenna with simplified electromagnetic model of the human body
NASA Astrophysics Data System (ADS)
Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir
2017-12-01
In this paper the design optimization of a microwave wearable antenna is investigated. Reference is made to a specific design, a wideband Vee antenna whose geometry is characterized by 6 parameters. These parameters were automatically adjusted with an evolution-strategy-based algorithm, EStra, to obtain impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band, which covers the frequency range of 2.4 GHz up to 2.5 GHz. The optimization procedure used a finite-difference time-domain full-wave simulator with a simplified human body model. In the optimization procedure, small movements of the antenna towards or away from the human body, which are likely to happen during real use, were considered. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization procedure yielded good impedance matching for a given range of antenna distances with respect to the human body.
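A minimal (1+1) evolution strategy with a success-rule step-size adaptation, as a stand-in for the EStra algorithm named above; the six-parameter cost function is invented, whereas the real objective would wrap the FDTD simulation:

```python
import numpy as np

def one_plus_one_es(cost, x0, sigma=0.5, iters=400, seed=0):
    """(1+1) evolution strategy with a 1/5th-success-rule style step control.
    `cost` would wrap the full-wave solve, returning e.g. the worst-case
    reflection coefficient over the range of body distances."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), cost(x0)
    for _ in range(iters):
        y = x + sigma * rng.normal(size=len(x))
        fy = cost(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.22   # expand the step after a success
        else:
            sigma *= 0.95   # contract it after a failure
    return x, fx

# Hypothetical 6-parameter antenna geometry with an invented quadratic cost.
cost = lambda g: float(np.sum((np.asarray(g) - np.array([3, 1, 2, 0.5, 4, 1.5]))**2))
print(one_plus_one_es(cost, np.zeros(6)))
```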
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
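The gap between a greedy heuristic and exact optimization can be seen on a tiny budget-constrained (0/1 knapsack) instance; the parcels, utilities, and costs are invented, and brute force stands in for the integer programming solver:

```python
from itertools import combinations

# Hypothetical parcels as (utility, cost); selection under a budget.
parcels = [(11, 5), (6, 4), (6, 4)]
budget = 8

def greedy(parcels, budget):
    """Pick parcels by utility/cost ratio until the budget is exhausted."""
    total, spent = 0, 0
    for u, c in sorted(parcels, key=lambda p: p[0] / p[1], reverse=True):
        if spent + c <= budget:
            total, spent = total + u, spent + c
    return total

def exact(parcels, budget):
    """Brute-force optimum; an integer program would replace this at scale."""
    best = 0
    for r in range(len(parcels) + 1):
        for combo in combinations(parcels, r):
            if sum(c for _, c in combo) <= budget:
                best = max(best, sum(u for u, _ in combo))
    return best

g, e = greedy(parcels, budget), exact(parcels, budget)
print(f"greedy={g}, optimal={e}, gain from optimization={(e - g) / g:.1%}")
```

The size of the gap depends entirely on the instance; the study's reported gains of up to 12% come from its real conservation data, not from a toy like this.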
Shah, Peer Azmat; Hasbullah, Halabi B; Lawal, Ibrahim A; Aminu Mu'azu, Abubakar; Tang Jung, Low
2014-01-01
Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of the mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses a shared secret token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintenance of the status of the correspondent node's compatibility. TOTP-RO was implemented in the network simulator NS-2, and an analytical evaluation was also performed. The analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead, with an increased level of security, compared to the standard Mobile IPv6 Return-Routability-based Route Optimization (RR-RO).
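For context, an RFC 6238-style TOTP can be computed from a shared secret in a few lines; this sketch shows the token mechanism only, not the paper's full TOTP-RO signalling:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    """RFC 6238-style time-based one-time password: HMAC-SHA1 over the
    time-step counter with dynamic truncation. A sketch of the kind of
    token TOTP-RO derives from the shared secret."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return f"{code:0{digits}d}"

# Mobile node and correspondent node share `secret`; both can compute the
# same short-lived token without the home-agent round trips of RR-RO.
secret = b"shared-secret-token"
print(totp(secret))
```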
Procedure for minimizing the cost per watt of photovoltaic systems
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
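The key result, that a fractional power loss at any fabrication step raises the cost per watt by (to first order) the same fraction, can be checked numerically; the costs and powers below are invented:

```python
# Cost per watt of the finished array: total cost divided by delivered power.
total_cost = 1000.0    # $ for the complete array (invented)
nominal_power = 500.0  # W with no losses (invented)

def cost_per_watt(step_losses):
    p = nominal_power
    for loss in step_losses:  # each step retains (1 - loss) of the power
        p *= (1.0 - loss)
    return total_cost / p

base = cost_per_watt([0.0, 0.0])
with_loss = cost_per_watt([0.01, 0.0])      # 1% loss at one step
print(f"{(with_loss - base) / base:.4%}")   # 1/(1-0.01)-1, ~1.01%: the same
                                            # fraction, to first order
```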
Rossum, Huub H van; Kemperman, Hans
2017-07-26
General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize the MA for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated; their causes were ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (significant difference with the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, a manageable number of MA alarms was generated, and following up on them yielded valuable findings. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
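A minimal sketch of an MA-based continuous QC check on a stream of patient results; the window, target, and alarm limit are invented here, whereas in the paper they come from the bias-detection simulation:

```python
import numpy as np

def moving_average_qc(results, window=20, target=100.0, limit=3.0):
    """Flag an MA alarm whenever the mean of the last `window` patient
    results drifts more than `limit` units from the target."""
    alarms = []
    for i in range(window, len(results) + 1):
        ma = np.mean(results[i - window:i])
        if abs(ma - target) > limit:
            alarms.append((i, ma))
    return alarms

rng = np.random.default_rng(1)
stream = rng.normal(100, 5, 300)
stream[150:] += 4.0                   # simulated assay bias onset
print(moving_average_qc(stream)[:3])  # first alarms after the bias starts
```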
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
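A two-stage simplification of the allocation problem can be brute-forced to show the trade-off; the variance components, unit costs, and budget below are invented, and the model is reduced from the paper's three stages to two with linear costs:

```python
# Choose the number of subjects k and measurements per subject n to minimize
# the variance of the exposure mean under a budget (invented numbers).
s2_between, s2_within = 4.0, 9.0
c_subject, c_occasion = 50.0, 10.0
budget = 2000.0

def var_of_mean(k, n):
    return s2_between / k + s2_within / (k * n)

best = None
for n in range(1, 51):
    k = int(budget // (c_subject + n * c_occasion))  # max subjects affordable
    if k >= 1:
        v = var_of_mean(k, n)
        if best is None or v < best[0]:
            best = (v, k, n)

v, k, n = best
print(f"measure {k} subjects on {n} occasion(s); var = {v:.4f}")
```

With these invented numbers the between-subjects variance is small relative to the within-subject variance and recruiting is expensive, so the search lands on more than one occasion per subject, echoing the deviation case noted in the abstract.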
Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization
Melchor, Juan; Muñoz, Rafael; Rus, Guillermo
2017-01-01
Torsion mechanical waves have the capability to characterize the shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectric-based transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure employed is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms, as well as the resonance frequency, of a previous geometry validated with a semi-analytical simplified model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem estimating the robust probability of detection (RPOD), is used to maximize the detection of the pathology, defined in terms of changes in shear stiffness. This study collects different design options in two separate models, for transmission and contact, respectively. The main contribution of this work is a framework establishing the forward, inverse and optimization procedures used to choose an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications.
El-Sheikh, Amjad H; Sweileh, Jamal A; Al-Degs, Yahya S; Insisi, Ahmad A; Al-Rabady, Nancy
2008-02-15
In this work, the optimization of multi-residue solid phase extraction (SPE) procedures coupled with high-performance liquid chromatography for the determination of Propoxur, Atrazine and Methidathion in environmental waters is reported. Three different sorbents were used: multi-walled carbon nanotubes (MWCNTs), C18 silica and activated carbon (AC). The three optimized SPE procedures were compared in terms of analytical performance, application to environmental waters, cartridge re-use, adsorption capacity and cost of adsorbent. Although the adsorption capacity of MWCNT was larger than that of AC and C18, the analytical performance of AC could be made close to that of the other sorbents by appropriate optimization of the SPE procedure. A sample of AC was then oxidized with various oxidizing agents to show that ACs with different surface properties have different enrichment efficiencies. Researchers are therefore advised to try ACs of various surface properties in the SPE of pollutants before resorting to more expensive sorbents (such as MWCNTs and C18 silica).
Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.
2016-01-01
The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library agreed to within 0.4 mm of the nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying active tandem or cylinder probe length and ovoid or cylinder size, source calibration and treatment date, and differences between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and the 3D image-based planning process were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air kerma (TRAK) and 1.1% for the Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure that no errors or inconsistencies occurred from imaging through planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for a 3D image-based TPS for HDR BT for GYN cancer.
Bilayer tablets of Paliperidone for Extended release osmotic drug delivery
NASA Astrophysics Data System (ADS)
Chowdary, K. Sunil; Napoleon, A. A.
2017-11-01
The purpose of this study is to develop and optimize the formulation of a paliperidone bilayer tablet core and coating that matches the in vitro performance of the trilayered innovator product, Invega. Core formulations prepared with different ratios of Polyox grades were optimized, along with the coating: (i) sub-coating build-up with hydroxyethyl cellulose (HEC) and (ii) enteric coating build-up with cellulose acetate (CA). Important influencing factors, such as different core tablet compositions and different coating solution ingredients involved in the formulation procedure, were investigated. The formulation and process were optimized by comparing the different in vitro release behaviours of paliperidone. In vitro dissolution of formulations with different release rates was compared with that of the innovator sample (Invega), and the formulation whose release pattern remained closest over the whole 24 h test was finalized.
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings.
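A toy comparison of the two routes: free optimization followed by MU post-processing versus re-optimizing with the minimum-MU requirement built in. Non-negative least squares stands in for the paper's two-stage LP, and the influence matrix, target dose, and MU threshold are invented:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.random((40, 12))      # toy dose-influence matrix (voxels x spots)
d = A @ (rng.random(12) * 5)  # achievable target dose
mu_min = 1.0                  # minimum deliverable MU per active spot

# Conventional route (CSIO-like): optimize freely, then post-process to
# deliverable MUs; the rounding perturbs the optimized dose.
w, _ = nnls(A, d)
w_post = np.where(w < mu_min / 2, 0.0, np.maximum(w, mu_min))

# Constrained route (DSIO-like, simplified): fix the active set from the free
# solution, then re-optimize with w = mu_min + u, u >= 0, so every active
# spot is deliverable by construction. The paper uses a two-stage LP instead.
active = w >= mu_min / 2
u, _ = nnls(A[:, active], d - A[:, active] @ (mu_min * np.ones(active.sum())))
w_dsio = np.zeros_like(w)
w_dsio[active] = mu_min + u

for name, ww in [("post-processed", w_post), ("constrained", w_dsio)]:
    print(name, "dose RMS error:", np.sqrt(np.mean((A @ ww - d) ** 2)))
```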
Optimization Experiments With A Double Gauss Lens
NASA Astrophysics Data System (ADS)
Brixner, Berlyn; Klein, Morris M.
1988-05-01
This paper describes how a lens can be generated by starting from plane surfaces. Three different experiments, using the Los Alamos National Laboratory optimization procedure, all converged on the same stable prescriptions in the optimum minimum region. The starts were made first from an already optimized lens appearing in the literature, then from a powerless plane-surfaces configuration, and finally from a crude Super Angulon configuration. In each case the result was a double Gauss lens, which suggests that this type of lens may be the best compact six-glass solution for one imaging problem: an f/2 aperture and a moderate field of view. The procedures and results are discussed in detail.
Development of a Composite Tailoring Procedure for Airplane Wings
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
2000-01-01
The quest for finding optimum solutions to engineering problems has existed for a long time. In modern times, the development of optimization as a branch of applied mathematics is regarded to have originated in the works of Newton, Bernoulli and Euler. Venkayya has presented a historical perspective on optimization in [1]. The term 'optimization' is defined by Ashley [2] as a procedure "...which attempts to choose the variables in a design process so as formally to achieve the best value of some performance index while not violating any of the associated conditions or constraints". Ashley presented an extensive review of practical applications of optimization in the aeronautical field until about 1980 [2]. It was noted that there existed an enormous amount of published literature in the field of optimization, but its practical applications in industry were very limited. Over the past 15 years, though, optimization has been widely applied to address practical problems in aerospace design [3-5]. The design of high-performance aerospace systems is a complex task. It involves the integration of several disciplines such as aerodynamics, structural analysis, dynamics, and aeroelasticity. The problem involves multiple objectives and constraints pertaining to the design criteria associated with each of these disciplines. Many important trade-offs exist between the parameters that define the different disciplines. Therefore, the development of multidisciplinary design optimization (MDO) techniques, in which different disciplines and design parameters are coupled into a closed-loop numerical procedure, is appropriate for addressing such a complex problem. The importance of MDO in the successful design of aerospace systems has long been recognized. Recent developments in this field have been surveyed by Sobieszczanski-Sobieski and Haftka [6].
Digital adaptive flight controller development
NASA Technical Reports Server (NTRS)
Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.
1974-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
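A minimal Python sketch of the weighted least-squares identification step; the toy model, data, and uniform weights below are invented for illustration and are not from the study.

import numpy as np

def weighted_least_squares(X, y, w):
    """Solve min over theta of sum_i w_i * (y_i - X[i] @ theta)**2
    via the normal equations (X^T W X) theta = X^T W y."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy example: identify a, b in xdot = a*x + b*u from noisy samples
rng = np.random.default_rng(0)
x, u = rng.normal(size=200), rng.normal(size=200)
xdot = -1.5 * x + 0.8 * u + 0.05 * rng.normal(size=200)
theta = weighted_least_squares(np.column_stack([x, u]), xdot, np.ones(200))
print(theta)  # approximately [-1.5, 0.8]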
Shah, Peer Azmat; Hasbullah, Halabi B.; Lawal, Ibrahim A.; Aminu Mu'azu, Abubakar; Tang Jung, Low
2014-01-01
Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of a shared secret Token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintenance of the correspondent node's compatibility status. TOTP-RO was implemented in the network simulator NS-2, and an analytical analysis was also performed. Analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6's Return-Routability-based Route Optimization (RR-RO). PMID:24688398
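The abstract does not specify TOTP-RO's message formats, but the TOTP primitive it builds on is the standard RFC 6238 construction. A minimal Python sketch with a hypothetical shared secret Token:

import hmac, hashlib, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Standard RFC 6238 TOTP code for the current time step."""
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Mobile node and correspondent node derive the same short-lived code
print(totp(b"hypothetical-shared-token"))

Because both ends compute the code from the shared Token and the current time, no home-test/care-of-test round trips are needed, which is where the handover delay saving comes from.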
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LES to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimization in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.
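A hedged sketch of the first procedure's training step: a small neural network regresses a subgrid-scale term on resolved-scale inputs. The features and target below are synthetic stand-ins for the filtered-DNS database, and scikit-learn's MLPRegressor stands in for the paper's ANN.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))       # stand-in resolved-scale parameters
y = np.tanh(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=5000)  # stand-in SGS term
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                     random_state=0).fit(X, y)
print(model.score(X, y))             # structural (a priori) fit quality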
Schumann, Marcel; Armen, Roger S
2013-05-30
Molecular docking of small-molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of the CV% for the parameter worst-estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
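One plausible reading of the alternating Gauss-Newton/Levenberg-Marquardt scheme, sketched in Python on a toy exponential-decay fit (the HID minimal model itself is not reproduced here):

import numpy as np

def gn_lm_fit(r, J, theta, iters=50):
    """Take an undamped Gauss-Newton step when it reduces the residual,
    otherwise fall back to a damped Levenberg-Marquardt step."""
    lam = 1e-3
    for _ in range(iters):
        Ji, ri = J(theta), r(theta)
        H, g = Ji.T @ Ji, Ji.T @ ri
        step = np.linalg.solve(H, -g)                     # Gauss-Newton
        if np.sum(r(theta + step) ** 2) >= np.sum(ri ** 2):
            step = np.linalg.solve(H + lam * np.eye(len(theta)), -g)  # LM
            lam *= 10.0
        else:
            lam = max(lam / 10.0, 1e-12)
        theta = theta + step
    return theta

# Toy two-parameter decay model y = a * exp(-k * t)
t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.3 * t)
r = lambda p: p[0] * np.exp(-p[1] * t) - y
J = lambda p: np.column_stack([np.exp(-p[1] * t),
                               -p[0] * t * np.exp(-p[1] * t)])
print(gn_lm_fit(r, J, np.array([1.0, 0.5])))  # approaches [2.0, 1.3]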
MINIVER: Miniature version of real/ideal gas aero-heating and ablation computer program
NASA Technical Reports Server (NTRS)
Hendler, D. R.
1976-01-01
The computer code provides heat transfer multiplication factors, special flow field simulation techniques, different heat transfer methods, different transition criteria, crossflow simulation, and a more efficient thin-skin thickness optimization procedure.
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions and ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-Hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance across the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to other combinatorial optimization problems.
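For the self-adaptation idea in isolation, a textbook (1, lambda) evolution strategy with log-normal step-size self-adaptation looks like the following; this is a generic sketch, not the paper's full memetic framework with neighborhood-based local search:

import numpy as np

def self_adaptive_es(f, x0, sigma0=0.5, gens=300, lam=10, seed=0):
    """Each offspring first mutates its own step size, then its solution;
    selecting the best offspring also selects its step size."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = len(x)
    tau, sigma = 1.0 / np.sqrt(2.0 * n), sigma0
    for _ in range(gens):
        sig = sigma * np.exp(tau * rng.normal(size=lam))   # mutate strengths
        kids = x + sig[:, None] * rng.normal(size=(lam, n))
        best = min(range(lam), key=lambda i: f(kids[i]))
        x, sigma = kids[best], sig[best]
    return x

print(self_adaptive_es(lambda v: np.sum(v ** 2), [3.0, -2.0, 1.0]))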
Optimized Non-Obstructive Particle Damping (NOPD) Treatment for Composite Honeycomb Structures
NASA Technical Reports Server (NTRS)
Panossian, H.
2008-01-01
Non-Obstructive Particle Damping (NOPD) technology is a passive vibration damping approach whereby metallic or non-metallic particles in spherical or irregular shapes, of heavy or light consistency, and even liquid particles are placed inside cavities or attached to structures by an appropriate means at strategic locations, to absorb vibration energy. The objective of the work described herein is the development of a design optimization procedure and discussion of test results for such an NOPD treatment on honeycomb (HC) composite structures, based on finite element modeling (FEM) analyses, optimization and tests. Modeling and predictions were performed and tests were carried out to correlate the test data with the FEM. The optimization procedure consisted of defining a global objective function, using finite difference methods, to determine the optimal values of the design variables through quadratic linear programming. The optimization process was carried out by targeting the highest dynamic displacements of several vibration modes of the structure and finding an optimal treatment configuration that will minimize them. An optimal design was thus derived and laboratory tests were conducted to evaluate its performance under different vibration environments. Three honeycomb composite beams, with Nomex core and aluminum face sheets, empty (untreated), uniformly treated with NOPD, and optimally treated with NOPD, according to the analytically predicted optimal design configuration, were tested in the laboratory. It is shown that the beam with optimal treatment has the lowest response amplitude. Described below are results of modal vibration tests and FEM predictions of the modal characteristics of honeycomb beams under no treatment, 50% uniform NOPD treatment, and the optimal NOPD treatment design configuration, together with verification against test data.
Optimal False Discovery Rate Control for Dependent Data
Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe
2013-01-01
This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate, is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls the false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma, where it identifies a few more genetic variants potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
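For context, the Benjamini-Hochberg step-up rule is a typical member of the p-value-based class that the marginal plug-in procedure is compared against; a short sketch:

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Reject the k smallest p-values, where k is the largest i with
    p_(i) <= q * i / m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ok = p[order] <= q * np.arange(1, m + 1) / m
    k = ok.nonzero()[0].max() + 1 if ok.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
# rejects only the two smallest p-values at q = 0.05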
Mazzitelli, S; Tosi, A; Balestra, C; Nastruzzi, C; Luca, G; Mancuso, F; Calafiore, R; Calvitti, M
2008-09-01
The optimization, through a Design of Experiments (DoE) approach, of a microencapsulation procedure for isolated neonatal porcine islets (NPI) is described. The applied method is based on the generation of monodisperse droplets by a vibrational nozzle. An alginate/polyornithine encapsulation procedure, developed and validated in our laboratory for almost a decade, was used to embed the pancreatic islets. We analyzed different experimental parameters including frequency of vibration, amplitude of vibration, polymer pumping rate, and distance between the nozzle and the gelling bath. We produced calcium-alginate gel microbeads with excellent morphological characteristics as well as a very narrow size distribution. The automatically produced microcapsules did not alter the morphology, viability or functional properties of the enveloped NPI. The optimization of this automatic procedure may provide a novel approach to obtaining a large number of batches possibly suitable for large scale production of immunoisolated NPI for in vivo cell transplantation procedures in humans.
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
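Lancaster's procedure generalizes Fisher's method by giving each p-value its own chi-square degrees of freedom (the weight); Fisher's method is the special case where every weight is 2. A sketch under independence (the paper's correlated version adjusts the null distribution):

import numpy as np
from scipy import stats

def lancaster_combine(pvals, weights):
    """Combine p-values as T = sum_i Qchi2(1 - p_i; w_i) and refer T to
    chi-square with sum_i w_i degrees of freedom (independence assumed)."""
    t = sum(stats.chi2.isf(p, df=w) for p, w in zip(pvals, weights))
    return stats.chi2.sf(t, df=sum(weights))

# Three genes in a pathway with assumed weights
print(lancaster_combine([0.01, 0.20, 0.03], [2, 2, 4]))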
Standardization of Solar Mirror Reflectance Measurements - Round Robin Test: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyen, S.; Lupfert, E.; Fernandez-Garcia, A.
2010-10-01
Within the SolarPaces Task III standardization activities, DLR, CIEMAT, and NREL have concentrated on optimizing the procedure to measure the reflectance of solar mirrors. From this work, the laboratories have developed a clear definition of the method and requirements needed of commercial instruments for reliable reflectance results. A round robin test was performed between the three laboratories with samples that represent all of the commercial solar mirrors currently available for concentrating solar power (CSP) applications. The results show surprisingly large differences in hemispherical reflectance (sh) of 0.007 and specular reflectance (ss) of 0.004 between the laboratories. These differences indicate the importance of minimum instrument requirements and standardized procedures. Based on these results, the optimal procedure will be formulated and validated with a new round robin test in which a better accuracy is expected. Improved instruments and reference standards are needed to reach the necessary accuracy for cost and efficiency calculations.
Computer-aided diagnostic strategy selection.
Greenes, R A
1986-03-01
Determination of the optimal diagnostic work-up strategy for the patient is becoming a major concern for the practicing physician. Overlap of the indications for various diagnostic procedures, differences in their invasiveness or risk, and high costs have made physicians aware of the need to consider the choice of procedure carefully, as well as its relation to management actions available. In this article, the author discusses research approaches that aim toward development of formal decision analytic methods to allow the physician to determine optimal strategy; clinical algorithms or rules as guides to physician decisions; improved measures for characterizing the performance of diagnostic tests; educational tools for increasing the familiarity of physicians with the concepts underlying these measures and analytic procedures; and computer-based aids for facilitating the employment of these resources in actual clinical practice.
NASA Astrophysics Data System (ADS)
Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr
2016-10-01
The paper is devoted to the development of a methodology to optimize the external aerodynamics of the engine. The optimization procedure is based on numerical solution of the Reynolds-averaged Navier-Stokes equations. A surrogate-based method is used for the optimization. As a test problem, the optimal shape design of a turbofan nacelle is considered. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of third-generation multidisciplinary optimization, developed in the AGILE project.
Metafitting: Weight optimization for least-squares fitting of PTTI data
NASA Technical Reports Server (NTRS)
Douglas, Rob J.; Boulanger, J.-S.
1995-01-01
For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
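The central metafitting quantity, the standard uncertainty of a parameterized fit extrapolated from calibration runs, can be illustrated for a straight-line phase/frequency fit under an assumed white-noise model (a toy stand-in for the paper's general PTTI noise model):

import numpy as np

def extrapolation_uncertainty(t_cal, sigma, t_query):
    """Standard uncertainty of a weighted straight-line fit (offset plus
    rate) propagated to t_query, with weights 1/sigma_i**2."""
    X = np.column_stack([np.ones_like(t_cal), t_cal])
    W = np.diag(1.0 / sigma ** 2)
    cov = np.linalg.inv(X.T @ W @ X)     # parameter covariance
    xq = np.array([1.0, t_query])
    return np.sqrt(xq @ cov @ xq)

t = np.arange(0.0, 10.0)                 # daily calibrations (days)
print(extrapolation_uncertainty(t, np.full(10, 1e-9), 17.0))  # a week ahead

Reweighting the calibration runs changes cov, which is exactly the lever that the optimum-weight search works with.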
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
Basic Principles of Lyophilization, Part 1.
Akers, Michael J
2015-01-01
The achievement of a high-quality lyophilized (freeze-dried) dosage form involves the combination of optimal formulation design and optimal freeze-dry cycle design. This 2-part article describes how this can be done. Part 1 discusses the basic principles and procedures of lyophilization up to a discussion on the different stages of lyophilization. The stages of lyophilization are discussed in part 2.
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user interface software package developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.
Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke
2014-09-01
The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.
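The optimal problem-based procedure can be made concrete: enumerate the nearest-ten roundings of each operand and keep the pair whose product is closest to the exact answer (the sketch scores proximity only; the simplification side of the balance is not modeled):

def best_problem_based_estimate(a, b):
    """Return the round-down/round-up pair whose product best
    approximates a * b."""
    cands = [(ra, rb)
             for ra in (a - a % 10, a - a % 10 + 10)
             for rb in (b - b % 10, b - b % 10 + 10)]
    return min(cands, key=lambda c: abs(c[0] * c[1] - a * b))

print(best_problem_based_estimate(42, 57))  # (40, 60): 2400 vs exact 2394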
NASA Astrophysics Data System (ADS)
Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung
2016-12-01
In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms often possess noteworthy shortcomings such as the trapping of solutions at local extrema, a limited number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computation tool for optimal design of MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
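A minimal DE/rand/1/bin sketch in Python, with discrete design variables handled by rounding trial vectors (one common way to mix discrete variables into DE, consistent with the abstract but not necessarily the authors' exact scheme); the cheap analytic objective stands in for the FEA evaluation:

import numpy as np

def de_minimize(f, bounds, discrete, pop_size=30, gens=150, F=0.7, CR=0.9,
                seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    pop[:, discrete] = np.round(pop[:, discrete])
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True      # guarantee one mutated gene
            trial = np.where(cross, v, pop[i])
            trial[discrete] = np.round(trial[discrete])
            c = f(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    return pop[cost.argmin()], cost.min()

# Stand-in objective: e.g. coil turns (discrete) and a gap size (continuous)
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 0.75) ** 2
print(de_minimize(f, [(1, 8), (0.1, 2.0)], discrete=np.array([True, False])))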
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency of the frequency range used, respectively.
Design of piezoelectric transformer for DC/DC converter with stochastic optimization method
NASA Astrophysics Data System (ADS)
Vasic, Dejan; Vido, Lionel
2016-04-01
Piezoelectric transformers have been adopted in recent years due to their many inherent advantages such as safety, absence of EMI problems, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistor. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, there are non-linear electronic circuits connected before and after the transformer. Consequently, the output load is variable, and due to the output capacitance of the transformer the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different sizes of the transformer and their characteristics. In other words, this method looks for the best size of the transformer for an optimal efficiency condition that is suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum size of the PT needed. The design procedure is illustrated for a given specification. The PT derived from the proposed design procedure can guarantee both good efficiency and a sufficient range for load variation.
Procedures for shape optimization of gas turbine disks
NASA Technical Reports Server (NTRS)
Cheu, Tsu-Chien
1989-01-01
Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of the selected points on the disk contours are used as the design variables. Structural weight, stress and their derivatives with respect to the design variables are calculated by an efficient finite element method for design senitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.
ERIC Educational Resources Information Center
O'Leary, Timothy P.; Brown, Richard E.
2013-01-01
We have previously shown that apparatus design can affect visual-spatial cue use and memory performance of mice on the Barnes maze. The present experiment extends these findings by determining the optimal behavioral measures and test procedure for analyzing visuo-spatial learning and memory in three different Barnes maze designs. Male and female…
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy that can be used. Nowadays, often several sensors operating in different frequency bands become available on a sensor platform. It is an attractive goal to use the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are greatly expensive and are thus not available in general. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach allows to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a-priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization process are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
[Optimization of lyophilization procedures for freeze-drying of human red blood cells].
Chen, Lin-feng; Liu, Jing-han; Wang, De-qing; Ouyang, Xi-lin; Zhuang, Yuan; Che, Ji; Yu, Yang; Li, Hui
2010-09-01
To investigate the different parameters of the lyophilization procedures that affect the recovery of rehydrated red blood cells (RBCs), human RBCs loaded in tubes were cooled in 4 different modes and subjected to a water bath at 25°C. The morphological changes of the RBCs were observed to assess the degree of vitrification, and the specimens were placed in the freeze-dryer with the temperature set at -40, -50, -60, -70 and -80°C. The rates of temperature rise of the main and secondary drying in the lyophilization procedures were compared, and the water residue in the specimens was determined. The protectant did not show ice crystal formation in the course of freezing and thawing. No significant difference was found in the recovery rate of the rehydrated RBCs freeze-dried at the minimum temperature of -70°C and -80°C (P > 0.05). The E procedure resulted in the maximum recovery of the RBCs (83.14% ± 9.55%) and Hb (85.33% ± 11.42%), showing significant differences from the other groups (P < 0.01 or P < 0.05). The recovery of the RBCs showed a positive correlation to the water residue in the samples. Fast cooling in liquid nitrogen and shelf precooling at -70°C, with a moderate rate of temperature rise in lyophilization and a starting drying temperature close to the shelf equilibrium temperature, produce the optimal freeze-drying result for human RBCs.
ERIC Educational Resources Information Center
Bergsten, Christer; Engelbrecht, Johann; Kågesten, Owe
2017-01-01
One challenge for an optimal design of the mathematical components in engineering education curricula is to understand how the procedural and conceptual dimensions of mathematical work can be matched with different demands and contexts from the education and practice of engineers. The focus in this paper is on how engineering students respond to…
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) is developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
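The K-S function itself has a standard closed form; the sketch below uses the numerically stable max-shifted version and shows per-objective weight factors shifting the emphasis (the multiplicative placement of the weights is an assumption, since the abstract does not give the exact formulation):

import numpy as np

def ks_envelope(g, rho=50.0, weights=None):
    """KS(g) = g_max + (1/rho) * ln(sum_i exp(rho * (g_i - g_max))),
    a smooth, differentiable envelope of the objectives/constraints g_i."""
    g = np.asarray(g, dtype=float)
    if weights is not None:
        g = np.asarray(weights, dtype=float) * g
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

print(ks_envelope([0.2, 0.5]))                  # close to max(g) = 0.5
print(ks_envelope([0.2, 0.5], weights=[3, 1]))  # first objective now dominates

Because the envelope is a single smooth scalar, an unconstrained method such as BFGS can then be applied directly.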
Expected p-values in light of an ROC curve analysis applied to optimal multiple testing procedures.
Vexler, Albert; Yu, Jihnhee; Zhao, Yang; Hutson, Alan D; Gurevich, Gregory
2017-01-01
Many statistical studies report p-values for inferential purposes. In several scenarios, the stochastic aspect of p-values is neglected, which may contribute to drawing wrong conclusions in real data experiments. The stochastic nature of p-values makes it difficult to use them to examine the performance of given testing procedures or associations between investigated factors. We turn our focus to the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we prove that the EPV can be considered in the context of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. The ROC-based framework provides a new and efficient methodology for investigating and constructing statistical decision-making procedures, including: (1) evaluation and visualization of properties of the testing mechanisms, considering, e.g., partial EPVs; (2) developing optimal tests via the minimization of EPVs; (3) creation of novel methods for optimally combining multiple test statistics. We demonstrate that the proposed EPV-based approach allows us to maximize the integrated power of testing algorithms with respect to various significance levels. In an application, we use the proposed method to construct the optimal test and analyze a myocardial infarction disease dataset. We outline the usefulness of the "EPV/ROC" technique for evaluating different decision-making procedures, their constructions and properties with an eye towards practical applications.
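The EPV is the expectation of the p-value under the alternative; for a test that rejects for large T this equals P(T0 >= T1), with T0 drawn under the null and T1 under the alternative, which is an ROC-type area (one minus the AUC, up to ties). A Monte Carlo sketch:

import numpy as np

def expected_p_value(t_alt, t_null):
    """Average null exceedance probability of statistics drawn under
    the alternative, i.e. a Monte Carlo estimate of P(T0 >= T1)."""
    t_null = np.sort(t_null)
    p = 1.0 - np.searchsorted(t_null, t_alt, side='left') / len(t_null)
    return p.mean()

rng = np.random.default_rng(0)
# Unit mean shift: the exact EPV is Phi(-1/sqrt(2)), about 0.240
print(expected_p_value(rng.normal(1.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000)))

Smaller EPV means a better test, which is what makes EPV minimization a usable design criterion.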
Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles
NASA Technical Reports Server (NTRS)
Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.
1991-01-01
A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of the changes in the nozzle wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and a Mach 6 and a Mach 15 contoured nozzle.
Delpla, Ianis; Florea, Mihai; Pelletier, Geneviève; Rodriguez, Manuel J
2018-06-04
Trihalomethanes (THMs) and Haloacetic Acids (HAAs) are the main groups detected in drinking water and are consequently strictly regulated. However, the increasing quantity of data for disinfection byproducts (DBPs) produced from research projects and regulatory programs remains largely unexploited, despite a great potential for its use in optimizing drinking water quality monitoring to meet specific objectives. In this work, we developed a procedure to optimize locations and periods for DBPs monitoring based on a set of monitoring scenarios using the cluster analysis technique. The optimization procedure used a robust set of spatio-temporal monitoring results on DBPs (THMs and HAAs) generated from intensive sampling campaigns conducted in a residential sector of a water distribution system. Results show that cluster analysis allows for the classification of water quality into different groups of THMs and HAAs according to their similarities, and the identification of locations presenting water quality concerns. By using cluster analysis with different monitoring objectives, this work provides a set of monitoring solutions and a comparison between various monitoring scenarios for decision-making purposes. Finally, it was demonstrated that the data from intensive monitoring of free chlorine residual and water temperature as DBP proxy parameters, when processed using cluster analysis, could also help identify the optimal sampling points and periods for regulatory THMs and HAAs monitoring. Copyright © 2018 Elsevier Ltd. All rights reserved.
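The clustering step can be illustrated with hierarchical (Ward) clustering of a small location-by-period table; the algorithm choice and the THM values below are illustrative assumptions, not the study's data:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: sampling locations; columns: sampling periods (THM, ug/L; invented)
thm = np.array([[35, 48, 60, 52],
                [34, 50, 63, 55],
                [12, 18, 25, 20],
                [13, 17, 24, 21],
                [55, 70, 88, 74]])
labels = fcluster(linkage(thm, method='ward'), t=3, criterion='maxclust')
print(labels)   # locations grouped by similar spatio-temporal behaviour

One representative location per cluster then suffices for routine sampling, which is how the monitoring effort can be reduced without losing spatial coverage.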
Minciardi, Riccardo; Paolucci, Massimo; Robba, Michela; Sacile, Roberto
2008-11-01
An approach to sustainable municipal solid waste (MSW) management is presented, with the aim of supporting the decision on the optimal flows of solid waste sent to landfill, recycling and different types of treatment plants, whose sizes are also decision variables. This problem is modeled with a non-linear, multi-objective formulation. Specifically, four objectives to be minimized have been taken into account, which are related to economic costs, unrecycled waste, sanitary landfill disposal and environmental impact (incinerator emissions). An interactive reference point procedure has been developed to support decision making; these methods are considered appropriate for multi-objective decision problems in environmental applications. In addition, interactive methods are generally preferred by decision makers as they can be directly involved in the various steps of the decision process. Some results deriving from the application of the proposed procedure are presented. The application of the procedure is exemplified by considering the interaction with two different decision makers who are assumed to be in charge of planning the MSW system in the municipality of Genova (Italy).
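Interactive reference-point methods typically rank candidate plans with an achievement scalarizing function; a generic (assumed) form in Python:

import numpy as np

def achievement(f_vals, reference, weights, rho=1e-4):
    """Weighted Chebyshev achievement function: lower values mean the
    plan comes closer to (or beats) the decision maker's reference
    levels; the small augmentation term avoids weakly optimal points."""
    d = np.asarray(weights) * (np.asarray(f_vals) - np.asarray(reference))
    return d.max() + rho * d.sum()

# Four scaled objectives: cost, unrecycled waste, landfill, emissions
print(achievement([0.8, 0.4, 0.6, 0.5],    # candidate plan
                  [0.7, 0.5, 0.6, 0.4],    # decision maker's aspiration
                  [1.0, 1.0, 1.0, 1.0]))

At each iteration the decision maker moves the reference point, and the next candidate plan is found by minimizing this function over the feasible waste flows.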
Macyszyn, Luke; Attiah, Mark; Ma, Tracy S; Ali, Zarina; Faught, Ryan; Hossain, Alisha; Man, Karen; Patel, Hiren; Sobota, Rosanna; Zager, Eric L; Stein, Sherman C
2017-05-01
OBJECTIVE Moyamoya disease (MMD) is a chronic cerebrovascular disease that can lead to devastating neurological outcomes. Surgical intervention is the definitive treatment, with direct, indirect, and combined revascularization procedures currently employed by surgeons. The optimal surgical approach, however, remains unclear. In this decision analysis, the authors compared the effectiveness of revascularization procedures in both adult and pediatric patients with MMD. METHODS A comprehensive literature search was performed for studies of MMD. Using complication and success rates from the literature, the authors constructed a decision analysis model for treatment using a direct and indirect revascularization technique. Utility values for the various outcomes and complications were extracted from the literature examining preferences in similar clinical conditions. Sensitivity analysis was performed. RESULTS A structured literature search yielded 33 studies involving 4197 cases. Cases were divided into adult and pediatric populations. These were further subdivided into 3 different treatment groups: indirect, direct, and combined revascularization procedures. In the pediatric population at 5- and 10-year follow-up, there was no significant difference between indirect and combination procedures, but both were superior to direct revascularization. In adults at 4-year follow-up, indirect was superior to direct revascularization. CONCLUSIONS In the absence of factors that dictate a specific approach, the present decision analysis suggests that direct revascularization procedures are inferior in terms of quality-adjusted life years in adults at 4 years and in children at 5 and 10 years postoperatively. These findings were statistically significant (p < 0.001 in all cases), suggesting that indirect and combination procedures may offer optimal results at long-term follow-up.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
Optimal front light design for reflective displays under different ambient illumination
NASA Astrophysics Data System (ADS)
Wang, Sheng-Po; Chang, Ting-Ting; Li, Chien-Ju; Bai, Yi-Ho; Hu, Kuo-Jui
2011-01-01
The goal of this study is to find the optimal luminance and color temperature of front light for reflective displays under different ambient illumination by conducting a series of psychophysical experiments. A color and brightness tunable front light device with ten LED units was built and calibrated to present 256 luminance levels and 13 different color temperatures at a fixed luminance of 200 cd/m2. The experiment results revealed the best luminance and color temperature settings for human observers under different ambient illuminants, which could also assist e-paper manufacturers in designing front light devices and presenting the best image quality on reflective displays. Furthermore, a similar experimental procedure was conducted utilizing a new flexible e-signage display developed by ITRI, and an optimal front light device for the new display panel was designed and utilized.
Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
2015-06-01
Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
NASA Astrophysics Data System (ADS)
Hu, Weifei; Park, Dohyun; Choi, DongHoon
2013-12-01
A composite blade structure for a 2 MW horizontal axis wind turbine is optimally designed. The design requirements are to simultaneously minimize material cost and blade weight while satisfying constraints on stress ratio, tip deflection, fatigue life and laminate layup. The stress ratio and tip deflection under extreme gust loads and the fatigue life under a stochastic normal wind load are evaluated. A blade element wind load model is proposed to account for the wind pressure difference due to blade height change during rotor rotation. For fatigue life evaluation, the stress result of an implicit nonlinear dynamic analysis under a time-varying fluctuating wind is converted to histograms of the mean and amplitude of the maximum stress ratio using the rainflow counting algorithm. Miner's rule is employed to predict the fatigue life. After integrating and automating the whole analysis procedure, an evolutionary algorithm is used to solve the discrete optimization problem.
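The fatigue-life step combines the rainflow-counted cycle histogram with Miner's linear damage rule; a sketch with an assumed power-law S-N curve and invented cycle counts:

import numpy as np

def miner_damage(amplitudes, counts, C=1e12, m=3.0):
    """Miner's rule D = sum_i n_i / N(S_i) with an assumed S-N curve
    N(S) = C * S**(-m); C and m are placeholders, not the blade's data."""
    N_allow = C * np.asarray(amplitudes, dtype=float) ** (-m)
    return float(np.sum(np.asarray(counts, dtype=float) / N_allow))

# Rainflow histogram of stress amplitudes (invented numbers)
D = miner_damage([20.0, 35.0, 50.0], [1e5, 2e4, 3e3])
print(D, "-> life is about", 1.0 / D, "repeats of the load block")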
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used in order to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
Brito, Thiago V.; Morley, Steven K.
2017-10-25
A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function, τ, that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
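A cost function with explicit magnitude and angular terms can be sketched as below; the weighting and normalization are assumptions, since only the two ingredients are named in the abstract:

import numpy as np

def tau(B_model, B_obs, w_mag=1.0, w_ang=1.0):
    """Relative magnitude error plus angular error (radians) between a
    modeled and an observed magnetic field vector."""
    nm, no = np.linalg.norm(B_model), np.linalg.norm(B_obs)
    mag_err = abs(nm - no) / no
    ang_err = np.arccos(np.clip(np.dot(B_model, B_obs) / (nm * no), -1.0, 1.0))
    return w_mag * mag_err + w_ang * ang_err

print(tau(np.array([95.0, 3.0, 0.0]), np.array([100.0, 0.0, 0.0])))

Summing tau over all satellite samples in a window and minimizing it over the model's free input parameters is the kind of fitting loop the optimization describes.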
2012-01-01
Background Baeyer-Villiger monooxygenases (BVMOs) represent a group of enzymes of considerable biotechnological relevance as illustrated by their growing use as biocatalyst in a variety of synthetic applications. However, due to their increased use the reproducible expression of BVMOs and other biotechnologically relevant enzymes has become a pressing matter while knowledge about the factors governing their reproducible expression is scattered. Results Here, we have used phenylacetone monooxygenase (PAMO) from Thermobifida fusca, a prototype Type I BVMO, as a model enzyme to develop a stepwise strategy to optimize the biotransformation performance of recombinant E. coli expressing PAMO in 96-well microtiter plates in a reproducible fashion. Using this system, the best expression conditions of PAMO were investigated first, including different host strains, temperature as well as time and induction period for PAMO expression. This optimized system was used next to improve biotransformation conditions, the PAMO-catalyzed conversion of phenylacetone, by evaluating the best electron donor, substrate concentration, and the temperature and length of biotransformation. Combining all optimized parameters resulted in a more than four-fold enhancement of the biocatalytic performance and, importantly, this was highly reproducible as indicated by the relative standard deviation of 1% for non-washed cells and 3% for washed cells. Furthermore, the optimized procedure was successfully adapted for activity-based mutant screening. Conclusions Our optimized procedure, which provides a comprehensive overview of the key factors influencing the reproducible expression and performance of a biocatalyst, is expected to form a rational basis for the optimization of miniaturized biotransformations and for the design of novel activity-based screening procedures suitable for BVMOs and other NAD(P)H-dependent enzymes as well. PMID:22720747
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brito, Thiago V.; Morley, Steven K.
A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function, τ, that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model; that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both magnitude and direction when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
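The abstract does not give the exact form of τ, but a minimal sketch of a combined magnitude-and-angle cost of this kind might look as follows (the equal weighting and the averaging over samples are assumptions for illustration, not the paper's definition):

```python
import numpy as np

def tau(B_model, B_obs, w_mag=1.0, w_ang=1.0):
    """Toy cost combining relative |B| error and angular error (radians).

    B_model, B_obs: (N, 3) arrays of modeled and measured field vectors.
    The weights w_mag/w_ang are placeholders, not the paper's exact form.
    """
    mag_m = np.linalg.norm(B_model, axis=1)
    mag_o = np.linalg.norm(B_obs, axis=1)
    mag_err = np.abs(mag_m - mag_o) / mag_o
    cosang = np.sum(B_model * B_obs, axis=1) / (mag_m * mag_o)
    ang_err = np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.mean(w_mag * mag_err + w_ang * ang_err)
```

Minimizing such a cost over a model's free input parameters, for field vectors sampled along several satellite orbits, is the kind of fitting the abstract describes.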
New Method of Calibrating IRT Models.
ERIC Educational Resources Information Center
Jiang, Hai; Tang, K. Linda
This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that this kind of procedure is not easily affected by local optima and…
Optimization of HTS superconducting magnetic energy storage magnet volume
NASA Astrophysics Data System (ADS)
Korpela, Aki; Lehtonen, Jorma; Mikkonen, Risto
2003-08-01
Nonlinear optimization problems in the field of electromagnetics have been successfully solved by means of sequential quadratic programming (SQP) and the finite element method (FEM). For example, the combination of SQP and FEM has been proven to be an efficient tool in the optimization of low temperature superconductor (LTS) superconducting magnetic energy storage (SMES) magnets. The procedure can also be applied to the optimization of HTS magnets. However, due to the strongly anisotropic material and the slanted electric field versus current density characteristic of high temperature superconductors (HTS), their optimization is quite different from that of the LTS. In this paper the volumes of solenoidal conduction-cooled Bi-2223/Ag SMES magnets have been optimized at an operating temperature of 20 K. In addition to the electromagnetic constraints, the stress caused by tape bending has also been taken into account. Several optimization runs with different initial geometries were performed in order to find the best possible solution for a certain energy requirement. The optimization constraints describe steady-state operation; thus the presented coil geometries are designed for slow ramping rates. Different energy requirements were investigated in order to find the energy dependence of the design parameters of optimized solenoidal HTS coils. According to the results, these dependences can be described with polynomial expressions.
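As a schematic of the SQP step in such a procedure, the sketch below minimizes a solenoid winding volume under a stored-energy constraint with SciPy's SLSQP solver. The analytic energy proxy, the bounds, and the 1 MJ requirement are placeholders; the actual procedure evaluates the electromagnetic quantities with FEM and includes the bending-stress constraints.

```python
import numpy as np
from scipy.optimize import minimize

MU0 = 4e-7 * np.pi

def volume(x):
    r_in, r_out, h = x                      # inner/outer radius, height [m]
    return np.pi * (r_out**2 - r_in**2) * h

def stored_energy(x, B=5.0):
    r_in, r_out, h = x
    # crude proxy: field energy density B^2/(2*mu0) over the bore volume;
    # a real design would compute the energy from an FEM field solution
    return B**2 / (2 * MU0) * np.pi * r_in**2 * h

E_req = 1e6  # 1 MJ stored-energy requirement (illustrative)
res = minimize(volume, x0=[0.2, 0.3, 0.5], method="SLSQP",
               bounds=[(0.05, 1.0)] * 3,
               constraints=[{"type": "ineq",
                             "fun": lambda x: stored_energy(x) - E_req}])
print(res.x, volume(res.x))
```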
Foltran, Fabiana A; Silva, Luciana C C B; Sato, Tatiana O; Coury, Helenice J C G
2013-01-01
The recording of human movement is an essential requirement for biomechanical, clinical, and occupational analysis, allowing assessment of postural variation, occupational risks, and preventive programs in physical therapy and rehabilitation. The flexible electrogoniometer (EGM), considered a reliable and accurate device, is used for dynamic recordings of different joints. Despite these advantages, the EGM is susceptible to measurement errors known as crosstalk. There are two known types of crosstalk: crosstalk due to sensor rotation and inherent crosstalk. Correction procedures have been proposed to correct these errors; however, no study has used both procedures in clinical measures of wrist movements with the aim of optimizing the correction. The aims were to evaluate the effects of mathematical correction procedures on: 1) crosstalk due to forearm rotation; 2) inherent sensor crosstalk; and 3) the combination of these two procedures. Forty-three healthy subjects had their maximum range of motion in wrist flexion/extension and ulnar/radial deviation recorded by EGM. The results were analyzed descriptively, and procedures were compared by differences. There was no significant difference in measurements before and after the application of the correction procedures (P<0.05). Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by EGM.
Task Analysis - Its Relation to Content Analysis.
ERIC Educational Resources Information Center
Gagne, Robert M.
Task analysis is a procedure having the purpose of identifying different kinds of performances which are outcomes of learning, in order to make possible the specification of optimal instructional conditions for each kind of outcome. Task analysis may be related to content analysis in two different ways: (1) it may be used to identify the probably…
47 CFR 1.2202 - Competitive bidding design options.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
1998-05-01
Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... The study models vegetation coverage based on past coverage. A literature survey was conducted to identify artificial neural network analysis techniques applicable for...
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Camacho-Gómez, C.; Magdaleno, A.; Pereira, E.; Lorenzana, A.
2017-04-01
In this paper we tackle a problem of optimal design and location of Tuned Mass Dampers (TMDs) for structures subjected to earthquake ground motions, using a novel meta-heuristic algorithm. Specifically, the Coral Reefs Optimization (CRO) with Substrate Layer (CRO-SL) is proposed as a competitive co-evolution algorithm with different exploration procedures within a single population of solutions. The proposed approach is able to solve the TMD design and location problem, by exploiting the combination of different types of searching mechanisms. This promotes a powerful evolutionary-like algorithm for optimization problems, which is shown to be very effective in this particular problem of TMDs tuning. The proposed algorithm's performance has been evaluated and compared with several reference algorithms in two building models with two and four floors, respectively.
Andrade-Eiroa, Auréa; Diévart, Pascal; Dagaut, Philippe
2010-04-15
A new procedure for optimizing PAH separation in very complex mixtures by reversed-phase high performance liquid chromatography (RPLC) is proposed. It is based on gradually changing the experimental conditions throughout the chromatographic procedure as a function of the physical properties of the compounds being eluted. The temperature and flow-rate gradients allowed obtaining the optimum resolution in long chromatographic determinations where PAHs with very different medium polarizability have to be separated. Whereas optimization procedures for RPLC methodologies had previously been accomplished regardless of the physico-chemical properties of the target analytes, we found that resolution is highly dependent on those properties. Based on the resolution criterion, the optimization process for a mixture of the 16 EPA PAHs was performed on three sets of difficult-to-separate PAH pairs: acenaphthene-fluorene (for the optimization of the first part of the chromatogram, where the light PAHs elute), and benzo[g,h,i]perylene-dibenzo[a,h]anthracene and benzo[g,h,i]perylene-indeno[1,2,3-cd]pyrene (for the optimization of the second part of the chromatogram, where the heavier PAHs elute). Two-level full factorial designs were applied to detect interactions among the variables to be optimized: flow rate, column oven temperature, and mobile-phase gradient in the two parts of the studied chromatogram. Experimental data were fitted by multivariate nonlinear regression models, and optimum values of flow rate and temperature were obtained through mathematical analysis of the constructed models. An HPLC system equipped with a reversed-phase 5 μm C18, 250 mm × 4.6 mm column (with acetonitrile/water mobile phase), a column oven, a binary pump, a photodiode array detector (PDA), and a fluorimetric detector was used in this work. Optimum resolution was achieved by operating at 1.0 mL/min in the first part of the chromatogram (until 45 min) and 0.5 mL/min in the second one (from 45 min to the end), and by applying a programmed temperature gradient (15 °C until 30 min, progressively increasing to 40 °C at 45 min).
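To make the experimental-design step concrete, here is a minimal sketch of a two-level full factorial design with a main-effects-plus-interactions least-squares fit. The factor names follow the abstract; the resolution values are invented for illustration only.

```python
import itertools
import numpy as np

# two-level full factorial in coded units (-1/+1) for three factors:
# F = flow rate, T = oven temperature, G = mobile-phase gradient
levels = [-1, 1]
design = np.array(list(itertools.product(levels, repeat=3)))  # 8 runs

# resolution measured at each run (illustrative numbers, not real data)
y = np.array([1.2, 1.5, 1.1, 1.9, 1.3, 1.6, 1.4, 2.1])

# fit intercept, main effects, and two-factor interactions
X = np.column_stack([np.ones(8),
                     design,                           # F, T, G
                     design[:, 0] * design[:, 1],      # F*T
                     design[:, 0] * design[:, 2],      # F*G
                     design[:, 1] * design[:, 2]])     # T*G
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "F", "T", "G", "FT", "FG", "TG"], coef.round(3))))
```

Large interaction coefficients flag the factor combinations worth modeling with the multivariate nonlinear regression step the abstract describes.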
Evaluation of laser cutting process with auxiliary gas pressure by soft computing approach
NASA Astrophysics Data System (ADS)
Lazov, Lyubomir; Nikolić, Vlastimir; Jovic, Srdjan; Milovančević, Miloš; Deneva, Heristina; Teirumenieka, Erika; Arsic, Nebojsa
2018-06-01
Evaluation of the optimal laser cutting parameters is very important for high cut quality. Laser cutting is a highly nonlinear process with many parameters, which is the main challenge in the optimization process. Data mining is one of the most versatile methodologies that can be used for laser cutting process optimization. A support vector regression (SVR) procedure was implemented since it is a versatile and robust technique for very nonlinear data regression. The goal in this study was to determine the optimal laser cutting parameters to ensure robust conditions for minimization of average surface roughness. Three cutting parameters, the cutting speed, the laser power, and the assist gas pressure, were used in the investigation. A TruLaser 1030 technological system was used as the laser source, with nitrogen as the assist gas. The data mining prediction accuracy was very high according to the coefficient of determination (R2) and root mean square error (RMSE): R2 = 0.9975 and RMSE = 0.0337. Therefore the data mining approach could be used effectively for determination of the optimal conditions of the laser cutting process.
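A minimal sketch of this kind of SVR fit with scikit-learn, using synthetic stand-in data for the three named parameters (the study's dataset and hyperparameters are not available, so the ranges, response surface, and settings below are assumptions):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# columns: cutting speed, laser power, assist gas pressure (made-up units)
X = rng.uniform([1.0, 1.0, 8.0], [3.0, 4.0, 14.0], size=(60, 3))
# invented nonlinear surface-roughness response with a little noise
roughness = (0.5 + 0.3 * X[:, 0] - 0.1 * X[:, 1] + 0.02 * X[:, 2]
             + 0.05 * np.sin(X[:, 0] * X[:, 1]) + rng.normal(0, 0.02, 60))

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, roughness)
pred = model.predict(X)
print("R2 =", r2_score(roughness, pred),
      "RMSE =", mean_squared_error(roughness, pred) ** 0.5)
```

The fitted model can then be queried over a grid of cutting parameters to locate the combination predicting minimum average roughness.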
Determination of acetaminophen concentrations in serum by high-pressure liquid chromatography.
Horvitz, R A; Jatlow, P I
1977-09-01
We describe a method for determination of acetaminophen concentrations in serum by reversed-phase high-pressure liquid chromatography. The homolog N-propionyl-p-aminophenol was used as an internal standard. The procedure, which requires only a single extraction with diethyl ether, can be optimized to be linear over the ranges of 10 to 100 or 1 to 20 mg/liter. The within-run CV was 1.2%; the between-run CV was 4.4% and 4.9% at two different concentrations. Many commonly used drugs were tested and found not to interfere. The procedure is simple and rapid enough for use on an emergency basis in cases of overdosage, and can be optimized for measurement of either therapeutic or toxic concentrations.
The appropriateness of use of percutaneous transluminal coronary angioplasty in Spain.
Aguilar, M D; Fitch, K; Lázaro, P; Bernstein, S J
2001-05-01
The rapid increase in the number of percutaneous transluminal coronary angioplasty (PTCA) procedures performed in Spain in recent years raises questions about how appropriately this procedure is being used. To examine this issue, we studied the appropriateness of use of PTCA in Spanish patients and factors associated with inappropriate use. We applied criteria for the appropriate use of PTCA developed by an expert panel of Spanish cardiologists and cardiovascular surgeons to a random sample of 1913 patients undergoing PTCA in Spain in 1997. The patients were selected through a two-step sampling process, stratifying by hospital type (public/private) and volume of procedures (low/medium/high). We examined the association between inappropriate use of PTCA and different clinical and sociodemographic factors. Overall, 46% of the PTCA procedures were appropriate, 31% were uncertain and 22% were inappropriate. Two factors contributing to inappropriate use were patients' receipt of less than optimal medical therapy and their failure to undergo stress testing. Institutional type and volume of procedures were not significantly related with inappropriate use. One of every five PTCA procedures in Spain is done for inappropriate reasons. Assuring that patients receive optimal medical therapy and undergo stress testing when indicated could contribute to more appropriate use of PTCA.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.
NASA Astrophysics Data System (ADS)
Bortolotti, P.; Adolphs, G.; Bottasso, C. L.
2016-09-01
This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in the selection of the different materials of the blade, while providing indications to composite manufacturers on optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters among the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, with its spar cap and shell skin laminates subjected to optimization. The procedure identifies a blade optimum with a new spar cap laminate characterized by a higher longitudinal Young's modulus and higher cost than the initial one, which nevertheless induces both cost and mass savings in the blade. In terms of the shell skin, the adoption of a laminate with properties intermediate between a bi-axial and a tri-axial one also leads to slight structural improvements.
CT-guided brachytherapy of prostate cancer: reduction of effective dose from X-ray examination
NASA Astrophysics Data System (ADS)
Sanin, Dmitriy B.; Biryukov, Vitaliy A.; Rusetskiy, Sergey S.; Sviridov, Pavel V.; Volodina, Tatiana V.
2014-03-01
Computed tomography (CT) is one of the most effective and informative diagnostic methods. Though the share of CT scans among all radiographic procedures is 11% in the USA and 4% in European countries, CT makes the highest contribution to the collective effective dose from all radiographic procedures: 67% in the USA and 40% in European countries [1-5]. Therefore it is necessary to understand the significance of the dose a patient receives from CT imaging. Though the dose from multiple CT scans and the associated potential risk are of great concern in pediatric patients, this applies to adults as well. In this connection it is very important to develop optimal approaches to dose reduction and optimization of CT examination. The International Commission on Radiological Protection (ICRP) in its publications reminds radiologists that CT image quality is often higher than necessary for diagnostic confidence [6], and that there is potential to reduce the dose a patient receives from a CT examination [7]. In recent years many procedures, such as minimally invasive surgery, biopsy, brachytherapy, and different types of ablation, are carried out under the guidance of computed tomography [6,7], and during these procedures multiple CT scans focusing on a specific anatomic region are performed. At the Clinics of MRRC different types of treatment for patients with prostate cancer are used, including conformal CT-guided brachytherapy with implantation of 125I microsources into the gland under guidance of spiral CT [8]. The purpose of this study was therefore to choose an optimal method to reduce the radiation dose from CT during CT-guided prostate brachytherapy while obtaining images of the desired quality.
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... periodic optimization of detector response. Prior to introduction into service and at least annually... nitrogen. (2) One of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of Flame Ionization...
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1976-01-01
Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.
1980-05-01
Approximate or near-optimal solutions are the only practical solutions available. This paper identifies and compares some procedures which use independent near... The objective of this paper is to indicate some relatively new statistical procedures for obtaining an upper confidence limit on G. Each of these...
Numerical modeling and optimization of the Iguassu gas centrifuge
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.
2017-07-01
The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separative power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimization of the GC is performed, yielding the maximum of the separative power. The optimization is based on the BOBYQA method and explores the results of numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved by using a direct solver for the hydrodynamical and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
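BOBYQA is a derivative-free, bound-constrained trust-region method; one readily available implementation is NLopt's LN_BOBYQA. The sketch below wires it to a smooth stand-in objective, since the real evaluation (the coupled hydrodynamic and diffusion solvers) cannot be reproduced here; the two design variables and their bounds are likewise placeholders.

```python
import numpy as np
import nlopt

def neg_separative_power(x, grad):
    # stand-in for the expensive solver chain; nlopt passes a gradient
    # buffer even for derivative-free methods, which we ignore
    return -np.exp(-((x[0] - 0.7) ** 2 + (x[1] - 0.3) ** 2))

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)       # derivative-free BOBYQA, 2 variables
opt.set_min_objective(neg_separative_power)
opt.set_lower_bounds([0.0, 0.0])
opt.set_upper_bounds([1.0, 1.0])
opt.set_xtol_rel(1e-6)
x_opt = opt.optimize([0.5, 0.5])
print(x_opt, -opt.last_optimum_value())
```

Because each objective evaluation in the real procedure runs two PDE solves, a method like BOBYQA that needs no gradients and few evaluations is a natural fit.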
Backscattering measuring system for optimization of intravenous laser irradiation dose
NASA Astrophysics Data System (ADS)
Rusina, Tatyana V.; Popov, V. D.; Melnik, Ivan S.; Dets, Sergiy M.
1996-11-01
Intravenous laser blood irradiation, an effective method of biostimulation and physiotherapy, is becoming a more popular procedure. Optimal irradiation conditions need to be established individually for each patient. A fiber-optic feedback system combined with a conventional intravenous laser irradiation system was developed to control the irradiation process. The system consists of a He-Ne laser, a fiber-optic probe, and a signal analyzer. Intravenous blood irradiation was performed in 7 healthy volunteers and 19 patients with different diseases. The in vivo measurements were related to in vitro blood irradiation performed under the same conditions with force-circulated venous blood. Comparison of the temporal variations of backscattered light during all irradiation procedures showed a strong discrepancy in the optical properties of blood in patients with various health disorders from the second procedure onwards. The best curative effect was achieved when the intensity of backscattered light was constant for at least five minutes. As a result, the optimal irradiation dose was considered to be a 20-minute exposure to 3 mW He-Ne laser light at the end of the fourth procedure.
Addendum to final report, Optimizing traffic counting procedures.
DOT National Transportation Integrated Search
1987-01-01
The methodology described in entry 55-14 was used with 1980 data for 16 continuous count stations to determine periods that were stable throughout the year for different short counts. It was found that stable periods for short counts occurred mainly ...
NASA Technical Reports Server (NTRS)
Nissim, E.; Abel, I.
1978-01-01
An optimization procedure is developed based on the responses of a system to continuous gust inputs. The procedure uses control law transfer functions which have been partially determined by the relaxed aerodynamic energy approach. The optimization procedure yields a flutter suppression system which minimizes control surface activity in a gust environment. The procedure is applied to wing flutter of a drone aircraft to demonstrate a 44 percent increase in the basic wing flutter dynamic pressure. It is shown that a trailing edge control system suppresses the flutter instability over a wide range of subsonic Mach numbers and flight altitudes. Results of this study confirm the effectiveness of the relaxed energy approach.
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
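As a flavor of how an ant-colony search over model specifications can work, here is a minimal sketch in which each "ant" samples a subset of candidate parameters guided by inclusion pheromones. The fitness function is a toy stand-in for a model-fit criterion such as AIC, and all rates and sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(include):
    # toy stand-in for a model-fit criterion (e.g., negative AIC);
    # the "true" specification here includes parameters 0, 2, and 5
    target = {0, 2, 5}
    return -len(set(np.flatnonzero(include)) ^ target)

n_params, n_ants, n_iter, rho = 8, 20, 30, 0.2
pheromone = np.full(n_params, 0.5)      # inclusion pheromone per parameter
best, best_fit = None, -np.inf
for _ in range(n_iter):
    ants = rng.random((n_ants, n_params)) < pheromone  # sample specifications
    fits = np.array([fitness(a) for a in ants])
    i = int(fits.argmax())
    if fits[i] > best_fit:
        best, best_fit = ants[i].copy(), fits[i]
    # evaporate, then reinforce the best specification found so far
    pheromone = np.clip((1 - rho) * pheromone + rho * best, 0.05, 0.95)
print(np.flatnonzero(best), best_fit)
```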
Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization.
Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan
2017-01-01
This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources.
Beyond the drugs: nonpharmacologic strategies to optimize procedural care in children.
Leroy, Piet L; Costa, Luciane R; Emmanouil, Dimitris; van Beukering, Alice; Franck, Linda S
2016-03-01
Painful and/or stressful medical procedures impose a substantial burden on sick children. There is good evidence that procedural comfort can be optimized by a comprehensive comfort-directed policy containing the triad of nonpharmacological strategies (NPS) in all cases, timely or preventive procedural analgesia if pain is an issue, and procedural sedation. Based on well-established theoretical frameworks as well as an increasing body of scientific evidence, NPS must be regarded as an inextricable part of procedural comfort care. Procedural comfort care must always start with a child-friendly, nonthreatening environment in which well-being, confidence, and self-efficacy are optimized and maintained. This requires a reconsideration of the medical spaces where we provide care, reduction of sensory stimulation, normalized professional behavior, optimal logistics and coordination, and comfort-directed, age-appropriate verbal and nonverbal expression by professionals. Next, age-appropriate distraction techniques and/or hypnosis should be readily available. NPS are useful for all types of medical and dental procedures and should always precede and accompany procedural sedation. NPS should be embedded in a family-centered, care-directed policy, as it has been shown that family-centered care can lead to safer, more personalized, and more effective care; improved healthcare experiences and patient outcomes; and more responsive organizations.
Optimization of locations of diffusion spots in indoor optical wireless local area networks
NASA Astrophysics Data System (ADS)
Eltokhey, Mahmoud W.; Mahmoud, K. R.; Ghassemlooy, Zabih; Obayya, Salah S. A.
2018-03-01
In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. The users' performance uniformity is addressed by using the CFO algorithm, and adopting different objective function's configurations, while considering maximization and minimization of the signal to noise ratio and the delay spread, respectively. We also investigate the effect of varying the objective function's weights on the system and the users' performance as part of the adaptation process. The results show that the proposed objective function configuration-based optimization procedure offers an improvement of 65% in the standard deviation of individual receivers' performance.
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Initial and periodic optimization of detector response. Prior to initial use and at least annually... nitrogen. (2) Use of one of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of a Flame...
Electric and hybrid vehicles charge efficiency tests of ESB EV-106 lead acid batteries
NASA Technical Reports Server (NTRS)
Rowlette, J. J.
1981-01-01
Charge efficiencies were determined by measurements made under widely differing conditions of temperature, charge procedure, and battery age. The measurements were used to optimize charge procedures and to evaluate the concept of a modified, coulometric state of charge indicator. Charge efficiency determinations were made by measuring gassing rates and oxygen fractions. A novel, positive displacement gas flow meter which proved to be both simple and highly accurate is described and illustrated.
Challenges in Interventional Radiology: The Pregnant Patient
Moon, Eunice K.; Wang, Weiping; Newman, James S.; Bayona-Molano, Maria Del Pilar
2013-01-01
A pregnant patient presenting to interventional radiology (IR) has a different set of needs from any other patient requiring a procedure. Often, the patient's care can be in direct conflict with the growth and development of the fetus, whether it be optimal fluoroscopic imaging, adequate sedation of the mother, or the timing of the needed procedure. Despite the additional risks and complexities associated with pregnancy, IR procedures can be performed safely for the pregnant patient with knowledge of the special and general needs of the pregnant patient, use of acceptable medications and procedures likely to be encountered during pregnancy, in addition to strategies to protect the patient and her fetus from the hazards of radiation. PMID:24436567
Anesthesiology and gastroenterology.
de Villiers, Willem J S
2009-03-01
A successful population-based colorectal cancer screening requires efficient colonoscopy practices that incorporate high throughput, safety, and patient satisfaction. There are several different modalities of nonanesthesiologist-administered sedation currently available and in development that may fulfill these requirements. Modern-day gastroenterology endoscopic procedures are complex and demand the full attention of the attending gastroenterologist and the complete cooperation of the patient. Many of these procedures will also require the anesthesiologist's knowledge, skills, abilities, and experience to ensure optimal procedure results and good patient outcomes. The goal of this review is (1) to provide a gastroenterology perspective on the use of propofol in gastroenterology endoscopic practice, and (2) to describe newer GI endoscopy procedures that gastroenterologists perform that might involve anesthesiologists.
Optimal routing of hazardous substances in time-varying, stochastic transportation networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, A.L.; Miller-Hooks, E.; Mahmassani, H.S.
This report is concerned with the selection of routes in a network along which to transport hazardous substances, taking into consideration several key factors pertaining to the cost of transport and the risk of population exposure in the event of an accident. Furthermore, the fact that travel time and the risk measures are not constant over time is explicitly recognized in the routing decisions. Existing approaches typically assume static conditions, possibly resulting in inefficient route selection and unnecessary risk exposure. The report describes the application of recent advances in network analysis methodologies to the problem of routing hazardous substances. Several specific problem formulations are presented, reflecting different degrees of risk aversion on the part of the decision-maker, as well as different possible operational scenarios. All procedures explicitly consider travel times and travel costs (including risk measures) to be stochastic time-varying quantities. The procedures include both exact algorithms, which may require extensive computational effort in some situations, as well as more efficient heuristics that may not guarantee a Pareto-optimal solution. All procedures are systematically illustrated for an example application using the Texas highway network, for both normal and incident condition scenarios. The application illustrates the trade-offs between the information obtained in the solution and computational efficiency, and highlights the benefits of incorporating these procedures in a decision-support system for hazardous substance shipment routing decisions.
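The computational kernel behind such time-varying routing is a time-dependent shortest-path search. A minimal sketch follows (a departure-time-dependent Dijkstra on a toy FIFO network; the network, the congestion profile, and the use of travel time rather than a risk-weighted cost are all illustrative assumptions):

```python
import heapq

def time_dependent_dijkstra(adj, source, target, t0=0.0):
    """Least-arrival-time path when edge traversal time depends on the
    departure time. adj[u] = list of (v, travel_time_fn), where
    travel_time_fn(t) >= 0; assumes a FIFO network (no overtaking).
    A risk-weighted cost could replace travel time in the same scheme."""
    dist = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > dist.get(u, float("inf")):
            continue
        for v, ttf in adj[u]:
            arrive = t + ttf(t)
            if arrive < dist.get(v, float("inf")):
                dist[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return float("inf")

# toy network: edge A->B becomes congested for departures after t = 10
adj = {"A": [("B", lambda t: 5 if t < 10 else 9), ("C", lambda t: 7)],
       "B": [("D", lambda t: 4)],
       "C": [("D", lambda t: 6)],
       "D": []}
print(time_dependent_dijkstra(adj, "A", "D"))
```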
Optimization of flexible wing structures subject to strength and induced drag constraints
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1977-01-01
An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
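As a toy illustration of the penalty-function idea (a sequence of unconstrained minimizations with a growing penalty factor), the sketch below uses a quadratic stand-in objective and a single linear constraint rather than the paper's stress, strain, and drag constraints, an exterior quadratic penalty rather than the paper's extended penalty form, and BFGS in place of the paper's Newton solver.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 2 * x[1]**2        # stand-in structural "mass"
g = lambda x: 1.0 - x[0] - x[1]            # feasible when g(x) <= 0

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:
    # unconstrained subproblem: objective plus quadratic constraint penalty
    phi = lambda x, r=r: f(x) + r * max(g(x), 0.0) ** 2
    x = minimize(phi, x, method="BFGS").x  # warm-start from previous solution
print(x, f(x), g(x))  # approaches the constrained optimum x = (2/3, 1/3)
```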
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
An integrated optimum design approach for high speed prop rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Mccarthy, Thomas R.
1995-01-01
The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.
NASA Technical Reports Server (NTRS)
Korte, John J.; Kumar, Ajay; Singh, D. J.; White, J. A.
1992-01-01
A design program is developed which incorporates a modern approach to the design of supersonic/hypersonic wind-tunnel nozzles. The approach is obtained by coupling computational fluid dynamics (CFD) with design optimization. The program can be used to design 2D or axisymmetric, supersonic or hypersonic wind-tunnel nozzles that can be modeled with a calorically perfect gas. The nozzle design is obtained by solving a nonlinear least-squares optimization problem (LSOP). The LSOP is solved using an iterative procedure which requires intermediate flowfield solutions. The nozzle flowfield is simulated by solving the Navier-Stokes equations for the subsonic and transonic flow regions and the parabolized Navier-Stokes equations for the supersonic flow regions. The advantages of this method are that the design is based on the solution of the viscous equations, eliminating the need to make separate corrections to a design contour, and that the procedure is flexible enough to apply to different types of nozzle design problems.
NASA Technical Reports Server (NTRS)
Korte, J. J.; Auslender, A. H.
1993-01-01
A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
NASA Astrophysics Data System (ADS)
Rossi, Francesca; Zingoni, Tiziano; Di Cicco, Emiliano; Manetti, Leonardo; Pini, Roberto; Fortuna, Damiano
2011-07-01
Laser light is nowadays routinely used in aesthetic treatments of facial skin, such as laser rejuvenation and scar removal. The induced thermal damage may be varied by setting different laser parameters in order to obtain a particular aesthetic result. In this work, a theoretical study of the thermal damage induced in deep tissue is proposed, considering different laser pulse durations. The study is based on the Finite Element Method (FEM): a two-dimensional model of the facial skin is depicted in axial symmetry, considering the different skin structures and their different optical and thermal parameters; the conversion of laser light into thermal energy is modeled by the bio-heat equation. The light source is a CO2 laser with different pulse durations. The model enabled the study of the thermal damage induced in the skin by calculating the Arrhenius integral. The post-processing results enabled the study, in space and time, of the temperature dynamics induced in the facial skin, the study of possible cumulative effects of subsequent laser pulses, and the optimization of the procedure for applications in dermatological surgery. The calculated data were then validated in an experimental measurement session performed in a sheep animal model. Histological analyses were performed on the treated tissues, evidencing the spatial distribution and the extent of the thermal damage in the collagenous tissue. Modeling and experimental results were in good agreement, and they were used to design a new optimized laser-based skin resurfacing procedure.
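The damage measure referenced here is the Arrhenius integral, Ω(t) = ∫ A exp(-Ea/(R T(τ))) dτ, with Ω ≈ 1 the usual threshold for irreversible coagulative damage. A minimal numeric sketch follows; the frequency factor and activation energy are commonly quoted values for skin-like tissue used only as placeholders, and the temperature history is invented.

```python
import numpy as np

A, Ea, R = 3.1e98, 6.28e5, 8.314          # 1/s, J/mol, J/(mol K)

def arrhenius_damage(T, t):
    """Omega for temperature history T [K] sampled at times t [s],
    integrated with the trapezoidal rule."""
    rate = A * np.exp(-Ea / (R * T))
    return float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 1001)                        # 1 s window
T = 310.0 + 45.0 * np.exp(-((t - 0.1) / 0.05) ** 2)    # brief pulse to ~355 K
print(arrhenius_damage(T, t))                          # Omega >= 1 -> necrosis
```

In an FEM study, T(t) at each mesh node comes from the bio-heat solution, and evaluating Ω node by node maps out the predicted damage zone.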
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is reducing the energy consumption of road vehicles, and particular solutions embody different strategies of energy optimization. The most popular but least sophisticated is so-called eco-driving, a strategy that emphasizes particular driver behavior. In a more sophisticated version, driver behavior is supported by a control system that measures driving parameters and suggests proper operation to the driver. Another strategy applies different engineering solutions that aid the optimization of energy consumption. Such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the vehicle's control computer. The third strategy is based on optimizing the designed vehicle, taking into account especially its main sub-systems. In this approach the optimal level of energy consumption is obtained as the synergetic result of individually optimizing particular constructional sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural, the drive, and the control sub-system. In the case of the structural sub-system, optimizing the energy consumption level means optimizing the weight and aerodynamic parameters; the result is an optimized vehicle body. Regarding the drive sub-system, optimizing the energy consumption level means optimizing fuel or power consumption using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.
Extractive procedure for uranium determination in water samples by liquid scintillation counting.
Gomez Escobar, V; Vera Tomé, F; Lozano, J C; Martín Sánchez, A
1998-07-01
An extractive procedure for uranium determination using liquid scintillation counting with the URAEX cocktail is described. Interference from radon and a strong influence of nitrate ion were detected in this procedure. Interference from radium, thorium and polonium emissions were very low when optimal operating conditions were reached. Quenching effects were considered and the minimum detectable activity was evaluated for different sample volumes. Isotopic analysis of samples can be performed using the proposed method. Comparisons with the results obtained with the general procedure used in alpha spectrometry with passivated implanted planar silicon detectors showed good agreement. The proposed procedure is thus suitable for uranium determination in water samples and can be considered as an alternative to the laborious conventional chemical preparations needed for alpha spectrometry methods using semiconductor detectors.
Assessment of navigation cues with proximal force sensing during endovascular catheterization.
Rafii-Tari, Hedyeh; Payne, Christopher J; Riga, Celia; Bicknell, Colin; Lee, Su-Lin; Yang, Guang-Zhong
2012-01-01
Despite increased use of robotic catheter navigation systems for endovascular intervention procedures, current master-slave platforms have not yet taken into account dexterous manipulation skill used in traditional catheterization procedures. Information on tool forces applied by operators is often limited. A novel force/torque sensor is developed in this paper to obtain behavioural data across different experience levels and identify underlying factors that affect overall operator performance. The miniature device can be attached to any part of the proximal end of the catheter, together with a position sensor attached to the catheter tip, for relating tool forces to catheter dynamics and overall performance. The results show clear differences in manipulation skills between experience groups, thus providing insights into different patterns and range of forces applied during routine endovascular procedures. They also provide important design specifications for ergonomically optimized catheter manipulation platforms with added haptic feedback while maintaining natural skills of the operators.
Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.
Westgard, James O; Bayat, Hassan; Westgard, Sten A
2018-02-01
To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients.
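The Sigma-metric used in such nomograms is the standard quantity Sigma = (TEa - |bias|) / CV, with all terms expressed as percentages of the target concentration. A one-line sketch with hypothetical assay numbers:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# hypothetical assay: 10% allowable total error, 1% bias, 1.5% CV
print(round(sigma_metric(10.0, 1.0, 1.5), 2))   # -> 6.0, a "6-sigma" process
```

Processes scoring near 6 sigma tolerate infrequent monitoring QC events and large run sizes; 4-sigma processes need tighter bracketing.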
Genetic algorithm dynamics on a rugged landscape
NASA Astrophysics Data System (ADS)
Bornholdt, Stefan
1998-04-01
The genetic algorithm is an optimization procedure motivated by biological evolution and is successfully applied to optimization problems in different areas. A statistical mechanics model for its dynamics is proposed based on the parent-child fitness correlation of the genetic operators, making it applicable to general fitness landscapes. It is compared to a recent model based on a maximum entropy ansatz. Finally it is applied to modeling the dynamics of a genetic algorithm on the rugged fitness landscape of the NK model.
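A minimal sketch of a genetic algorithm running on an NK fitness landscape (N, K, the population size, and the operator rates below are arbitrary choices; the statistical-mechanics model itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 12, 3                       # NK landscape: N loci, K epistatic links

# random contribution tables: the fitness share of locus i depends on
# itself and its K right-hand neighbours (circular), a standard NK setup
neighbours = [np.arange(i, i + K + 1) % N for i in range(N)]
tables = rng.random((N, 2 ** (K + 1)))

def fitness(genome):
    idx = [int("".join(map(str, genome[nb])), 2) for nb in neighbours]
    return np.mean([tables[i, j] for i, j in enumerate(idx)])

# minimal GA: tournament selection, uniform crossover, bit-flip mutation
pop = rng.integers(0, 2, (50, N))
for gen in range(100):
    fits = np.array([fitness(g) for g in pop])
    parents = pop[[max(rng.integers(0, 50, 3), key=lambda i: fits[i])
                   for _ in range(50)]]                 # 3-way tournaments
    mask = rng.integers(0, 2, (50, N)).astype(bool)     # uniform crossover
    pop = np.where(mask, parents, parents[rng.permutation(50)])
    pop ^= (rng.random((50, N)) < 1.0 / N).astype(pop.dtype)  # mutation
print(max(fitness(g) for g in pop))
```

Tracking the population's mean fitness per generation on such a rugged landscape is exactly the dynamics the abstract's parent-child fitness-correlation model aims to predict.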
Ground coupled solar heat pumps: analysis of four options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, J.W.
Heat pump systems which utilize both solar energy and energy withdrawn from the ground are analyzed using a simplified procedure which optimizes the solar storage temperature on a monthly basis. Four ways of introducing collected solar energy to the system are optimized and compared. These include use of actively collected thermal input to the heat pump; use of collected solar energy to heat the load directly (two different ways); and use of a passive option to reduce the effective heating load.
A neurocomputational theory of how explicit learning bootstraps early procedural learning.
Paul, Erick J; Ashby, F Gregory
2013-01-01
It is widely accepted that human learning and memory is mediated by multiple memory systems that are each best suited to different requirements and demands. Within the domain of categorization, at least two systems are thought to facilitate learning: an explicit (declarative) system depending largely on the prefrontal cortex, and a procedural (non-declarative) system depending on the basal ganglia. Substantial evidence suggests that each system is optimally suited to learn particular categorization tasks. However, it remains unknown precisely how these systems interact to produce optimal learning and behavior. In order to investigate this issue, the present research evaluated the progression of learning through simulation of categorization tasks using COVIS, a well-known model of human category learning that includes both explicit and procedural learning systems. Specifically, the model's parameter space was thoroughly explored in procedurally learned categorization tasks across a variety of conditions and architectures to identify plausible interaction architectures. The simulation results support the hypothesis that one-way interaction between the systems occurs such that the explicit system "bootstraps" learning early on in the procedural system. Thus, the procedural system initially learns a suboptimal strategy employed by the explicit system and later refines its strategy. This bootstrapping could be from cortical-striatal projections that originate in premotor or motor regions of cortex, or possibly by the explicit system's control of motor responses through basal ganglia-mediated loops.
Aerodynamic Design Using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.
2003-01-01
The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the component shape that delivers the desired level of performance subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem involving the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, termed parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, to impose constraints easily, and to incorporate design guidelines and rules of thumb. It provides an infrastructure for variable-fidelity analysis and reduces the cost of computation by using less expensive, lower-fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design spaces and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.
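The core loop the abstract describes, building a sequence of local response surfaces and stepping through the design space along their optima, can be sketched as follows. This toy version uses only a polynomial (quadratic) surface and a shrinking trust region; the analytic objective is a cheap stand-in for a CFD evaluation, and nothing here reproduces the parameter-based partitioning strategy itself.

```python
import numpy as np

# Sequence-of-response-surfaces sketch (illustrative, not the NASA procedure):
# sample the expensive objective locally, fit a quadratic surface, move to the
# surface minimum inside the trust region, shrink the region, repeat.
def expensive_objective(x):                    # hypothetical CFD stand-in
    return (x[0] - 1.2) ** 2 + 2.0 * (x[1] + 0.4) ** 2 + 0.3 * np.sin(3 * x[0])

def quadratic_features(X):
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

rng = np.random.default_rng(0)
center, radius = np.zeros(2), 1.0
for it in range(6):
    X = center + radius * rng.uniform(-1, 1, size=(20, 2))   # local samples
    f = np.array([expensive_objective(x) for x in X])
    beta, *_ = np.linalg.lstsq(quadratic_features(X), f, rcond=None)
    # Minimize the fitted surface on a dense grid inside the trust region.
    unit = np.stack(np.meshgrid(np.linspace(-1, 1, 41),
                                np.linspace(-1, 1, 41)), -1).reshape(-1, 2)
    g = center + radius * unit
    center = g[np.argmin(quadratic_features(g) @ beta)]
    radius *= 0.7                                            # shrink region
print("approximate optimum:", center, expensive_objective(center))
```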
Aerodynamic shape optimization using preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1993-01-01
In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational effort required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA 0012) airfoil in inviscid transonic flow at zero-degree angle of attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
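For reference, the kind of solver credited with the speedup is a preconditioned conjugate gradient method. Below is a generic Jacobi-preconditioned CG for symmetric positive-definite systems, a textbook sketch rather than the authors' flow-solver implementation.

```python
import numpy as np

# Jacobi-preconditioned conjugate gradient for SPD systems (textbook version).
def pcg(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: M = diag(A)
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r                 # apply preconditioner to new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Quick check on a random well-conditioned SPD system.
rng = np.random.default_rng(42)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
print(np.allclose(pcg(A, b), np.linalg.solve(A, b)))
```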
A Complete Procedure for Predicting and Improving the Performance of HAWT's
NASA Astrophysics Data System (ADS)
Al-Abadi, Ali; Ertunç, Özgür; Sittig, Florian; Delgado, Antonio
2014-06-01
A complete procedure for predicting and improving the performance of the horizontal axis wind turbine (HAWT) has been developed. The first process is predicting the power extracted by the turbine and the derived rotor torque, which should be identical to that of the drive unit. The BEM method and a newly developed post-stall treatment for resolving stall-regulated HAWTs are incorporated in the prediction. For that, a modified stall-regulated prediction model, which can predict HAWT performance over the operating range of oncoming wind velocity, is derived from existing models. The model involves radius and chord, which makes it more general for predicting the performance of HAWTs of different scales and rotor shapes. The second process is modifying the rotor shape by an optimization process, which can be applied to any existing HAWT, to improve its performance. A gradient-based optimization is used to adjust the chord and twist angle distributions of the rotor blade to increase the power extraction while keeping the drive torque constant, so that the same drive unit can be kept. The final process is testing the modified turbine to predict its enhanced performance. The procedure is applied to the NREL Phase VI 10 kW turbine as a baseline. The study has proven the applicability of the developed model in predicting the performance of the baseline as well as the optimized turbine. In addition, the optimization method has shown that the power coefficient can be increased while keeping the same design rotational speed.
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent, extensively validated computational fluid dynamics (CFD) code. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4, with coarse- and fine-grid computations performed with the Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes generated for an identical initial problem formulation, but with different optimization path options (coarse or fine grid, Euler or TLNS), have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions show significant bearing on the optimization results. The results also demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, initially be employed to improve the optimization problem definition, the design space, and the initial shape. Optimized shapes should subsequently be analyzed using a high-fidelity (viscous, fine-grid) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.
Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft]
NASA Technical Reports Server (NTRS)
Mantay, Wayne R.
1989-01-01
Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.
Anatomy of liver arteries for interventional radiology.
Favelier, S; Germain, T; Genson, P-Y; Cercueil, J-P; Denys, A; Krausé, D; Guiu, B
2015-06-01
The availability of intra-arterial hepatic therapies (radio- and/or chemo-embolisation, intra-arterial hepatic chemotherapy) has convinced radiologists to perfect their knowledge of the anatomy of the liver arteries. These sometimes complex procedures most often require selective arterial catheterization. Knowledge of the different arteries of the liver and the neighbouring organs is therefore essential to optimize the procedure and avoid possible complications. This paper aims to describe the anatomy of the liver arteries and their variants, applying it to angiography images, and to explain the implications of such variations for interventional radiological procedures.
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
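The paper's central prescription (allocate sampling time in proportion to noise levels) can be checked with a few lines of arithmetic. For two sources with per-sample noise σ1, σ2 and fixed total time T, the variance of the difference estimate, Σ σi²/ti, is minimized at ti ∝ σi; the sketch below verifies this against a brute-force grid search. All numbers are made up for illustration.

```python
import numpy as np

# Verify the proportional-to-noise time allocation rule numerically.
sigma = np.array([1.0, 3.0])          # per-sample noise of each motion pattern
T = 2.0                               # total viewing time available

t_prop = T * sigma / sigma.sum()      # allocate time proportional to noise

def diff_variance(t):
    # Sampling source i for time t_i averages its noise down to sigma_i^2/t_i;
    # the decision variable is the difference of the two direction estimates.
    return np.sum(sigma ** 2 / t)

grid = np.linspace(0.05, T - 0.05, 999)
variances = [diff_variance(np.array([t, T - t])) for t in grid]
t_best = grid[int(np.argmin(variances))]
print("proportional rule:", t_prop, "grid optimum:", [t_best, T - t_best])
```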
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and exhibits good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors that define the comprehensive objective function.
Computational wing optimization and comparisons with experiment for a semi-span wing model
NASA Technical Reports Server (NTRS)
Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.
1978-01-01
A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable-camber wing model in the NASA Ames Research Center 14-foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to the Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35-deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying the deflection angles of the flap segments, the camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and an efficient design tool. The optimized configurations had as good or better lift-to-drag ratios at the design points as the best designs previously tested during an extensive parametric study.
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem in which a first-order-plus-dead-time process model subject to a robustness constraint, based on maximum sensitivity, is considered. A set of Pareto optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then obtained by choosing the Nash solution among the Pareto-optimal ones. A curve-fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach.
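A sketch of the selection step described above: from candidate controllers scored on the two competing IAE objectives, extract the Pareto set and pick the Nash solution, i.e., the point maximizing the product of improvements with respect to a disagreement point. The objective values here are synthetic (the paper's come from FOPID simulations), and taking the worst Pareto values as the disagreement point is one common convention, assumed only for illustration.

```python
import numpy as np

# Pareto extraction + Nash selection on synthetic two-objective scores.
rng = np.random.default_rng(3)
J = rng.uniform(0.5, 3.0, size=(200, 2))   # (IAE_setpoint, IAE_load) per candidate

def pareto_mask(J):
    """True for points not dominated by any other (minimization)."""
    mask = np.ones(len(J), dtype=bool)
    for i in range(len(J)):
        dominated = np.all(J <= J[i], axis=1) & np.any(J < J[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

P = J[pareto_mask(J)]
d = P.max(axis=0)                          # assumed disagreement point
nash = P[np.argmax(np.prod(d - P, axis=1))]  # maximize product of gains
print(f"{len(P)} Pareto points; Nash solution at IAE = {nash}")
```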
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-06-01
In order to eliminate stress-related wrinkles in cable-suspended membrane structures and to provide simple and reliable deployment, this study presents a multi-material topology optimization model and an effective solution procedure for generating optimal connected layouts for membranes and cables. On the basis of the principal stress criterion of membrane wrinkling behavior and the density-based interpolation of multi-phase materials, the optimization objective is to maximize the total structural stiffness while satisfying principal stress constraints and specified material volume requirements. By adopting the cosine-type relaxation scheme to avoid the stress singularity phenomenon, the optimization model is successfully solved through a standard gradient-based algorithm. Four-corner tensioned membrane structures with different loading cases were investigated to demonstrate the effectiveness of the proposed method in automatically finding the optimal design composed of curved boundary cables and wrinkle-free membranes.
Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained on a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. The "pressure difference rule," which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
78 FR 53237 - Establishment of Area Navigation (RNAV) Routes; Washington, DC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... "Optimization of Airspace and Procedures in a Metroplex (OAPM)" effort in that this rule did not include T.... The new routes support the Washington, DC Optimization of Airspace and Procedures in a Metroplex (OAPM...
Sewell, Justin L; Boscardin, Christy K; Young, John Q; Ten Cate, Olle; O'Sullivan, Patricia S
2017-11-01
Cognitive load theory, focusing on limits of the working memory, is relevant to medical education; however, factors associated with cognitive load during procedural skills training are not well characterized. The authors sought to determine how features of learners, patients/tasks, settings, and supervisors were associated with three types of cognitive load among learners performing a specific procedure, colonoscopy, to identify implications for procedural teaching. Data were collected through an electronically administered survey sent to 1,061 U.S. gastroenterology fellows during the 2014-2015 academic year; 477 (45.0%) participated. Participants completed the survey immediately following a colonoscopy. Using multivariable linear regression analyses, the authors identified sets of features associated with intrinsic, extraneous, and germane loads. Features associated with intrinsic load included learners (prior experience and year in training negatively associated, fatigue positively associated) and patient/tasks (procedural complexity positively associated, better patient tolerance negatively associated). Features associated with extraneous load included learners (fatigue positively associated), setting (queue order positively associated), and supervisors (supervisor engagement and confidence negatively associated). Only one feature, supervisor engagement, was (positively) associated with germane load. These data support practical recommendations for teaching procedural skills through the lens of cognitive load theory. To optimize intrinsic load, level of experience and competence of learners should be balanced with procedural complexity; part-task approaches and scaffolding may be beneficial. To reduce extraneous load, teachers should remain engaged, and factors within the procedural setting that may interfere with learning should be minimized. To optimize germane load, teachers should remain engaged.
Multiparameter optimization of mammography: an update
NASA Astrophysics Data System (ADS)
Jafroudi, Hamid; Muntz, E. P.; Jennings, Robert J.
1994-05-01
Previously in this forum we reported the application of multiparameter optimization techniques to the design of a minimum-dose mammography system. The approach used a reference system to define the required physical imaging performance and the dose against which the dose of the optimized system should be compared. During the course of implementing the resulting design in hardware suitable for laboratory testing, the state of the art in mammographic imaging changed, so that the original reference system, which did not have a grid, was no longer appropriate. A reference system with a grid was selected in response to this change, and at the same time the optimization procedure was modified to make it more general and to facilitate study of the optimized design under a variety of conditions. We report the changes in the procedure and the results obtained using the revised procedure and the up-to-date reference system. Our results, which are supported by laboratory measurements, indicate that the optimized design can image small objects as well as the reference system can, using only about 30% of the dose required by the reference system. Hardware meeting the specification produced by the optimization procedure and suitable for clinical use is currently under evaluation in the Diagnostic Radiology Department at the Clinical Center, NIH.
Distributed Method to Optimal Profile Descent
NASA Astrophysics Data System (ADS)
Kim, Geun I.
Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed-profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed-profile changes while largely preserving the original OPD procedure. The problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over an inter-aircraft ADS-B communication mechanism. This method divides the optimization problem into more manageable sub-problems which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates its solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
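The decomposition idea can be illustrated on a one-dimensional toy version of the problem: each aircraft chooses an arrival time minimizing its own quadratic delay cost, coupled to its neighbours only through minimum-separation constraints that are dualized and updated by subgradient ascent. All numbers are hypothetical, and the real problem's path-stretch and speed-profile variables are collapsed here into a single time variable.

```python
import numpy as np

# Toy dual decomposition for arrival scheduling with separation constraints
# t_{i+1} - t_i >= s. Each aircraft solves a closed-form subproblem from the
# shared multipliers; a dual subgradient step enforces separation.
t_pref = np.array([0.0, 10.0, 12.0, 30.0])   # preferred metering-fix times (s)
w = np.array([1.0, 2.0, 1.0, 1.5])           # per-aircraft time-cost weights
s = 8.0                                      # required separation (s)
n = len(t_pref)
lam = np.zeros(n - 1)                        # one multiplier per adjacent pair

for _ in range(300):
    # Lagrangian terms linear in t_i: +lam_j for the leader of pair j,
    # -lam_j for the follower. Each aircraft minimizes
    # w_i (t_i - t_pref_i)^2 + c_i t_i  =>  t_i = t_pref_i - c_i / (2 w_i).
    c = np.zeros(n)
    c[:-1] += lam
    c[1:] -= lam
    t = t_pref - c / (2 * w)
    # Dual subgradient step on separation violations, projected to lam >= 0.
    lam = np.maximum(0.0, lam + 0.05 * (s - np.diff(t)))

print("times:", np.round(t, 2), "separations:", np.round(np.diff(t), 2))
```

For this instance the schedule converges to roughly t = [0, 8, 16, 30]: the middle pair, preferred only 2 s apart, is pushed out to the required 8 s at minimum total cost, with no central solver ever seeing all cost functions at once.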
NASA Astrophysics Data System (ADS)
Wang, Fengwen; Jensen, Jakob S.; Sigmund, Ole
2012-10-01
Photonic crystal waveguides are optimized for modal confinement and loss related to slow light with high group index. A detailed comparison between optimized circular-hole based waveguides and optimized waveguides with free topology is performed. Design robustness with respect to manufacturing imperfections is enforced by considering different design realizations generated from under-, standard- and over-etching processes in the optimization procedure. A constraint ensures a certain modal confinement, and loss related to slow light with high group index is indirectly treated by penalizing field energy located in air regions. It is demonstrated that slow light with a group index up to n_g = 278 can be achieved by topology-optimized waveguides with promising modal confinement and restricted group-velocity dispersion. All the topology-optimized waveguides achieve a normalized group-index bandwidth of 0.48 or above. The comparisons between circular-hole based designs and topology-optimized designs illustrate that the former can be efficient for dispersion engineering but that larger improvements are possible if irregular geometries are allowed.
Optimization of life support systems and their systems reliability
NASA Technical Reports Server (NTRS)
Fan, L. T.; Hwang, C. L.; Erickson, L. E.
1971-01-01
The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of the system reliability of life support systems and subsystems; (7) modeling, simulation, and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and, more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra
2013-09-01
This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, selection of the optimal HPLC column among different stationary phases from different manufacturers was performed. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and inter-day precision with RSDs below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry.
Sergeant, Martin J.; Constantinidou, Chrystala; Cogan, Tristan; Penn, Charles W.; Pallen, Mark J.
2012-01-01
The analysis of 16S-rDNA sequences to assess the bacterial community composition of a sample is a widely used technique that has increased with the advent of high throughput sequencing. Although considerable effort has been devoted to identifying the most informative region of the 16S gene and the optimal informatics procedures to process the data, little attention has been paid to the PCR step, in particular annealing temperature and primer length. To address this, amplicons derived from 16S-rDNA were generated from chicken caecal content DNA using different annealing temperatures, primers and different DNA extraction procedures. The amplicons were pyrosequenced to determine the optimal protocols for capture of maximum bacterial diversity from a chicken caecal sample. Even at very low annealing temperatures there was little effect on the community structure, although the abundance of some OTUs such as Bifidobacterium increased. Using shorter primers did not reveal any novel OTUs but did change the community profile obtained. Mechanical disruption of the sample by bead beating had a significant effect on the results obtained, as did repeated freezing and thawing. In conclusion, existing primers and standard annealing temperatures captured as much diversity as lower annealing temperatures and shorter primers.
Cavaliere, Chiara; Capriotti, Anna Laura; Ferraris, Francesca; Foglia, Patrizia; Samperi, Roberto; Ventura, Salvatore; Laganà, Aldo
2016-03-18
A multiresidue analytical method has been developed for the determination in sediments of 11 perfluorinated compounds and 22 endocrine-disrupting compounds (EDCs), including 13 natural and synthetic estrogens (free and conjugated forms), 2 alkylphenols, 1 plasticiser, 2 UV filters, 1 antimicrobial, and 2 organophosphorus compounds. Ultrasound-assisted extraction followed by solid-phase extraction (SPE) with a graphitized carbon black (GCB) cartridge as a clean-up step was used. The yield of the extraction process was optimized in terms of solvent composition. Then, a 3² experimental design was used to optimize solvent volume and sonication time by response surface methodology, which simplifies the optimization procedure. The final extract was analyzed by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry. The optimized sample preparation method is simple and robust, and allows recovery of EDCs belonging to different classes in a matrix as complex as sediment. The use of GCB for SPE gave excellent recoveries with a single clean-up procedure, ranging between 75 and 110% (relative standard deviation <16%). The developed methodology has been successfully applied to the analysis of EDCs in sediments from different rivers and lakes of the Lazio Region (Italy). These analyses have shown the ubiquitous presence of chloro-substituted organophosphorus flame retardants and bisphenol A, while the other analyzed compounds were found only occasionally, at concentrations between the limits of detection and quantification.
Perez, Pablo A; Hintelman, Holger; Quiroz, Waldo; Bravo, Manuel A
2017-11-01
In the present work, the efficiency of the distillation process for extracting monomethylmercury (MMHg) from soil samples was studied and optimized using an experimental design methodology. The influence of soil composition on MMHg extraction was evaluated by testing four soil samples with different geochemical characteristics. Optimization indicated that the acid concentration and the duration of the distillation process were the most significant factors, and the most favorable conditions, established as a compromise for the studied soils, were determined to be a 70 min distillation using 0.2 M acid. The corresponding limits of detection (LOD) and quantification (LOQ) were 0.21 and 0.7 pg absolute, respectively. The optimized methodology was applied with satisfactory results to soil samples and was compared to a reference methodology based on isotope dilution analysis followed by gas chromatography-inductively coupled plasma mass spectrometry (IDA-GC-ICP-MS). Using the optimized conditions, recoveries ranged from 82 to 98%, an increase of 9-34% relative to the previously used standard operating procedure. Finally, the validated methodology was applied to quantify MMHg in soils collected from different sites impacted by coal-fired power plants in the north-central zone of Chile, measuring MMHg concentrations ranging from 0.091 to 2.8 ng g⁻¹. These data are, to the best of our knowledge, the first MMHg measurements reported for Chile.
Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.
1998-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
Systems and methods for energy cost optimization in a building system
Turney, Robert D.; Wenzel, Michael J.
2016-09-06
Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.
2016-10-01
Meta-heuristic algorithms are problem-solving methods that try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on the modeling of nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, in many cases with extremely effective exploration capabilities, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the specific modeling of a real phenomenon, and their novelty in comparison with alternative existing algorithms for optimization. We first review important concepts regarding optimization problems, search spaces, and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion of the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review of these techniques. We also describe some of the most important application areas of meta-heuristics, in a broad sense, and freely accessible software frameworks which make these algorithms easier to implement.
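The archetype of a physics-inspired meta-heuristic is simulated annealing, which models thermal annealing with a Metropolis acceptance rule and a cooling schedule. A minimal version minimizing the standard Rastrigin test function is sketched below; all parameters are illustrative.

```python
import math, random

# Simulated annealing on the Rastrigin function (a classic multimodal test).
def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def anneal(dim=5, T0=10.0, cooling=0.999, steps=20000, seed=7):
    rng = random.Random(seed)
    x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
    fx, T = rastrigin(x), T0
    best, fbest = x[:], fx
    for _ in range(steps):
        y = [xi + rng.gauss(0, 0.3) for xi in x]     # local random perturbation
        fy = rastrigin(y)
        # Metropolis rule: always accept improvements; accept uphill moves
        # with probability exp(-delta/T) to escape local minima.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
        T *= cooling                                 # geometric cooling schedule
    return best, fbest

_, f = anneal()
print(f"best Rastrigin value found: {f:.3f}")
```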
Helicopter Flight Procedures for Community Noise Reduction
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2017-01-01
A computationally efficient, semiempirical noise model suitable for maneuvering flight noise prediction is used to evaluate the community noise impact of practical variations on several helicopter flight procedures typical of normal operations. Turns, "quick-stops," approaches, climbs, and combinations of these maneuvers are assessed. Relatively small variations in flight procedures are shown to cause significant changes to Sound Exposure Levels over a wide area. Guidelines are developed for helicopter pilots intended to provide effective strategies for reducing the negative effects of helicopter noise on the community. Finally, direct optimization of flight trajectories is conducted to identify low noise optimal flight procedures and quantify the magnitude of community noise reductions that can be obtained through tailored helicopter flight procedures. Physically realizable optimal turns and approaches are identified that achieve global noise reductions of as much as 10 dBA Sound Exposure Level.
TU-D-201-07: Severity Indication in High Dose Rate Brachytherapy Emergency Response Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, K; Rustad, F
Purpose: Understanding the dose received by different staff during the High Dose Rate (HDR) brachytherapy emergency response procedure could help to develop an efficient and effective action strategy. In this study, a variation and risk analysis methodology was developed to simulate the HDR emergency response procedure based on severity indicators. Methods: A GammaMedplus iX HDR unit from Varian Medical Systems was used for this simulation. The emergency response procedure was decomposed based on risk management methods. Severity indexes were used to identify the impact of a risk occurrence at each step, including dose to the patient and dose to operating staff, by varying the time, the HDR source activity, the distance from the source to patient and staff, and the actions taken. The actions in the 7 steps were: press the interrupt button, press the emergency shutoff switch, press the emergency button on the afterloader keypad, turn the emergency hand-crank, remove the applicator from the patient, disconnect the transfer tube and move the afterloader away from the patient, and execute emergency surgical recovery. Results: Given accumulated times in seconds at the assumed 7 steps of 15, 5, 30, 15, 180, 120, and 1800, and an HDR source activity of 10 Ci, the accumulated doses in cGy to the patient at 1 cm distance were 188, 250, 625, 813, 3063, 4563, and 27063, and the accumulated exposures in rem to the operator outside the vault and at 1 m and 10 cm distance were 0.0, 0.0, 0.1, 0.1, 22.6, 37.6, and 262.6. The variation was determined by the operators' actions at different times and distances from the HDR source. Conclusion: The time and dose were estimated for an HDR unit emergency response procedure, providing information for making optimal decisions during the emergency procedure. A further investigation would be to optimize and standardize the responses for other emergency procedures using a time-spatial-dose severity function.
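The abstract's patient-dose column can be approximately reproduced with simple inverse-square bookkeeping. The sketch below uses textbook-approximate values for the Ir-192 exposure-rate constant and roentgen-to-cGy conversion, together with the step times quoted in the abstract; treat everything as illustrative, not clinical guidance.

```python
# Back-of-envelope dose accumulation for an unshielded source, inverse-square
# law only. GAMMA and F_FACTOR are approximate textbook values for Ir-192;
# step durations are taken from the abstract, all distances assumed 1 cm.
GAMMA = 4.69e3   # R*cm^2 / (Ci*h), approx. exposure-rate constant for Ir-192
F_FACTOR = 0.96  # cGy per R in soft tissue (approximate)

def dose_cGy(activity_Ci, distance_cm, seconds):
    rate_R_per_s = GAMMA * activity_Ci / distance_cm ** 2 / 3600.0
    return F_FACTOR * rate_R_per_s * seconds

steps = [15, 5, 30, 15, 180, 120, 1800]   # seconds per emergency-response step
total = 0.0
for i, dt in enumerate(steps, 1):
    total += dose_cGy(10.0, 1.0, dt)
    print(f"after step {i}: {total:7.0f} cGy to patient at 1 cm")
```

With these constants the cumulative values come out close to the abstract's 188, 250, 625, 813, 3063, 4563, and 27063 cGy, which shows how steeply the surgical-recovery step dominates the patient dose.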
NASA Technical Reports Server (NTRS)
Korte, John J.
1992-01-01
A new procedure, unifying the best of present classical design practices with CFD and optimization procedures, is demonstrated for designing the aerodynamic lines of hypersonic wind tunnel nozzles. This procedure can be employed to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been demonstrated to break down. The procedure allows full utilization of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, may be used to design new nozzles or improve sections of existing ones, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure.
Optimization of a Tube Hydroforming Process
NASA Astrophysics Data System (ADS)
Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny
2004-06-01
An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed.
Meng, Jiang; Dong, Xiao-ping; Zhou, Yi-sheng; Jiang, Zhi-hong; Leung, Kelvin Sze-Yin; Zhao, Zhong-zhen
2007-02-01
To optimize the extraction procedure for essential oil from H. cordata using SFE-CO2 and to analyze the chemical composition of the essential oil. The extraction procedure for essential oil from fresh H. cordata was optimized with an orthogonal experiment, and the essential oil was analysed by GC-MS. The optimized preparative procedure was as follows: the essential oil of H. cordata was extracted at a temperature of 35 degrees C and a pressure of 15,000 kPa for 20 min. Thirty-eight chemical components were identified and their relative contents quantified. The optimized preparative procedure is reliable and can guarantee the quality of the essential oil.
Vortex generator design for aircraft inlet distortion as a numerical optimization problem
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Levy, Ralph
1991-01-01
Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high-altitude loiter each place different constraints on inlet design. Vortex generators, small wing-like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion-dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both the cost and the elapsed time to design new aircraft.
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for any combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. In the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
Improving stability and strength characteristics of framed structures with nonlinear behavior
NASA Technical Reports Server (NTRS)
Pezeshk, Shahram
1990-01-01
In this paper an optimal design procedure is introduced to improve the overall performance of nonlinear framed structures. The design methodology presented here is a multiple-objective optimization procedure whose objective functions involve the buckling eigenvalues and eigenvectors of the structure. A constant volume with bounds on the design variables is used in conjunction with an optimality criterion approach. The method provides a general tool for solving complex design problems and generally leads to structures with better limit strength and stability. Many algorithms have been developed to improve the limit strength of structures. In most applications geometrically linear analysis is employed, with the consequence that the overall strength of the design is overestimated. Directly optimizing the limit load of the structure would require a full nonlinear analysis at each iteration, which would be prohibitively expensive. The objective of this paper is to develop an algorithm that can improve the limit load of geometrically nonlinear framed structures while avoiding the nonlinear analysis. One of the novelties of the new design methodology is its ability to efficiently model and design structures under multiple loading conditions. These loading conditions can be different factored loads or any other loads that may be applied to the structure simultaneously or independently. Attention is focused on the optimal design of space framed structures. Three-dimensional design problems are more complicated to carry out, but they yield insight into the real behavior of the structure and can help avoid some of the problems that might appear in a planar design procedure, such as the need for an out-of-plane buckling constraint. Although researchers in the field of structural engineering generally agree that optimum design of three-dimensional building frames, especially in seismic regions, would be beneficial, methods have been slow to emerge.
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors written in FORTRAN generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedure. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase execution speed and reduce storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.
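A sequential linear programming driver of the general kind mentioned above can be sketched in a few lines: linearize the objective and constraints about the current design, solve an LP under move limits, and repeat. This is a generic illustration with a made-up two-variable "weight" problem, not the EAL/MINOS implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Generic sequential linear programming (SLP) sketch with move limits.
def f(x):  return x[0] + 2 * x[1]          # "weight" objective to minimize
def g(x):  return 1.0 - x[0] * x[1]        # nonlinear constraint g(x) <= 0

def grad(fun, x, h=1e-6):                  # forward finite differences
    f0 = fun(x)
    return np.array([(fun(x + h * e) - f0) / h for e in np.eye(len(x))])

x = np.array([3.0, 3.0])                   # feasible starting design
move = 0.5                                 # move limit per iteration
for _ in range(30):
    df, dg = grad(f, x), grad(g, x)
    # LP in the step d: min df.d  s.t.  g(x) + dg.d <= 0, |d| <= move, x+d >= 0.1
    res = linprog(df, A_ub=[dg], b_ub=[-g(x)],
                  bounds=[(max(-move, 0.1 - xi), move) for xi in x])
    if not res.success:
        break
    x = x + res.x
    move *= 0.9                            # shrink move limits as we converge
print("design:", np.round(x, 3), "weight:", round(f(x), 3), "g:", round(g(x), 4))
```

For this toy problem the iterates approach the analytic optimum x = (sqrt(2), 1/sqrt(2)), illustrating how the LP subproblem with shrinking move limits stands in for the full nonlinear solve at each design cycle.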
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Schmidt, Phillip H.
1993-01-01
A parameter optimization framework was developed earlier to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control (IFPC) implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing the various user-selected parameters in the optimization cost function such that the resulting optimized subcontrollers will match the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and to lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.
NASA Astrophysics Data System (ADS)
Bykovsky, A. Yu; Sherbakov, A. A.
2016-08-01
The C-valued Allen-Givone algebra is an attractive tool for modeling a robotic agent, but it requires the consensus method of minimization for the simplification of logic expressions. This procedure substitutes the maximal truth value for some undefined states of the function, thus extending the initially given truth table. This in turn creates the problem of different formal representations of the same initially given function. Multi-criteria optimization is proposed for the deliberate choice of undefined states and model formation.
Supercontinuum generation in a tapered tellurite microstructured optical fiber
NASA Astrophysics Data System (ADS)
Yan, X.; Ohishi, Y.
2014-07-01
Supercontinuum generation (SCG) was investigated in tapered tellurite microstructured optical fibers (MOFs) for various taper profiles. We emphasize the procedure for finding the dispersion profile that achieves the greatest width of the SC spectra. An enhancement of the SCG is achieved by carefully varying the taper waist diameter along its length, and an optimal degree of tapering is found to exist for tapers with an axially uniform waist. We also show the XFROG spectrograms of the pulses propagating through different tapered fibers, confirming the optimized taper conditions.
Hernández-Borges, Javier; Rodriguez-Delgado, Miguel Angel; García-Montelongo, Francisco J; Cifuentes, Alejandro
2005-06-01
In this work, the determination of a group of triazolopyrimidine sulfoanilide herbicides (cloransulam-methyl, metosulam, flumetsulam, florasulam, and diclosulam) in soy milk by capillary electrophoresis-mass spectrometry (CE-MS) is presented. The main electrospray interface (ESI) parameters (nebulizer pressure, dry gas flow rate, dry gas temperature, and composition of the sheath liquid) are optimized using a central composite design. To increase the sensitivity of the CE-MS method, an off-line sample preconcentration procedure based on solid-phase extraction (SPE) is combined with an on-line stacking procedure (i.e. normal stacking mode, NSM). Samples could be injected for up to 100 s, providing limits of detection (LODs) down to 74 microg/L, i.e., at the low ppb level, with relative standard deviation values (RSD,%) between 3.8% and 6.4% for peak areas on the same day, and between 6.5% and 8.1% on three different days. The usefulness of the optimized SPE-NSM-CE-MS procedure is demonstrated through the sensitive quantification of the selected pesticides in soy milk samples.
Rotationplasty of the lower limb for congenital defects of the femur.
Torode, I P; Gillespie, R
1983-11-01
The operative technique for combined fusion of the knee and rotationplasty of the limb in the management of congenital deficiency of the femur is presented. The technique described allows earlier definitive prosthetic fitting of a child with proximal femoral deficiency; it has reduced the number of operative procedures needed to obtain the optimal function from that deficient limb; and it has enabled these procedures to be performed at an earlier age. The technique differs from those previously described and represents a significant improvement in management of the patient with femoral deficiency.
Ambrosini, Emilia; Ferrante, Simona; Schauer, Thomas; Ferrigno, Giancarlo; Molteni, Franco; Pedrocchi, Alessandra
2014-01-01
Cycling training induced by Functional Electrical Stimulation (FES) currently requires the manual setting of different parameters, which is a time-consuming and poorly repeatable procedure. We propose an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consists of identifying the stimulation strategy as the angular ranges during which FES drives the motion, comparing the identified strategy with the physiological muscular activation strategy, and setting the pulse amplitude and duration for each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.
Functional Compromise in the Middle Vault in the Management of Revision Rhinoplasty.
Wang, Leo; Friedman, Oren
2018-06-01
As rhinoplasty procedures become more common, the need for revision surgeries increases as well. Unlike primary rhinoplasties, revision rhinoplasties can be more challenging because of anatomic differences from initial surgery, a lack of available cartilage, tissue remodeling responses, and other complications. As such, surgeons should be prepared to address revision rhinoplasty patients differently from primary rhinoplasty patients. Here, the authors describe a generalizable approach to revision functional rhinoplasty patients and detail some of the surgical techniques that can be employed to achieve optimal outcomes, with particular attention paid to procedures that can be used in the middle vault.
Treatment of Periprosthetic Infections: An Economic Analysis
Hernández-Vaquero, Daniel; Fernández-Fairen, Mariano; Torres, Ana; Menzie, Ann M.; Fernández-Carreira, José Manuel; Murcia-Mazon, Antonio; Merzthal, Luis
2013-01-01
This review summarizes the existing economic literature, assesses the value of current data, and presents the less costly and more effective procedures for the treatment of periprosthetic infections of the knee and hip. Optimizing antibiotic use in the prevention and treatment of periprosthetic infection, combined with systemic and behavioral changes in the operating room, the detection and treatment of high-risk patient groups, as well as the rational management of the existing infection by using the different procedures according to each particular case, could allow for improved outcomes and lead to the highest quality of life for patients and the lowest economic impact. Nevertheless, the cost-effectiveness of different interventions to treat periprosthetic infections remains unclear. PMID:23781163
Optimum Policy Regions for Computer-Directed Teaching Systems.
ERIC Educational Resources Information Center
Smallwood, Richard D.
The development of computer-directed instruction in which the learning protocol is tailored to each student on the basis of his learning history requires a means by which the many different trajectories open to a student can be resolved. Such an optimization procedure can be constructed to reduce the long and costly calculations associated with…
Using Green Star Metrics to Optimize the Greenness of Literature Protocols for Syntheses
ERIC Educational Resources Information Center
Duarte, Rita C. C.; Ribeiro, M. Gabriela T. C.; Machado, Adélio A. S. C.
2015-01-01
A procedure to improve the greenness of a synthesis, without performing laboratory work, using alternative protocols available in the literature is presented. The greenness evaluation involves the separate assessment of the different steps described in the available protocols--reaction, isolation, and purification--as well as the global process,…
Design optimization studies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.
1993-01-01
The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
Multidisciplinary aerospace design optimization: Survey of recent developments
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Haftka, Raphael T.
1995-01-01
The increasing complexity of engineering systems has sparked increasing interest in multidisciplinary optimization (MDO). This paper presents a survey of recent publications in the field of aerospace, where interest in MDO has been particularly intense. The two main challenges of MDO are computational expense and organizational complexity. Accordingly, the survey is focused on the various ways in which researchers deal with these challenges. The survey is organized by a breakdown of MDO into its conceptual components, with sections on Mathematical Modeling, Design-oriented Analysis, Approximation Concepts, Optimization Procedures, System Sensitivity, and Human Interface. With the authors' main expertise being in the structures area, the bulk of the references focus on the interaction of the structures discipline with other disciplines. In particular, two sections at the end focus on two such interactions that have recently been pursued with particular vigor: Simultaneous Optimization of Structures and Aerodynamics, and Simultaneous Optimization of Structures Combined With Active Control.
Optimization of Progressive Freeze Concentration on Apple Juice via Response Surface Methodology
NASA Astrophysics Data System (ADS)
Samsuri, S.; Amran, N. A.; Jusoh, M.
2018-05-01
In this work, a progressive freeze concentration (PFC) system was developed to concentrate apple juice and was optimized by response surface methodology (RSM). The effects of operating conditions such as coolant temperature, circulation flowrate, circulation time, and shaking speed on the effective partition constant (K) were investigated. A five-level central composite design (CCD) was employed to search for the optimal conditions for concentrating apple juice. A full quadratic model for K was established using the method of least squares; the coefficient of determination (R²) of this model was 0.7792. The optimum conditions were found to be coolant temperature = -10.59 °C, circulation flowrate = 3030.23 mL/min, circulation time = 67.35 minutes, and shaking speed = 30.96 rpm. A validation experiment was performed to evaluate the accuracy of the optimization procedure, and the best K value of 0.17 was achieved under the optimized conditions.
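The fitting step described above, a full quadratic model estimated by least squares from a central composite design, can be illustrated with a short sketch. The design points and response values below are invented placeholders rather than the study's data; only the model form (full quadratic, least squares, R² check) follows the abstract.

    # Sketch: fit a full quadratic response-surface model for K by least squares.
    # All numeric values here are illustrative placeholders, not the study's data.
    import numpy as np
    from itertools import combinations_with_replacement

    def quadratic_design_matrix(X):
        """Expand factors into [1, x_i, x_i*x_j] columns of a full quadratic model."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(k), 2)]
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    # Factors: coolant temperature, flowrate, time, shaking speed (coded units)
    X = rng.uniform(-1, 1, size=(30, 4))
    K = 0.3 + 0.05 * X[:, 0] - 0.02 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(30)

    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, K, rcond=None)   # method of least squares
    K_hat = A @ beta
    ss_res = np.sum((K - K_hat) ** 2)
    ss_tot = np.sum((K - K.mean()) ** 2)
    print("R^2 =", 1 - ss_res / ss_tot)            # the paper reports R^2 = 0.7792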
Shape Optimization and Modular Discretization for the Development of a Morphing Wingtip
NASA Astrophysics Data System (ADS)
Morley, Joshua
Better knowledge in the areas of aerodynamics and optimization has allowed designers to develop efficient wingtip structures in recent years. However, the requirements faced by wingtip devices can differ considerably among an aircraft's flight regimes. Traditional static wingtip devices are therefore a compromise between conflicting requirements, resulting in less than optimal performance within each regime. Alternatively, a morphing wingtip can reconfigure itself, leading to improved performance over a range of dissimilar flight conditions. Developed within this thesis is a modular morphing wingtip concept that centers on the use of variable-geometry truss mechanisms to permit morphing. A conceptual design framework is established to aid in the development of the concept. The framework uses a metaheuristic optimization procedure to determine optimal continuous wingtip configurations. The configurations are then discretized for the modular concept. The functionality of the framework is demonstrated through a design study on a hypothetical wing/winglet within the thesis.
Performance optimization of an MHD generator with physical constraints
NASA Technical Reports Server (NTRS)
Pian, C. C. P.; Seikel, G. R.; Smith, J. M.
1979-01-01
A technique is described that optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs in system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the global optimum designs. The results of generator performance calculations are presented for an approximately 2000 MWe size plant. The differences between the maximum-power generator design and the optimal design that maximizes net MHD power are described. The sensitivity of the generator performance to the various operational parameters is also presented.
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1993-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures, which in turn necessitates thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented utilizing a general-purpose finite element code. In addition, an optimization procedure was developed and implemented to determine critical experimental parameters to optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.
NASA Astrophysics Data System (ADS)
Silvestro, Paolo Cosmo; Casa, Raffaele; Pignatti, Stefano; Castaldi, Fabio; Yang, Hao; Guijun, Yang
2016-08-01
The aim of this work was to develop a tool to evaluate the effect of water stress on yield losses at the farmland and regional scale, by assimilating remotely sensed biophysical variables into crop growth models. Biophysical variables were retrieved from HJ1A, HJ1B and Landsat 8 images, using an algorithm based on the training of artificial neural networks on PROSAIL. For the assimilation, two crop models of differing degrees of complexity were used: Aquacrop and SAFY. For Aquacrop, an optimization procedure to reduce the difference between the remotely sensed and simulated CC was developed. For the modified version of SAFY, the assimilation procedure was based on the Ensemble Kalman Filter. These procedures were tested in a spatialized application, using data collected in the rural area of Yangling (Shaanxi Province) between 2013 and 2015. Results were validated using yield data from both ground measurements and statistical surveys.
NASA Astrophysics Data System (ADS)
Kozioł, Michał
2017-10-01
The article presents a parametric model describing the registered spectral distributions of optical radiation emitted by electrical discharges generated in needle-needle and needle-plate systems and in a system for surface discharges. Generation of electrical discharges and registration of the emitted radiation were carried out in three different electrical insulating oils: new, in-service (used), and in-service with air bubbles. A high-resolution spectrophotometer was used to register the optical spectra in the ultraviolet, visible, and near-infrared ranges. The proposed mathematical model was developed in a regression procedure using a Gauss-sigmoid type function, with the intensity of the recorded optical signals as the dependent variable. An evolutionary algorithm was used to estimate the optimal parameters of the model, and the optimization procedure was performed in the Matlab environment. The coefficient of determination R² was applied to quantify how well the theoretical regression function matched the empirical data.
Drevinskas, Tomas; Mickienė, Rūta; Maruška, Audrius; Stankevičius, Mantas; Tiso, Nicola; Mikašauskaitė, Jurgita; Ragažinskienė, Ona; Levišauskas, Donatas; Bartkuvienė, Violeta; Snieškienė, Vilija; Stankevičienė, Antanina; Polcaro, Chiara; Galli, Emanuela; Donati, Enrica; Tekorius, Tomas; Kornyšova, Olga; Kaškonienė, Vilma
2016-02-01
The miniaturization and optimization of a white rot fungal bioremediation experiment is described in this paper. The optimized procedure allows determination of the degradation kinetics of anthracene and requires only 2.5 ml of culture medium. The experiment is more precise, robust, and better controlled than classical tests in flasks. Using this technique, different parts, i.e., the culture medium, the fungi, and the cotton seal, can be analyzed separately. A simple sample preparation speeds up the analytical process. The experiments performed show degradation of anthracene of up to approximately 60% by Irpex lacteus and up to approximately 40% by Pleurotus ostreatus in 25 days. Bioremediation by the consortium of I. lacteus and P. ostreatus shows biodegradation of anthracene of up to approximately 56% in 23 days. At the end of the experiment, the surface tension of the culture medium had decreased compared with the blank, indicating the generation of surfactant compounds.
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs that are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy, and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
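As a rough illustration of the mutation operator described above, the sketch below draws perturbations from a fractal attractor generated by a 2-D IFS (here the classic Sierpinski-triangle maps run via the chaos game) instead of from a Gaussian. The specific maps, the pairing of genes into 2-D coordinates, and the mutation scale are illustrative assumptions, not the paper's exact choices.

    # Sketch: an IFS-based mutation for EP, using chaos-game samples from a
    # Sierpinski-triangle IFS in place of Gaussian perturbations (assumed maps).
    import numpy as np

    def ifs_sample(n_iter=200, rng=np.random.default_rng()):
        """Run the chaos game on a 2-D IFS and return the final point."""
        vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
        p = rng.random(2)
        for _ in range(n_iter):
            v = vertices[rng.integers(3)]
            p = 0.5 * (p + v)               # one affine contraction of the IFS
        return p - vertices.mean(axis=0)    # roughly center the perturbation

    def ifs_mutate(individual, scale=0.1, rng=np.random.default_rng()):
        """Mutate each coordinate pair of an EP individual with an IFS sample."""
        child = individual.copy()
        for i in range(0, len(child) - 1, 2):
            child[i:i + 2] += scale * ifs_sample(rng=rng)
        return child

    x = np.zeros(6)                          # a toy individual
    print(ifs_mutate(x))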
Optimally robust redundancy relations for failure detection in uncertain systems
NASA Technical Reports Server (NTRS)
Lou, X.-C.; Willsky, A. S.; Verghese, G. C.
1986-01-01
All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
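The SVD construction mentioned above can be sketched numerically: stack the measurement matrices obtained under different model perturbations into a single matrix and read redundancy (parity) directions off the left singular vectors, ordered by singular value, with the smallest singular values giving the most robust relations. The toy system below is an assumption for illustration, not the paper's formulation.

    # Sketch: robust parity/redundancy directions from one SVD, on toy matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    C_nominal = rng.standard_normal((5, 2))      # 5 measurements, 2 states
    models = [C_nominal + 0.05 * rng.standard_normal((5, 2)) for _ in range(4)]

    M = np.hstack(models)                        # the single stacked matrix
    U, s, Vt = np.linalg.svd(M)

    # Left singular vectors with small singular values are near-parity directions:
    # w @ C_i stays small for every model realization, so w is a redundancy relation,
    # and the singular values order the relations by robustness.
    for k in range(M.shape[0] - 1, 1, -1):
        w = U[:, k]
        residuals = [np.linalg.norm(w @ Ci) for Ci in models]
        print(f"relation {k}: worst-case residual {max(residuals):.3g}")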
Solution-mediated cladding doping of commercial polymer optical fibers
NASA Astrophysics Data System (ADS)
Stajanca, Pavol; Topolniak, Ievgeniia; Pötschke, Samuel; Krebber, Katerina
2018-03-01
Solution doping of commercial polymethyl methacrylate (PMMA) polymer optical fibers (POFs) is presented as a novel approach for the preparation of custom cladding-doped POFs (CD-POFs). The presented method is based on solution-mediated diffusion of dopant molecules into the fiber cladding upon soaking of POFs in a methanol-dopant solution. The method was tested on three different commercial POFs using Rhodamine B as a fluorescent dopant. The dynamics of the diffusion process were studied in order to optimize the doping procedure in terms of the most suitable POF, doping time, and conditions. Using the optimized procedure, a longer segment of fluorescent CD-POF was prepared and its performance characterized. The fiber's potential for sensing and illumination applications is demonstrated and discussed. The proposed method represents a simple and cheap way to fabricate custom, short- to medium-length CD-POFs with various dopants.
Roback, M G; Green, S M; Andolfatto, G; Leroy, P L; Mason, K P
2018-01-01
Many hospitals, and medical and dental clinics and offices, routinely monitor their procedural-sedation practices, tracking adverse events, outcomes, and efficacy in order to optimize sedation delivery and practice. Currently, there exist substantial differences between settings in the content, collection, definition, and interpretation of such sedation outcomes, with resulting widespread reporting variation. With the objective of reducing such disparities, the International Committee for the Advancement of Procedural Sedation has herein developed a multidisciplinary, consensus-based, standardized tool intended to be applicable to all types of sedation providers in all locations worldwide. This tool is amenable for inclusion in either a paper or an electronic medical record. An additional, parallel research tool is presented to promote consistency and standardized data collection for procedural-sedation investigations.
Geometrical Optimization Approach to Isomerization: Models and Limitations.
Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R
2017-11-02
We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations in the optimization procedure, such that the method rediscovers the pump-dump mechanism.
Development of an algorithm to plan and simulate a new interventional procedure.
Fujita, Buntaro; Kütting, Maximilian; Scholtz, Smita; Utzenrath, Marc; Hakim-Meibodi, Kavous; Paluszkiewicz, Lech; Schmitz, Christoph; Börgermann, Jochen; Gummert, Jan; Steinseifer, Ulrich; Ensminger, Stephan
2015-07-01
The number of implanted biological valves for the treatment of valvular heart disease is growing, and a percentage of these patients will eventually undergo a transcatheter valve-in-valve (ViV) procedure. Some of these patients will represent challenging cases. The aim of this study was to develop a feasible algorithm to plan and simulate in vitro a new interventional procedure to improve patient outcome. In addition to the standard diagnostic routine, our algorithm includes 3D printing of the annulus, hydrodynamic measurements, and high-speed analysis of leaflet kinematics after simulation of the procedure in different prosthesis positions, as well as X-ray imaging of the most suitable valve position to create a 'blueprint' for the patient procedure. This algorithm was developed for a patient with a degenerated Perceval aortic sutureless prosthesis requiring a ViV procedure. Different ViV procedures were assessed with the algorithm, and based on these results the best option for the patient was chosen. The actual procedure went exactly as planned with the help of this algorithm. We have thus developed a technically feasible algorithm that simulates important aspects of a novel interventional procedure prior to the actual procedure. This algorithm can be applied to virtually all patients requiring a novel interventional procedure to help identify risks and find optimal parameters for prosthesis selection and placement in order to maximize safety for the patient.
Articulated Arm Coordinate Measuring Machine Calibration by Laser Tracker Multilateration
Majarena, Ana C.; Brau, Agustín; Velázquez, Jesús
2014-01-01
A new procedure for the calibration of an articulated arm coordinate measuring machine (AACMM) is presented in this paper. First, a self-calibration algorithm of four laser trackers (LTs) is developed. The spatial localization of a retroreflector target, placed in different positions within the workspace, is determined by means of a geometric multilateration system constructed from the four LTs. Next, a nonlinear optimization algorithm for the identification procedure of the AACMM is explained. An objective function based on Euclidean distances and standard deviations is developed. This function is obtained from the captured nominal data (given by the LTs used as a gauge instrument) and the data obtained by the AACMM and compares the measured and calculated coordinates of the target to obtain the identified model parameters that minimize this difference. Finally, results show that the procedure presented, using the measurements of the LTs as a gauge instrument, is very effective by improving the AACMM precision. PMID:24688418
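A minimal sketch of the identification step follows: minimize an objective built from Euclidean distances between the gauge coordinates (from the laser trackers) and the coordinates produced by the instrument model. For brevity, the "model" here is a 6-parameter rigid transform rather than the arm's full kinematic parameter set, so everything below is an illustrative stand-in under that assumption.

    # Sketch: least-squares identification against laser-tracker gauge points.
    # The rigid transform stands in for the AACMM kinematic model (assumption).
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    rng = np.random.default_rng(1)
    gauge = rng.uniform(-1, 1, (20, 3))                    # LT multilateration points
    true_R = Rotation.from_euler("xyz", [0.02, -0.01, 0.03])
    measured = true_R.apply(gauge) + [0.05, -0.02, 0.01]   # simulated arm readings

    def residuals(p):
        """Point-wise coordinate differences for parameters p = (angles, offset)."""
        R = Rotation.from_euler("xyz", p[:3])
        return (R.apply(gauge) + p[3:] - measured).ravel()

    fit = least_squares(residuals, np.zeros(6))
    print("identified parameters:", fit.x)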
Bias in error estimation when using cross-validation for model selection.
Varma, Sudhir; Simon, Richard
2006-02-23
Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers; Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With "null" and "non null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training data-sets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" data-sets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
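A minimal sketch of the nested-CV estimate recommended above, on a synthetic "null" dataset (features independent of labels) with an SVM tuned in the inner loop, assuming scikit-learn is available:

    # Sketch: nested CV -- the inner loop tunes the SVM, the outer loop
    # estimates error, so tuning never sees the outer test fold.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 50))        # "null" data: features ~ N(0, 1)
    y = np.repeat([0, 1], 40)                # labels independent of features

    inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=KFold(5))
    outer_scores = cross_val_score(inner, X, y, cv=KFold(5))
    print("nested-CV accuracy:", outer_scores.mean())   # should hover near 0.5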
Kawabe, Takefumi; Tomitsuka, Toshiaki; Kajiro, Toshi; Kishi, Naoyuki; Toyo'oka, Toshimasa
2013-01-18
An optimization procedure for ternary isocratic mobile phase composition in HPLC methods, using a statistical prediction model and a visualization technique, is described. Two prediction models were first evaluated to obtain reliable prediction results. The retention time prediction model was constructed by modifying established retention models for ternary solvent strength changes. Multiple regression modeling of the solvent strength parameters gave an excellent correlation between observed and predicted retention times for various pharmaceutical compounds. The prediction model for the peak width at half height employed polynomial fitting of the retention time, because a linear relationship between the peak width at half height and the retention time was not obtained even after accounting for the contribution of the extra-column effect based on a moment method. Accurate predictions were obtained with this model, with correlation coefficients between observed and predicted peak widths at half height mostly over 0.99. A procedure to visualize a resolution Design Space was then developed. An artificial neural network was used to link the ternary solvent strength parameters directly to the predicted resolution, determined from the accurate predictions of retention time and peak width at half height, and to visualize appropriate ternary mobile phase compositions as the region of resolution over 1.5 on a contour profile. Using mixtures of similar pharmaceutical compounds in case studies, we verified the ability of the procedure to predict the optimal range of conditions. Observed chromatographic results under the optimal conditions mostly matched the predictions, and the average difference between observed and predicted resolution was approximately 0.3, indicating that the proposed procedure achieves sufficient prediction accuracy. Consequently, the procedure provides a way to search the optimal range of ternary solvent strengths achieving an appropriate separation, using the resolution Design Space based on accurate prediction.
Calvano, C D; Aresta, A; Iacovone, M; De Benedetto, G E; Zambonin, C G; Battaglia, M; Ditonno, P; Rutigliano, M; Bettocchi, C
2010-03-11
Protein analysis in biological fluids, such as urine, by means of mass spectrometry (MS) still suffers from insufficient standardization of protocols for sample collection, storage, and preparation. In this work, the influence of these variables on the protein profiling of healthy donors' urine by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was studied. A screening of various urine sample pre-treatment procedures and different sample deposition approaches on the MALDI target was performed. The influence of urine sample storage time and temperature on spectral profiles was evaluated by means of principal component analysis (PCA). The whole optimized procedure was eventually applied to the MALDI-TOF-MS analysis of human urine samples taken from prostate cancer patients. The best results in terms of the number and abundance of detected ions in the MS spectra were obtained using home-made microcolumns packed with hydrophilic-lipophilic balance (HLB) resin as the sample pre-treatment method; this procedure was also less expensive and suitable for high-throughput analyses. Afterwards, the spin-coating approach for sample deposition on the MALDI target plate was optimized, yielding homogeneous and reproducible spots. PCA indicated that low storage temperatures of acidified and centrifuged samples, together with short handling times, allowed reproducible profiles to be obtained without artifact contributions due to experimental conditions. Finally, interesting differences were found by comparing the MALDI-TOF-MS protein profiles of pooled urine samples from healthy donors and prostate cancer patients. The results show that analytical and pre-analytical variables are crucial for the success of urine analysis and for obtaining meaningful and reproducible data, even though intra-patient variability is very difficult to avoid. Pooled urine samples proved to be an effective way to simplify the comparison between healthy and pathological samples and to identify possible differences in protein expression between the two sets of samples.
Optimizing Cold Water Immersion for Exercise-Induced Hyperthermia: A Meta-analysis.
Zhang, Yang; Davis, Jon-Kyle; Casa, Douglas J; Bishop, Phillip A
2015-11-01
Cold water immersion (CWI) provides rapid cooling in events of exertional heat stroke. Optimal procedures for CWI in the field are not well established. This meta-analysis aimed to provide a structured analysis of the effectiveness of CWI on the cooling rate in healthy adults subjected to exercise-induced hyperthermia. An electronic search (December 2014) was conducted using PubMed and Web of Science. The mean difference in cooling rate between CWI and passive recovery was calculated. Pooled analyses were based on a random-effects model. Sources of heterogeneity were identified through a mixed-effects model Q statistic. Inferential statistics aggregated the CWI cooling rate for extrapolation. Nineteen studies qualified for inclusion. CWI elicited a significant effect: mean difference, 0.03 °C·min⁻¹; 95% confidence interval, 0.03-0.04 °C·min⁻¹. A conservative, observed estimate of the CWI cooling rate was 0.08 °C·min⁻¹ across various conditions. CWI cooled individuals twice as fast as passive recovery. Subgroup analyses revealed that cooling was more effective (Q test P < 0.10) when preimmersion core temperature was ≥38.6 °C, immersion water temperature was ≤10 °C, ambient temperature was ≥20 °C, immersion duration was ≤10 min, and torso-plus-limbs immersion was used. There is insufficient evidence of an effect of forearm/hand CWI for rapid cooling: mean difference, 0.01 °C·min⁻¹; 95% confidence interval, -0.01 °C·min⁻¹ to 0.04 °C·min⁻¹. A combined data summary pertaining to 607 subjects from 29 relevant studies is presented for referencing the weighted cooling rate and recovery time, so that practitioners can better plan emergency procedures. An optimal procedure for yielding high cooling rates is proposed. Prompt, vigorous CWI should be encouraged for treating exercise-induced hyperthermia whenever possible, using cold water (approximately 10 °C) and maximizing body surface contact (whole-body immersion).
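The random-effects pooling used above can be illustrated with a DerSimonian-Laird sketch; the three per-study mean differences and variances below are invented placeholders, not values from the 19 included studies.

    # Sketch: DerSimonian-Laird random-effects pooling of per-study mean
    # differences in cooling rate (deg C/min). Inputs are illustrative only.
    import numpy as np

    md = np.array([0.03, 0.05, 0.02])     # per-study mean differences
    var = np.array([1e-4, 2e-4, 1.5e-4])  # per-study variances

    w = 1 / var                                          # fixed-effect weights
    q = np.sum(w * (md - np.sum(w * md) / w.sum()) ** 2) # Cochran's Q
    df = len(md) - 1
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - df) / c)                        # between-study variance

    w_re = 1 / (var + tau2)                              # random-effects weights
    pooled = np.sum(w_re * md) / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    print(f"pooled MD = {pooled:.3f} "
          f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")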
Long-term psychosocial consequences of surgical congenital malformations.
Diseth, Trond H; Emblem, Ragnhild
2017-10-01
Surgical congenital malformations often entail years of treatment, a large number of hospital stays, treatment procedures, and long-term functional sequelae affecting patients' psychosocial functioning. Both the functional defects and the psychosocial difficulties that occur commonly in childhood may pass through adolescence on to adulthood. This overview presents reports published over the past 3 decades to elucidate the long-term psychosocial consequences of surgical congenital malformations. Literature searches conducted on the PubMed database revealed that less than 1% of all records on surgical congenital malformations described long-term psychosocial consequences, and with diverse findings. This inconsistency may be due to methodological differences or deficiencies, especially in study design, patient sampling, and methods. Most of the studies revealed that the functional deficits may have great impact on patients' mental health, psychosocial functioning, and QoL, with both short- and long-term negative consequences. Factors other than functional problems, e.g., repeated anesthesia, multiple hospitalizations, traumatic treatment procedures, and parental dysfunction, may also predict long-term mental health and psychosocial functioning. Through a multidisciplinary approach, pediatric surgeons should also be aware of deficits in emotional and psychosocial functioning. To achieve overall optimal psychosocial functioning, the challenge is to find a compromise between physically optimal treatment procedures and procedures that are not psychologically detrimental.
Multi-objective Optimization of Departure Procedures at Gimpo International Airport
NASA Astrophysics Data System (ADS)
Kim, Junghyun; Lim, Dongwook; Monteiro, Dylan Jonathan; Kirby, Michelle; Mavris, Dimitri
2018-04-01
Most aviation communities have increasing concerns about the environmental impacts, which are directly linked to health issues for local residents near the airport. In this study, the environmental impact of different departure procedures using the Aviation Environmental Design Tool (AEDT) was analyzed. First, actual operational data were compiled at Gimpo International Airport (March 20, 2017) from an open source. Two modifications were made in the AEDT to model the operational circumstances better and the preliminary AEDT simulations were performed according to the acquired operational procedures. Simulated noise results showed good agreements with noise measurement data at specific locations. Second, a multi-objective optimization of departure procedures was performed for the Boeing 737-800. Four design variables were selected and AEDT was linked to a variety of advanced design methods. The results showed that takeoff thrust had the greatest influence and it was found that fuel burn and noise had an inverse relationship. Two points representing each fuel burn and noise optimum on the Pareto front were parsed and run in AEDT to compare with the baseline. The results showed that the noise optimum case reduced Sound Exposure Level 80-dB noise exposure area by approximately 5% while the fuel burn optimum case reduced total fuel burn by 1% relative to the baseline for aircraft-level analysis.
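A minimal sketch of the Pareto-front filtering implied by the fuel-burn/noise trade-off described above; the (fuel burn, noise) pairs are invented placeholders, not AEDT outputs.

    # Sketch: keep departure-procedure designs not dominated in both objectives.
    def pareto_front(points):
        """Return points not dominated in (fuel burn, noise); lower is better."""
        front = []
        for p in points:
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
                front.append(p)
        return front

    designs = [(5200, 81.0), (5150, 82.5), (5300, 79.8), (5250, 80.9), (5180, 83.0)]
    print(pareto_front(designs))   # fuel-optimum and noise-optimum ends included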
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization, and applied to the design of a controlled diffusion stator blade row is presented. A general purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and selection of design objective and constraints are described. The procedure for automating the design of a two dimensional blade section is discussed, and design results are presented.
Sa, Young Jo; Lee, Jongho; Jeong, Jin Yong; Choi, Moonhee; Park, Soo Seog; Sim, Sung Bo; Jo, Keon Hyon
2016-01-19
Bar displacement is one of the most common and serious complications after the Nuss procedure. However, measurements of and factors affecting bar displacement have not been reported. The objectives of this study were to develop a decision model to guide surgeons considering repeat treatment and to estimate optimal cut-off values to determine whether reoperation to correct bar displacement is warranted. From July 2011 to August 2013, ninety bars were inserted in 61 patients who underwent Nuss procedures for pectus excavatum. Group A did not need surgical intervention and Group B required reoperation for bar displacement. Bar position was measured as the distance from the posterior superior end of the sternal body to the upper border of the metal bar on lateral chest radiographs. The bar displacement index (BDI) was calculated as (D0 - Dx) / D0 × 100 (D0: bar position the day after surgery; Dx: minimal or maximal distance of bar position on the following postoperative days). The optimal cut-off values of BDI warranting reoperation were assessed on the basis of ROC curve analysis. Of the 61 patients, 32 had single bars inserted whereas 29 had parallel bars inserted. There was a significant difference in age (14.0 ± 7.5 vs. 23.3 ± 12.0, p = 0.0062), preoperative Haller index (HI) (4.0 ± 1.1 vs. 5.0 ± 1.0, p = 0.033), and postoperative HI (2.7 ± 0.4 vs. 3.2 ± 0.5, p = 0.006) between the two groups. The optimal cut-off value of BDI was 8.7. We developed a BDI model for surgeons considering performing reoperation after the Nuss procedure. The optimal cut-off value of BDI was 8.7. This model may help surgeons to decide objectively whether corrective surgery should be performed. The main factors affecting the relationship between bar displacement and reoperation were age and preoperative HI.
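The BDI formula and cut-off reported above translate directly into a short helper; the distances in the example call are invented, and treating the cut-off as a threshold on the BDI magnitude is an assumption on our part.

    # Sketch: the study's BDI = (D0 - Dx) / D0 x 100, with the reported 8.7 cut-off.
    def bar_displacement_index(d0_mm: float, dx_mm: float) -> float:
        """D0: bar position the day after surgery; Dx: extreme follow-up position."""
        return (d0_mm - dx_mm) / d0_mm * 100.0

    def needs_reoperation(bdi: float, cutoff: float = 8.7) -> bool:
        """Apply the ROC-derived cut-off; clinical judgment still applies."""
        return abs(bdi) >= cutoff

    bdi = bar_displacement_index(d0_mm=25.0, dx_mm=22.0)   # illustrative distances
    print(f"BDI = {bdi:.1f} -> reoperation indicated: {needs_reoperation(bdi)}")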
Bustamante, Luis; Cárdenas, Diana; von Baer, Dietrich; Pastene, Edgar; Duran-Sandoval, Daniel; Vergara, Carola; Mardones, Claudia
2017-09-01
Miniaturized sample pretreatments for the analysis of phenolic metabolites in plasma, involving protein precipitation, enzymatic deconjugation, extraction procedures, and different derivatization reactions, were systematically evaluated. The analyses were conducted by gas chromatography with mass spectrometry for the evaluation of 40 diet-derived phenolic compounds. Enzyme purification was necessary for phenolic deconjugation before extraction. A trimethylsilylation reagent and two different tetrabutylammonium salts were compared as derivatization reagents. The optimum reaction conditions were 50 μL of trimethylsilylation reagent at 90°C for 30 min, while the tetrabutylammonium salts were associated with loss of sensitivity due to rapid activation of the inert gas chromatograph liner. Phenolic acid extraction from plasma was optimized. Optimal microextraction-by-packed-sorbent performance was achieved using an octadecylsilyl packed bed, and better recoveries were observed for less polar compounds, such as methoxylated derivatives. Despite the low recovery for many analytes, the repeatability of the automated extraction procedure in the gas chromatograph inlet was 2.5%. With liquid-liquid microextraction, better recoveries (80-110%) were instead observed for all analytes, at the expense of repeatability (3.8-18.4%). The phenolic compounds in gerbil plasma samples, collected before and 4 h after the administration of a calafate extract, were analyzed with the optimized methodology.
2015-01-01
With an ever-growing aging population and demand for denture treatments, pressure-induced mucosa lesions and residual ridge resorption remain major sources of clinical complications. Conventional denture design and fabrication are challenged by their labor and experience intensity, urgently necessitating an automatic procedure. This study aims to develop a fully automatic procedure enabling shape optimization and additive manufacturing of removable partial dentures (RPD), to maximize the uniformity of the contact pressure distribution on the mucosa, thereby reducing associated clinical complications. A 3D heterogeneous finite element (FE) model was constructed from a CT scan, and the critical mucosa tissue was modeled as a hyperelastic material from in vivo clinical data. A contact shape optimization algorithm was developed based on the bi-directional evolutionary structural optimization (BESO) technique. Both the initial and optimized dentures were prototyped by 3D printing technology and evaluated with in vitro tests. Through the optimization, the peak contact pressure was reduced by 70%, and the uniformity was improved by 63%. In vitro tests verified the effectiveness of this procedure, and the hydrostatic pressure induced in the mucosa is well below clinical pressure-pain thresholds (PPT), potentially lessening the risk of residual ridge resorption. This proposed computational optimization and additive fabrication procedure provides a novel method for fast denture design and adjustment at low cost, with quantitative guidelines and computer-aided design and manufacturing (CAD/CAM) for a specific patient. The integration of digitalized modeling, computational optimization, and free-form fabrication enables more efficient clinical adaptation. The customized optimal denture design is expected to minimize pain/discomfort and potentially reduce long-term residual ridge resorption. PMID:26161878
Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère
2017-01-01
Genetic abnormalities, including chromosomal translocations, are described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and consequently major achievements in targeted therapy. We describe the optimization of the Proximity Ligation Assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative method for detecting protein-protein colocalization that combines the advantages of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure that overcomes the difficulty of maintaining non-adherent hematological cells, using traditional cytocentrifugation together with optimized buffers, modified incubation times, and modified washing steps. Further, we provide convincing negative and positive controls and demonstrate that the optimized PLA procedure is sensitive to total protein level. The optimized procedure allows the detection of fusion proteins, their subcellular expression, and protein interactions in non-adherent cells, and can be readily applied to various non-adherent hematological cells, from cell lines to patients' cells. It therefore provides a new tool that can be adopted in a wide range of applications in the biological field.
Detonation energies of explosives by optimized JCZ3 procedures
NASA Astrophysics Data System (ADS)
Stiel, Leonard I.; Baker, Ernest L.
1998-07-01
Procedures for computing the detonation properties of explosives have been extended to the calculation of detonation energies at adiabatic expansion conditions. The use of the JCZ3 equation of state with optimized Exp-6 potential parameters yields lower errors, relative to JWL detonation energies, than the other methods tested.
[Preparation procedures of anti-complementary polysaccharides from Houttuynia cordata].
Zhang, Juanjuan; Lu, Yan; Chen, Daofeng
2012-07-01
To establish and optimize the preparation procedures for the anti-complementary polysaccharides from Houttuynia cordata, the conditions of the extraction and alcohol precipitation processes were optimized by orthogonal tests, based on the yield and anti-complementary activity in vitro. The optimal deproteinization condition was determined according to the amounts of protein removed and polysaccharide retained, and the best decoloring method was likewise optimized by orthogonal experimental design. The optimized preparation procedures are as follows: extract the coarse powder 3 times with a 50-fold volume of water at 90 degrees C for 2 hours each time; combine the extracts and concentrate appropriately, to the equivalent of 0.12 g of H. cordata per milliliter. Add 4 times the volume of 90% ethanol to the extract, allow it to stand for 24 hours to precipitate completely, filter, and wash the precipitate successively with anhydrous alcohol, acetone, and anhydrous ether. Redissolve the residue in water and add trichloroacetic acid (TCA) to a concentration of 20% to remove protein. Decolorize with 3% activated carbon at pH 3.0 and 50 degrees C for 50 min. These procedures were tested 3 times, resulting in an average polysaccharide yield of 4.03% (RSD 0.96%), average polysaccharide and protein concentrations of 80.97% (RSD 1.5%) and 2.02% (RSD 2.3%), respectively, and an average CH50 of 0.079 g·L⁻¹ (RSD 3.6%). The established and optimized procedures are repeatable and reliable for preparing anti-complementary polysaccharides of high quality and activity from H. cordata.
Influence of diagnostic criteria on the interpretation of adrenal vein sampling.
Lethielleux, Gaëlle; Amar, Laurence; Raynaud, Alain; Plouin, Pierre-François; Steichen, Olivier
2015-04-01
Guidelines promote the use of adrenal vein sampling (AVS) to document lateralized aldosterone hypersecretion in primary aldosteronism. However, there are large discrepancies between institutions in the criteria used to interpret its results. This study evaluates the consequences of these differences on the classification and management of patients. The results of all 537 AVS procedures performed between January 2001 and July 2010 in our institution were interpreted with the 4 sets of diagnostic criteria used in experienced institutions where AVS is performed without cosyntropin (Brisbane, Padua, Paris, and Turin) and with the criteria proposed by a recent consensus statement. AVS procedures were classified as unsuccessful, lateralized, or not lateralized according to each set of criteria. Almost 5× more AVS procedures were classified as unsuccessful with the strictest criteria than with the least strict criteria (18% versus 4%, respectively). Similarly, over 2× more AVS procedures were classified as lateralized with the least stringent criteria than with the most stringent criteria (60% versus 26%, respectively). Multiple samples were available from ≥1 side for 155 AVS procedures; these procedures were classified differently by ≥2 right-left sample pairs in 12% to 20% of cases. Thus, the different sets of criteria used to interpret AVS in experienced institutions translate into heterogeneous classifications, and hence management decisions, for patients with primary aldosteronism. Defining the most appropriate procedures and diagnostic criteria is needed for AVS to achieve optimal performance and fully justify its status as a gold standard.
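To make the abstract's point concrete, the sketch below classifies one AVS result under two hypothetical criteria sets; the selectivity-index (SI) and lateralization-index (LI) thresholds are invented stand-ins for "least strict" and "most strict" institutional criteria, not the values used in Brisbane, Padua, Paris, or Turin.

    # Sketch: the same AVS data classified under two assumed criteria sets.
    def classify_avs(si_left, si_right, li, si_min, li_min):
        """SI below threshold on either side -> unsuccessful; else use LI."""
        if si_left < si_min or si_right < si_min:
            return "unsuccessful"
        return "lateralized" if li >= li_min else "not lateralized"

    sample = dict(si_left=3.0, si_right=2.2, li=3.5)   # illustrative indices
    print("lenient:", classify_avs(**sample, si_min=1.1, li_min=2.0))
    print("strict: ", classify_avs(**sample, si_min=3.0, li_min=4.0))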
Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform
NASA Astrophysics Data System (ADS)
Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.
2017-03-01
The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need to develop existing ones persist. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has increased considerably in recent years, mostly due to its flexibility in accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on capturing data with the measuring instrument from a calibrated gauge object fixed successively in various positions, so that most of the instrument's measuring volume is covered, which results in time-consuming, tedious, and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.
Recent advances in integrated multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.
1992-01-01
A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels), which is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
Everett, Gregory E; Joe Olmi, D; Edwards, Ron P; Tingstrom, Daniel H; Sterling-Turner, Heather E; Christ, Theodore J
2007-07-01
The present study evaluates the effectiveness of two time-out (TO) procedures in reducing escape-maintained noncompliance of 4 children. Noncompliant behavioral function was established via a functional assessment (FA), including indirect and direct descriptive procedures and brief confirmatory experimental analyses. Following FA, parents were taught to consequate noncompliance with two different TO procedures, one without and one with escape extinction following TO release. Although results indicate TO without escape extinction is effective in increasing compliance above baseline levels, more optimal levels of compliance were obtained for all 4 children when escape extinction was added to the TO procedures already in place. Results indicate efficacy of TO with escape extinction when applied to escape-maintained noncompliance and are discussed as an initial example of the successful application of TO to behaviors maintained by negative reinforcement.
An artificial system for selecting the optimal surgical team.
Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco
2015-01-01
We introduce an intelligent system to optimize a team composition based on the team's historical outcomes, and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of the team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model allows the probability of all possible team compositions to be determined even if there is no recorded procedure for some compositions. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved with procedures of unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
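A minimal sketch of the model described above, assuming independent per-member failure probabilities so that inclusion-exclusion applies directly; the candidate probabilities are invented for illustration.

    # Sketch: team unfavorable-outcome probability via inclusion-exclusion,
    # and selection of the composition that minimizes it.
    from itertools import combinations
    from math import prod

    def team_unfavorable_probability(p):
        """Inclusion-exclusion over events 'member i is involved in a failure'."""
        total = 0.0
        for k in range(1, len(p) + 1):
            sign = (-1) ** (k + 1)
            total += sign * sum(prod(subset) for subset in combinations(p, k))
        return total   # equals 1 - prod(1 - p_i) under independence

    def best_team(candidates, size):
        """Pick the composition with the lowest unfavorable-outcome probability."""
        return min(combinations(candidates.items(), size),
                   key=lambda team: team_unfavorable_probability([p for _, p in team]))

    candidates = {"A": 0.10, "B": 0.05, "C": 0.20, "D": 0.08}   # illustrative values
    print(best_team(candidates, size=3))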
Schwerin, Susan C; Hutchinson, Elizabeth B; Radomski, Kryslaine L; Ngalula, Kapinga P; Pierpaoli, Carlo M; Juliano, Sharon L
2017-06-15
Although rodent TBI studies provide valuable information regarding the effects of injury and recovery, an animal model with neuroanatomical characteristics closer to humans may provide a more meaningful basis for clinical translation. The ferret has a high white/gray matter ratio, gyrencephalic neocortex, and ventral hippocampal location. Furthermore, ferrets are amenable to behavioral training, have a body size compatible with pre-clinical MRI, and are cost-effective. We optimized the surgical procedure for controlled cortical impact (CCI) using 9 adult male ferrets. We used subject-specific brain/skull morphometric data from anatomical MRIs to overcome across-subject variability in lesion placement. We also reflected the temporalis muscle, closed the craniotomy, and used antibiotics. We then gathered MRI, behavioral, and immunohistochemical data from 6 additional animals using the optimized surgical protocol: 1 control, 3 mildly injured, and 1 severely injured animal, each surviving one week, and 1 moderately injured animal surviving sixteen weeks. The optimized surgical protocol resulted in consistent injury placement. Astrocytic reactivity increased with injury severity, showing progressively greater numbers of astrocytes within the white matter. The density and morphological changes of microglia increased with injury severity and time after injury. Motor and cognitive impairments scaled with injury severity. The optimized surgical methods differ from those used in the rodent and are integral to success using a ferret model. We optimized ferret CCI surgery for consistent injury placement. The ferret is an excellent animal model for investigating the pathophysiological and behavioral changes associated with TBI. Published by Elsevier B.V.
Meng, Miao; Kiani, Mehdi
2017-02-01
Ultrasound has recently been proposed as an alternative modality for efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. This paper presents the theory and design methodology of ultrasonic WPT links that involve mm-sized receivers (Rx). For a given load (R_L) and powering distance (d), the optimal geometries of the transmitter (Tx) and Rx ultrasonic transducers, including their diameter and thickness, as well as the optimal operation frequency (f_c), are found through a recursive design procedure that maximizes the power transmission efficiency (PTE). First, a range of realistic f_c values is found based on the Rx thickness constraint. For a chosen f_c within the range, the diameter and thickness of the Rx transducer are then swept together to maximize PTE. Then, the diameter and thickness of the Tx transducer are optimized to maximize PTE. Finally, this procedure is repeated for different f_c values to find the optimal f_c and its corresponding transducer geometries that maximize PTE. A design example of an ultrasonic link has been presented and optimized for WPT to a 1 mm³ implant, including a disk-shaped piezoelectric transducer on a silicon die. In simulations, a PTE of 2.11% at an f_c of 1.8 MHz was achieved for an R_L of 2.5 [Formula: see text] at [Formula: see text]. In order to validate our simulations, an ultrasonic link was optimized for a 1 mm³ piezoelectric transducer mounted on a printed circuit board (PCB), which led to simulated and measured PTEs of 0.65% and 0.66% at an f_c of 1.1 MHz for an R_L of 2.5 [Formula: see text] at [Formula: see text], respectively.
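The recursive sweep can be sketched as nested maximizations over a surrogate PTE function. The pte() stand-in and all grids below are placeholders; a real design would call an acoustic link simulation instead.

```python
# Nested-sweep sketch: for each candidate f_c, sweep Rx geometry, then Tx
# geometry, and keep the best overall combination.
import itertools

def pte(fc, rx_diam, rx_thick, tx_diam, tx_thick):
    """Hypothetical smooth surrogate for power transmission efficiency."""
    return 1.0 / (1 + (fc - 1.4)**2 + (rx_diam - 1.0)**2 +
                  (rx_thick - 0.7)**2 + (tx_diam - 6.0)**2 + (tx_thick - 1.2)**2)

freqs = [1.0, 1.2, 1.4, 1.6, 1.8]                       # candidate f_c (MHz)
rx_grid = list(itertools.product([0.8, 1.0, 1.2], [0.5, 0.7, 0.9]))
tx_grid = list(itertools.product([4.0, 6.0, 8.0], [1.0, 1.2, 1.4]))

best = None
for fc in freqs:
    # Step 1: sweep Rx diameter and thickness together.
    rd, rt = max(rx_grid, key=lambda g: pte(fc, *g, 6.0, 1.2))
    # Step 2: with Rx fixed, sweep Tx geometry.
    td, tt = max(tx_grid, key=lambda g: pte(fc, rd, rt, *g))
    cand = (pte(fc, rd, rt, td, tt), fc, rd, rt, td, tt)
    best = max(best, cand) if best else cand
print("best PTE %.3f at f_c=%.1f MHz" % best[:2])
```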
Optimal design application on the advanced aeroelastic rotor blade
NASA Technical Reports Server (NTRS)
Wei, F. S.; Jones, R.
1985-01-01
The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases, obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program, are represented analytically as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. The method can also be used to ascertain, without additional effort, the effect of a particular cost function composed of several objective functions with different weighting factors for various mission requirements.
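A minimal sketch of the regression idea, with an invented one-variable design problem: simulation samples are fitted with quadratic surrogates, after which different weighted cost functions can be re-optimized without further simulation runs. All names and numbers are placeholders.

```python
# Fit polynomial response surfaces to simulation samples, then minimize a
# weighted-sum composite cost over the surrogates for several weightings.
import numpy as np

x = np.linspace(0.0, 1.0, 11)            # one design variable (e.g. ballast location)
vibration = 1.0 + (x - 0.3)**2 + np.random.normal(0, 0.01, x.size)  # sampled objective 1
power     = 2.0 + (x - 0.7)**2 + np.random.normal(0, 0.01, x.size)  # sampled objective 2

vib_fit = np.polyfit(x, vibration, 2)    # regression surrogates
pow_fit = np.polyfit(x, power, 2)

def composite(xq, w_vib, w_pow):
    return w_vib * np.polyval(vib_fit, xq) + w_pow * np.polyval(pow_fit, xq)

xs = np.linspace(0, 1, 501)
for w in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:   # different mission weightings
    xbest = xs[np.argmin(composite(xs, *w))]
    print(f"weights {w}: optimum at x = {xbest:.2f}")
```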
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
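A toy sketch may help make the patent language concrete. The two-compartment difference equations and the square-wave pressure profile below are invented for illustration; the patent's hemodynamic model and optimal control algorithm are far richer.

```python
# Toy circulation driven by external chest pressure, with a grid search
# over a square-wave profile (amplitude, duty cycle) maximizing flow.
import numpy as np

def blood_flow(amplitude, duty, steps=600, dt=0.01):
    p_chest, p_art, flow = 0.0, 0.0, 0.0
    for k in range(steps):
        phase = (k * dt) % 1.0                       # 1 s compression cycle
        p_ext = amplitude if phase < duty else 0.0   # applied chest pressure
        p_chest += dt * (p_ext - 2.0 * p_chest)      # invented difference equations
        p_art   += dt * (p_chest - 0.5 * p_art)
        flow    += dt * max(p_art, 0.0)              # accumulated forward flow
    return flow

grid = [(a, d) for a in np.arange(20, 60, 5) for d in np.arange(0.2, 0.8, 0.1)]
amp, duty = max(grid, key=lambda g: blood_flow(*g))
print(f"best profile: amplitude={amp} duty={duty:.1f}")
```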
Engineering calculations for communications systems planning
NASA Technical Reports Server (NTRS)
Levis, C. A.; Martin, C. H.; Wang, C. W.; Gonsalvez, D.
1982-01-01
The single-entry interference problem is treated for frequency sharing between the broadcasting satellite and intersatellite services near 23 GHz. It is recommended that very long intersatellite hops (more than 120° of longitude difference) be relegated to the unshared portion of the band. When this is done, it is found that suitable orbit assignments can be determined easily with the aid of a set of universal curves. An attempt to develop synthesis procedures for optimally assigning frequencies and orbital slots for the broadcasting satellite service in Region 2 was initiated. Several discrete programming and continuous optimization techniques are discussed.
Structural tailoring of advanced turboprops
NASA Technical Reports Server (NTRS)
Brown, K. W.; Hopkins, Dale A.
1988-01-01
The Structural Tailoring of Advanced Turboprops (STAT) computer program was developed to perform numerical optimization on highly swept propfan blades. The optimization procedure seeks to minimize an objective function defined either as (1) the direct operating cost of a full-scale blade or (2) the aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analysis system includes an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution forced-response life prediction capability. STAT includes all relevant propfan design constraints.
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Morel, M. R.; Chamis, C. C.
1991-01-01
A methodology is developed to tailor fabrication and material parameters of metal-matrix laminates for maximum loading capacity under thermomechanical loads. The stresses during the thermomechanical response are minimized subject to failure constraints and bounds on the laminate properties. The thermomechanical response of the laminate is simulated using nonlinear composite mechanics. Evaluations of the method on a graphite/copper symmetric cross-ply laminate were performed. The cross-ply laminate required different optimum fabrication procedures than a unidirectional composite. Also, the consideration of the thermomechanical cycle had a significant effect on the predicted optimal process.
Shape design of internal cooling passages within a turbine blade
NASA Astrophysics Data System (ADS)
Nowak, Grzegorz; Nowak, Iwona
2012-04-01
The article concerns the optimization of the shape and location of non-circular passages cooling the blade of a gas turbine. To model the shape, four Bezier curves forming a closed profile of the passage were used. In order to match the shape of the passage to the blade profile, a technique was put forward to copy and scale fragments of the blade profile and build the outline of the passage from them. For the cooling passages so defined, optimization calculations were carried out to find their optimal shape and location in terms of the assumed objectives. The task was solved as a multi-objective problem with the use of the Pareto method, for a cooling system composed of four and five passages. The tool employed for the optimization was an evolutionary algorithm. The article presents the impact of the population on the task convergence and discusses the impact of different optimization objectives on the Pareto-optimal solutions obtained. Because the individual objectives were found during the calculations to affect the position of the solution front differently, a two-step optimization procedure was introduced. Also, comparative optimization calculations for a scalar objective function were carried out and set against the non-dominated solutions obtained with the Pareto approach. The optimization process resulted in a configuration of the cooling system that allows a significant reduction in the temperature of the blade and its thermal stress.
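The closed-profile construction can be illustrated with four cubic Bezier segments sharing endpoints; the control points below are arbitrary placeholders rather than the article's parameterization.

```python
# Build a closed passage outline from four cubic Bezier segments whose
# endpoints coincide, sampled for downstream meshing/analysis.
import numpy as np

def bezier(p0, p1, p2, p3, n=50):
    t = np.linspace(0, 1, n)[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1 +
            3 * (1 - t) * t**2 * p2 + t**3 * p3)

corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
profile = []
for i in range(4):                        # one cubic segment per side
    a, b = corners[i], corners[(i + 1) % 4]
    c1 = a + (b - a) / 3 + 0.1            # inner control points (placeholders)
    c2 = a + 2 * (b - a) / 3 - 0.1
    profile.append(bezier(a, c1, c2, b))
profile = np.vstack(profile)              # closed outline as an (N, 2) array
print(profile.shape)
```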
A derived heuristics based multi-objective optimization procedure for micro-grid scheduling
NASA Astrophysics Data System (ADS)
Li, Xin; Deb, Kalyanmoy; Fang, Yanjun
2017-06-01
With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generators' rated power, several other practicalities, such as limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling problem is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs, and the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.
Trautz, Florian; Dreßler, Jan; Stassart, Ruth; Müller, Wolf; Ondruschka, Benjamin
2018-01-03
Immunohistochemistry (IHC) has become an integral part of forensic histopathology over the last decades. However, the underlying methods for IHC vary greatly depending on the institution, creating a lack of comparability. The aim of this study was to assess the optimal approach for different technical aspects of IHC, in order to improve and standardize this procedure. Therefore, qualitative results from manual and automatic IHC staining of brain samples were compared, as well as potential differences in the suitability of common IHC glass slides. Further, possibilities of image digitalization and related issues were investigated. In our study, automatic staining showed more consistent staining results compared to manual staining procedures. Digitalization and digital post-processing considerably facilitated direct analysis and reproducibility assessment. No differences in suitability for IHC brain research were found between the commercially available microscope glass slides tested, but a certain rate of tissue loss should be expected during the staining process.
Communication: A difference density picture for the self-consistent field ansatz.
Parrish, Robert M; Liu, Fang; Martínez, Todd J
2016-04-07
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.
The Need for Integrated Approaches in Metabolic Engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.
This review highlights state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. We emphasize that a combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the "system" that is being manipulated: transcriptome, translatome, proteome, or reactome. By bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.
A new surgical and technical approach in zygomatic implantology
GRECCHI, F.; BIANCHI, A.E.; SIERVO, S.; GRECCHI, E.; LAURITANO, D.
2017-01-01
SUMMARY. Purpose: Different surgical approaches for zygomatic implantology using newly designed implants are reported. Material and methods: The surgical technique is described and two cases are reported. The zygomatic fixture has a complete extrasinus path in order to preserve the sinus membrane and to avoid any post-surgical sinus sequelae. Results: The surgical procedure allows an optimal position of the implant and consequently an ideal emergence of the fixture on the alveolar crest. Conclusion: The surgical procedures and the zygomatic implant design remarkably reduce the serious post-operative sequelae associated with the intrasinus path of conventional zygomatic fixtures. PMID:29876045
Kubová, Jana; Matús, Peter; Bujdos, Marek; Hagarová, Ingrid; Medved', Ján
2008-05-30
The prediction of soil metal phytoavailability using chemical extractions is a conventional approach routinely used in soil testing. The adequacy of such soil tests for this purpose is commonly assessed through a comparison of extraction results with metal contents in relevant plants. In this work, the fractions of selected risk metals (Al, As, Cd, Cu, Fe, Mn, Ni, Pb, Zn) that can be taken up by various plants were obtained by the optimized BCR (Community Bureau of Reference) three-step sequential extraction procedure (SEP) and by a single 0.5 mol L⁻¹ HCl extraction. These procedures were validated using five soil and sediment reference materials (SRM 2710, SRM 2711, CRM 483, CRM 701, SRM RTH 912) and applied to significantly different acidified soils for the fractionation of the studied metals. New indicative values of Al, Cd, Cu, Fe, Mn, P, Pb and Zn fractional concentrations for these reference materials were obtained by the dilute HCl single extraction. The influence of various soil genesis, content of essential elements (Ca, Mg, K, P) and different anthropogenic sources of acidification on the extraction yields of individual risk metal fractions was investigated. The concentrations of the studied elements were determined by atomic spectrometry methods (flame, graphite furnace and hydride generation atomic absorption spectrometry and inductively coupled plasma optical emission spectrometry). It can be concluded that the extraction yields from the first (acid-extractable) BCR SEP step and soil-plant transfer coefficients can be applied to the prediction of the qualitative mobility of selected risk metals in different soil systems.
Recommended CENWAVE Settings for NUV COS ACQ/PEAKXD Procedure
NASA Astrophysics Data System (ADS)
Indriolo, Nick; Plesha, Rachel; Penton, Steven V.
2017-05-01
Spectroscopic target acquisitions with COS begin with the ACQ/PEAKXD procedure, which centers the external target in the science aperture in the cross-dispersion direction. During this procedure the external target is observed through the Primary Science Aperture (PSA) or Bright Object Aperture (BOA) and the Pt-Ne hollow cathode lamp is flashed on to produce an emission line spectrum in the Wavelength Calibration Aperture (WCA). The separation between the centroids of the WCA and PSA (or BOA) spectra is measured and compared to the known separation between the WCA and the center of the PSA (or BOA). In this way, the slew required to move the target to the center of the PSA (BOA) in the cross-dispersion direction is determined. This procedure requires an accurate measurement of the center of the WCA spectrum in the cross-dispersion direction. Each CENWAVE setting has a different distribution of emission lines from the Pt-Ne lamp on the NUV detector. Due to effects such as lamp aging and optics select mechanism (OSM) drift, the flux in the WCA spectrum for a given CENWAVE can change with time, and it is possible that some settings do not provide enough flux to reliably measure the center of the WCA spectrum. In this ISR we use all available NUV WCA data from 2010 Jan 01 through 2016 Oct 07 to determine which CENWAVE settings are optimal for the ACQ/PEAKXD procedure. These optimal settings are recommended in the Cycle 25 COS Instrument Handbook.
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively.
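A hedged sketch of the fitting step: model a spectrum as Gaussian peaks on a linear background and minimize the quadratic difference to the measurement. The peak energies and two-element composition are illustrative assumptions, not POEMA's analytical function.

```python
# Least-squares parameter optimization against a synthetic spectrum.
import numpy as np
from scipy.optimize import least_squares

energy = np.linspace(0.5, 10.0, 500)                     # keV
def model(params, e):
    c1, c2, b0, b1 = params                              # concentrations + background
    peak1 = c1 * np.exp(-0.5 * ((e - 1.74) / 0.06)**2)   # e.g. a Si K-alpha-like line
    peak2 = c2 * np.exp(-0.5 * ((e - 6.40) / 0.08)**2)   # e.g. an Fe K-alpha-like line
    return peak1 + peak2 + b0 + b1 * e

true = (0.7, 0.3, 0.02, -0.001)
measured = model(true, energy) + np.random.normal(0, 0.005, energy.size)

fit = least_squares(lambda p: model(p, energy) - measured,
                    x0=(0.5, 0.5, 0.0, 0.0))
print("estimated concentrations:", fit.x[:2].round(3))
```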
NASA Astrophysics Data System (ADS)
Arroyo, Orlando; Gutiérrez, Sergio
2017-07-01
Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement and computationally efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building demonstrates that the method provides benefits to a wide range of RCF structures, supporting its broader applicability.
Von Benda-Beckmann, Alexander M; Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; Miller, Patrick J O; Tyack, Peter L; Ainslie, Michael A
2014-02-01
Ramp-up or soft-start procedures (i.e., gradual increase in the source level) are used to mitigate the effect of sonar sound on marine mammals, although no one to date has tested whether ramp-up procedures are effective at reducing this effect. We investigated the effectiveness of ramp-up procedures in reducing the area within which changes in hearing thresholds can occur. We modeled the level of sound killer whales (Orcinus orca) were exposed to from a generic sonar operation preceded by different ramp-up schemes. In our model, ramp-up procedures reduced the risk of killer whales receiving sounds of sufficient intensity to affect their hearing. The effectiveness of the ramp-up procedure depended strongly on the assumed response threshold and differed with ramp-up duration, although extending the ramp-up beyond 5 min did not add much to its predicted mitigating effect. The main factors that limited the effectiveness of ramp-up in a typical antisubmarine warfare scenario were a high source level, a rapidly moving sonar source, and long silences between consecutive sonar transmissions. Our exposure modeling approach can be used to evaluate and optimize mitigation procedures. © 2013 Society for Conservation Biology.
Optimal configuration of microstructure in ferroelectric materials by stochastic optimization
NASA Astrophysics Data System (ADS)
Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.
2010-07-01
An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) whose crystallographic axes are imperfectly aligned. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables, known as the solution space, which dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution that maximizes the piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by its orientations at each iteration, is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single-crystalline phase compares well with the experimental data. Apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data in single crystals, we proceed to apply the methodology to polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3, is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to be nearly three times that of the single crystal. Our optimization model provides designs for materials with enhanced piezoelectric performance, which should stimulate further studies involving materials possessing higher spontaneous polarization.
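The stochastic search over texture parameters might be sketched as follows, with a toy orientation average standing in for the mathematical homogenization; the response formula and all values are invented for illustration.

```python
# Random (stochastic) search over the mean and width of a Gaussian c-axis
# orientation distribution, maximizing a toy homogenized d33.
import numpy as np
rng = np.random.default_rng(1)

def effective_d33(mu_deg, sigma_deg, n=2000):
    """Average a (fake) single-crystal response over sampled c-axis tilts."""
    tilts = rng.normal(mu_deg, sigma_deg, n)
    t = np.radians(np.clip(tilts, 0.0, 90.0))
    return np.mean(np.cos(t) * np.sin(t)**2)   # toy orientation response

best = (-np.inf, None)
for _ in range(500):                           # stochastic global search
    mu, sigma = rng.uniform(0, 90), rng.uniform(1, 30)
    val = effective_d33(mu, sigma)
    if val > best[0]:
        best = (val, (mu, sigma))
print("optimal texture (mu, sigma) ~", np.round(best[1], 1))
```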
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
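The following simplified sequential allocation illustrates the flavor of such procedures (it is not the paper's exact OCBA-m rule): extra samples go to designs whose estimated means sit near the top-m boundary relative to their noise, since those determine correct subset selection.

```python
# Sequentially spend a simulation budget across noisy designs, then
# report the estimated top-m subset.
import numpy as np
rng = np.random.default_rng(0)

true_means = np.array([1.0, 1.2, 1.5, 1.6, 2.0, 2.4])   # unknown in practice
m, n0, budget = 3, 10, 600

samples = [list(rng.normal(mu, 1.0, n0)) for mu in true_means]
spent = n0 * len(true_means)
while spent < budget:
    means = np.array([np.mean(s) for s in samples])
    stds  = np.array([np.std(s, ddof=1) for s in samples])
    counts = np.array([len(s) for s in samples])
    boundary = np.sort(means)[-m]                # m-th best estimated mean
    # Criticality: high standard error and small distance to the boundary.
    crit = (stds / np.sqrt(counts)) / (np.abs(means - boundary) + 1e-6)
    i = int(np.argmax(crit))
    samples[i].append(rng.normal(true_means[i], 1.0))
    spent += 1

means = np.array([np.mean(s) for s in samples])
print("selected top-m designs:", np.argsort(means)[-m:])
```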
De Kesel, Pieter M M; Lambert, Willy E; Stove, Christophe P
2015-11-01
Caffeine is the probe drug of choice to assess the phenotype of the drug metabolizing enzyme CYP1A2. Typically, molar concentration ratios of paraxanthine, caffeine's major metabolite, to its precursor are determined in plasma following administration of a caffeine test dose. The aim of this study was to develop and validate an LC-MS/MS method for the determination of caffeine and paraxanthine in hair. The different steps of a hair extraction procedure were thoroughly optimized. Following a three-step decontamination procedure, caffeine and paraxanthine were extracted from 20 mg of ground hair using a solution of protease type VIII in Tris buffer (pH 7.5). Resulting hair extracts were cleaned up on Strata-X™ SPE cartridges. All samples were analyzed on a Waters Acquity UPLC® system coupled to an AB SCIEX API 4000™ triple quadrupole mass spectrometer. The final method was fully validated based on international guidelines. Linear calibration lines for caffeine and paraxanthine ranged from 20 to 500 pg/mg. Precision (%RSD) and accuracy (%bias) were below 12% and 7%, respectively. The isotopically labeled internal standards compensated for the ion suppression observed for both compounds. Relative matrix effects were below 15%RSD. The recovery of the sample preparation procedure was high (>85%) and reproducible. Caffeine and paraxanthine were stable in hair for at least 644 days. The effect of the hair decontamination procedure was evaluated as well. Finally, the applicability of the developed procedure was demonstrated by determining caffeine and paraxanthine concentrations in hair samples of ten healthy volunteers. The optimized and validated method for determination of caffeine and paraxanthine in hair proved to be reliable and may serve to evaluate the potential of hair analysis for CYP1A2 phenotyping. Copyright © 2015 Elsevier B.V. All rights reserved.
Ensuring Effective Prevention of Iodine Deficiency Disorders.
Völzke, Henry; Caron, Philippe; Dahl, Lisbeth; de Castro, João J; Erlund, Iris; Gaberšček, Simona; Gunnarsdottir, Ingibjörg; Hubalewska-Dydejczyk, Alicja; Ittermann, Till; Ivanova, Ludmila; Karanfilski, Borislav; Khattak, Rehman M; Kusić, Zvonko; Laurberg, Peter; Lazarus, John H; Markou, Kostas B; Moreno-Reyes, Rodrigo; Nagy, Endre V; Peeters, Robin P; Pīrāgs, Valdis; Podoba, Ján; Rayman, Margaret P; Rochau, Ursula; Siebert, Uwe; Smyth, Peter P; Thuesen, Betina H; Troen, Aron; Vila, Lluís; Vitti, Paolo; Zamrazil, Vaclav; Zimmermann, Michael B
2016-02-01
Programs initiated to prevent iodine deficiency disorders (IDD) may not remain effective due to changes in government policies, commercial factors, and human behavior that may affect the efficacy of IDD prevention programs in unpredictable directions. Monitoring and outcome studies are needed to optimize the effectiveness of IDD prevention. Although the need for monitoring is compelling, the current reality in Europe is less than optimal. Regular and systematic monitoring surveys have only been established in a few countries, and comparability across the studies is hampered by the lack of centralized standardization procedures. In addition, data on outcomes and the cost of achieving them are needed in order to provide evidence of the beneficial effects of IDD prevention in countries with mild iodine deficiency. Monitoring studies can be optimized by including centralized standardization procedures that improve the comparison between studies. No study of iodine consumption can replace the direct measurement of health outcomes and the evaluation of the costs and benefits of the program. It is particularly important that health economic evaluation should be conducted in mildly iodine-deficient areas and that it should include populations from regions with different environmental, ethnic, and cultural backgrounds.
An algorithm for the optimal collection of wet waste.
Laureri, Federica; Minciardi, Riccardo; Robba, Michela
2016-02-01
This work develops an approach for planning wet waste (food waste and other) collection at a metropolitan scale. Some specific modeling features distinguish this waste collection problem from others. For instance, there may be significant differences in the values of the parameters (such as weight and volume) characterizing the various collection points. As with classical waste collection planning, wet waste collection involves difficult combinatorial problems, for which determining an optimal solution may require very large computational effort on instances of noticeable dimensionality. For this reason, in this work, a heuristic procedure for the optimal planning of wet waste collection is developed and applied to problem instances drawn from a real case study. The performances that can be obtained by applying such a procedure are evaluated by a comparison with those obtainable via a general-purpose mathematical programming software package, as well as those obtained by applying very simple decision rules commonly used in practice. The considered case study consists of an area corresponding to the historical center of the Municipality of Genoa. Copyright © 2015 Elsevier Ltd. All rights reserved.
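A minimal constructive heuristic of the kind used for such planning, here a nearest-neighbor route builder with per-trip weight and volume limits over invented collection points (the paper's heuristic and data differ):

```python
# Greedy trip construction: repeatedly visit the nearest feasible point
# until the vehicle's weight or volume capacity would be exceeded.
import math

points = {  # collection point: (x, y, weight kg, volume m^3)
    "P1": (1, 2, 120, 0.9), "P2": (4, 1, 200, 1.4),
    "P3": (3, 5, 90, 0.6),  "P4": (6, 4, 150, 1.1), "P5": (2, 7, 60, 0.5),
}
MAX_W, MAX_V = 400, 2.5
depot = (0, 0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

remaining, trips = set(points), []
while remaining:
    pos, w, v, route = depot, 0.0, 0.0, []
    while True:
        feas = [p for p in remaining
                if w + points[p][2] <= MAX_W and v + points[p][3] <= MAX_V]
        if not feas:
            break
        nxt = min(feas, key=lambda p: dist(pos, points[p][:2]))
        route.append(nxt)
        remaining.discard(nxt)
        w, v, pos = w + points[nxt][2], v + points[nxt][3], points[nxt][:2]
    trips.append(route)
print("planned trips:", trips)
```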
Additive manufacturing of reflective optics: evaluating finishing methods
NASA Astrophysics Data System (ADS)
Leuteritz, G.; Lachmayer, R.
2018-02-01
Individually shaped light distributions are becoming more and more important in lighting technology, and thus the importance of additively manufactured reflectors increases significantly. The vast field of applications, ranging from automotive lighting to medical imaging, underscores this point. However, the surfaces of additively manufactured reflectors suffer from insufficient optical properties even when manufactured using optimized process parameters for the Selective Laser Melting (SLM) process. Therefore, post-process treatments of reflectors are necessary in order to further enhance their optical quality. This work concentrates on the effectiveness of post-process procedures for reflective optics. Based on already optimized aluminum reflectors manufactured with an SLM machine, the parts are machined differently after the SLM process. Selected finishing methods such as laser polishing, sputtering and sand blasting are applied and their effects quantified and compared. The post-process procedures are investigated for their impact on surface roughness and reflectance as well as geometrical precision. For each finishing method a demonstrator is created and compared to a fully milled sample and to the other demonstrators. Ultimately, guidelines are developed to identify the optimal treatment of additively manufactured reflectors regarding their optical and geometrical properties. Simulations of the light distributions will be validated with the developed demonstrators.
Identifying the bad guy in a lineup using confidence judgments under deadline pressure.
Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen
2012-10-01
Eyewitness-identification tests often culminate in witnesses not picking the culprit or identifying innocent suspects. We tested a radical alternative to the traditional lineup procedure used in such tests. Rather than making a positive identification, witnesses made confidence judgments under a short deadline about whether each lineup member was the culprit. We compared this deadline procedure with the traditional sequential-lineup procedure in three experiments with retention intervals ranging from 5 min to 1 week. A classification algorithm that identified confidence criteria that optimally discriminated accurate from inaccurate decisions revealed that decision accuracy was 24% to 66% higher under the deadline procedure than under the traditional procedure. Confidence profiles across lineup stimuli were more informative than were identification decisions about the likelihood that an individual witness recognized the culprit or correctly recognized that the culprit was not present. Large differences between the maximum and the next-highest confidence value signaled very high accuracy. Future support for this procedure across varied conditions would highlight a viable alternative to the problematic lineup procedures that have traditionally been used by law enforcement.
Coordinated and uncoordinated optimization of networks
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-06-01
In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w) ∝ w^(-α) are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.
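One plausible reading of the cost-communication balance, sketched with a greedy local search rather than the paper's global optimization: starting from a connected ring, repeatedly add the edge that most lowers E = alpha * wire length + average shortest path length. Node positions, alpha, and the search strategy are assumptions.

```python
# Greedy edge addition on a small spatial network using networkx.
import math, random
import networkx as nx

random.seed(3)
n, alpha = 12, 0.05
pos = {i: (random.random(), random.random()) for i in range(n)}

def energy(G):
    wire = sum(math.dist(pos[u], pos[v]) for u, v in G.edges)
    return alpha * wire + nx.average_shortest_path_length(G)

G = nx.cycle_graph(n)                     # connected starting topology
improved = True
while improved:
    improved = False
    candidates = [(u, v) for u in range(n) for v in range(u + 1, n)
                  if not G.has_edge(u, v)]
    if not candidates:
        break
    gains = []
    for u, v in candidates:
        G.add_edge(u, v)
        gains.append((energy(G), (u, v)))
        G.remove_edge(u, v)
    base = energy(G)
    best_e, best_edge = min(gains)
    if best_e < base:                     # accept only improving edges
        G.add_edge(*best_edge)
        improved = True
print(G.number_of_edges(), "edges; energy =", round(energy(G), 3))
```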
Torres Padrón, M E; Sosa Ferrera, Z; Santana Rodríguez, J J
2006-09-01
A solid-phase microextraction (SPME) procedure using two commercial fibers coupled with high-performance liquid chromatography (HPLC) is presented for the extraction and determination of organochlorine pesticides in water samples. We have evaluated the extraction efficiency of this kind of compound using two different fibers: 60-μm polydimethylsiloxane-divinylbenzene (PDMS-DVB) and Carbowax/TPR-100 (CW/TPR). Parameters involved in the extraction and desorption procedures (e.g. extraction time, ionic strength, extraction temperature, desorption and soaking time) were studied and optimized to achieve the maximum efficiency. Results indicate that both PDMS-DVB and CW/TPR fibers are suitable for the extraction of this type of compound, and a simple calibration curve method based on simple aqueous standards can be used. All the correlation coefficients were better than 0.9950, and the RSDs ranged from 7% to 13% for the 60-μm PDMS-DVB fiber and from 3% to 10% for the CW/TPR fiber. Optimized procedures were applied to the determination of a mixture of six organochlorine pesticides in environmental liquid samples (sea, sewage and ground waters), employing HPLC with a UV-diode array detector.
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problems that are emerging in various business and economics areas. To evaluate and compare these procedures, the modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures, for two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, these experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement for a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendant states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may be different from that of other alignment procedures. As with all alignment methods, results are dependent on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable, since the homologies derived from the implied alignment depend on its natal cladogram and on any variance between DO and IA + Search due to the heuristic approach. The utility of this procedure in heuristic cladogram searches using DO and the improvement of heuristic cladogram cost calculations are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Arce, M M; Sanllorente, S; Ortiz, M C; Sarabia, L A
2018-01-26
Legal limits for phenol and bisphenol-A (BPA) in toys are 15 and 0.1 mg L⁻¹, respectively. The latest studies show that in Europe the content of BPA, which reaches our bodies through different contact routes, in no case exceeds legal limits. However, the effects caused by continued intake of this analyte over a long time, and other possible processes that could increase its migration, are still under consideration by the responsible health agencies. A multiresponse optimization for determination by means of HPLC-FLD is proposed in this work, using a D-optimal design that simultaneously optimizes two experimental factors (temperature and flow) at three levels and one (mobile phase composition) at four levels. The D-optimal design allows one to reduce the experimental effort from 36 to 11 experiments while guaranteeing the quality of the estimates. The fitted model is validated and, after the responses are estimated over the whole experimental domain, the experimental conditions that maximize peak areas and minimize retention times for both analytes are chosen by means of a Pareto front. In this way, the sensitivity and the time of the analysis have been improved by this optimization. The decision limit and capability of detection obtained were 33.9 and 66.1 μg L⁻¹ for phenol and 25.6 and 50.0 μg L⁻¹ for BPA, respectively, when the probabilities of false negative and false positive were fixed at 0.05. The procedure has been successfully applied to determine phenol and BPA in different samples (toys, clinical serum bags and artificial tears). The simulants 0.07 M HCl and water were used for the analysis of toys. The quantity of phenol found in serum bags and in artificial tears ranged from 15 to 600 μg L⁻¹. No BPA was found in the objects analysed. In addition, this work incorporates computer programmes which implement the procedure used (COOrdinates parallel plot and Pareto FROnt, COO-FRO) such that it can be used in any other chromatographic optimization. Copyright © 2017 Elsevier B.V. All rights reserved.
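The Pareto-front step can be sketched as a non-domination filter over candidate conditions; the response values below are synthetic placeholders for the fitted model's predictions.

```python
# Keep conditions that are non-dominated for (maximize peak area,
# minimize retention time).
import numpy as np
rng = np.random.default_rng(7)

area = rng.uniform(0, 1, 200)            # predicted peak area (scaled)
rt   = rng.uniform(2, 10, 200)           # predicted retention time (min)

def pareto_mask(area, rt):
    keep = np.ones(area.size, dtype=bool)
    for i in range(area.size):
        dominated = (area >= area[i]) & (rt <= rt[i]) & \
                    ((area > area[i]) | (rt < rt[i]))
        if dominated.any():              # some other point is strictly better
            keep[i] = False
    return keep

front = pareto_mask(area, rt)
print(f"{front.sum()} non-dominated conditions out of {area.size}")
```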
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. This approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of twelve symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim-shape error distribution is improved; the maximum trim-shape error of 0.9844 inch for the starting configuration is reduced to 0.00367 inch by the end of the third optimization run.
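The first step reduces to an ordinary least-squares fit of basis-shape coefficients to the target trim shape. Sizes and data below are invented for illustration; a real case would use the sixteen basis shapes sampled at the structural grid points.

```python
# Least-squares surface fit: solve for basis-shape coefficients that best
# reproduce the target shape, giving the starting design variables.
import numpy as np
rng = np.random.default_rng(2)

n_pts, n_basis = 400, 16
basis = rng.normal(size=(n_pts, n_basis))   # columns = basis shapes at grid points
target = basis @ rng.normal(size=n_basis) + rng.normal(0, 1e-3, n_pts)

coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
jig = basis @ coeffs                        # fitted design jig-shape
print("max trim-shape error:", np.abs(jig - target).max())
```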
Toropov, Andrey A; Toropova, Alla P; Benfenati, Emilio; Salmona, Mario
2018-06-01
The aim of the present work is an attempt to define a computable measure of similarity between different endpoints. The similarity of structural alerts of different biochemical endpoints can be used to solve tasks of medicinal chemistry. Optimal descriptors are a tool to build up models for different endpoints. The optimal descriptor is calculated from the simplified molecular input-line entry system (SMILES). Any SMILES can be represented by a group of elements (single symbols or pairs of symbols). Each element of a SMILES can be assigned a so-called correlation weight, i.e., a coefficient used to calculate the descriptor. Numerical data on the correlation weights are calculated by the Monte Carlo method, i.e., by an optimization procedure that maximizes the correlation coefficient between the optimal descriptor and the endpoint for the training set. Statistically stable correlation weights observed in several runs of the optimization can be examined as structural alerts, which are promoters of the increase or the decrease of a biochemical activity of a substance. From the correlation weights of several optimization runs, one can extract a list of promoters of increase and a list of promoters of decrease for an endpoint. The study of the similarity and dissimilarity of these lists has been carried out for the following pairs of endpoints: (i) mutagenicity and anticancer activity; (ii) mutagenicity and blood-brain barrier; and (iii) blood-brain barrier and anticancer activity. The computational experiment confirms that similarity and dissimilarity for pairs of endpoints can be measured.
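A toy version of the Monte Carlo weight optimization, restricted to single-character SMILES elements and invented training data for brevity (the published method also uses two-character elements and a more elaborate move scheme):

```python
# Tune per-element correlation weights by random moves, accepting those
# that raise the descriptor-endpoint correlation on the training set.
import numpy as np
rng = np.random.default_rng(4)

smiles = ["CCO", "CCN", "CCCC", "c1ccccc1", "CC(=O)O"]
endpoint = np.array([0.2, 0.4, 0.8, 1.5, 0.6])   # fake training values

elements = sorted({ch for s in smiles for ch in s})
weights = {e: 0.0 for e in elements}

def descriptor(s):
    return sum(weights[ch] for ch in s)

def corr():
    d = np.array([descriptor(s) for s in smiles])
    return 0.0 if d.std() == 0 else np.corrcoef(d, endpoint)[0, 1]

best = corr()
for _ in range(5000):                      # Monte Carlo optimization
    e = elements[rng.integers(len(elements))]
    old = weights[e]
    weights[e] = old + rng.normal(0, 0.1)
    c = corr()
    if c > best:
        best = c
    else:
        weights[e] = old                   # reject the move
print("training correlation:", round(best, 3))
```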
NASA Technical Reports Server (NTRS)
White, Warren B.; Tai, Chang-Kou; Holland, William R.
1990-01-01
The optimal interpolation method of Lorenc (1981) was used to conduct continuous assimilation of altimetric sea level differences from the simulated Geosat exact repeat mission (ERM) into a three-layer quasi-geostrophic eddy-resolving numerical ocean box model that simulates the statistics of mesoscale eddy activity in the western North Pacific. Assimilation was conducted continuously as the Geosat tracks appeared in simulated real time/space, with each track repeating every 17 days, but occurring at different times and locations within the 17-day period, as would have occurred in a realistic nowcast situation. This interpolation method was also used to conduct the assimilation of referenced altimetric sea level differences into the same model, performing the referencing of altimetric sea level differences by using the simulated sea level. The results of this dynamical interpolation procedure are compared with those of a statistical (i.e., optimum) interpolation procedure.
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meet the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models are useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed in order to validate the simplified models. Thermal-structural analyses and optimizations are executed using the Ansys FEM code.
Let's get technical! Gaming and technology for weight control and health promotion in children.
Baranowski, Tom; Frankel, Leslie
2012-02-01
Most children, including lower socioeconomic status and ethnic minority children, play video games, use computers, and have cell phones, and growing numbers have smart phones and electronic tablets. They are comfortable with, even prefer, electronic media. Many expect to be entertained and have a low tolerance for didactic methods. Thus, health promotion with children needs to incorporate more interactive media. Interactive media for weight control and health promotion among children can be broadly classified into web-based educational/therapeutic programs, tailored motivational messaging systems, data monitoring and feedback systems, active video games, and diverse forms of interactive multimedia experiences involving games. This article describes the primary characteristics of these different technological methods; presents the strengths and weaknesses of each in meeting the needs of children of different ages; emphasizes that we are in the earliest stages of knowing how best to design these systems, including selecting the optimal requisite behavioral change theories; and identifies high-priority research issues. Gaming and technology offer many exciting, innovative opportunities for engaging children and promoting diet and physical activity changes that can contribute to obesity prevention and weight loss maintenance. Research needs to clarify optimal procedures for effectively promoting change with each change procedure.
Optimized positioning of autonomous surgical lamps
NASA Astrophysics Data System (ADS)
Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel
2017-03-01
We consider the problem of automatically finding optimal positions of surgical lamps throughout a whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data is available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
Closed loop problems in biomechanics. Part II--an optimization approach.
Vaughan, C L; Hay, J G; Andrews, J G
1982-01-01
A closed loop problem in biomechanics may be defined as a problem in which there are one or more closed loops formed by the human body in contact with itself or with an external system. Under certain conditions the problem is indeterminate--the unknown forces and torques outnumber the equations. Force transducing devices, which would help solve this problem, have serious drawbacks, and existing methods are inaccurate and non-general. The purposes of the present paper are (1) to develop a general procedure for solving closed loop problems; (2) to illustrate the application of the procedure; and (3) to examine the validity of the procedure. A mathematical optimization approach is applied to the solution of three different closed loop problems--walking up stairs, vertical jumping and cartwheeling. The following conclusions are drawn: (1) the method described is reasonably successful for predicting horizontal and vertical reaction forces at the distal segments although problems exist for predicting the points of application of these forces; (2) the results provide some support for the notion that the human neuromuscular mechanism attempts to minimize the joint torques and thus, to a certain degree, the amount of muscular effort; (3) in the validation procedure it is desirable to have a force device for each of the distal segments in contact with a fixed external system; and (4) the method is sufficiently general to be applied to all classes of closed loop problems.
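The indeterminacy described above (unknown forces and torques outnumbering the equations) is often resolved by selecting the solution with the least total squared torque. A minimal numpy sketch of that idea, with a stand-in equilibrium system rather than the paper's biomechanical model:

```python
# Minimal sketch: when the equilibrium equations A x = b have more unknown
# joint torques/forces (x) than equations, pick the solution minimizing the
# sum of squared torques. The matrix below is a stand-in, not the paper's
# biomechanical model.
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 1.0],    # 2 equilibrium equations,
              [0.0, 1.0, 1.0, -1.0]])  # 4 unknown torques -> indeterminate
b = np.array([10.0, 4.0])

# lstsq returns the minimum-norm solution for underdetermined systems,
# i.e. the torque set with the least total "muscular effort" proxy.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, A @ x)  # A @ x reproduces b
```

For an underdetermined full-rank system, lstsq returns exactly this minimum-norm solution, which matches the paper's notion of minimizing overall muscular effort.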
[Surveillance cultures after high-level disinfection of flexible endoscopes in a general hospital].
Robles, Christian; Turín, Christie; Villar, Alicia; Huerta-Mercado, Jorge; Samalvides, Frine
2014-04-01
Flexible endoscopes are instruments with a complex structure which are used in invasive gastroenterological procedures; therefore, high-level disinfection (HLD) is recommended as an appropriate reprocessing method. However, most hospitals do not perform quality control to assess the compliance and results of the disinfection process. The aims were to evaluate the effectiveness of flexible endoscope decontamination after high-level disinfection by surveillance cultures and to assess compliance with the reprocessing guidelines. Descriptive study conducted in January 2013 in the Gastroenterological Unit of a tertiary hospital. Thirty endoscopic procedures were randomly selected. Compliance with guidelines was evaluated and surveillance cultures for common bacteria were performed after the disinfection process. On the observational assessment, compliance with the guidelines was as follows: pre-cleaning 9 (30%), cleaning 5 (16.7%), rinse 3 (10%), first drying 30 (100%), disinfection 30 (100%), final rinse 0 (0%) and final drying 30 (100%), demonstrating that only 3 of the 7 stages of the disinfection process were optimally performed. In the microbiological evaluation, 2 (6.7%) of the 30 procedures had a positive culture obtained from the surface of the endoscope. Furthermore, 1 (4.2%) of the 24 biopsy forceps gave a positive culture. The organisms isolated were different Pseudomonas species. High-level disinfection procedures were not optimally performed, with positive cultures of Pseudomonas species found in 6.7% of procedures.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
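A minimal sketch of the simulation loop behind one point of a bias detection curve, as described above: inject a constant bias into consecutive results, run a truncated moving average with fixed control limits, and count how many results are needed before an alarm. All numeric settings (analyte mean, truncation limits, window, control limits) are illustrative assumptions.

```python
# Minimal sketch of simulating one point of a bias detection curve.
import numpy as np

rng = np.random.default_rng(0)

def results_until_detection(bias, n=2000, mean=140.0, sd=2.0,
                            trunc=(130.0, 150.0), window=20, limits=(139.0, 141.0)):
    x = rng.normal(mean + bias, sd, size=n)
    x = x[(x >= trunc[0]) & (x <= trunc[1])]        # truncation of outliers
    for i in range(window, len(x)):
        ma = x[i - window:i].mean()                  # simple MA calculation
        if not (limits[0] <= ma <= limits[1]):
            return i                                 # results needed for alarm
    return None                                      # bias not detected

# median number of results needed, over repeated simulations, for one bias
runs = [results_until_detection(1.5) for _ in range(200)]
runs = [r for r in runs if r is not None]
print(np.median(runs))
```

Repeating this over a grid of bias values and plotting the medians yields the bias detection curve; the extremes over runs give the validation chart.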
Teaching and assessing procedural skills using simulation: metrics and methodology.
Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C
2008-11-01
Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.
Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.
1999-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
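A minimal sketch of one response-surface iteration of the kind described above: sample designs, evaluate them with the expensive solver (here a cheap analytic stand-in for the time-accurate Navier-Stokes evaluations), fit a neural-network surrogate, and minimize the surrogate to propose the next design. The sklearn model and the stand-in objective are assumptions, not the authors' implementation.

```python
# Minimal sketch of one neural-net response-surface iteration.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
expensive_eval = lambda x: (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.1) ** 2

X = rng.uniform(-1.0, 1.0, size=(60, 2))          # sampled designs
y = np.array([expensive_eval(x) for x in X])      # "CFD" results (stand-in)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), method="Nelder-Mead")
print("proposed design:", res.x)   # to be verified with the true solver next
```

In the paper's strategy, low-order polynomial fits over part of the partitioned design space play the same surrogate role at lower training cost.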
Design of controlled elastic and inelastic structures
NASA Astrophysics Data System (ADS)
Reinhorn, A. M.; Lavan, O.; Cimellaro, G. P.
2009-12-01
One of the founders of structural control theory and its application in civil engineering, Professor Emeritus Tsu T. Soong, envisioned the development of the integral design of structures protected by active control devices. Most of his disciples and colleagues continuously attempted to develop procedures to achieve such integral control. In his recent papers published jointly with some of the authors of this paper, Professor Soong developed design procedures for the entire structure using a design-redesign procedure applied to elastic systems. Such a procedure was developed as an extension of other work by his disciples. This paper summarizes some recent techniques that use traditional active control algorithms to derive the most suitable (optimal, stable) control force, which could then be implemented with a combination of active, passive and semi-active devices through a simple match or more sophisticated optimal procedures. Alternative designs can address the behavior of structures using Lyapunov stability criteria. This paper shows a unified procedure which can be applied to both elastic and inelastic structures. Although the implementation does not always preserve the optimal criteria, it is shown that the solutions are effective and practical for design of supplemental damping, stiffness enhancement or softening, and strengthening or weakening.
NASA Astrophysics Data System (ADS)
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach iterates until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with much faster solving times than the conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
NASA Astrophysics Data System (ADS)
Li, Hechao
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms like the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a large number of tomographic projections and a segmentation step before microstructural quantification can be carried out, which can be quite time consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information, in the form of spatial correlation functions, from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40) is presented. Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to integrate the complementary data sets effectively, indicating its high efficiency in using limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape also provides information about the complexity and convergence behavior of the reconstruction for a given microstructure and projection number.
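The correlation functions mentioned above are cheap to compute once a phase (or probability) map is in hand. A minimal sketch of a two-point correlation function S2(r) via FFT autocorrelation, using a random binary image as a stand-in for a segmented microstructure and assuming periodic boundaries:

```python
# Minimal sketch: two-point correlation function S2(r) of a binary phase map
# via FFT autocorrelation, radially averaged. Periodic boundaries assumed.
import numpy as np

rng = np.random.default_rng(2)
img = (rng.random((128, 128)) < 0.4).astype(float)   # stand-in microstructure

F = np.fft.fft2(img)
auto = np.fft.ifft2(F * np.conj(F)).real / img.size  # P(both ends in phase)

# radially average to get S2 as a function of separation r
yy, xx = np.indices(img.shape)
r = np.hypot(np.minimum(yy, img.shape[0] - yy),
             np.minimum(xx, img.shape[1] - xx)).astype(int)
S2 = np.bincount(r.ravel(), weights=auto.ravel()) / np.bincount(r.ravel())
print(S2[:5])  # S2[0] equals the phase volume fraction (~0.4)
```

The same statistics computed from the probability map feed the effective-medium property predictions and serve as targets in the stochastic reconstruction.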
Rats behave optimally in a sunk cost task.
Yáñez, Nataly; Bouzas, Arturo; Orduña, Vladimir
2017-07-01
The sunk cost effect has been defined as the tendency to persist in an alternative once an investment of effort, time or money has been made, even if better options are available. The goal of this study was to investigate in rats the relationship between sunk cost and the information about when it is optimal to leave the situation, which was studied by Navarro and Fantino (2005) with pigeons. They developed a procedure in which different fixed-ratio schedules were randomly presented, with the richest one being more likely; subjects could persist in the trial until they obtained the reinforcer, or start a new trial in which the most favorable option would be available with a high probability. The information about the expected number of responses needed to obtain the reinforcer was manipulated through the presence or absence of discriminative stimuli; also, they used different combinations of schedule values and their probabilities of presentation to generate escape-optimal and persistence-optimal conditions. They found optimal behavior in the conditions with discriminative stimuli present, but non-optimal behavior when they were absent. Unlike their results, we found optimal behavior in both conditions regardless of the absence of discriminative stimuli; rats seemed to use the number of responses already emitted in the trial as a criterion to escape. In contrast to pigeons, rats behaved optimally and the sunk cost effect was not observed.
Developing a robotic pancreas program: the Dutch experience.
Nota, Carolijn L; Zwart, Maurice J; Fong, Yuman; Hagendoorn, Jeroen; Hogg, Melissa E; Koerkamp, Bas Groot; Besselink, Marc G; Molenaar, I Quintus
2017-01-01
Robot-assisted surgery has been developed to overcome limitations of conventional laparoscopy, aiming to further optimize minimally invasive surgery. Although robotics has already been widely adopted in urology, gynecology, and several gastro-intestinal procedures, like colorectal surgery, pancreatic surgery lags behind. Due to the complex nature of the procedure, surgeons have probably been hesitant to apply minimally invasive techniques in pancreatic surgery. Nevertheless, in the past few years pancreatic surgery has been catching up, and an increasing number of procedures are being performed laparoscopically and robotically, despite it being a highly complex procedure with high morbidity and mortality rates. Given the complex and extensive nature of the procedure, the start of a robotic pancreas program should be properly prepared and should comply with several conditions within high-volume centers. Robotic training plays a significant role in the preparation. In this review we discuss the different aspects of preparation when working towards the start of a robotic pancreas program, against the background of our nationwide experience in the Netherlands.
Meischl, Florian; Kirchler, Christian Günter; Jäger, Michael Andreas; Huck, Christian Wolfgang; Rainer, Matthias
2018-02-01
We present a novel method for the quantitative determination of the clean-up efficiency to provide a calculated parameter for peak purity through iterative fitting in conjunction with design of experiments. Rosemary extracts were used and analyzed before and after solid-phase extraction using a self-fabricated mixed-mode sorbent based on poly(N-vinylimidazole/ethylene glycol dimethacrylate). Optimization was performed by variation of washing steps using a full three-level factorial design and response surface methodology. Separation efficiency of rosmarinic acid from interfering compounds was calculated using an iterative fit of Gaussian-like signals, and quantifications were performed by the separate integration of the two interfering peak areas. Results and recoveries were analyzed using Design-Expert® software and revealed significant differences between the washing steps. Optimized parameters were considered and used for all further experiments. Furthermore, the solid-phase extraction procedure was tested and compared with commercially available sorbents. In contrast to generic protocols of the manufacturers, the optimized procedure showed excellent recoveries and clean-up rates for the polymer with ion exchange properties. Finally, rosemary extracts from different manufacturing areas and application types were studied to verify the developed method for its applicability. The cleaned-up extracts were analyzed by liquid chromatography with tandem mass spectrometry for detailed compound evaluation to exclude any interference from coeluting molecules.
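A minimal sketch of the iterative-fitting idea described above: model a chromatogram segment as two overlapping Gaussian-like signals, fit them, and integrate the two areas separately. The synthetic data are an illustrative assumption, not the rosemary-extract measurements.

```python
# Minimal sketch: fit two overlapping Gaussian peaks and integrate each
# area separately, the basis of the peak-purity parameter described above.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(3)
y = two_gauss(t, 1.0, 4.0, 0.5, 0.4, 5.2, 0.6) + rng.normal(0.0, 0.01, t.size)

p, _ = curve_fit(two_gauss, t, y, p0=[1.0, 4.1, 0.4, 0.5, 5.0, 0.5])
a1, m1, s1, a2, m2, s2 = p
area = lambda a, s: a * s * np.sqrt(2.0 * np.pi)   # analytic Gaussian area
print("analyte area:", area(a1, s1), "interference area:", area(a2, s2))
```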
NASA Astrophysics Data System (ADS)
Jha, Ratneshwar
Multidisciplinary design optimization (MDO) procedures have been developed for smart composite wings and turbomachinery blades. The analysis and optimization methods used are computationally efficient and sufficiently rigorous. Therefore, the developed MDO procedures are well suited for actual design applications. The optimization procedure for the conceptual design of composite aircraft wings with surface bonded piezoelectric actuators involves the coupling of structural mechanics, aeroelasticity, aerodynamics and controls. The load carrying member of the wing is represented as a single-celled composite box beam. Each wall of the box beam is analyzed as a composite laminate using a refined higher-order displacement field to account for the variations in transverse shear stresses through the thickness. Therefore, the model is applicable for the analysis of composite wings of arbitrary thickness. Detailed structural modeling issues associated with piezoelectric actuation of composite structures are considered. The governing equations of motion are solved using the finite element method to analyze practical wing geometries. Three-dimensional aerodynamic computations are performed using a panel code based on the constant-pressure lifting surface method to obtain steady and unsteady forces. The Laplace domain method of aeroelastic analysis produces root-loci of the system which gives an insight into the physical phenomena leading to flutter/divergence and can be efficiently integrated within an optimization procedure. The significance of the refined higher-order displacement field on the aeroelastic stability of composite wings has been established. The effect of composite ply orientations on flutter and divergence speeds has been studied. The Kreisselmeier-Steinhauser (K-S) function approach is used to efficiently integrate the objective functions and constraints into a single envelope function. The resulting unconstrained optimization problem is solved using the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimization problem is formulated with the objective of simultaneously minimizing wing weight and maximizing its aerodynamic efficiency. Design variables include composite ply orientations, ply thicknesses, wing sweep, piezoelectric actuator thickness and actuator voltage. Constraints are placed on the flutter/divergence dynamic pressure, wing root stresses and the maximum electric field applied to the actuators. Numerical results are presented showing significant improvements, after optimization, compared to reference designs. The multidisciplinary optimization procedure for the design of turbomachinery blades integrates aerodynamic and heat transfer design objective criteria along with various mechanical and geometric constraints on the blade geometry. The airfoil shape is represented by Bezier-Bernstein polynomials, which results in a relatively small number of design variables for the optimization. Thin shear layer approximation of the Navier-Stokes equations is used for the viscous flow calculations. Grid generation is accomplished by solving Poisson equations. The maximum and average blade temperatures are obtained through a finite element analysis. Total pressure and exit kinetic energy losses are minimized, with constraints on blade temperatures and geometry. The constrained multiobjective optimization problem is solved using the K-S function approach. The results for the numerical example show significant improvements after optimization.
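The K-S envelope named above has a standard closed form, KS(x) = (1/rho) * ln(sum_i exp(rho * g_i(x))), a smooth upper bound on max_i g_i(x) that folds objectives and constraints into a single unconstrained function. A minimal sketch with toy g_i (assumptions, not the wing or blade models):

```python
# Minimal sketch of the Kreisselmeier-Steinhauser (K-S) envelope: several
# objectives/constraints g_i(x) folded into one smooth function that is
# minimized with BFGS. The toy g_i below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rho = 50.0  # draw-down factor: larger rho hugs max(g_i) more tightly

def ks(x):
    g = np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,       # objective 1 (assumed)
                  (x[1] - 0.5) ** 2 + 0.2 * x[0] ** 2, # objective 2 (assumed)
                  x[0] + x[1] - 2.0])                  # constraint g <= 0 (assumed)
    return logsumexp(rho * g) / rho                    # smooth max envelope

res = minimize(ks, x0=np.zeros(2), method="BFGS")
print(res.x, ks(res.x))
```

logsumexp keeps the envelope numerically stable for large rho; the smoothness is what makes a quasi-Newton method like BFGS applicable.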
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, A; Seabrook, G; Brakken, A
Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thicknesses of 2–8 inches, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2–8 inches of PMMA. However, when they are imaged with the abdomen phantom plus 2 inches of PMMA, needles shorter than 9.3 mm cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display on a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles available on the diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.
NASA Astrophysics Data System (ADS)
Zhmud, V. A.; Reva, I. L.; Dimitrov, L. V.
2017-01-01
The design of robust feedback systems by means of numerical optimization is usually accomplished by modeling several systems simultaneously. In each such system the regulators are identical, but the object models differ, covering all edge values of the possible object model parameters. Even so, not all possible sets of model parameters are taken into account; hence the resulting regulator may not be robust, i.e., it may fail to provide system stability in cases that were not tested during the optimization procedure. The paper proposes an alternative method: all parameters are varied continuously according to a harmonic law, with mutually incommensurate frequencies for the individual parameters. This provides full coverage of the parameter space.
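A minimal sketch of the proposed harmonic variation: each uncertain object parameter is swept between its bounds at mutually incommensurate frequencies, so the test trajectory gradually covers the interior of the parameter box rather than only its corners. Bounds and frequencies are illustrative assumptions.

```python
# Minimal sketch: harmonic sweep of uncertain plant parameters at mutually
# incommensurate frequencies, densely covering the parameter box over time.
import numpy as np

lo = np.array([0.8, 1.5, 0.05])            # lower parameter bounds (assumed)
hi = np.array([1.2, 2.5, 0.15])            # upper parameter bounds (assumed)
w = np.array([1.0, np.sqrt(2.0), np.pi])   # incommensurate frequencies

def params(t):
    # each parameter oscillates harmonically between lo and hi
    return lo + 0.5 * (hi - lo) * (1.0 + np.sin(w * t))

for t in np.linspace(0.0, 20.0, 5):
    print(np.round(params(t), 3))          # plant models tested at time t
```

Because the frequency ratios are irrational, the trajectory never repeats and, over a long run, comes arbitrarily close to every point of the box.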
Multi-level Hierarchical Poly Tree computer architectures
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug
1990-01-01
Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.
Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele
2015-07-07
In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers.
Ngaile, J E; Msaki, P K; Kazema, R R; Schreiner, L J
2017-04-25
The aim of this study was to investigate the nature and causes of the radiation dose imparted to patients undergoing barium-based X-ray fluoroscopy procedures in Tanzania and to compare these doses to those reported in the literature from other regions worldwide. The air kerma area product (KAP) for patients undergoing barium investigations of the gastrointestinal tract was obtained from four consultant hospitals. The KAP was determined using a flat transparent transmission ionization chamber. Mean values of KAP for barium swallow (BS), barium meal (BM) and barium enema (BE) were 2.79, 2.62 and 15.04 Gy cm2, respectively. The mean values of KAP per hospital for the BS, BM and BE procedures varied by factors of up to 7.3, 1.6 and 2.0, respectively. The overall difference between individual patient doses across the four consultant hospitals differed by factors of up to 53, 29.5 and 12 for the BS, BM and BE procedures, respectively. The majority of the mean KAP values were lower than the reported values for Ghana, Greece, Spain and the UK, while slightly higher than those reported for India. The observed wide variation of KAP values for the same fluoroscopy procedure within and among the hospitals was largely attributed to the dynamic nature of the procedures, the patient characteristics, the skills and experience of personnel, and the different examination protocols employed among hospitals. The observed large variations of procedural protocols and patient doses within and across the hospitals call for standardized examination protocols and optimized barium-based fluoroscopy procedures.
A New Model for a Carpool Matching Service.
Xia, Jizhe; Curtin, Kevin M; Li, Weihong; Zhao, Yonglong
2015-01-01
Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.
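For the special case of pairing one rider with one driver, the matching can be illustrated with the Hungarian algorithm; the paper's model and heuristics are more general, and the detour costs below are assumptions.

```python
# Minimal sketch of a carpool matching step for the special case of pairing
# one rider with one driver at minimum total detour.
import numpy as np
from scipy.optimize import linear_sum_assignment

# detour[i, j]: extra distance (km) if driver i picks up rider j (assumed)
detour = np.array([[2.0, 5.0, 1.5],
                   [4.0, 1.0, 3.5],
                   [3.0, 2.5, 2.0]])

drivers, riders = linear_sum_assignment(detour)   # optimal 1-to-1 matching
for d, r in zip(drivers, riders):
    print(f"driver {d} <- rider {r} (detour {detour[d, r]} km)")
print("total detour:", detour[drivers, riders].sum())
```

Teams larger than two and route choice make the full problem combinatorial, which is why the paper also tests heuristic approaches.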
Albanese, Mark A; Farrell, Philip; Dottl, Susan L
2005-01-01
Using Medical College Admission Test-grade point average (MCAT-GPA) scores as a threshold has the potential to address issues raised in recent Supreme Court cases, but it introduces complicated methodological issues for medical school admissions. The aim was to assess various statistical indexes to determine optimally discriminating thresholds for MCAT-GPA scores. Entering classes from 1992 through 1998 (N = 752) were used to develop guidelines for cut scores that optimize discrimination between students who pass and do not pass the United States Medical Licensing Examination (USMLE) Step 1 on the first attempt. Risk differences, odds ratios, sensitivity, and specificity discriminated best for setting thresholds. Compensatory and noncompensatory procedures both accounted for 54% of Step 1 failures, but demanded different performance requirements (noncompensatory: MCAT-biological sciences = 8, physical sciences = 7, verbal reasoning = 7, sum of scores = 22; compensatory: MCAT total = 24). Rational and defensible intellectual achievement thresholds that are likely to comply with recent Supreme Court decisions can be set from MCAT scores and GPAs.
NASA Astrophysics Data System (ADS)
Silva, Guilherme Augusto Lopes da; Nicoletti, Rodrigo
2017-06-01
This work focuses on the placement of natural frequencies of beams in desired frequency regions. More specifically, we investigate the effects of combining mode shapes to shape a beam to change its natural frequencies, both numerically and experimentally. First, we present a parametric analysis of a shaped beam and we analyze the resultant effects for different boundary conditions and mode shapes. Second, we present an optimization procedure to find the optimum shape of the beam for desired natural frequencies. In this case, we adopt the Nelder-Mead simplex search method, which allows a broad search of the optimum shape in the solution domain. Finally, the obtained results are verified experimentally for a clamped-clamped beam in three different optimization runs. Results show that the method is effective in placing natural frequencies at desired values (experimental results lie within 10% of the expected theoretical values). However, the beam must be axially constrained to have the natural frequencies changed.
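A minimal sketch of the frequency-placement idea on a deliberately simplified stand-in model: tune the thickness of a uniform simply supported beam with Nelder-Mead until the first natural frequency hits a target. The paper optimizes a shaped-beam model instead; all numbers here are assumptions.

```python
# Minimal sketch: place a beam's first natural frequency at a target value
# by tuning a design parameter with Nelder-Mead (stand-in analytic model).
import numpy as np
from scipy.optimize import minimize

E, rho = 210e9, 7850.0            # steel properties (assumed)
L, b = 0.5, 0.02                  # length, width in m (assumed)
f_target = 150.0                  # desired first natural frequency, Hz

def f1(h):
    I, A = b * h ** 3 / 12.0, b * h
    w1 = (np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))   # first bending mode
    return w1 / (2.0 * np.pi)

res = minimize(lambda x: (f1(x[0]) - f_target) ** 2, x0=[0.005],
               method="Nelder-Mead")
print("thickness [m]:", res.x[0], " f1 [Hz]:", f1(res.x[0]))
```

Nelder-Mead needs no gradients, which is what makes it convenient when the frequency comes from a shaped-beam finite element model rather than a formula.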
Maze Procedures for Atrial Fibrillation, From History to Practice.
Kik, Charles; Bogers, Ad J J C
2011-10-01
Atrial fibrillation may result in significant symptoms, (systemic) thrombo-embolism, as well as tachycardia-induced cardiomyopathy with cardiac failure, and consequently be associated with significant morbidity and mortality. Nowadays symptomatic atrial fibrillation can be treated with catheter-based ablation, surgical ablation or hybrid approaches. In this setting a fairly large number of surgical approaches and procedures are described and being practised. It should be clear that the Cox-maze procedure resulted from building up evidence and experience in different steps, while some of the present surgical approaches and techniques are based only on technical feasibility with limited experience, rather than on a process of consistent methodology. Some of the issues still under debate are whether the maze procedure can be limited to the left atrium or even to isolation of the pulmonary veins or whether bi-atrial procedures are indicated, whether or not cardiopulmonary bypass is to be applied, and which route of exposure facilitates an optimal result. In addition, maze procedures are not guided by electrophysiological mapping; at least in theory, not all lesions of the maze procedure are necessary in all patients. A history and aspects of current practice in the surgical treatment of atrial fibrillation are presented.
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of shape precision and cable tensions with respect to parameters carrying uncertainty is studied. Based on the sensitivity analysis, an optimal design procedure is proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
A robust active control system for shimmy damping in the presence of free play and uncertainties
NASA Astrophysics Data System (ADS)
Orlando, Calogero; Alaimo, Andrea
2017-02-01
Shimmy vibration is the oscillatory motion of the fork-wheel assembly about the steering axis. It represents one of the major problems of aircraft landing gear because it can lead to excessive wear and discomfort as well as safety concerns. Based on the nonlinear model of the mechanics of a single wheel nose landing gear (NLG), electromechanical actuator and tire elasticity, a robust active controller capable of damping shimmy vibration is designed and investigated in this study. A novel Population Decline Swarm Optimization (PDSO) procedure is introduced and used to select the optimal parameters for the controller. The PDSO procedure is based on a decline demographic model and shows high global search capability with reduced computational costs. The open and closed loop system behavior is analyzed under different case studies of aeronautical interest, and the effects of torsional free play on the nose landing gear response are also studied. Probabilistic uncertainties in the plant parameters are then taken into account to assess the robustness of the active controller using a stochastic approach.
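A minimal sketch of a swarm optimizer with a declining population, the idea named above: standard particle-swarm updates, with the worst particles dropped as generations advance to cut evaluation cost. The constants, decline schedule, and test function are assumptions, not the paper's exact scheme.

```python
# Minimal sketch: particle swarm optimization with a declining population.
import numpy as np

rng = np.random.default_rng(4)
f = lambda X: np.sum((X - 0.7) ** 2, axis=1)        # test objective (assumed)

n, dim, iters = 40, 3, 60
X = rng.uniform(-2, 2, (n, dim)); V = np.zeros_like(X)
P, Pf = X.copy(), f(X)                               # personal bests

for it in range(iters):
    g = P[np.argmin(Pf)]                             # global best
    V = 0.7 * V + 1.5 * rng.random(X.shape) * (P - X) \
                + 1.5 * rng.random(X.shape) * (g - X)
    X = X + V
    fx = f(X)
    better = fx < Pf
    P[better], Pf[better] = X[better], fx[better]
    keep = max(5, int(n * (1.0 - it / iters)))       # population decline
    idx = np.argsort(Pf)[:keep]
    X, V, P, Pf = X[idx], V[idx], P[idx], Pf[idx]

print("best:", P[np.argmin(Pf)])
```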
Pérez-Trujillo, J P; Frías, S; Conde, J E; Rodríguez-Delgado, M A
2002-07-19
A solid-phase microextraction (SPME) procedure using three commercialised fibers (Carbowax-divinylbenzene, Carboxen-polydimethylsiloxane and divinylbenzene-Carboxen-polydimethylsiloxane) is presented for the determination of a selected group of organochlorine compounds in water samples. The extraction performances for these compounds were compared using fibers with two and three coatings. The optimal experimental procedures for the adsorption and desorption of the pesticides were determined. With the divinylbenzene-Carboxen-polydimethylsiloxane fiber, limits of detection below the ng l-1 level were similar to or lower than values reported in the literature for several of these compounds using a polydimethylsiloxane fiber. The advantages of using this fiber, such as no salt addition, are discussed. Finally, the optimised procedures were applied successfully to the determination of these compounds in polluted ground water samples.
Multipurpose silicon photonics signal processor core.
Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José
2017-09-21
Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm.
Network placement optimization for large-scale distributed system
NASA Astrophysics Data System (ADS)
Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng
2018-01-01
The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy and overall cost. Network placement optimization is therefore an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search; an obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
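A minimal sketch of the grid-based encoding idea: each individual is a vector of cell indices on a discretized site, evolved by a plain genetic algorithm. The fitness below (spreading stations apart) is a stand-in for the paper's coverage/uncertainty/cost objective, and all GA settings are assumptions.

```python
# Minimal sketch: grid-based encoding for placement optimization with a
# plain genetic algorithm. Fitness is a stand-in objective.
import numpy as np

rng = np.random.default_rng(5)
GRID, K, POP, GEN = 20, 4, 40, 80          # 20x20 cells, 4 stations (assumed)

def decode(ind):                            # cell index -> (x, y) coordinates
    return np.stack([ind // GRID, ind % GRID], axis=-1).astype(float)

def fitness(ind):                           # maximize minimum pairwise distance
    p = decode(ind)
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    return d[np.triu_indices(K, 1)].min()

pop = rng.integers(0, GRID * GRID, (POP, K))
for _ in range(GEN):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-POP // 2:]]            # truncation selection
    cut = rng.integers(1, K, POP // 2)
    kids = np.array([np.r_[a[:c], b[c:]] for a, b, c in
                     zip(parents, parents[::-1], cut)])   # one-point crossover
    mut = rng.random(kids.shape) < 0.1
    kids[mut] = rng.integers(0, GRID * GRID, mut.sum())   # mutation: new cell
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(i) for i in pop])]
print(decode(best))
```

Encoding placements as integer cell indices keeps crossover and mutation trivially valid, which is the practical appeal of the grid-based approach.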
NASA Astrophysics Data System (ADS)
Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.
2014-03-01
An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model which is based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The results indicated that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems.
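A minimal sketch of the core harmony search loop named above: each variable of a new candidate is drawn from harmony memory (rate HMCR), possibly pitch-adjusted (rate PAR), or sampled anew, and the candidate replaces the worst memory entry when better. The objective is a stand-in for the MODFLOW/MT3DMS misfit between simulated and observed concentrations.

```python
# Minimal sketch of the heuristic harmony search (HS) algorithm.
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sum((x - np.array([3.0, -1.0, 0.5])) ** 2)   # misfit (assumed)

dim, HMS, HMCR, PAR, bw, iters = 3, 10, 0.9, 0.3, 0.2, 2000
lo, hi = -5.0, 5.0
HM = rng.uniform(lo, hi, (HMS, dim))                # harmony memory
cost = np.array([f(h) for h in HM])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < HMCR:                     # memory consideration
            new[j] = HM[rng.integers(HMS), j]
            if rng.random() < PAR:                  # pitch adjustment
                new[j] += bw * rng.uniform(-1.0, 1.0)
        else:                                       # random selection
            new[j] = rng.uniform(lo, hi)
    worst = np.argmax(cost)
    if f(new) < cost[worst]:                        # replace worst harmony
        HM[worst], cost[worst] = new, f(new)

print(HM[np.argmin(cost)])
```

In the paper, each candidate encodes source locations and release histories, and every evaluation of f requires one forward groundwater simulation.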
A linearized theory method of constrained optimization for supersonic cruise wing design
NASA Technical Reports Server (NTRS)
Miller, D. S.; Carlson, H. W.; Middleton, W. D.
1976-01-01
A linearized theory wing design and optimization procedure which allows physical realism and practical considerations to be imposed as constraints on the optimum (least drag due to lift) solution is discussed and examples of application are presented. In addition to the usual constraints on lift and pitching moment, constraints are imposed on wing surface ordinates and wing upper surface pressure levels and gradients. The design procedure also provides the capability of including directly in the optimization process the effects of other aircraft components such as a fuselage, canards, and nacelles.
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1984-01-01
Some objectives of this geodynamic program are: (1) optimal utilization of laser and VLBI observations as reference frames for geodynamics, (2) utilization of range difference observations in geodynamics, and (3) estimation techniques in crustal deformation analysis. The determination of Earth rotation parameters from different space geodetic systems is studied. Also reported on is the utilization of simultaneous laser range differences for the determination of baseline variation. An algorithm for the analysis of regional or local crustal deformation measurements is proposed along with other techniques and testing procedures. Some results of the reference from comparisons in terms of the pole coordinates from different techniques are presented.
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im
2017-02-01
The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
O'Conner-Von, Susan; Turner, Helen N
2013-12-01
The ASPMN strongly recommends that infants who are being circumcised receive optimal pain management. "If a decision for circumcision is made, procedural analgesia should be provided" (AAP, 1999, p. 691). Therefore, it is the position of the ASPMN that optimal pain management must be provided throughout the circumcision process. Furthermore, parents must be prepared for the procedure and educated about infant pain assessment. They must also be informed of pharmacologic and integrative pain management therapies that are appropriate before, during, and after the procedure.
NASA Astrophysics Data System (ADS)
Zeng, Hao; Zhang, Jingrui
2018-04-01
The low-thrust version of fuel-optimal transfers between periodic orbits with different energies in the vicinity of the five libration points is explored in depth in the Circular Restricted Three-Body Problem. An indirect optimization technique incorporating constraint gradients is employed to further improve the computational efficiency and accuracy of the algorithm. The required optimal thrust magnitude and direction can be determined to create the bridging trajectory that connects the invariant manifolds. A hierarchical design strategy that divides the constraint set is proposed to seek the optimal solution when the problem cannot be solved directly. Meanwhile, the solution procedure and the value ranges of the variables used are summarized. To highlight the effectiveness of the transfer scheme, and aiming at different types of libration point orbits, transfer trajectories between sample orbits, including Lyapunov orbits, planar orbits, halo orbits, axial orbits, vertical orbits and butterfly orbits for collinear and triangular libration points, are investigated for various times of flight. Numerical results show that the fuel consumption varies from a few kilograms to tens of kilograms, depending on the locations and types of the mission orbits as well as the corresponding invariant manifold structures, and indicate that low-thrust transfers may be a beneficial option for extended science missions around different libration points.
Suwannasom, Pannipa; Sotomi, Yohei; Ishibashi, Yuki; Cavalcante, Rafael; Albuquerque, Felipe N; Macaya, Carlos; Ormiston, John A; Hill, Jonathan; Lang, Irene M; Egred, Mohaned; Fajadet, Jean; Lesiak, Maciej; Tijssen, Jan G; Wykrzykowska, Joanna J; de Winter, Robbert J; Chevalier, Bernard; Serruys, Patrick W; Onuma, Yoshinobu
2016-06-27
The study sought to investigate the relationship between post-procedural asymmetry, expansion, and eccentricity indices of a metallic everolimus-eluting stent (EES) and a bioresorbable vascular scaffold (BVS), and their respective impact on clinical events at 1-year follow-up. Mechanical properties of a fully bioresorbable scaffold are inherently different from those of a permanent metallic stent. The ABSORB II (A bioresorbable everolimus-eluting scaffold versus a metallic everolimus-eluting stent for ischaemic heart disease caused by de-novo native coronary artery lesions) trial compared the BVS and metallic EES in the treatment of a de novo coronary artery stenosis. Protocol-mandated intravascular ultrasound imaging was performed pre- and post-procedure in 470 patients (162 metallic EES and 308 BVS). Asymmetry index (AI) was calculated per lesion as: (1 - minimum scaffold/stent diameter/maximum scaffold/stent diameter). Expansion index and optimal scaffold/stent expansion followed the definition of the MUSIC (Multicenter Ultrasound Stenting in Coronaries) study. Eccentricity index (EI) was calculated as the ratio of minimum and maximum scaffold/stent diameter per cross section. The incidence of device-oriented composite endpoint (DoCE) was collected. Post-procedure, the metallic EES group was more symmetric and concentric than the BVS group. Only 8.0% of the BVS arm and 20.0% of the metallic EES arm achieved optimal scaffold/stent expansion (p < 0.001). At 1 year, there was no difference in the DoCE between both devices (BVS 5.2% vs. EES 3.1%; p = 0.29). Post-procedural device asymmetry and eccentricity were related to higher event rates, whereas expansion status was not. Subsequent multivariate analysis identified post-procedural AI >0.30 as an independent predictor of DoCE (hazard ratio: 3.43; 95% confidence interval: 1.08 to 10.92; p = 0.037). BVS implantation is more frequently associated with post-procedural asymmetric and eccentric morphology compared to metallic EES. Post-procedural device asymmetry was independently associated with DoCE following percutaneous coronary intervention. However, this approach should be viewed as hypothesis generating due to low event rates. (ABSORB II Randomized Controlled Trial [ABSORB II]; NCT01425281).
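The indices defined above are simple ratios of measured diameters; a tiny sketch with assumed intravascular-ultrasound readings (mm):

```python
# The asymmetry and eccentricity indices from the abstract, computed from
# assumed diameter readings (mm); only the formulas come from the study.
min_d_lesion, max_d_lesion = 2.4, 3.6          # per lesion (assumed)
AI = 1.0 - min_d_lesion / max_d_lesion         # asymmetry index
print(f"AI = {AI:.2f}  ->  {'above' if AI > 0.30 else 'below'} the 0.30 cutoff")

# eccentricity index per cross-section
for d_min, d_max in [(2.8, 3.0), (2.5, 3.4)]:  # assumed readings
    print("EI =", round(d_min / d_max, 2))
```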
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parrish, Robert M.; Liu, Fang; Martínez, Todd J., E-mail: toddjmartinez@gmail.com
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this “difference self-consistent field (dSCF)” picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TERACHEM SCF implementation.
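A toy numerical sketch of the idea, using synthetic integrals and densities rather than a real quantum chemistry setup: the Coulomb build is split into a one-time double-precision contraction over the atomic-superposition density plus a per-iteration single-precision contraction over the small difference density.

```python
import numpy as np

# Toy sketch of the dSCF idea: Fock-type builds act on the difference density
# dD = D - D_atoms, whose small elements tolerate single-precision
# contractions. The ERI tensor and densities are synthetic stand-ins.

n = 20                                         # basis set size (toy)
rng = np.random.default_rng(0)
eri = rng.normal(size=(n, n, n, n))            # synthetic (pq|rs) integrals
D_atoms = np.eye(n) * 0.5                      # superposition-of-atoms density (toy)
D = D_atoms + 0.01 * rng.normal(size=(n, n))   # current SCF density
D = 0.5 * (D + D.T)

# One double-precision build on the atomic density (done once, reused).
J_atoms = np.einsum("pqrs,rs->pq", eri, D_atoms)

# Per-iteration build: contract only the small difference density, in float32.
dD = (D - D_atoms).astype(np.float32)
J_diff = np.einsum("pqrs,rs->pq", eri.astype(np.float32), dD)
J = J_atoms + J_diff.astype(np.float64)

# Compare against an all-double-precision reference build.
J_ref = np.einsum("pqrs,rs->pq", eri, D)
print("max abs error of dSCF-style build:", np.abs(J - J_ref).max())
```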
Gedvilas, Mindaugas; Ratautas, Karolis; Kacar, Elif; Stankevičienė, Ina; Jagminienė, Aldona; Norkus, Eugenijus; Li Pira, Nello; Račiukaitis, Gediminas
2016-01-01
In this work, a novel colour-difference measurement method for the quality evaluation of copper deposited on a polymer is proposed. Laser-induced selective activation (LISA) was performed on the surface of polycarbonate/acrylonitrile butadiene styrene (PC/ABS) polymer using nanosecond laser irradiation. The laser-activated PC/ABS polymer was copper plated using the electroless copper plating (ECP) procedure. The sheet resistance, measured with a four-point probe technique, was found to decrease as a power law of the colour difference between sample images taken after the LISA and ECP procedures. The percolation theory of electrical conductivity in insulator-conductor mixtures was adopted to explain the experimental results. The proposed method was used to determine an optimal set of laser processing parameters for the best plating conditions. PMID:26960432
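The reported power-law relation between sheet resistance and colour difference can be recovered from measurements by a least-squares fit in log-log space; the sketch below uses invented data points, since the paper's measurements are not reproduced here.

```python
import numpy as np

# Hedged sketch of the paper's power-law relation R_s ~ a * (dE)^(-b).
# The data are synthetic; dE would come from comparing sample images
# acquired after the LISA and ECP steps.

dE = np.array([5.0, 8.0, 12.0, 18.0, 25.0, 33.0])   # colour difference (a.u.)
Rs = np.array([40.0, 18.0, 9.5, 4.8, 2.9, 1.9])     # sheet resistance (ohm/sq)

# Linear least squares in log-log space: log Rs = log a - b * log dE.
slope, intercept = np.polyfit(np.log(dE), np.log(Rs), 1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: Rs = {a:.1f} * dE^(-{b:.2f})")
```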
Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces
NASA Technical Reports Server (NTRS)
Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John
2011-01-01
Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated for different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
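For readers unfamiliar with the false discovery rate paradigm referenced here, the standard Benjamini-Hochberg step-up procedure is sketched below; it is background illustration only, not the authors' analysis of the monotone likelihood ratio condition.

```python
import numpy as np

# Standard Benjamini-Hochberg step-up procedure, shown only to make the
# false discovery rate paradigm concrete; it is not the authors' remedy
# for test statistics that violate the monotone likelihood ratio condition.

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling FDR at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)       # BH step-up thresholds i*q/m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest p-values
```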
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated by using the new flexible approximation models for the violated local constraints.
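As an illustration of the kind of flexible metamodel discussed (not BLISS itself), the sketch below fits a Kriging (Gaussian-process) model to samples of a synthetic subsystem response and returns both predictions and uncertainty estimates:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative Kriging approximation of a subsystem response. The
# "subsystem" is a cheap synthetic function, not a BLISS subsystem optimum.

def subsystem_response(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(30, 2))      # sampled subsystem designs
y_train = subsystem_response(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.5])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_test = rng.uniform(-1, 1, size=(5, 2))
pred, std = gp.predict(X_test, return_std=True)  # cheap surrogate for system level
print("max prediction error:", np.abs(pred - subsystem_response(X_test)).max())
print("kriging std (uncertainty):", std.round(3))
```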
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedure is discussed. To test the performance of each step size, we implemented the steepest descent procedure in a C++ program and applied it to an unconstrained optimization test problem in two variables, comparing the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each case of the problem.
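The paper's experiments were run in C++; a compact Python rendering of the same idea, steepest descent with an Armijo backtracking step size on a standard two-variable test function, might look as follows:

```python
import numpy as np

# Steepest descent with Armijo backtracking on the Rosenbrock function,
# a common two-variable unconstrained test problem. Illustrative rendering;
# the paper's implementation is in C++.

def f(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

x = np.array([-1.2, 1.0])
for it in range(5000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    t = 1.0
    while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):   # Armijo condition
        t *= 0.5
    x = x - t * g

print(f"after {it} iterations: x = {x.round(4)}, f = {f(x):.3e}")
```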
Oberacher, Herbert; Pavlic, Marion; Libiseller, Kathrin; Schubert, Birthe; Sulyok, Michael; Schuhmacher, Rainer; Csaszar, Edina; Köfeler, Harald C
2009-04-01
A sophisticated matching algorithm developed for highly efficient identity search within tandem mass spectral libraries is presented. For the optimization of the search procedure, a collection of 410 tandem mass spectra corresponding to 22 compounds was used. The spectra were acquired in three different laboratories on four different instruments. The following types of tandem mass spectrometric instruments were used: quadrupole-quadrupole-time-of-flight (QqTOF), quadrupole-quadrupole-linear ion trap (QqLIT), quadrupole-quadrupole-quadrupole (QqQ), and linear ion trap-Fourier transform ion cyclotron resonance mass spectrometer (LIT-FTICR). The obtained spectra were matched to an established MS/MS-spectral library that contained 3759 MS/MS-spectra corresponding to 402 different reference compounds. All 22 test compounds were part of the library. A dynamic intensity cut-off, the search for neutral losses, and optimization of the formula used to calculate the match probability were shown to significantly enhance the performance of the presented library search approach. With the aid of these features, the average rate of correct assignments was increased to 98%. For statistical evaluation of the match reliability, the set of fragment ion spectra was extended with 300 spectra corresponding to 100 compounds not included in the reference library. Performance was checked with the aid of receiver operating characteristic (ROC) curves. Using the magnitude of the match probability as well as the precursor ion mass as benchmarks to rate the obtained top hit, overall correct classification of a compound as included or not included in the mass spectrometric library was obtained in more than 95% of cases, clearly indicating a high predictive accuracy of the established matching procedure. Copyright © 2009 John Wiley & Sons, Ltd.
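A hedged sketch of the generic ingredients named here, a relative (dynamic) intensity cut-off followed by a cosine-type match score; the paper's optimized match-probability formula and neutral-loss search are not reproduced:

```python
import numpy as np

def clean(spectrum, rel_cutoff=0.01):
    """Drop peaks below rel_cutoff times the base-peak intensity."""
    base = max(spectrum.values())
    return {mz: i for mz, i in spectrum.items() if i >= rel_cutoff * base}

def cosine_score(query, ref, tol=0.3):
    """Pair peaks within an m/z tolerance, then score by sqrt-intensity cosine."""
    used, paired = set(), []
    for mz_q, i_q in query.items():
        m = min((m for m in ref if abs(m - mz_q) <= tol and m not in used),
                key=lambda m: abs(m - mz_q), default=None)
        if m is not None:
            used.add(m)
            paired.append((i_q, ref[m]))
    if not paired:
        return 0.0
    num = sum(np.sqrt(a) * np.sqrt(b) for a, b in paired)
    norm = np.sqrt(sum(query.values())) * np.sqrt(sum(ref.values()))
    return float(num / norm)

query = {91.05: 100.0, 119.08: 35.0, 167.12: 4.0, 203.30: 0.2}   # made-up spectrum
ref   = {91.05: 95.0, 119.09: 40.0, 167.10: 6.0}
print(round(cosine_score(clean(query), clean(ref)), 3))
```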
Large Scale Bacterial Colony Screening of Diversified FRET Biosensors
Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver
2015-01-01
Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878
Artenjak, Andrej; Leonardi, Adrijana; Križaj, Igor; Ambrožič, Aleš; Sodin-Semrl, Snezna; Božič, Borut; Čučnik, Saša
2014-01-01
The limited availability of patient biological material for isolation of β2-glycoprotein I (β2GPI) and high-avidity IgG anti-β2-glycoprotein I antibodies (HAv anti-β2GPI) dictates its full utilization. The aim of our study was to evaluate and improve procedures for isolation of unnicked β2GPI and HAv anti-β2GPI so as to obtain unmodified proteins in higher yield and purity. Isolation of β2GPI from plasma was a stepwise procedure combining nonspecific and specific methods. For isolation of polyclonal HAv anti-β2GPI, affinity chromatographies with immobilized protein G and human β2GPI were used. An unknown protein found during isolation was identified by liquid chromatography electrospray ionization mass spectrometry and the nonredundant National Center for Biotechnology Information database. The average yield of isolated unnicked purified β2GPI increased from 6.56 mg to 9.94 mg. In the optimized isolation procedure, the high molecular weight protein (proteoglycan 4) was successfully separated from β2GPI in the first peaks with size exclusion chromatography. The average efficiency of the isolation procedure for polyclonal HAv anti-β2GPI from different matrixes was 13.8%, as determined by our in-house anti-β2GPI ELISA. We modified the in-house isolation and purification procedures for unnicked β2GPI and HAv anti-β2GPI, improving the purity of the antigen and antibodies as well as increasing the number of tests routinely performed with the in-house ELISA by ~50%. PMID:24741579
Adaptive feature selection using v-shaped binary particle swarm optimization.
Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong
2017-01-01
Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
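A minimal sketch of binary PSO with a V-shaped transfer function is given below; the fitness function is a toy stand-in, since the paper's correlation-information-entropy fitness and classifier feedback are not reproduced here.

```python
import numpy as np

# Minimal binary PSO with a V-shaped transfer function (|tanh| variant).
# The fitness is a toy: it rewards recovering a hidden "good" feature subset.

rng = np.random.default_rng(7)
n_particles, n_features, iters = 20, 15, 50

def fitness(mask):
    target = np.zeros(n_features, dtype=bool)
    target[:5] = True                          # hidden optimal subset (toy)
    return -np.sum(mask == target)             # lower is better

X = rng.integers(0, 2, size=(n_particles, n_features)).astype(bool)
V = rng.normal(0, 0.1, size=(n_particles, n_features))
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    V = (0.7 * V
         + 1.5 * r1 * (pbest.astype(int) - X.astype(int))
         + 1.5 * r2 * (gbest.astype(int) - X.astype(int)))
    T = np.abs(np.tanh(V))                     # V-shaped transfer function
    flip = rng.random(V.shape) < T
    X = np.where(flip, ~X, X)                  # V-shaped rule: flip the bit
    f = np.array([fitness(x) for x in X])
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("selected features:", np.nonzero(gbest)[0], "fitness:", pbest_f.min())
```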
Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de
2016-01-01
Exner (1989) and Weiner (2003) identified 3 types of Rorschach codes that are most likely to contain personally relevant projective material: Distortions, Movement, and Embellishments. We examine how often these types of codes occur in normative data and whether their frequency changes for the 1st, 2nd, 3rd, 4th, or last response to a card. We also examine the impact on these variables of the Rorschach Performance Assessment System's (R-PAS) statistical modeling procedures that convert the distribution of responses (R) from Comprehensive System (CS) administered protocols to match the distribution of R found in protocols obtained using R-optimized administration guidelines. In 2 normative reference databases, the results indicated that about 40% of responses (M = 39.25) have 1 type of code, 15% have 2 types, and 1.5% have all 3 types, with frequencies not changing by response number. In addition, there were no mean differences in the original CS and R-optimized modeled records (M Cohen's d = -0.04 in both databases). When considered alongside findings showing minimal differences between the protocols of people randomly assigned to CS or R-optimized administration, the data suggest R-optimized administration should not alter the extent to which potential projective material is present in a Rorschach protocol.
[Basic research on digital logistic management of hospital].
Cao, Hui
2010-05-01
This paper analyzes the possibilities for digital, information-based management in the equipment department, general services department, supply room and other material-flow departments of different hospitals, with the aim of optimizing information-based asset management procedures. Various analytical methods for medical supplies business models are discussed, providing data to support sound decisions by hospital departments, hospital leadership and the governing authorities.
James S. Han; Theodore Mianowski; Yi-yu Lin
1999-01-01
The efficacy of fiber length measurement techniques such as digitizing, the Kajaani procedure, and NIH Image is compared in order to determine the optimal tool. Kenaf bast fibers, aspen fibers, and red pine fibers were collected from different anatomical parts, and the fiber lengths were compared using various analytical tools. A statistical analysis on the validity of the...
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
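Because the kriging estimation variance depends only on the sample configuration and the covariance model, candidate designs can be ranked without any field data. The sketch below uses simple kriging with an assumed exponential covariance as a stand-in for the paper's universal kriging machinery, comparing a random and a regular 16-point pattern:

```python
import numpy as np

# Design evaluation by kriging standard error: no observations are needed,
# only sample locations and a covariance model. Simple kriging with an
# exponential covariance is a simplified stand-in for universal kriging.

def kriging_std(samples, targets, sill=1.0, corr_range=30.0):
    cov = lambda h: sill * np.exp(-h / corr_range)
    d_ss = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    C = cov(d_ss) + 1e-10 * np.eye(len(samples))       # sample-sample covariance
    d_st = np.linalg.norm(samples[:, None] - targets[None, :], axis=-1)
    c0 = cov(d_st)                                      # sample-target covariance
    var = sill - np.einsum("ij,ij->j", c0, np.linalg.solve(C, c0))
    return np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(3)
grid = np.stack(np.meshgrid(np.linspace(0, 100, 25),
                            np.linspace(0, 100, 25)), -1).reshape(-1, 2)
random_design = rng.uniform(0, 100, size=(16, 2))
regular_design = np.stack(np.meshgrid(np.linspace(12.5, 87.5, 4),
                                      np.linspace(12.5, 87.5, 4)), -1).reshape(-1, 2)

for name, design in [("random", random_design), ("regular", regular_design)]:
    s = kriging_std(design, grid)
    print(f"{name:8s} avg std = {s.mean():.3f}, max std = {s.max():.3f}")
```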
Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A
2016-11-16
A new strategy to approach multiresponse optimization in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables, reducing the experimental effort needed by a factor of 67 while guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) of the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the high and differing effect of the pretreatment on the responses. The percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L⁻¹ for cadaverine or 497 μg L⁻¹ for histamine in solvent, and 0.07 mg kg⁻¹ and 14.81 mg kg⁻¹ in fish (probability of false positive equal to 0.05), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
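The robustness measure described here can be made concrete with a toy sketch: a synthetic quadratic "process metamodel" stands in for the FE-based metamodel, and each candidate design is scored by the mean plus two standard deviations of its response under noise-variable scatter.

```python
import numpy as np

# Conceptual sketch of a robust objective: response scatter caused by a
# noise variable is propagated through a model and mean + 2*sigma is
# minimized. The quadratic process_response is a synthetic stand-in for an
# FE metamodel of, e.g., a V-bending process; all values are invented.

rng = np.random.default_rng(5)

def process_response(design, noise):
    return (design - 2.0) ** 2 + 0.8 * noise * design + noise ** 2

def robust_objective(design, n_mc=2000):
    noise = rng.normal(0.0, 0.3, size=n_mc)    # noise variable, e.g. material scatter
    r = process_response(design, noise)
    return r.mean() + 2.0 * r.std()            # mean + 2 sigma robustness measure

candidates = np.linspace(0.0, 4.0, 81)
scores = [robust_objective(d) for d in candidates]
best = candidates[int(np.argmin(scores))]
print(f"robust optimum at design = {best:.2f} (deterministic optimum: 2.00)")
```

Note how the robust optimum shifts away from the deterministic optimum because larger designs amplify the noise term; this mirrors the trade-off between nominal performance and robustness described in the abstract.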
High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
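A minimal sketch of the surrogate-based loop, with a synthetic lift function standing in for the Navier-Stokes data set and a finite-difference gradient ascent standing in for the gradient-based optimizer:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a neural-net surrogate on precomputed aerodynamic samples, then let
# a simple gradient ascent on the surrogate pick the rigging. The "lift"
# function below is synthetic, standing in for the CFD data set.

def lift(x):                                    # synthetic lift vs (gap, overlap)
    return 4.0 - (x[:, 0] - 0.02) ** 2 * 900 - (x[:, 1] - 0.05) ** 2 * 400

rng = np.random.default_rng(2)
X = rng.uniform([0.0, 0.0], [0.06, 0.10], size=(300, 2))
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, lift(X))

x = np.array([0.005, 0.09])                     # initial rigging guess
for _ in range(200):                            # finite-difference gradient ascent
    g = np.array([(net.predict([x + h]) - net.predict([x - h]))[0] / 2e-4
                  for h in (np.array([1e-4, 0]), np.array([0, 1e-4]))])
    x = np.clip(x + 1e-5 * g, [0.0, 0.0], [0.06, 0.10])

print("optimized rigging (gap, overlap):", x.round(4))
```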
NASA Astrophysics Data System (ADS)
Salmin, Vadim V.
2017-01-01
Low-thrust flight mechanics is a comparatively new chapter of space flight mechanics, encompassing problems of trajectory optimization, motion control laws and the selection of spacecraft design parameters. Tasks associated with accounting for additional factors in mathematical models of spacecraft motion are becoming increasingly important, as are additional restrictions on thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and evaluating their optimality based on analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. Estimation procedures are developed that make it possible to determine how close a found solution is to the optimum and indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for the optimization of low-thrust flights between circular non-coplanar orbits, optimization of the control angle and trajectory of the spacecraft during interorbital flights, and optimization of low-thrust flights between arbitrary elliptical Earth satellite orbits.
Optimization of thermal processing of canned mussels.
Ansorena, M R; Salvadori, V O
2011-10-01
The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires the knowledge of the thermal history at the slowest heating point. In general, this point does not coincide with the geometrical center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for the prediction of the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were experimentally obtained, and prediction equations fitted to consider other can dimensions and sampling interval. This model was coupled with an optimization routine in order to search for different retort temperature profiles to maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its utilization in the industry.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Economic optimization of natural hazard protection - conceptual study of existing approaches
NASA Astrophysics Data System (ADS)
Spackova, Olga; Straub, Daniel
2013-04-01
Risk-based planning of protection measures against natural hazards has become a common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst different countries and different hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the applications of the indicators in practice. This is likely to lead to a suboptimal selection of the protection strategies. This study provides a general formulation for optimization of the natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made on the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
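A toy numerical illustration of the indicators discussed (all figures invented): candidate measures are ranked by benefit-cost ratio and funded greedily until the budget is exhausted, which is the behaviour a state-imposed minimum-BCR criterion approximates.

```python
# Toy illustration of BCR and NPV for protection measures: each candidate
# has a cost and a risk reduction (expected avoided damage, discounted).
# Measures are ranked by benefit-cost ratio and funded until the budget is
# exhausted. All figures are invented for illustration.

measures = [
    # (region, cost in M EUR, risk reduction in M EUR)
    ("A", 10.0, 25.0),
    ("B", 20.0, 30.0),
    ("C", 5.0, 6.0),
    ("D", 15.0, 45.0),
]

budget = 30.0
ranked = sorted(measures, key=lambda m: m[2] / m[1], reverse=True)  # by BCR

spent, funded = 0.0, []
for region, cost, benefit in ranked:
    if spent + cost <= budget:
        funded.append(region)
        spent += cost
    print(f"region {region}: BCR = {benefit / cost:.2f}, "
          f"NPV = {benefit - cost:+.1f} M EUR")

print("funded under the 30 M EUR budget:", funded)
```

Greedy funding by BCR is only a heuristic for the underlying knapsack problem, which is consistent with the study's point that indicator-based criteria can yield suboptimal allocations compared with a full optimization.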
Theivendran, Shevanuja; Dass, Amala
2017-08-01
Ultrasmall nanomolecules (<2 nm) such as Au25(SCH2CH2Ph)18, Au38(SCH2CH2Ph)24, and Au144(SCH2CH2Ph)60 are well studied and can be prepared using established synthetic procedures. No such synthetic protocols that result in high yield products from commercially available starting materials exist for Au36(SPh-X)24. Here, we report a synthetic procedure for the large-scale synthesis of highly stable Au36(SPh-X)24 with a yield of ∼42%. Au36(SPh-X)24 was conveniently synthesized by using tert-butylbenzenethiol (HSPh-tBu, TBBT) as the ligand, giving a more stable product with better shelf life and higher yield than previously reported for making Au36(SPh)24 from thiophenol (PhSH). The choice of thiol, solvent, and reaction conditions were modified for the optimization of the synthetic procedure. The purposes of this work are to (1) optimize the existing procedure to obtain stable product with better yield, (2) develop a scalable synthetic procedure, (3) demonstrate the superior stability of Au36(SPh-tBu)24 when compared to Au36(SPh)24, and (4) demonstrate the reproducibility and robustness of the optimized synthetic procedure.
Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.
Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier
2015-05-01
Optimization of a multi-b-value MR protocol for fast intra-voxel incoherent motion (IVIM) imaging of the liver at 3.0 Tesla. A comparison of four different acquisition protocols was carried out based on estimated IVIM parameters (DSlow, DFast, and f) and ADC in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, the diffusion gradient scheme (simultaneous or sequential), and the use of weighted averaging for different b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for suitable quantification. The duration-optimized protocol was evaluated on 12 patients with chronic liver diseases. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s·mm⁻²) were found necessary to perform a suitable quantification of IVIM parameters. DSlow and DFast decreased significantly between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01), whereas perfusion fraction and ADC variations were not found to be significant. Results showed that IVIM could be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results have shown that parameters such as DSlow and DFast based on an optimized IVIM protocol can be relevant biomarkers to distinguish between nonadvanced and advanced fibrosis. © 2014 Wiley Periodicals, Inc.
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach for both operationally forecasted sea weather on regional scales and projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both nesting and nested simulations; so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on discussing errors resulting from the spatial grids' differences, the updating times and the domain sizes. After the BBE was separately run for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain sizes. Finally, suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
Damage identification on spatial Timoshenko arches by means of genetic algorithms
NASA Astrophysics Data System (ADS)
Greco, A.; D'Urso, D.; Cannizzaro, F.; Pluchino, A.
2018-05-01
In this paper a procedure for the dynamic identification of damage in spatial Timoshenko arches is presented. The proposed approach is based on the calculation of an arbitrary number of exact eigen-properties of a damaged spatial arch by means of the Wittrick and Williams algorithm. The proposed damage model considers a reduction of the volume in a part of the arch, and is therefore suitable, unlike what is commonly assumed in much of the dedicated literature, not only for concentrated cracks but also for diffuse damaged zones which may involve a loss of mass. Different damage scenarios can be taken into account, with variable location, intensity and extension of the damage as well as number of damaged segments. An optimization procedure, aiming at identifying which damage configuration minimizes the difference between its eigen-properties and a set of measured modal quantities for the structure, is implemented making use of genetic algorithms. In this context, an initial random population of chromosomes, representing different damage distributions along the arch, is forced to evolve towards the fittest solution. Several applications with different, single or multiple, damaged zones and boundary conditions confirm the validity and the applicability of the proposed procedure even in the presence of instrumental errors on the measured data.
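The optimization loop can be illustrated with a toy stand-in: below, a spring-mass chain replaces the Timoshenko arch, damage is a per-segment stiffness reduction, and a simple genetic algorithm matches "measured" eigenfrequencies. Everything is synthetic and far simpler than the Wittrick-Williams-based model in the paper.

```python
import numpy as np

n_seg, pop_size, gens = 6, 60, 120
rng = np.random.default_rng(11)

def frequencies(damage):
    """Natural frequencies of a fixed-free spring-mass chain (unit masses)."""
    k = 1000.0 * (1.0 - damage)                  # damaged segment stiffnesses
    K = np.zeros((n_seg, n_seg))
    for j in range(n_seg):                       # assemble chain stiffness matrix
        K[j, j] += k[j]
        if j + 1 < n_seg:
            K[j, j] += k[j + 1]
            K[j, j + 1] = K[j + 1, j] = -k[j + 1]
    return np.sqrt(np.sort(np.linalg.eigvalsh(K)))

true_damage = np.array([0.0, 0.0, 0.4, 0.0, 0.0, 0.0])
measured = frequencies(true_damage)              # stands in for measured modes

def fitness(d):                                  # negative frequency mismatch
    return -np.linalg.norm(frequencies(d) - measured)

pop = rng.uniform(0.0, 0.6, size=(pop_size, n_seg))
for _ in range(gens):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-pop_size // 2:]]           # keep fittest half
    idx = rng.integers(0, len(parents), size=(pop_size, 2))
    mask = rng.random((pop_size, n_seg)) < 0.5               # uniform crossover
    pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
    pop = np.clip(pop + rng.normal(0, 0.02, pop.shape), 0.0, 0.8)  # mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("true damage:      ", true_damage)
print("identified damage:", best.round(2))
```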
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
Troncossi, Marco; Borghi, Corrado; Chiossi, Marco; Davalli, Angelo; Parenti-Castelli, Vincenzo
2009-05-01
The application of a design methodology for the determination of the optimal prosthesis architecture for a given upper limb amputee is presented in this paper along with the discussion of its results. In particular, a novel procedure was used to provide the main guidelines for the design of an actuated shoulder articulation for externally powered prostheses. The topology and the geometry of the new articulation were determined as the optimal compromise between wearability (for the ease of use and the patient's comfort) and functionality of the device (in terms of mobility, velocity, payload, etc.). This choice was based on kinematic and kinetostatic analyses of different upper limb prosthesis models and on purpose-built indices that were set up to evaluate the models from different viewpoints. Only 12 of the 31 simulated prostheses proved a sufficient level of functionality: among these, the optimal solution was an articulation having two actuated revolute joints with orthogonal axes for the elevation of the upper arm in any vertical plane and a frictional joint for the passive adjustment of the humeral intra-extra rotation. A prototype of the mechanism is at the clinical test stage.
Integrated topology and shape optimization in structural design
NASA Technical Reports Server (NTRS)
Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.
1990-01-01
Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.
Blackout detection as a multiobjective optimization problem.
Chaudhary, A M; Trachtenberg, E A
1991-01-01
We study new fast computational procedures for real-time detection of pilot blackout (total loss of vision). Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariancies of the signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures in terms of the number of computer operations required for their implementation is investigated. Various classes of such prediction procedures are investigated and analyzed, and trade-offs are established. We also investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection and the closeness of all procedures to a certain benchmark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above-mentioned criteria. Their main advantage is the fast computational processing.
Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H
2017-07-03
For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered-trademark culture medium and a base culture medium obtained as a result of a screening analysis of different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and found a minimum number of culture medium ingredients without limiting the process efficiency. The resulting culture medium showed an efficiency that compares favorably with the registered-trademark medium at 95% lower cost, and reduced the number of ingredients in the base culture medium by 60% without limiting the process efficiency. These results demonstrated that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Further study of the process variables for the optimized culture medium, and scale-up of production at the optimal values, is desirable.
Optimization and quality control of genome-wide Hi-C library preparation.
Zhang, Xiang-Yuan; He, Chao; Ye, Bing-Yu; Xie, De-Jian; Shi, Ming-Lei; Zhang, Yan; Shen, Wen-Long; Li, Ping; Zhao, Zhi-Hu
2017-09-20
High-throughput chromosome conformation capture (Hi-C) is one of the key assays for genome-wide chromatin interaction studies. It is a time-consuming process that involves many steps and many different kinds of reagents, consumables, and equipment. At present, its reproducibility is unsatisfactory. By optimizing the key steps of the Hi-C experiment, such as crosslinking, digestion pretreatment, restriction enzyme inactivation, and in situ ligation, we established a robust Hi-C procedure and prepared two biological replicates of Hi-C libraries from GM12878 cells. After preliminary quality control by Sanger sequencing, the two replicates were high-throughput sequenced. Bioinformatics analysis of the raw sequencing data revealed that the mappability and pair-mate rate of the raw data were around 90% and 72%, respectively. Additionally, after removal of self-circular ligations and dangling-end products, more than 96% valid pairs were reached. Genome-wide interactome profiling shows clear topologically associating domains (TADs), consistent with previous reports. Further correlation analysis showed that the two biological replicates correlate strongly with each other in terms of both bin coverage and all bin pairs. All these results indicate that the optimized Hi-C procedure is robust and stable, which will be very helpful for wide application of the Hi-C assay.
Treder, Krzysztof; Chołuj, Joanna; Zacharzewska, Bogumiła; Babujee, Lavanya; Mielczarek, Mateusz; Burzyński, Adam; Rakotondrafara, Aurélie M
2018-02-01
Potato virus Y (PVY) infection has been a global challenge for potato production and the leading cause of downgrading and rejection of seed crops for certification. Accurate and timely diagnosis is a key for effective disease control. Here, we have optimized a reverse transcription loop-mediated amplification (RT-LAMP) assay to differentiate the PVY O and N serotypes. The RT-LAMP assay is based on isothermal autocyclic strand displacement during DNA synthesis. The high specificity of this method relies heavily on the primer sets designed for the amplification of the targeted regions. We designed specific primer sets targeting a region within the coat protein gene that contains nucleotide signatures typical for O and N coat protein types, and these primers differ in their annealing temperature. Combining this assay with total RNA extraction by magnetic capture, we have established a highly sensitive, simplified and shortened RT-LAMP procedure as an alternative to conventional nucleic acid assays for diagnosis. This optimized procedure for virus detection may be used as a preliminary test for identifying the viral serotype prior to investing time and effort in multiplex RT-PCR tests when a specific strain is needed.
Masci, Alessandra; Coccia, Andrea; Lendaro, Eugenio; Mosca, Luciana; Paolicelli, Patrizia; Cesa, Stefania
2016-07-01
Pomegranate is a functional food of great interest, due to its multiple beneficial effects on human health. This fruit is rich in anthocyanins and ellagitannins, which exert a protective role towards degenerative diseases. The aim of the present work was to optimize the extraction procedure, from different parts of the fruit, to obtain extracts enriched in selected polyphenols while retaining biological activity. Whole fruits or peels of pomegranate cultivars, with different geographic origin, were subjected to several extraction methods. The obtained extracts were analyzed for polyphenolic content, evaluated for antioxidant capacity and tested for antiproliferative activity on human bladder cancer T24 cells. Two different extraction procedures, employing ethyl acetate as a solvent, were useful in obtaining extracts enriched in ellagic acid and/or punicalagins. Antioxidative and antiproliferative assays demonstrated that the antioxidant capability is directly related to the phenolic content, whereas the antiproliferative activity is to be mainly attributed to ellagic acid. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuang, Simeng Max
This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
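For equal-size sample sets and the L2 cost, sample-based optimal transport reduces to a linear assignment problem; the sketch below uses this generic baseline (not the thesis's feature-function algorithms) and applies the simplest affine preconditioning, a mean shift, which for the squared-distance cost leaves the optimal pairing unchanged while reducing the cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Baseline sample-based L2 optimal transport: for two equal-size sample
# sets, the Monge problem reduces to linear assignment on squared distances.
# The mean shift applied first is the simplest affine preconditioning; for
# the L2 cost, translations add row/column-separable terms to the cost
# matrix, so the optimal pairing is unchanged.

rng = np.random.default_rng(4)
x = rng.normal([0, 0], 1.0, size=(200, 2))             # samples of mu
y = rng.normal([5, 3], [1.0, 2.0], size=(200, 2))      # samples of nu

xc, yc = x - x.mean(0), y - y.mean(0)                  # mean preconditioning

cost = ((xc[:, None, :] - yc[None, :, :]) ** 2).sum(-1)   # pairwise squared dist
rows, cols = linear_sum_assignment(cost)
print("transport cost after mean preconditioning:",
      cost[rows, cols].mean().round(3))
print("cost without preconditioning (same pairing):",
      (((x[:, None] - y[None, :]) ** 2).sum(-1))[rows, cols].mean().round(3))
```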
Hellerhoff, K
2010-11-01
In recent years, digital full-field mammography has increasingly replaced conventional film mammography. High-quality imaging is guaranteed by high quantum efficiency and very good contrast resolution with optimized dosing, even for women with dense glandular tissue. However, digital mammography remains a projection procedure in which overlapping tissue limits the detectability of subtle alterations. Tomosynthesis is a procedure developed from digital mammography for slice examination of the breast, which eliminates the effects of overlapping tissue and allows 3D imaging of the breast. A curved movement of the X-ray tube during scanning allows the acquisition of many 2D images from different angles. Subsequently, reconstruction algorithms employing a shift-and-add method improve the recognition of details at a defined level and at the same time eliminate smear artefacts due to overlapping structures. The total dose corresponds to that of conventional mammography imaging. The technical procedure, including the number of levels, suitable anode/filter combinations, angular range of the images and selection of reconstruction algorithms, is presently undergoing optimization. Previous studies on the clinical value of tomosynthesis have examined screening parameters such as recall rate and detection rate, as well as information on tumor extent for histologically proven breast tumors. More advanced techniques, such as contrast-medium-enhanced tomosynthesis, are presently under development, and dual-energy imaging is of particular importance.
t'Kindt, Ruben; De Veylder, Lieven; Storme, Michael; Deforce, Dieter; Van Bocxlaer, Jan
2008-08-01
This study treats the optimization of methods for homogenizing Arabidopsis thaliana plant leaves as well as cell cultures, and for extracting their metabolites for metabolomics analysis by conventional liquid chromatography electrospray ionization mass spectrometry (LC-ESI/MS). Absolute recovery, process efficiency and procedure repeatability were compared between different pre-LC-MS homogenization/extraction procedures using samples fortified before extraction with a range of representative metabolites. In this way, the magnitude of the matrix effect observed in the ensuing LC-MS based metabolomics analysis was evaluated. Based on the relative recovery and repeatability of key metabolites, the comprehensiveness of extraction (number of m/z-retention time pairs) and the clean-up potential of the approach (minimum matrix effects), the most appropriate sample pre-treatment was adopted. It combines liquid nitrogen homogenization for plant leaves with thermomixer-based extraction using MeOH/H2O (80/20). As such, an efficient and highly reproducible LC-MS plant metabolomics set-up is achieved, as illustrated by the obtained results for both LC-MS (8.88% ± 5.16 versus 7.05% ± 4.45) and technical variability (12.53% ± 11.21 versus 9.31% ± 6.65) in a comparative investigation of A. thaliana plant leaves and cell cultures, respectively.
Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.
Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina
2016-08-25
The increasing trend in the recent literature on coarse grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use and optimization of analytical potentials, optimized by targeting statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions and allows handling of issues related to correlations among force field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are in the course.
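One of the two optimization schemes named here, iterative Boltzmann inversion, admits a one-line update rule: V_{n+1}(r) = V_n(r) + kT ln(P_current(r)/P_target(r)). The sketch below applies one such update to synthetic Gaussian distributions standing in for sampled histograms.

```python
import numpy as np

# One step of iterative Boltzmann inversion (IBI): the potential is
# corrected by kT * ln(P_current / P_target) so the CG model's distribution
# moves toward the target. Distributions are synthetic Gaussians standing
# in for histograms sampled from atomistic (target) and CG (current) runs.

kT = 2.494  # kJ/mol at 300 K

r = np.linspace(0.25, 0.80, 120)                 # bond-length grid (nm), assumed
dr = r[1] - r[0]
gauss = lambda mu, s: np.exp(-0.5 * ((r - mu) / s) ** 2)

P_target = gauss(0.47, 0.03)                     # target (atomistic) histogram
P_target /= P_target.sum() * dr
P_current = gauss(0.50, 0.04)                    # current CG-model histogram
P_current /= P_current.sum() * dr

V = -kT * np.log(P_target + 1e-12)               # Boltzmann-inverted initial guess
V_next = V + kT * np.log((P_current + 1e-12) / (P_target + 1e-12))  # IBI step

print("max correction this iteration: %.2f kJ/mol" % np.abs(V_next - V).max())
```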
A mixed optimization method for automated design of fuselage structures.
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1972-01-01
A procedure for automating the design of transport aircraft fuselage structures has been developed and implemented in the form of an operational program. The structure is designed in two stages. First, an overall distribution of structural material is obtained by means of optimality criteria to meet strength and displacement constraints. Subsequently, the detailed design of selected rings and panels consisting of skin and stringers is performed by mathematical optimization accounting for a set of realistic design constraints. The practicality and computer efficiency of the procedure are demonstrated on cylindrical and area-ruled large transport fuselages.
NASA Astrophysics Data System (ADS)
Calderone, Luigi; Pinola, Licia; Varoli, Vincenzo
1992-04-01
The paper describes an analytical procedure to optimize the feed-forward compensation for any PWM dc/dc converter. The aim of achieving zero dc audiosusceptibility was found to be attainable for the buck, buck-boost, Cuk, and SEPIC cells; for the boost converter, however, only non-optimal compensation is feasible. Rules for the design of PWM controllers and procedures for the evaluation of hardware-introduced errors are discussed. A PWM controller implementing the optimal feed-forward compensation for the buck-boost, Cuk, and SEPIC cells is described and fully characterized experimentally.
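For the ideal buck cell the static relation V_out = D * V_in makes the optimal feed-forward law explicit: commanding D = V_ref / V_in cancels the dc effect of input variations (zero dc audiosusceptibility). A minimal numeric sketch with illustrative values only; in hardware this is typically realized by making the PWM ramp amplitude track the input voltage.

```python
# Input-voltage feed-forward for an ideal buck cell: V_out = D * V_in, so
# commanding D = V_ref / V_in makes the dc output independent of the input.
# Values below are illustrative only.

V_ref = 5.0                                    # desired output voltage (V)

for V_in in (9.0, 12.0, 15.0, 18.0):
    D_ff = min(V_ref / V_in, 1.0)              # feed-forward duty-cycle command
    V_out = D_ff * V_in                        # ideal buck static relation
    print(f"V_in = {V_in:4.1f} V -> D = {D_ff:.3f}, V_out = {V_out:.2f} V")
```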
Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.
Neural networks for structural design - An integrated system implementation
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han
1992-01-01
The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here, Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user, an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture, and checks the accuracy of the net's predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.
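The generate-train-verify pipeline described here can be sketched in a few lines. The following toy version is my construction: a closed-form cantilever tip deflection stands in for a real structural analysis code, and scikit-learn's MLPRegressor stands in for the network architecture used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Random 'analysis experiences': cantilever tip deflection d = P L^3 / (3 E I)
# stands in for a full structural analysis (E, I fixed; P, L sampled).
E, I = 200e9, 1e-6
P = rng.uniform(1e3, 1e4, 2000)
L = rng.uniform(1.0, 3.0, 2000)
X = np.column_stack([P / 1e4, L / 3.0])      # scale inputs to O(1)
y = P * L**3 / (3 * E * I)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X[:1500], y[:1500])                  # train on generated instances
print("held-out R^2:", net.score(X[1500:], y[1500:]))  # accuracy check
```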
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria, and are applicable to multidimensional phenomena which are characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.
Turbomachinery Airfoil Design Optimization Using Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint-handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
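Differential Evolution itself is only a few lines. A minimal DE/rand/1/bin variant is sketched below, with a cheap multimodal test function standing in for the Navier-Stokes evaluations used in the paper; the constraint handling and neural-network acceleration it describes are omitted.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(len(lo)) < CR            # binomial crossover
            cross[rng.integers(len(lo))] = True         # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            c_trial = f(trial)
            if c_trial < cost[i]:                       # greedy selection
                pop[i], cost[i] = trial, c_trial
    best = cost.argmin()
    return pop[best], cost[best]

# Rastrigin: highly multimodal, a classic stress test for global optimizers.
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(differential_evolution(rastrigin, [(-5.12, 5.12)] * 2))
```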
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
On the effect of response transformations in sequential parameter optimization.
Wagner, Tobias; Wessing, Simon
2012-01-01
Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and the Box-Cox transformations are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
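For readers wanting to reproduce the idea, the two transformations singled out here are one-liners in SciPy; the response values below are invented stand-ins for heavy-tailed EA run results.

```python
import numpy as np
from scipy.stats import rankdata, boxcox

y = np.array([3.2, 1500.0, 4.1, 2.9, 87.0, 5.5])  # invented heavy-tailed responses

y_rank = rankdata(y)        # rank transformation: keeps order information only
y_bc, lam = boxcox(y)       # Box-Cox: estimates lambda to symmetrize the data
print(y_rank, round(lam, 3))
```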
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower-dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model is presented that accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices, without gradient computation, by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
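Modern SciPy exposes a Powell-type direction-set method, so the gradient-free output-feedback idea can be sketched as follows. The second-order plant, weights, and Lyapunov-trace cost below are illustrative assumptions, not the paper's aircraft model or its exact performance index.

```python
import numpy as np
from scipy.linalg import solve_lyapunov
from scipy.optimize import minimize

# Toy plant x' = A x + B u with one measurable output y = C x; control u = -K y.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def cost(k_flat):
    K = k_flat.reshape(1, 1)
    Acl = A - B @ K @ C
    if np.linalg.eigvals(Acl).real.max() >= 0.0:
        return 1e6                            # penalize unstable closed loops
    Q = np.eye(2) + (K @ C).T @ (K @ C)       # state + control weighting
    P = solve_lyapunov(Acl.T, -Q)             # Acl' P + P Acl = -Q
    return np.trace(P)                        # infinite-horizon cost proxy

res = minimize(cost, x0=np.array([1.0]), method="Powell")  # derivative-free
print(res.x, res.fun)
```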
Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure
NASA Technical Reports Server (NTRS)
Hussaini, M. Moin; Korte, John J.
1996-01-01
An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.
Three-dimensional aerodynamic shape optimization of supersonic delta wings
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1994-01-01
A recently developed three-dimensional aerodynamic shape optimization procedure, AeSOP3D, is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically, discrete sensitivity analysis, a fully implicit 3D Computational Fluid Dynamics (CFD) methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1.5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.
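A Bezier-Bernstein surface parameterization of the kind used here can be evaluated with a short routine. The sketch below is a generic tensor-product patch, not the code's actual parameterization; the grid of control points would serve as the shape design variables.

```python
import numpy as np
from math import comb

def bernstein(n, i, u):
    return comb(n, i) * u**i * (1 - u) ** (n - i)

def bezier_surface(ctrl, u, v):
    """Evaluate a Bezier-Bernstein patch at (u, v) in [0,1]^2.
    ctrl has shape (n+1, m+1, 3): a grid of 3-D control points."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    bu = np.array([bernstein(n, i, u) for i in range(n + 1)])
    bv = np.array([bernstein(m, j, v) for j in range(m + 1)])
    return np.einsum("i,j,ijk->k", bu, bv, ctrl)

ctrl = np.random.rand(4, 4, 3)   # cubic-by-cubic patch of random control points
print(bezier_surface(ctrl, 0.5, 0.5))
```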
Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B
2011-07-01
Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy to establish the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulations recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. The thermal characteristics of the clothing ensembles worn by the soldiers, namely thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. The biometeorological forecast model generally under-estimated the optimal clothing insulation value, and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.
González, Mónica; Méndez, Jesús; Carnero, Aurelio; Lobo, M Gloria; Afonso, Ana
2002-11-20
A simple method was developed for the extraction and determination of color pigments in cochineals (Dactylopius coccus Costa). The procedure was based on the solvent extraction of pigments from insect samples using methanol:water (65:35, v:v) as the extractant. A two-level factorial design was used to optimize the solvent extraction parameters: temperature, time, methanol concentration in the extractant mixture, and the number of extractions. The results suggest that the number of extractions is statistically the most significant factor. The separation and determination of the pigments was carried out by high-performance liquid chromatography with UV-visible detection. Because the absorption spectra of the different pigments differ in the visible region, it is convenient to use a diode array detector to obtain chromatographic profiles that allow for the characterization of the extracted pigments.
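A two-level full factorial design over the four extraction factors is easy to enumerate; the low/high levels below are illustrative guesses, not the values used in the study.

```python
from itertools import product

# Factors from the abstract; the numeric low/high levels are assumptions.
factors = {
    "temperature_C": (25, 60),
    "time_min": (10, 30),
    "methanol_pct": (50, 80),
    "n_extractions": (1, 3),
}

# Full 2^4 factorial design: every combination of low/high levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs), "runs")  # 16
```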
Determination of the Prosumer's Optimal Bids
NASA Astrophysics Data System (ADS)
Ferruzzi, Gabriella; Rossi, Federico; Russo, Angela
2015-12-01
This paper considers a microgrid connected to a medium-voltage (MV) distribution network. It is assumed that the microgrid, which is managed by a prosumer, operates in a competitive environment and participates in the day-ahead market. Then, as the first step of the short-term management problem, the prosumer must determine the bids to be submitted to the market. The offer strategy is based on the application of an optimization model, which is solved for different hourly price profiles of energy exchanged with the main grid. The proposed procedure was applied to a microgrid, and four different configurations were analyzed. The configurations consider the presence of thermoelectric units that only produce electricity, a boiler and/or cogeneration power plants for the thermal loads, and an electric storage system. The numerical results confirmed the numerous theoretical considerations that were made.
Miller, David J; Nelson, Carl A; Oleynikov, Dmitry
2009-05-01
With a limited number of access ports, minimally invasive surgery (MIS) often requires the complete removal of one tool and reinsertion of another. Modular or multifunctional tools can be used to avoid this step. In this study, soft computing techniques are used to optimally arrange a modular tool's functional tips, allowing surgeons to deliver treatment of improved quality in less time, decreasing overall cost. The investigators watched University Medical Center surgeons perform MIS procedures (e.g., cholecystectomy and Nissen fundoplication) and recorded the procedures to digital video. The video was then used to analyze the types of instruments used, the duration of each use, and the function of each instrument. These data were aggregated with fuzzy logic techniques using four membership functions to quantify the overall usefulness of each tool. This allowed subsequent optimization of the arrangement of functional tips within the modular tool to decrease overall time spent changing instruments during simulated surgical procedures based on the video recordings. Based on a prototype and a virtual model of a multifunction laparoscopic tool designed by the investigators that can interchange six different instrument tips through the tool's shaft, the range of tool change times is approximately 11-13 s. Using this figure, estimated time savings for the procedures analyzed ranged from 2.5 to over 32 min, and on average, total surgery time can be reduced by almost 17% by using the multifunction tool.
Investigations of the pushability behavior of cardiovascular angiographic catheters.
Bloss, Peter; Rothe, Wolfgang; Wünsche, Peter; Werner, Christian; Rothe, Alexander; Kneissl, Georg Dieter; Burger, Wolfram; Rehberg, Elisabeth
2003-01-01
The placement of angiographic catheters into the vascular system is a routine procedure in modern clinical practice. Objective evaluation protocols based on measurable physical quantities correlated with empirical clinical findings are not yet available, and their definition is of utmost importance for catheter manufacturers for in-house product screening and optimization. In this context, we present an assessment of multiple mechanical and surface catheter properties, such as static and kinetic friction, bending stiffness, microscopic surface topology, surface roughness and surface free energy, and their interrelation. The theoretical framework, a description of the experimental methods, and extensive data measured on several different catheters are provided, and in conclusion a testing procedure is defined. Although this procedure is based on the measurement of several physical quantities, it can be easily implemented by commercial laboratories testing catheters, as it is based on relatively low-cost standard methods.
Complications of Bariatric Surgery: What You Can Expect to See in Your GI Practice.
Schulman, Allison R; Thompson, Christopher C
2017-11-01
Obesity is one of the most significant health problems worldwide. Bariatric surgery has become one of the fastest growing operative procedures and has gained acceptance as the leading option for weight-loss. Despite improvement in the performance of bariatric surgical procedures, complications are not uncommon. There are a number of unique complications that arise in this patient population and require specific knowledge for proper management. Furthermore, conditions unrelated to the altered anatomy typically require a different management strategy. As such, a basic understanding of surgical anatomy, potential complications, and endoscopic tools and techniques for optimal management is essential for the practicing gastroenterologist. Gastroenterologists should be familiar with these procedures and complication management strategies. This review will cover these topics and focus on major complications that gastroenterologists will be most likely to see in their practice.
A randomized study of a method for optimizing adolescent assent to biomedical research.
Annett, Robert D; Brody, Janet L; Scherer, David G; Turner, Charles W; Dalen, Jeanne; Raissy, Hengameh
2017-01-01
Voluntary consent/assent with adolescents invited to participate in research raises challenging problems. No studies to date have attempted to manipulate autonomy in relation to assent/consent processes. This study evaluated the effects of an autonomy-enhanced individualized assent/consent procedure embedded within a randomized pediatric asthma clinical trial. Families were randomly assigned to remain together or separated during a consent/assent process; the latter we characterize as an autonomy-enhanced assent/consent procedure. We hypothesized that separating adolescents from their parents would improve adolescent assent by increasing knowledge and appreciation of the clinical trial and willingness to participate. Sixty-four adolescent-parent dyads completed procedures. The together versus separate randomization made no difference in adolescent or parent willingness to participate. However, significant differences were found in both parent and adolescent knowledge of the asthma clinical trial based on the assent/consent procedure and adolescent age. The separate assent/consent procedure improved knowledge of study risks and benefits for older adolescents and their parents but not for the younger youth or their parents. Regardless of the assent/consent process, younger adolescents had lower comprehension of information associated with the study medication and research risks and benefits, but not study procedures or their research rights and privileges. The use of an autonomy-enhanced assent/consent procedure for adolescents may improve their and their parent's informed assent/consent without impacting research participation decisions. Traditional assent/consent procedures may result in a "diffusion of responsibility" effect between parents and older adolescents, specifically in attending to key information associated with study risks and benefits.
Traveling-Wave Tube Cold-Test Circuit Optimization Using CST MICROWAVE STUDIO
NASA Technical Reports Server (NTRS)
Chevalier, Christine T.; Kory, Carol L.; Wilson, Jeffrey D.; Wintucky, Edwin G.; Dayton, James A., Jr.
2003-01-01
The internal optimizer of CST MICROWAVE STUDIO (MWS) was used along with an application-specific Visual Basic for Applications (VBA) script to develop a method to optimize traveling-wave tube (TWT) cold-test circuit performance. The optimization procedure allows simultaneous optimization of circuit specifications, including on-axis interaction impedance, bandwidth, and geometric limitations. The application of MICROWAVE STUDIO to TWT cold-test circuit optimization is described.
Performance, optimization, and latest development of the SRI family of rotary cryocoolers
NASA Astrophysics Data System (ADS)
Dovrtel, Klemen; Megušar, Franc
2017-05-01
In this paper the SRI family of Le-tehnika rotary cryocoolers is presented (SRI401, SRI423/SRI421 and SRI474). The Stirling coolers' cooling power ranges from 0.25 W to 0.75 W at 77 K, with an available temperature range from 60 K to 150 K, and the coolers are fitted to typical dewar detector sizes and power supply voltages. The DDCA performance optimization procedure is presented. The procedure includes cooler steady-state performance mapping and optimization, and cooldown optimization. The current cryogenic performance status and the reliability evaluation method and figures are presented for the existing and new units. The latest improved SRI401 demonstrated an MTTF close to 25,000 hours, and the test is still ongoing.
Wave drag as the objective function in transonic fighter wing optimization
NASA Technical Reports Server (NTRS)
Phillips, P. S.
1984-01-01
The original computational method for determining wave drag in a three dimensional transonic analysis method was replaced by a wave drag formula based on the loss in momentum across an isentropic shock. This formula was used as the objective function in a numerical optimization procedure to reduce the wave drag of a fighter wing at transonic maneuver conditions. The optimization procedure minimized wave drag through modifications to the wing section contours defined by a wing profile shape function. A significant reduction in wave drag was achieved while maintaining a high lift coefficient. Comparisons of the pressure distributions for the initial and optimized wing geometries showed significant reductions in the leading-edge peaks and shock strength across the span.
Optimized tomography of continuous variable systems using excitation counting
NASA Astrophysics Data System (ADS)
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
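The role of the sensing map's condition number in the reconstruction-error bound can be illustrated generically; the random matrix below merely stands in for the actual displacement-plus-excitation-counting map, which depends on the chosen measurement settings.

```python
import numpy as np

# Toy sensing map M: each row is one measurement setting acting on the
# (vectorized) unknown state; reconstruction amounts to solving M x = data.
rng = np.random.default_rng(1)
M = rng.normal(size=(40, 25))       # 40 settings, 25 state parameters (assumed)

sigma = np.linalg.svd(M, compute_uv=False)
cond = sigma.max() / sigma.min()    # bounds how much noise is amplified
print(cond)
```

Optimizing the measurement settings to shrink this condition number is what yields the reported order-of-magnitude reduction in estimation error.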
Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P
2018-02-20
The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine the estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03) and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
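A sketch of the core lookup is below. It assumes the public Google Maps Distance Matrix web service (an HTTPS endpoint taking origins/destinations/departure_time parameters and returning a duration_in_traffic field); the paper's application is internal, so the site list and response field names here are assumptions that should be checked against current API documentation.

```python
import requests

API_KEY = "YOUR_KEY"                     # placeholder credential
SITES = ["Site A, New York, NY",         # hypothetical IR facilities
         "Site B, New York, NY"]

def best_site_by_travel_time(patient_address):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={
            "origins": patient_address,
            "destinations": "|".join(SITES),
            "departure_time": "now",     # enables live traffic-based durations
            "key": API_KEY,
        },
    ).json()
    elements = resp["rows"][0]["elements"]
    # Rank by estimated travel time (seconds), not distance.
    times = [e["duration_in_traffic"]["value"] for e in elements]
    return SITES[times.index(min(times))]
```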
NASA Technical Reports Server (NTRS)
Holms, A. G.
1982-01-01
A previous report described a backward deletion procedure of model selection that was optimized for minimum prediction error and which used a multiparameter combination of the F-distribution and an order statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.
2014-10-01
This work concerns nonlinear and non-stationary signal analysis, which is well known to be important and difficult. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs). Keywords: intrinsic mode function, optimization.
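A minimal version of the sifting step reads as follows; this is a bare-bones illustration with cubic-spline envelopes and a fixed pass count, not a faithful empirical mode decomposition implementation with proper boundary handling and stopping criteria.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper/lower envelopes."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return None  # too few extrema: x is a residue, not an IMF
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = x.copy()
for _ in range(10):            # fixed pass count stands in for a stop criterion
    nxt = sift_once(h, t)
    if nxt is None:
        break
    h = nxt                    # h approaches the fastest-oscillating IMF
```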
Korpus, Christoph; Pikal, Michael; Friess, Wolfgang
2016-11-01
The aim of this study was to determine the heat transfer characteristics of an optimized flexible holder device, using Tunable Diode Laser Absorption Spectroscopy, the Pressure Rise Test, and the gravimetric procedure. Two different controlled nucleation methods were tested, and an improved sublimation process, "preheated plate," was developed. Tunable Diode Laser Absorption Spectroscopy identified an initial sublimation burst phase. Accordingly, steady-state equations were adapted for the gravimetric procedure to account for this initial non-steady-state period. The heat transfer coefficient K_DCC, describing the transfer from the holder to the DCC, was the only heat transfer coefficient showing a clear pressure dependence, with values ranging from 3.81E-04 cal/(g·cm²·K) at 40 mTorr to 7.38E-04 cal/(g·cm²·K) at 200 mTorr. The heat transfer coefficient K_tot, reflecting the overall energy transfer via the holder, increased by around 24% from 40 to 200 mTorr. This resulted in a pressure-independent sublimation rate of around 42 ± 1.06 mg/h over the whole pressure range. Hence, this pressure-dependent increase in energy transfer completely compensated the decrease in the driving force of sublimation. The "flexible holder" shows a substantially reduced impact of atypical radiation, improved drying homogeneity, and ultimately a better transferability of the freeze-drying cycle for process optimization.
Robbins, Spring Chenoa Cooper; Bernard, Diana; McCaffery, Kirsten; Skinner, S Rachel
2010-09-01
To date, no published studies examine procedural factors of the school-based human papillomavirus (HPV) vaccination program from the perspective of those involved. This study examines the factors that were perceived to impact an optimal vaccination experience. Schools across Sydney were selected to reflect a range of vaccination coverage at the school level and different school types, to ensure a range of experiences. Semi-structured focus groups were conducted with girls, and one-on-one interviews were undertaken with parents, teachers and nurses until saturation of data in all emergent themes was reached. Focus groups and interviews explored participants' experiences of school-based HPV vaccination. Transcripts were analysed, letting themes emerge. Themes related to participants' experience of the organisational, logistical and procedural aspects of the vaccination program and their perceptions of an optimal process were organised into two categories: (1) preparation for the vaccination program and (2) vaccination day strategies. In (1), themes emerged regarding commitment to the process from those involved, planning time and space for vaccinations, communication within and between agencies, and flexibility. In (2), themes included vaccinating the most anxious girls first, facilitating peer support, use of distraction techniques, minimising waiting time for girls, and support staff. A range of views exists on what constitutes an optimal school-based program. Several findings were identified that should be considered in the development of guidelines for implementing school-based programs. Future research should evaluate how different approaches to acquiring parental consent, and the use of anxiety and fear reduction strategies, impact experience and uptake in the school-based setting.
Noise pollution mapping approach and accuracy on landscape scales.
Iglesias Merchan, Carlos; Diaz-Balteiro, Luis
2013-04-01
Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and the cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of the noise mapping of road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations.
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1 % (10 %) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. The right panel shows the docking accuracy using an optimized box size.
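The reported rule of thumb (search-space edge about 2.9 times the ligand's radius of gyration) is straightforward to apply. The helper below is a sketch using an unweighted radius of gyration over the supplied coordinates, which may differ in detail from the authors' freely available script.

```python
import numpy as np

def optimal_box_size(coords, scale=2.9):
    """Cubic search-space edge = scale x radius of gyration of the ligand,
    following the paper's optimized factor of 2.9 (unweighted Rg assumed)."""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    rg = np.sqrt(((coords - center) ** 2).sum(axis=1).mean())
    return center, scale * rg   # box center and edge length, in Angstroms

print(optimal_box_size(np.random.rand(30, 3) * 10))  # 30 dummy atom positions
```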
Distribution of materials in construction and demolition waste in Portugal.
Coelho, André; de Brito, Jorge
2011-08-01
It may not be enough simply to know the global volume of construction and demolition waste (CDW) generated in a certain region or country if one wants to estimate, for instance, the revenue accruing from separating several types of materials from the input entering a given CDW recycling plant. A more detailed determination of the distribution of the materials within the generated CDW is needed, and the present paper addresses this issue, distinguishing different buildings and types of operation (new construction, retrofitting and demolition). This has been achieved by measuring the materials from buildings of different ages within the Portuguese building stock, and by using direct data from demolition/retrofitting sites and new construction average values reported in the literature. An attempt to establish a benchmark with other countries is also presented. This knowledge may also benefit industry management, especially that related to CDW recycling, helping to optimize procedures, equipment size and operation, and even industrial plant spatial distribution. In an extremely competitive market where, as in Portugal, low-tech and high-environmental-impact procedures remain the norm in the construction industry (in particular, the construction waste industry), the introduction of a successful recycling industry is only possible with highly optimized processes and a knowledge-based approach to problems.
Study on design and cutting parameters of rotating needles for core biopsy.
Giovannini, Marco; Ren, Huaqing; Cao, Jian; Ehmann, Kornel
2018-06-15
Core needle biopsies are widely adopted medical procedures that consist of the removal of biological tissue to better identify a lesion or an abnormality observed through a physical exam or a radiology scan. These procedures can provide significantly more information than most medical tests, and they are usually performed on bone lesions, breast masses, lymph nodes and the prostate. The quality of the samples mainly depends on the forces exerted by the needle during the cutting process. The reduction of these forces is critical to extract high-quality tissue samples. The most critical factors that affect the cutting forces are the geometry of the needle tip and its motion while it is penetrating the tissue. However, optimal needle tip configurations and cutting parameters are not well established for rotating insertions. In this paper, the geometry and cutting forces of hollow needles are investigated. The fundamental goal of this study is to provide a series of guidelines for clinicians and surgeons to properly select the optimal tip geometries and speeds. Analytical models related to the cutting angles of several needle tip designs are presented and compared. Several needle tip geometries were manufactured from a 14-gauge cannula, commonly adopted during breast biopsies. The needles were then tested at different speeds and on different phantom tissues. According to these experimental measurements, recommendations were formulated for rotating needle insertions. The findings of this study can be applied and extended to several biopsy procedures in which a cannula is used to extract tissue samples.
Multidisciplinary design optimization - An emerging new engineering discipline
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1993-01-01
A definition of multidisciplinary design optimization (MDO) is introduced, and the functionality and relationships of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.
Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm
Svečko, Rajko
2014-01-01
This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model, which was developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to process productivity and product quality improvements.
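The structure of such a methodology (black-box process model, solidification constraints, cooling-rate objective) maps naturally onto SciPy's constrained minimizers. The sketch below substitutes an analytic toy model for the ABAQUS simulation and picks SLSQP arbitrarily; the paper does not state which SciPy routine was used.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the casting model: maps cooling-channel start
# times to (cooling rate, directional-solidification margin >= 0 required).
def run_casting_model(t_start):
    rate = 10.0 - 0.5 * np.sum((t_start - np.array([20.0, 35.0])) ** 2)
    margin = 5.0 - abs(t_start[1] - t_start[0] - 15.0)
    return rate, margin

objective = lambda t: -run_casting_model(t)[0]          # maximize cooling rate
constraints = [{"type": "ineq", "fun": lambda t: run_casting_model(t)[1]}]

res = minimize(objective, x0=np.array([10.0, 30.0]), method="SLSQP",
               constraints=constraints, bounds=[(0, 60), (0, 60)])
print(res.x)   # optimal cooling start times (seconds, in this toy setup)
```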
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
NASA Astrophysics Data System (ADS)
Mishra, Vinod Kumar
2017-09-01
In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model the inventory level of both items is depleted due to demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item and all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and the demand and deterioration rates are considered to be deterministic and constant. Items are ordered jointly in each ordering cycle, to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over that without substitution.
Optimal Frequency-Domain System Realization with Weighting
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Maghami, Peiman G.
1999-01-01
Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
Use of the variable gain settings on SPOT
Chavez, P.S.
1989-01-01
Often the brightness or digital number (DN) range of satellite image data is less than optimal and uses only a portion of the available values (0 to 255) because the range of reflectance values is small. Most imaging systems have been designed with only two gain settings, normal and high. The SPOT High Resolution Visible (HRV) imaging system has the capability to collect image data using one of eight different gain settings. With the proper procedure this allows the brightness or reflectance resolution, which is directly related to the range of DN values recorded, to be optimized for any given site as compared to using a single set of gain settings everywhere. -from Author
Fogel, Mina; Harari, Ayelet; Müller-Holzner, Elisabeth; Zeimet, Alain G; Moldenhauer, Gerhard; Altevogt, Peter
2014-06-25
The L1 cell adhesion molecule (L1CAM) is overexpressed in many human cancers and can serve as a biomarker for prognosis in most of these cancers (including type I endometrial carcinomas). Here we provide an optimized immunohistochemical staining procedure for a widely used automated platform (VENTANA™), which has recourse to commercially available primary antibody and detection reagents. In parallel, we optimized the staining on a semi-automated BioGenix (i6000) immunostainer. These protocols yield good stainings and should represent the basis for a reliable and standardized immunohistochemical detection of L1CAM in a variety of malignancies in different laboratories.
Communication theory of quantum systems. Ph.D. Thesis, 1970
NASA Technical Reports Server (NTRS)
Yuen, H. P. H.
1971-01-01
Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.
Optimal design of a hybrid MR brake for haptic wrist application
NASA Astrophysics Data System (ADS)
Nguyen, Quoc Hung; Nguyen, Phuong Bac; Choi, Seung-Bok
2011-03-01
In this work, a new configuration of a magnetorheological (MR) brake is proposed and an optimal design of the proposed MR brake for haptic wrist application is performed considering the required braking torque, the zero-field friction torque, and the size and mass of the brake. The proposed MR brake configuration is a combination of disc-type and drum-type, which is referred to as a hybrid configuration in this study. After the MR brake with the hybrid configuration is proposed, the braking torque of the brake is analyzed based on the Bingham rheological model of the MR fluid. The zero-field friction torque of the MR brake is also obtained. An optimization procedure based on finite element analysis integrated with an optimization tool is developed for the MR brake. The purpose of the optimal design is to find the optimal geometric dimensions of the MR brake structure that can produce the required braking torque and minimize the uncontrollable torque (passive torque) of the haptic wrist. Based on the developed optimization procedure, an optimal solution for the proposed MR brake is achieved. The proposed optimized hybrid brake is then compared with conventional types of MR brake, and its working performance is discussed.
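For the disc-type portion, the Bingham-model braking torque has a closed form obtained by integrating the shear stress τ = τ_y + η·(ωr/g) over the annular contact faces. The sketch below uses illustrative fluid and geometry values, and covers only the disc contribution; the hybrid design would add an analogous drum-surface term.

```python
import numpy as np

def disc_brake_torque(tau_y, eta, omega, r_i, r_o, gap, n_faces=2):
    """Braking torque of an annular MR disc brake from the Bingham model.
    Integrating 2*pi*tau*r^2 dr from r_i to r_o gives a yield-stress term
    plus a viscous term; tau_y in Pa, eta in Pa.s, omega in rad/s, m units."""
    t_yield = (2 * np.pi * tau_y / 3) * (r_o**3 - r_i**3)
    t_visc = (np.pi * eta * omega / (2 * gap)) * (r_o**4 - r_i**4)
    return n_faces * (t_yield + t_visc)

# Illustrative values: tau_y = 40 kPa (on-state), eta = 0.1 Pa.s, 20 rad/s.
print(disc_brake_torque(4e4, 0.1, 20.0, 0.01, 0.03, 1e-3))
```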
Vaccaro, G; Pelaez, J I; Gil, J A
2016-07-01
Objective masticatory performance assessment using two-coloured specimens relies on image processing techniques; however, only a few approaches have been tested and no comparative studies have been reported. The aim of this study was to present a selection procedure for the optimal image analysis method for masticatory performance assessment with a given two-coloured chewing gum. Dentate participants (n = 250; 25 ± 6·3 years) chewed red-white chewing gums for 3, 6, 9, 12, 15, 18, 21 and 25 cycles (2000 samples). Digitalised images of retrieved specimens were analysed using 122 image processing methods (IPMs) based on feature extraction algorithms (pixel values and histogram analysis). All IPMs were tested following the criteria of: normality of measurements (Kolmogorov-Smirnov), ability to detect differences among mixing states (ANOVA with post hoc Bonferroni correction) and moderate-to-high correlation with the number of cycles (Spearman's Rho). The optimal IPM was chosen using multiple criteria decision analysis (MCDA). Measurements provided by all IPMs proved to be normally distributed (P < 0·05), 116 proved sensitive to mixing states (P < 0·05), and 35 showed moderate-to-high correlation with the number of cycles (|ρ| > 0·5; P < 0·05). The variance of the histogram of the hue showed the highest correlation with the number of cycles (ρ = 0·792; P < 0·0001) and the highest MCDA score (optimal). The proposed procedure proved to be reliable and able to select the optimal approach among multiple IPMs. This experiment may be reproduced to identify the optimal approach for each case of locally available test foods.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95 (PE). The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95 (PE). If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
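The binomial arithmetic behind the 29-flaw point-estimate demonstration is worth making explicit: if the true POD at the demonstrated size were only 0.90, the chance of detecting all 29 flaws is 0.90^29 ≈ 0.047, which is below 0.05 and is what underwrites the 90% probability / 95% confidence claim.

```python
n = 29
print(0.90 ** n)                 # ~0.047 < 0.05: basis of the 90/95 claim

# Probability of passing the demonstration (PPD) as a function of true POD,
# for an all-hit (29 of 29) requirement: PPD = POD**29.
for pod in (0.90, 0.95, 0.98, 0.995):
    print(pod, pod ** n)         # PPD rises quickly with the true POD
```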
The Need for Integrated Approaches in Metabolic Engineering.
Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D
2016-11-01
This review highlights state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. We emphasize that a combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the "system" that is being manipulated: transcriptome, translatome, proteome, or reactome. By bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.
NASA Technical Reports Server (NTRS)
Hayes, J. D.
1972-01-01
The feasibility of monitoring volatile contaminants in a large space simulation chamber using techniques of internal reflection spectroscopy was demonstrated analytically and experimentally. The infrared spectral region was selected as the operational spectral range in order to provide unique identification of the contaminants along with sufficient sensitivity to detect trace contaminant concentrations. It was determined theoretically that a monolayer of the contaminants could be detected and identified using optimized experimental procedures. This ability was verified experimentally. Procedures were developed to correct the attenuated total reflectance spectra for thick sample distortion. However, by using two different element designs the need for such correction can be avoided.
Advances for the Topographic Characterisation of SMC Materials
Calvimontes, Alfredo; Grundke, Karina; Müller, Anett; Stamm, Manfred
2009-01-01
For a comprehensive study of Sheet Moulding Compound (SMC) surfaces, topographical data obtained by a contact-free optical method (chromatic aberration confocal imaging) were systematically acquired to characterise these surfaces with regard to their statistical, functional and volumetric properties. Optimal sampling conditions (cut-off length and resolution) were obtained by a topographical-statistical procedure proposed in the present work. By using different length scales, specific morphologies due to the influence of moulding conditions, metallic mould topography, glass fibre content and glass fibre orientation can be characterised. The aim of this study is to suggest a systematic topographical characterisation procedure for composite materials in order to study and recognise the influence of production conditions on their surface quality.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
2011-01-01
When applying echo-Doppler imaging for either clinical or research purposes, it is very important to select the most adequate modality/technology and choose the most reliable and reproducible measurements. Quality control is a mainstay to reduce variability among institutions and operators and must be obtained by using appropriate procedures for the acquisition, storage and interpretation of echo-Doppler data. This goal can be achieved by employing an echo core laboratory (ECL), with the responsibility for standardizing image acquisition processes (performed at the peripheral echo-labs) and analysis (by monitoring and optimizing the internal intra- and inter-reader variability of measurements). Accordingly, the Working Group of Echocardiography of the Italian Society of Cardiology decided to design standardized procedures for imaging acquisition in peripheral laboratories and for reading procedures, and to propose a methodological approach to assess the reproducibility of echo-Doppler parameters of cardiac structure and function by using both standard and advanced technologies. A number of cardiologists experienced in cardiac ultrasound were involved in setting up an ECL available for future studies involving complex imaging or including echo-Doppler measures as primary or secondary efficacy or safety end-points. The present manuscript describes the methodology of the procedures (imaging acquisition and measurement reading) and provides the documentation of the work done so far to test the reproducibility of the different echo-Doppler modalities (standard and advanced). These procedures can also be suggested for use in non-referral echocardiographic laboratories as an "inside" quality check, with the aim of optimizing the clinical consistency of echo-Doppler data. PMID:21943283
ERIC Educational Resources Information Center
Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.
2017-01-01
A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…
Optimization of reinforced concrete slabs
NASA Technical Reports Server (NTRS)
Ferritto, J. M.
1979-01-01
Reinforced concrete cells composed of concrete slabs and used to limit the effects of accidental explosions during hazardous explosives operations are analyzed. An automated design procedure which considers the dynamic nonlinear behavior of the reinforced concrete of arbitrary geometrical and structural configuration subjected to dynamic pressure loading is discussed. The optimum design of the slab is examined using an interior penalty function. The optimization procedure is presented and the results are discussed and compared with finite element analysis.
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes on the CDC 6600 computer.
Detonation Energies of Explosives by Optimized JCZ3 Procedures
NASA Astrophysics Data System (ADS)
Stiel, Leonard; Baker, Ernest
1997-07-01
Procedures for the detonation properties of explosives have been extended for the calculation of detonation energies at adiabatic expansion conditions. Advanced variable metric optimization routines developed by ARDEC are utilized to establish chemical reaction equilibrium by the minimization of the Helmholtz free energy of the system. The use of the JCZ3 equation of state with optimized Exp-6 potential parameters leads to lower errors in JWL detonation energies than the TIGER JCZ3 procedure and other methods tested for relative volumes to 7.0. For the principal isentrope with C-J parameters and freeze conditions established at elevated pressures with the JCZ3 equation of state, best results are obtained if an alternate volumetric relationship is utilized at the highest expansions. Efficient subroutines (designated JAGUAR) have been developed which incorporate the ability to automatically generate JWL and JWLB equation of state parameters.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
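For reference, the Pareto optimality targeted by such a GA reduces to a pairwise dominance test over objective vectors; below is a minimal non-dominated filter in Python (an illustration of the concept, not the authors' binning selection algorithm):

    import numpy as np

    def pareto_front(F):
        # Indices of non-dominated rows of objective matrix F (minimization).
        # Row j dominates row i if it is <= in every objective and < in one.
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if np.any(dominators):
                keep[i] = False
        return np.flatnonzero(keep)

    F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
    print(pareto_front(F))   # [0 1 3]; the point [3, 3] is dominated by [2, 2]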
Cryogenic Tank Structure Sizing With Structural Optimization Method
NASA Technical Reports Server (NTRS)
Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.
2001-01-01
Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.
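The restart idea is generic to gradient-based optimizers: collect local optima from several runs, take the best design variables found so far as a new starting point, and re-optimize. A toy multi-start sketch with SciPy (the actual structural sizing lives in MSC/NASTRAN and is not reproduced; the objective is a stand-in):

    import numpy as np
    from scipy.optimize import minimize

    def weight(x):
        # Non-convex stand-in objective with local minima of different depth
        return np.sin(3.0 * x[0]) + 0.1 * (x[0] - 0.5) ** 2

    starts = [np.array([-2.0]), np.array([0.0]), np.array([2.0])]
    runs = [minimize(weight, x0, method="BFGS") for x0 in starts]

    # New starting point assembled from the best previous design variables
    best = min(runs, key=lambda r: r.fun)
    refined = minimize(weight, best.x, method="BFGS")
    print(best.fun, refined.fun)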
de Oliveira, Gabriel Barros; de Castro Gomes Vieira, Carolyne Menezes; Orlando, Ricardo Mathias; Faria, Adriana Ferreira
2017-10-15
This work involved the optimization and validation of a method, according to Directive 2002/657/EC and the Analytical Quality Assurance Manual of Ministério da Agricultura, Pecuária e Abastecimento, Brazil, for simultaneous extraction and determination of fumonisins B1 and B2 in maize. The extraction procedure was based on a matrix solid phase dispersion approach, the optimization of which employed a sequence of different factorial designs. A liquid chromatography-tandem mass spectrometry method was developed for determining these analytes using the selected reaction monitoring mode. The optimized method employed only 1 g of silica gel for dispersion and elution with 70% ammonium formate aqueous buffer (50 mmol L-1, pH 9), representing a simple, cheap and chemically friendly sample preparation method. Trueness (recoveries: 86-106%), precision (RSD ≤ 19%), decision limits, detection capabilities and measurement uncertainties were calculated for the validated method. The method scope was expanded to popcorn kernels, white maize kernels and yellow maize grits. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread function (PSF)s, each calculated iteratively via blind deconvolution.. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution was efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution was increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by maximal a factor of 4 to 6, depending on the particular dataset, and up to 8 nm maximal, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested in the 3D locations of nanoparticles only. PMID:22152090
Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers
NASA Technical Reports Server (NTRS)
Branner, G. R.; Chan, S.-P.
1975-01-01
This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automated search, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
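The weight-tuning step of the first approach can be sketched with SciPy's implementation of the Nelder-Mead downhill simplex; the map-integration function, data layers, and first-guess weights below are placeholders, not the authors':

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    layers = rng.normal(size=(3, 500))   # structure, isopach, petrophysical maps (stand-ins)
    target = 0.6 * layers[0] + 0.3 * layers[1] + 0.1 * rng.normal(size=500)

    def misfit(w):
        # Algebraic map integration as a weighted combination (placeholder)
        return np.mean((w @ layers - target) ** 2)

    w0 = np.array([1/3, 1/3, 1/3])       # geologic first guess; the solution is non-unique
    res = minimize(misfit, w0, method="Nelder-Mead")
    print(res.x)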
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the `best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
Kazemi, Elahe; Dadfarnia, Shayessteh; Haji Shabani, Ali Mohammad; Ranjbar, Mansoureh
2017-12-15
A novel Zn(II) imprinted polymer was synthesized via a co-precipitation method using a graphene oxide/magnetic chitosan nanocomposite as supporting material. The synthesized imprinted polymer was characterized by Fourier transform infrared spectrometry (FTIR) and scanning electron microscopy (SEM) and applied as a sorbent for selective magnetic solid phase extraction of zinc followed by its determination by flame atomic absorption spectrometry. Kinetic and isothermal adsorption experiments were carried out, and all parameters affecting the extraction process were optimized. Under the optimal experimental conditions, the developed procedure exhibits a linear dynamic range of 0.5-5.0 µg L-1 with a detection limit of 0.09 µg L-1 and a quantification limit of 0.3 µg L-1. The maximum sorption capacity of the sorbent was found to be 71.4 mg g-1. The developed procedure was successfully applied to the selective extraction and determination of zinc in various samples including well water, drinking water, black tea, rice, and milk. Copyright © 2017 Elsevier Ltd. All rights reserved.
Diffeomorphic demons: efficient non-parametric image registration.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2009-03-01
We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
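The computational point at the heart of the second part, replacing addition of displacement fields by composition, can be illustrated for 2-D fields as follows (a sketch of the operation only; the paper's algorithm composes the current transformation with an exponentiated update field):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def compose(u, v):
        # Composition of displacement fields, (u o v)(x) = v(x) + u(x + v(x)),
        # for fields of shape (2, H, W); linear interpolation samples u off-grid.
        H, W = u.shape[1:]
        grid = np.mgrid[0:H, 0:W].astype(float)
        warped = grid + v
        u_warped = np.stack([map_coordinates(u[i], warped, order=1, mode="nearest")
                             for i in range(2)])
        return v + u_warped  # reduces to plain u + v only when u is constant

With small update fields, each composition stays close to invertible, which is how the algorithm keeps the overall transformation near-diffeomorphic at roughly the cost of an addition.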
Salvo, Andrea; La Torre, Giovanna Loredana; Di Stefano, Vita; Capocchiano, Valentina; Mangano, Valentina; Saija, Emanuele; Pellizzeri, Vito; Casale, Katia Erminia; Dugo, Giacomo
2017-04-15
A fast reversed-phase UPLC method was developed for squalene determination in Sicilian pistachio samples entered in the European register of products with P.D.O. In the present study the SPE procedure was optimized for the squalene extraction prior to the UPLC/PDA analysis. The precision of the full analytical procedure was satisfactory, and the mean recoveries were 92.8 ± 0.3% and 96.6 ± 0.1% for the 25 and 50 mg L-1 levels of addition, respectively. The selected chromatographic conditions allowed a very fast squalene determination; in fact, it was well separated in ∼0.54 min with good resolution. Squalene was detected in all the pistachio samples analyzed, and the levels ranged from 55.45 to 226.34 mg kg-1. Comparing our results with those of other studies, it emerges that squalene contents in P.D.O. Sicilian pistachio samples were generally higher than those measured for other samples of different geographic origins. Copyright © 2016 Elsevier Ltd. All rights reserved.
Raza, Asad; Zia-Ul-Haq, Muhammad
2011-01-01
Two simple, fast, and accurate spectrophotometric methods for the determination of alendronate sodium are described. The methods are based on charge-transfer complex formation of the drug with two π-electron acceptors, 7,7,8,8-tetracyanoquinodimethane (TCNQ) and 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), in acetonitrile and methanol medium. The reactions are followed spectrophotometrically by measuring the maximum absorbance at 840 nm and 465 nm, respectively. Under the optimized experimental conditions, the calibration curves showed a linear relationship over the concentration ranges of 2-10 μg mL(-1) and 2-12 μg mL(-1), respectively. The optimal reaction conditions, such as the reagent concentration, heating time, and stability of the reaction product, were determined. No significant difference was obtained between the results of the newly proposed methods and the B.P. titrimetric procedures. The charge transfer approach using the TCNQ and DDQ procedures described in this paper is simple, fast, accurate, precise, and extraction-free. PMID:21760789
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to the subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separate, preliminarily optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
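The conventional pipeline the proposal refines, Gabor filtering followed by two-bit phase quantization with a fragility mask, can be sketched as follows; the kernel parameters and the fragility threshold are illustrative assumptions, not the optimized values of the paper:

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size=15, wavelength=8.0, sigma=4.0):
        # Complex 2-D Gabor kernel, horizontal orientation
        ax = np.arange(size) - size // 2
        x, y = np.meshgrid(ax, ax)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

    def encode(normalized_iris, fragile_frac=0.05):
        # Two bits per location (signs of real and imaginary responses);
        # low-magnitude responses sit near quadrant borders, flip easily
        # between acquisitions, and are masked out as fragile.
        resp = fftconvolve(normalized_iris, gabor_kernel(), mode="same")
        code = np.stack([resp.real > 0, resp.imag > 0])
        stable = np.abs(resp) > fragile_frac * np.abs(resp).max()
        return code, stable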
Scheyer, Anne; Briand, Olivier; Morville, Stéphane; Mirabel, Philippe; Millet, Maurice
2007-01-01
Solid-phase microextraction (SPME) was used for the analysis of some pesticides (bromoxynil, chlorotoluron, diuron, isoproturon, 2,4-MCPA, MCPP and 2,4-D) in rainwater after derivatisation with PFBBr and gas chromatography-ion trap mass spectrometry. The derivatisation procedure was optimized by testing different methods: direct derivatisation in the aqueous phase followed by SPME extraction, on-fibre derivatisation, and derivatisation in the injector. The best result was obtained by headspace coating the PDMS/DVB fibre with PFBBr for 10 min followed by direct SPME extraction for 60 min at 68 degrees C (pH 2 and 75% NaCl). Good detection limits were obtained for all the compounds: these ranged between 10 and 1,000 ng L-1, with a relatively high uncertainty due to the combination of derivatisation and SPME extraction steps. The optimized procedure was applied to the analysis of pesticides in rainwater, and the results obtained show that this method is a fast and simple technique to assess the spatial and temporal variations of pesticide concentrations in rainwater.
Determining the Ocean's Role on the Variable Gravity Field and Earth Rotation
NASA Technical Reports Server (NTRS)
Ponte, Rui M.; Frey, H. (Technical Monitor)
2000-01-01
A number of ocean models of different complexity have been used to study changes in the oceanic angular momentum (OAM) and mass fields and their relation to the variable Earth rotation and gravity field. Time scales examined range from seasonal to a few days. Results point to the importance of oceanic signals in driving polar motion, in particular the Chandler and annual wobbles. Results also show that oceanic signals have a measurable impact on length-of-day variations. Various circulation features and associated mass signals, including the North Pacific subtropical gyre, the equatorial currents, and the Antarctic Circumpolar Current, play a significant role in oceanic angular momentum variability. The impact on OAM values of an optimization procedure that uses available data to constrain ocean model results was also tested for the first time. The optimization procedure yielded substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained OAM values were found to yield noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale.
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach to the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model in selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity to the optimization procedure.
Longenecker, R J; Galazyuk, A V
2012-11-16
Recently, prepulse inhibition of the acoustic startle reflex (ASR) became a popular technique for tinnitus assessment in laboratory animals. This method confers a significant advantage over the previously used time-consuming behavioral approaches utilizing basic mechanisms of conditioning. Although this technique has been successfully used to assess tinnitus in different laboratory animals, many of the finer details of this methodology have not been described well enough to be replicated, yet are critical for tinnitus assessment. Here we provide a detailed description of key procedures and methodological issues to guide newcomers through the process of learning to correctly apply gap detection techniques for tinnitus assessment in laboratory animals. The major categories of these issues include: refinement of hardware for best performance, optimization of stimulus parameters, behavioral considerations, and identification of optimal strategies for data analysis. This article is part of a Special Issue entitled: Tinnitus Neuroscience. Copyright © 2012. Published by Elsevier B.V.
Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array
NASA Astrophysics Data System (ADS)
Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.
2007-04-01
Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.
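The summing (add-back) side of such schemes is conceptually simple: energies deposited in coincident crystals of one clover are summed so that gamma rays Compton-scattered between crystals contribute to the full-energy peak rather than the background. A minimal sketch (threshold and energies illustrative; the position-sensitive suppression logic of the paper is not reproduced):

    def addback(crystal_energies, threshold=20.0):
        # Sum coincident energy depositions (keV) above threshold within
        # one clover; scattering between crystals then contributes to the
        # full-energy peak instead of the Compton background.
        hits = [e for e in crystal_energies if e > threshold]
        return sum(hits) if hits else None

    print(addback([870.3, 462.2, 0.0, 5.1]))  # -> 1332.5 (illustrative 60Co line)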
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study and applied to a low-boom supersonic aircraft. The approach is based on an unconstrained optimization problem and proceeds in two steps. First, starting design variables are computed using the least squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
Optimization of an Offset Receiver Optics for Radio Telescopes
NASA Astrophysics Data System (ADS)
Yeap, Kim Ho; Tham, Choy Yoong
2018-01-01
The latest generation of Cassegrain radio astronomy antennas is designed for multiple frequency bands, with receivers for individual bands offset from the antenna axis. The offset feed arrangement typically has two focusing elements in the form of ellipsoidal mirrors in the optical path between the feed horn and the antenna focus. This arrangement aligns the beam from the offset feed horn to illuminate the subreflector. The additional focusing elements increase the number of design variables, namely the distances between the horn aperture and the first mirror and between the two mirrors, and their focal lengths. There is a huge number of possible combinations of these four variables that the optics system can take on. The design aim is to seek the combination that gives the optimum antenna efficiency, not only at the centre frequency of the particular band but also across its bandwidth. Picking the optimum combination of the variables requires working through, by computational means, a continuous range of variable values at different frequencies that fit the optics system within the allocated physical space. Physical optics (PO) is a common technique used in optics design. However, due to the huge number of computations involved in repeated iterations, the use of PO is not feasible. We present a procedure based on using multimode Gaussian optics to pick the optimum design and using PO for final verification of the system performance. The best antenna efficiency is achieved when the beam illuminating the subreflector is truncated with the optimum edge taper. The optimization procedure uses the beam's edge taper at the subreflector as the iteration target. The band 6 receiver optics design for the Atacama Large Millimetre Array (ALMA) antenna is used to illustrate the optimization procedure.
Global linear-irreversible principle for optimization in finite-time thermodynamics
NASA Astrophysics Data System (ADS)
Johal, Ramandeep S.
2018-03-01
There is intense effort into understanding the universal properties of finite-time models of thermal machines at optimal performance, such as the efficiency at maximum power, the coefficient of performance at maximum cooling power, and other such criteria. In this letter, a global principle consistent with linear irreversible thermodynamics is proposed for the whole cycle, without considering details of irreversibilities in the individual steps of the cycle. This helps to express the total duration of the cycle as $\tau \propto \bar{Q}^{2}/\Delta_{\mathrm{tot}}S$, where $\bar{Q}$ models the effective heat transferred through the machine during the cycle, and $\Delta_{\mathrm{tot}}S$ is the total entropy generated. By taking $\bar{Q}$ in the form of simple algebraic means (such as arithmetic and geometric means) over the heats exchanged by the reservoirs, the present approach is able to predict various standard expressions for figures of merit at optimal performance, as well as the bounds respected by them. It simplifies the optimization procedure to a one-parameter optimization, and provides a fresh perspective on the issue of universality at optimal performance, for small difference in reservoir temperatures. As an illustration, we compare the performance of a partially optimized four-step endoreversible cycle with the present approach.
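In this notation, the scaling relation and the two mean choices named in the abstract read (a restatement; the labels $Q_h$ and $Q_c$ for the heats exchanged with the hot and cold reservoirs are assumed here):

$$\tau \propto \frac{\bar{Q}^{2}}{\Delta_{\mathrm{tot}}S}, \qquad \bar{Q}_{\mathrm{arith}} = \frac{Q_h + Q_c}{2}, \qquad \bar{Q}_{\mathrm{geom}} = \sqrt{Q_h Q_c}.$$

For a cyclic machine with work output $W = Q_h - Q_c$, the average power then scales as $P = W/\tau \propto (Q_h - Q_c)\,\Delta_{\mathrm{tot}}S/\bar{Q}^{2}$, which appears to be the quantity whose one-parameter maximization the abstract refers to.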
Goldberg, Kenneth A; Yashchuk, Valeriy V
2016-05-01
For glancing-incidence optical systems, such as short-wavelength optics used for nano-focusing, incorporating physical factors in the calculations used for shape optimization can improve performance. Wavefront metrology, including the measurement of a mirror's shape or slope, is routinely used as input for mirror figure optimization on mirrors that can be bent, actuated, positioned, or aligned. Modeling shows that when the incident power distribution, distance from focus, angle of incidence, and the spatially varying reflectivity are included in the optimization, higher Strehl ratios can be achieved. Following the works of Maréchal and Mahajan, optimization of the Strehl ratio (for peak intensity with a coherently illuminated system) occurs when the expectation value of the phase error's variance is minimized. We describe an optimization procedure based on regression analysis that incorporates these physical parameters. This approach is suitable for coherently illuminated systems of nearly diffraction-limited quality. Mathematically, this work is an enhancement of the methods commonly applied for ex situ alignment based on uniform weighting of all points on the surface (or a sub-region of the surface). It follows a similar approach to the optimization of apodized and non-uniformly illuminated optical systems. Significantly, it reaches a different conclusion than a more recent approach based on minimization of focal plane ray errors.
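The Maréchal-Mahajan relation behind this criterion ties the Strehl ratio $S$ to the variance of the residual wavefront phase; with a weighting $P(x)$ supplied by the incident power distribution and the spatially varying reflectivity (the weighted form is our reading of the abstract, and the symbols are assumptions):

$$S \approx \exp\!\left(-\sigma_{\phi}^{2}\right), \qquad \sigma_{\phi}^{2} = \left(\frac{2\pi}{\lambda}\right)^{2} \frac{\int P(x)\left[\Delta w(x) - \overline{\Delta w}\right]^{2} dx}{\int P(x)\, dx},$$

so maximizing $S$ for a nearly diffraction-limited system reduces to the weighted least-squares problem that the regression-based procedure solves.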
Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
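One way to phrase the identifiability check described here is as a rank condition on the linear balance equations: after the measured columns are moved to the right-hand side, an unmeasured variable is identifiable exactly when its unit vector lies in the row space of the remaining balance matrix. A small numpy sketch (the matrix and measurement split are illustrative, not the paper's full-scale WWTP):

    import numpy as np

    def identifiable(A, measured, j):
        # Balances A @ x = 0; with measured entries fixed, x_j is uniquely
        # determined iff e_j lies in the row space of A restricted to the
        # unmeasured columns (i.e. appending e_j does not raise the rank).
        unmeasured = [c for c in range(A.shape[1]) if c not in measured]
        Au = A[:, unmeasured]
        e = np.zeros(len(unmeasured))
        e[unmeasured.index(j)] = 1.0
        return np.linalg.matrix_rank(np.vstack([Au, e])) == np.linalg.matrix_rank(Au)

    # Two balances over four flows; flows 0 and 3 are measured:
    A = np.array([[1.0, -1.0, -1.0,  0.0],
                  [0.0,  1.0,  1.0, -1.0]])
    print(identifiable(A, measured=[0, 3], j=1))  # False: only the sum of flows 1+2 is fixed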
A new procedure to analyze the effect of air changes in building energy consumption
2014-01-01
Background: Today, the International Energy Agency is working under good practice guides that integrate appropriate and cost-effective technologies. In this paper a new procedure to define building energy consumption in accordance with the ISO 13790 standard was developed and tested based on real data from a Spanish region. Results: The effect of air changes on building energy consumption can be defined using the Weibull peak function model. Furthermore, the effect of climate change on building energy consumption under several different air changes was nearly nil during the summer season. Conclusions: The procedure obtained could be the much sought-after solution to the problem stated by researchers in the past, and future research relating to this new methodology could help define the optimal improvement in real buildings to reduce energy consumption, and its related carbon dioxide emissions, at minimal economic cost. PMID:24456655
NASA Astrophysics Data System (ADS)
Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.
2005-09-01
A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economic way to upgrade existing PONs.
Ethnic and Gender Considerations in the Use of Facial Injectables: Asian Patients.
Liew, Steven
2015-11-01
Asians have distinct facial characteristics due to underlying skeletal and morphological features that differ greatly from those of whites. This, together with higher sun protection factors and differences in the quality of the skin and soft tissue, has a profound effect on their aging process. Understanding these differences and their effects on the aging process in Asians is crucial in determining the effective utilization and placement of injectable products to ensure optimal aesthetic outcomes. For younger Asian women, the main treatment goal is to address the inherent structural deficits through reshaping and the provision of facial support. Facial injectables are used to provide anterior projection, to reduce facial width, and to lengthen facial height. In the older group, the aim is rejuvenation and also to address the underlying structural issues that have been compounded by age-related volume loss. Asian women requesting cosmetic procedures do not want to be Westernized but rather seek to enhance and optimize their Asian ethnic features.
[AWAKE CRANIOTOMY: IN SEARCH FOR OPTIMAL SEDATION].
Kulikova, A S; Sel'kov, D A; Kobyakov, G L; Shmigel'skiy, A V; Lubnin, A Yu
2015-01-01
Awake craniotomy is the "gold standard" for intraoperative brain language mapping. One of the main anesthetic challenges of awake craniotomy is providing optimal sedation for the initial stages of the intervention. The goal of this study was to compare different techniques of anesthesia for awake craniotomy. Materials and methods: 162 operations were divided into 4 groups: 76 cases with propofol sedation (2-4 mg/kg/h) without airway protection; 11 cases with propofol sedation (4-5 mg/kg/h) with mechanical ventilation via LMA; 36 cases of xenon anesthesia; and 39 cases with dexmedetomidine sedation without airway protection. Results and discussion: brain language mapping was successful in 90% of cases, with no difference between groups in the success of brain mapping. However, respiratory complications were more frequent in the first group; the three other techniques were safer. Xenon anesthesia was associated with ultrafast awakening for mapping (5 ± 1 min). Dexmedetomidine sedation provided high hemodynamic and respiratory stability during the procedure.
DC breakdown characteristics of silicone polymer composites for HVDC insulator applications
NASA Astrophysics Data System (ADS)
Han, Byung-Jo; Seo, In-Jin; Seong, Jae-Kyu; Hwang, Young-Ho; Yang, Hai-Won
2015-11-01
Critical components for HVDC transmission systems are polymer insulators, which have stricter requirements that are more difficult to achieve compared to those of HVAC insulators. In this study, we investigated the optimal design of HVDC polymer insulators by using a DC electric field analysis and experiments. The physical properties of the polymer specimens were analyzed to develop an optimal HVDC polymer material, and four polymer specimens were prepared for DC breakdown experiments. Single and reverse polarity breakdown tests were conducted to analyze the effect of temperature on the breakdown strength of the polymer. In addition, electric fields were analyzed via simulations, in which a small-scale polymer insulator model was applied to prevent dielectric breakdown due to electric field concentration, with four DC operating conditions taken into consideration. The experimental results show that the electrical breakdown strength and the electric field distribution exhibit significant differences in relation to different DC polarity transition procedures.
Embedded sparse representation of fMRI data via group-wise dictionary optimization
NASA Astrophysics Data System (ADS)
Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.
2016-03-01
Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.
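For orientation, plain (non-embedded) sparse dictionary learning on an fMRI-like matrix takes only a few lines with scikit-learn; the group-wise consistency criteria that define ESR are the paper's contribution and are not reproduced here (data shapes and hyperparameters are illustrative):

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 120))      # voxels x time points (stand-in data)

    learner = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
    spatial_maps = learner.fit_transform(X)   # (voxels, atoms): project back to the 3-D volume
    temporal_atoms = learner.components_      # (atoms, time): compare against task stimuli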
Structural tailoring of engine blades (STAEBL)
NASA Technical Reports Server (NTRS)
Platt, C. E.; Pratt, T. K.; Brown, K. W.
1982-01-01
A mathematical optimization procedure was developed for the structural tailoring of engine blades and was used to structurally tailor two engine fan blades constructed of composite materials without midspan shrouds. The first was a solid blade made from superhybrid composites, and the second was a hollow blade with metal matrix composite inlays. Three major computerized functions were needed to complete the procedure: approximate analysis with the established input variables, optimization of an objective function, and refined analysis for design verification.
2010-09-01
matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column... problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the... iterative procedure to find an appropriate subset of parameters that produce an optimal solution, commonly known as forward selection. Then, the
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Caporale, A; Doti, N; Monti, A; Sandomenico, A; Ruvo, M
2018-04-01
Solid-Phase Peptide Synthesis (SPPS) is a rapid and efficient methodology for the chemical synthesis of peptides and small proteins. However, the assembly of peptide sequences classified as "difficult" poses severe synthetic problems in SPPS for the occurrence of extensive aggregation of growing peptide chains which often leads to synthesis failure. In this framework, we have investigated the impact of different synthetic procedures on the yield and final purity of three well-known "difficult peptides" prepared using oxyma as additive for the coupling steps. In particular, we have comparatively investigated the use of piperidine and morpholine/DBU as deprotection reagents, the addition of DIPEA, collidine and N-methylmorpholine as bases to the coupling reagent. Moreover, the effect of different agitation modalities during the acylation reactions has been investigated. Data obtained represent a step forward in optimizing strategies for the synthesis of "difficult peptides". Copyright © 2018 Elsevier Inc. All rights reserved.
Jaime, Laura; Mendiola, José A; Herrero, Miguel; Soler-Rivas, Cristina; Santoyo, Susana; Señorans, F Javier; Cifuentes, Alejandro; Ibáñez, Elena
2005-11-01
A new procedure has been developed to separate and characterize antioxidant compounds from Spirulina platensis microalga based on the combination of pressurized liquid extraction (PLE) and different chromatographic procedures, such as TLC, at preparative scale, and HPLC with a diode array detector (DAD). Different solvents were tested for PLE extraction of antioxidants from S. platensis microalga. An optimized PLE process using ethanol (generally recognized as safe, GRAS) as extraction solvent has been obtained that provides natural extracts with high yields and good antioxidant properties. TLC analysis of this ethanolic extract obtained at 115 degrees C for 15 min was carried out and the silica layer was stained with a DPPH (diphenyl-pycril-hydrazyl) radical solution to determine the antioxidant activity of different chromatographic bands. Next, these colored bands were collected for their subsequent analysis by HPLC-DAD, revealing that the compounds with the most important antioxidant activity present in Spirulina extracts were carotenoids, as well as phenolic compounds and degradation products of chlorophylls.
Papadopoulou, Maria P; Nikolos, Ioannis K; Karatzas, George P
2010-01-01
Artificial Neural Networks (ANNs) comprise a powerful tool to approximate the complicated behavior and response of physical systems allowing considerable reduction in computation time during time-consuming optimization runs. In this work, a Radial Basis Function Artificial Neural Network (RBFN) is combined with a Differential Evolution (DE) algorithm to solve a water resources management problem, using an optimization procedure. The objective of the optimization scheme is to cover the daily water demand on the coastal aquifer east of the city of Heraklion, Crete, without reducing the subsurface water quality due to seawater intrusion. The RBFN is utilized as an on-line surrogate model to approximate the behavior of the aquifer and to replace some of the costly evaluations of an accurate numerical simulation model which solves the subsurface water flow differential equations. The RBFN is used as a local approximation model in such a way as to maintain the robustness of the DE algorithm. The results of this procedure are compared to the corresponding results obtained by using the Simplex method and by using the DE procedure without the surrogate model. As it is demonstrated, the use of the surrogate model accelerates the convergence of the DE optimization procedure and additionally provides a better solution at the same number of exact evaluations, compared to the original DE algorithm.
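The surrogate-assisted loop described here can be sketched with SciPy building blocks: fit a radial basis function model on exactly evaluated designs, let DE search the cheap surrogate, and verify candidates with the exact model. The objective below is a stand-in for the seawater-intrusion simulation, and the single verify step is a simplification of the paper's on-line retraining:

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import differential_evolution

    def simulator(x):
        # Expensive flow-model stand-in (illustrative)
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x).sum()

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(40, 2))        # designs evaluated exactly
    y = np.array([simulator(x) for x in X])

    surrogate = RBFInterpolator(X, y)              # on-line approximation of the response
    res = differential_evolution(lambda x: surrogate(x[None])[0],
                                 bounds=[(0.0, 1.0)] * 2, seed=1)
    print(res.x, simulator(res.x))                 # verify the surrogate optimum exactly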
Storage of cell samples for ToF-SIMS experiments-How to maintain sample integrity.
Schaepe, Kaija; Kokesch-Himmelreich, Julia; Rohnke, Marcus; Wagner, Alena-Svenja; Schaaf, Thimo; Henss, Anja; Wenisch, Sabine; Janek, Jürgen
2016-06-25
In order to obtain comparable and reproducible results from time-of-flight secondary ion mass spectrometry (ToF-SIMS) analysis of biological cells, the influence of sample preparation and storage has to be carefully considered. It has been previously shown that the impact of the chosen preparation routine is crucial. In continuation of this work, the impact of storage needs to be addressed: besides the fact that degradation will unavoidably take place, the effects of different storage procedures in combination with specific sample preparations remain largely unknown. Therefore, this work examines different wet (buffer, water, and alcohol) and dry (air-dried, freeze-dried, and critical-point-dried) storage procedures on human mesenchymal stem cell cultures. All cell samples were analyzed by ToF-SIMS immediately after preparation and after a storage period of 4 weeks. The obtained spectra were compared by principal component analysis with lipid- and amino acid-related signals known from the literature. In all dry storage procedures, notable degradation effects were observed, especially for lipid- but also for amino acid-signal intensities. This leads to the conclusion that dried samples are to some extent easier to handle, yet the procedure is not the optimal storage solution. Degradation proceeds faster, possibly caused by oxidation reactions and cleaving enzymes that might still be active. Likewise, samples stored wet in alcohol suffer from decreased lipid and amino acid signal intensities after storage. By comparison, samples stored wet in a buffered or pure aqueous environment revealed no degradation effects after 4 weeks. However, this storage bears a higher risk of fungal/bacterial contamination, as sterile conditions are typically not maintained; thus, regular solution change is recommended for optimized storage conditions. Not directly exposing the samples to air, wet storage seems to minimize oxidation effects, and hence buffer or water storage with regular renewal of the solution is recommended for short storage periods. PMID:26810048
Automated Bilateral Negotiation and Bargaining Impasse
NASA Astrophysics Data System (ADS)
Lopes, Fernando; Novais, A. Q.; Coelho, Helder
The design and implementation of autonomous negotiating agents involve the consideration of insights from multiple relevant research areas to integrate different perspectives on negotiation. As a starting point for an interdisciplinary research effort, this paper employs game-theoretic techniques to define equilibrium strategies for the bargaining game of alternating offers and formalizes a set of negotiation strategies studied in the social sciences. This paper also shifts the emphasis to negotiations that are "difficult" to resolve and can hit an impasse. Specifically, it analyses a situation where two agents bargain over the division of the surplus of several distinct issues to demonstrate how a procedure to avoid impasses can be utilized in a specific negotiation setting. The procedure is based on the addition of new issues to the agenda during the course of negotiation and the exploration of the differences in the valuation of these issues to capitalize on Pareto optimal agreements.
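For the alternating-offers game referred to above, the unique subgame-perfect equilibrium split of a unit surplus has a closed form in the players' discount factors (the classic Rubinstein result). The sketch below computes it; the discount factors are illustrative, and this is only the baseline game-theoretic ingredient, not the paper's full strategy formalization or impasse-avoidance procedure.

    def rubinstein_split(delta1: float, delta2: float) -> tuple:
        """Subgame-perfect equilibrium shares of a unit surplus when
        player 1 makes the first offer (Rubinstein, 1982)."""
        share1 = (1 - delta2) / (1 - delta1 * delta2)
        return share1, 1 - share1

    # Equally patient players: a modest first-mover advantage.
    print(rubinstein_split(0.9, 0.9))    # ~ (0.526, 0.474)
    # A more patient player 2 captures more of the surplus.
    print(rubinstein_split(0.9, 0.99))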
78 FR 54509 - Tenth Meeting: RTCA Next Gen Advisory Committee (NAC)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
... Capabilities Work Group. Recommendation for Future Metroplex Optimization Activity. • Recommendation for Future Use of Optimization of Airspace and Procedures in the Metroplex (OAPM) developed by the...
A Randomized Study of a Method for Optimizing Adolescent Assent to Biomedical Research
Annett, Robert D.; Brody, Janet L.; Scherer, David G.; Turner, Charles W.; Dalen, Jeanne; Raissy, Hengameh
2018-01-01
Purpose: Voluntary consent/assent with adolescents invited to participate in research raises challenging problems. No studies to date have attempted to manipulate autonomy in relation to assent/consent processes. This study evaluated the effects of an autonomy-enhanced individualized assent/consent procedure embedded within a randomized pediatric asthma clinical trial. Methods: Families were randomly assigned to remain together or to be separated during the consent/assent process, the latter of which we characterize as an autonomy-enhanced assent/consent procedure. We hypothesized that separating adolescents from their parents would improve adolescent assent by increasing knowledge and appreciation of the clinical trial and willingness to participate. Results: Sixty-four adolescent-parent dyads completed the procedures. The together-versus-separate randomization made no difference in adolescent or parent willingness to participate. However, significant differences were found in both parent and adolescent knowledge of the asthma clinical trial based on the assent/consent procedure and adolescent age. The separate assent/consent procedure improved knowledge of study risks and benefits for older adolescents and their parents, but not for younger adolescents or their parents. Regardless of the assent/consent process, younger adolescents had lower comprehension of information associated with the study medication and research risks and benefits, but not of study procedures or their research rights and privileges. Conclusions: The use of an autonomy-enhanced assent/consent procedure for adolescents may improve their and their parents' informed assent/consent without affecting research participation decisions. Traditional assent/consent procedures may result in a "diffusion of responsibility" effect between parents and older adolescents, specifically in attending to key information associated with study risks and benefits. PMID:28949898
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good runtime performance, especially on large datasets, so existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and well solvable. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
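The kernel target alignment criterion mentioned above measures how well a kernel matrix K matches the label structure yyᵀ. The sketch below computes it for a Gaussian kernel on synthetic two-class data; the coarse grid over sigma is only a stand-in for the paper's d.c.-based global optimization.

    import numpy as np

    def gaussian_kernel(X, sigma):
        # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))

    def alignment(K, y):
        """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
        yyT = np.outer(y, y)
        return (K * yyT).sum() / (np.linalg.norm(K) * np.linalg.norm(yyT))

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)

    # Higher alignment indicates a kernel better matched to the labels.
    for sigma in [0.1, 0.5, 1.0, 2.0, 5.0]:
        print(sigma, round(alignment(gaussian_kernel(X, sigma), y), 3))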
NASA Astrophysics Data System (ADS)
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now in common use, especially in the automotive industry, where they combine a mechanical structure and an electronic control circuit on a single board. Their widespread use motivates the development of virtual prototyping tools that can be applied in design optimization while accounting for technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices based on the theory of reliability-based robust design optimization, which takes into consideration the performance of a micro-device and its reliability as assessed by uncertainty analysis. The procedure assumes that, for each candidate design configuration, uncertainty propagation is assessed with a meta-modeling technique. The described procedure is illustrated with an example optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed several physical phenomena to be introduced to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertainty domains. Genetic algorithms fulfilled the defined optimization task effectively. The best individuals found are characterized by a minimized value of the multi-criteria objective function while satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.
Optimizing Chromatographic Separation: An Experiment Using an HPLC Simulator
ERIC Educational Resources Information Center
Shalliker, R. A.; Kayillo, S.; Dennis, G. R.
2008-01-01
Optimization of a chromatographic separation within the time constraints of a laboratory session is practically impossible. However, by employing an HPLC simulator, experiments can be designed that allow students to develop an appreciation of the complexities involved in optimization procedures. In the present exercise, an HPLC simulator from "JCE…
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are identified accurately by DNA-, RNA- or serology-based methods, which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy to detect Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among sample sets. Optimal wavenumbers for the two sample sets, selected from the second derivative spectra, were similar, indicating the efficacy of the selection. Chemometric methods, including partial least squares-discriminant analysis, support vector machines and extreme learning machines, were further used to quantitatively detect oilseed rape leaves infected by SSR. The discriminant models using the full spectra and the optimal wavenumbers of the two sample sets were effective, with classification accuracies over 80%. The discriminant results for the two sample sets varied due to variations in the samples. The use of two sample sets proved and validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the optimal wavenumbers selected in different sample sets make it feasible to simplify the models and build practical ones. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study supports the development of practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
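A sketch of the wavenumber-selection idea described above: compute second-derivative spectra and rank wavenumbers by the size of the healthy-versus-infected difference. The Savitzky-Golay filter is an assumption (the abstract does not name the derivative estimator), and the spectra are synthetic placeholders.

    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(2)
    wavenumbers = np.linspace(4000, 600, 500)          # illustrative axis (cm^-1)
    healthy = np.exp(-((wavenumbers - 1650) / 80) ** 2) + rng.normal(0, 0.01, 500)
    infected = (0.8 * np.exp(-((wavenumbers - 1650) / 80) ** 2)
                + 0.3 * np.exp(-((wavenumbers - 1050) / 40) ** 2)
                + rng.normal(0, 0.01, 500))

    # Second-derivative spectra (Savitzky-Golay) enhance band differences.
    d2_h = savgol_filter(healthy, window_length=21, polyorder=3, deriv=2)
    d2_i = savgol_filter(infected, window_length=21, polyorder=3, deriv=2)

    # Pick the wavenumbers with the largest second-derivative difference.
    diff = np.abs(d2_h - d2_i)
    top = wavenumbers[np.argsort(diff)[-5:]]
    print("candidate optimal wavenumbers:", np.sort(top))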
Welker, A; Wolcke, B; Schleppers, A; Schmeck, S B; Focke, U; Gervais, H W; Schmeck, J
2010-10-01
The introduction of the diagnosis-related groups reimbursement system has increased cost pressures. Because many different professional groups interact in the operating room (OR), analysis and optimization of internal coordination and scheduling in the OR are mandatory. The aim of this study was to analyze the processes at a university hospital in order to optimize strategies by identifying potential weak points. Over a period of 6 weeks before and 4 weeks after the intervention, process time intervals in the OR of a tertiary care (university) hospital were documented in a structured data collection sheet. The main source of labor inefficiency was underused OR capacity. Multifactorial reasons, particularly in the management of perioperative interfaces, led to vacant ORs. A significant deficit was the use of OR capacity at the end of the daily OR schedule. After harmonization of the working hours of different staff groups and implementation of several other changes, an increase in efficiency was verified. These results indicate that optimization of perioperative processes contributes considerably to the success of OR organization. Additionally, the implementation of standard operating procedures and a generally accepted OR statute are mandatory. In this way an efficient OR management can contribute to the economic success of a hospital.
Determination of Ignitable Liquids in Fire Debris: Direct Analysis by Electronic Nose
Ferreiro-González, Marta; Barbero, Gerardo F.; Palma, Miguel; Ayuso, Jesús; Álvarez, José A.; Barroso, Carmelo G.
2016-01-01
Arsonists usually use an accelerant in order to start or accelerate a fire. The most widely used analytical method to determine the presence of such accelerants consists of a pre-concentration step for the ignitable liquid residues followed by chromatographic analysis. A rapid analytical method based on headspace-mass spectrometry electronic nose (E-Nose) has been developed for the analysis of ignitable liquid residues (ILRs). The working conditions for the E-Nose analytical procedure were optimized by studying different fire debris samples. The optimized experimental variables were related to headspace generation, specifically incubation temperature and incubation time; the optimal conditions were 115 °C and 10 min, respectively. Chemometric tools such as hierarchical cluster analysis (HCA) and linear discriminant analysis (LDA) were applied to the MS data (45–200 m/z) to establish the most suitable spectroscopic signals for the discrimination of several ignitable liquids. The optimized method was applied to a set of fire debris samples. In order to simulate post-burn samples, several ignitable liquids (gasoline, diesel, citronella, kerosene, paraffin) were used to ignite different substrates (wood, cotton, cork, paper and paperboard). Full discrimination was obtained using discriminant analysis. The method reported here can be considered a green technique for fire debris analysis. PMID:27187407
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
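A minimal sketch of the Bayesian optimization loop described above, assuming the scikit-optimize package is available. The objective here is a synthetic surrogate for the cross-validated age-prediction MAE (the true objective would re-preprocess the scans and retrain the SVM at each call); the parameter ranges and the surrogate's optimum are illustrative only.

    from skopt import gp_minimize
    from skopt.space import Real

    def mae_surrogate(params):
        # Toy stand-in for the cross-validated MAE as a function of
        # (voxel size, smoothing kernel); smooth with a single minimum.
        voxel, kernel = params
        return 5.0 + 0.05 * (voxel - 3.73) ** 2 + 0.08 * (kernel - 3.68) ** 2

    result = gp_minimize(
        mae_surrogate,
        dimensions=[Real(1.0, 12.0, name="voxel_mm3"),
                    Real(0.0, 8.0, name="kernel_mm")],
        n_calls=25,        # each call models one expensive pipeline evaluation
        random_state=0)

    print("best parameters:", [round(v, 2) for v in result.x])
    print("surrogate MAE  :", round(result.fun, 3))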
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds, obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency tables, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes correct landslide predictions and minimizes incorrect predictions, we propose an index formed by a linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system: the choice is made by selecting the weights appropriately and searching for the optimal (maximum) value of the index. We discuss weaknesses in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and not been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation of the number of landslides can result in a significant decrease in the performance of a threshold as measured by the skill scores, and that the variations in the skill scores differ depending on whether the uncertainty concerns events above or below the threshold. This has consequences for the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably did not result in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
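The threshold-selection index described above is a weighted linear combination of skill scores computed from a contingency table. The sketch below shows one plausible instantiation; the counts, the particular scores (POD, FAR, Hanssen-Kuipers), and the weights are illustrative assumptions, not the authors' calibrated values.

    def skill_scores(tp, fn, fp, tn):
        """Contingency-table skill scores used to rank candidate thresholds."""
        pod = tp / (tp + fn)      # probability of detection (hit rate)
        pofd = fp / (fp + tn)     # probability of false detection
        far = fp / (tp + fp)      # false alarm ratio
        hk = pod - pofd           # Hanssen-Kuipers (true skill) statistic
        return pod, far, hk

    # Hypothetical (tp, fn, fp, tn) counts for three candidate thresholds:
    # hits, missed landslides, false alarms, correct nulls.
    candidates = {"low": (40, 2, 300, 950),
                  "mid": (35, 7, 120, 1130),
                  "high": (20, 22, 25, 1225)}

    # Weighted combination; the weights would reflect warning-system costs.
    w_pod, w_far, w_hk = 0.4, 0.3, 0.3
    for name, table in candidates.items():
        pod, far, hk = skill_scores(*table)
        index = w_pod * pod - w_far * far + w_hk * hk
        print(f"{name}: POD={pod:.2f} FAR={far:.2f} HK={hk:.2f} index={index:.2f}")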
Costa, Filipa; Gomes, Dora; Magalhães, Helena; Arrais, Rosário; Moreira, Graciete; Cruz, Maria Fátima; Silva, José Pedro; Santos, Lúcio; Sousa, Olga
2016-01-01
Objective: To characterize in vivo dose distributions during pelvic intraoperative electron radiation therapy (IOERT) for rectal cancer and to assess the alterations introduced by irregular irradiation surfaces in the presence of bevelled applicators. Methods: In vivo measurements were performed with Gafchromic films during 32 IOERT procedures. One film per procedure was used for the first 20 procedures; the methodology was then optimized for the remaining 12 procedures by using a set of 3 films. Both the average dose and the two-dimensional dose distribution for each film were determined. Phantom measurements were performed for comparison. Results: For flat and concave surfaces, the doses measured in vivo agree with expected values. For concave surfaces with step-like irregularities, measured doses tend to be higher than expected. Results obtained with three films per procedure show a large variability along the irradiated surface, with important differences from expected profiles. These results are consistent with the presence of surface hotspots, such as those observed in phantoms with step-like irregularities, as well as fluid build-up. Conclusion: Clinical dose distributions in the IOERT of rectal cancer are often different from the references used for prescription. Further studies are necessary to assess the impact of these differences on treatment outcomes. In vivo measurements are important, but need to be accompanied by accurate imaging of positioning and irradiated surfaces. Advances in knowledge: These results confirm that surface irregularities occur frequently in rectal cancer IOERT and have a measurable effect on the dose distribution. PMID:27188847
Determining which phenotypes underlie a pleiotropic signal
Majumdar, Arunabha; Haldar, Tanushree; Witte, John S.
2016-01-01
Discovering pleiotropic loci is important to understand the biological basis of seemingly distinct phenotypes. Most methods for assessing pleiotropy only test for the overall association between genetic variants and multiple phenotypes. To determine which specific traits are pleiotropic, we evaluate, via simulation and application, three different strategies. The first is model selection techniques based on the inverse regression of genotype on phenotypes. The second is a subset-based meta-analysis, ASSET [Bhattacharjee et al., 2012], which provides an optimal subset of non-null traits. The third is a modified Benjamini-Hochberg (B-H) procedure for controlling the expected false discovery rate [Benjamini and Hochberg, 1995] in the framework of a phenome-wide association study. From our simulations we see that an inverse regression based approach, MultiPhen [O'Reilly et al., 2012], is more powerful than ASSET for detecting overall pleiotropic association, except when all the phenotypes are associated and have genetic effects in the same direction. For determining which specific traits are pleiotropic, the modified B-H procedure performs consistently better than the other two methods. The inverse regression based selection methods perform competitively with the modified B-H procedure only when the phenotypes are weakly correlated. The efficiency of ASSET lies below that of the other two methods when the traits are weakly correlated and in between them when the traits are strongly correlated. In our application to a large GWAS, we find that the modified B-H procedure also performs well, indicating that this may be an optimal approach for determining the traits underlying a pleiotropic signal. PMID:27238845
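For reference, the sketch below implements the classic Benjamini-Hochberg step-up procedure on a vector of per-trait p-values. The paper uses a modified version, so this is the baseline method only, with hypothetical p-values.

    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return a boolean mask of rejected hypotheses at FDR level q."""
        p = np.asarray(pvals, dtype=float)
        order = np.argsort(p)
        m = len(p)
        # Find the largest k with p_(k) <= (k/m) * q, then reject the k
        # smallest p-values (step-up rule).
        below = p[order] <= (np.arange(1, m + 1) / m) * q
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])
            reject[order[:k + 1]] = True
        return reject

    # Hypothetical per-trait p-values at a pleiotropic locus:
    p = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
    print(benjamini_hochberg(p, q=0.05))   # traits declared non-null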
Best practice for perioperative management of patients with cytoreductive surgery and HIPEC.
Raspé, C; Flöther, L; Schneider, R; Bucher, M; Piso, P
2017-06-01
Due to the significantly improved outcome and quality of life of patients with different tumor entities after cytoreductive surgery (CRS) and HIPEC, an increasing number of centers perform CRS and HIPEC procedures. As the procedure is technically challenging, with potentially high morbidity and mortality, institutional experience in the anesthetic and intensive care departments is also essential for optimal treatment and prevention of adverse events. Clinical pathways have to be developed to achieve good results even in more comorbid patients with borderline indications and extensive surgical procedures. The anesthesiologist has to deal with relevant fluid, blood and protein losses, increased intraabdominal pressure, systemic hypo-/hyperthermia, and increased metabolic rate in patients undergoing cytoreductive surgery with HIPEC. It is of utmost importance to maintain or restore an adequate volume by aggressive substitution of intravenous fluids, which counteracts the increased fluid loss and venous capacitance during this procedure. Supplementary thoracic epidural analgesia, non-invasive ventilation, and physiotherapy are recommended to guarantee adequate pain therapy and postoperative extubation, as well as fast-track concepts. Advanced hemodynamic monitoring is essential to give the anesthesiologist real-time information about the fluid status of the patient. Preoperative preconditioning is mandatory in patients scheduled for HIPEC surgery and results in improved outcome. Postoperatively, volume status optimization, early nutritional support, sufficient anticoagulation, and point-of-care coagulation management are essential. This is an extensive update on all relevant topics for anesthetists and intensivists dealing with CRS and HIPEC. Copyright © 2016. Published by Elsevier Ltd.
Explant culture: An advantageous method for isolation of mesenchymal stem cells from human tissues.
Hendijani, Fatemeh
2017-04-01
Mesenchymal stem cell (MSC) research is progressively moving toward clinical phases. Accordingly, a wide range of different procedures has been presented in the literature for MSC isolation from human tissues; however, there is not yet a close focus on the details that would offer precise information for selecting the best method. Choosing a proper isolation method is a critical step in obtaining cells with optimal quality and yield, alongside clinical and economic considerations. In this regard, the current review discusses in detail the advantages of omitting the proteolysis step in the isolation process and of the presence of tissue pieces in the primary culture of MSCs, including removal of lytic stress on cells, reduction of in vivo to in vitro transition stress for migrated/isolated cells, reduction of cost, processing time and labour, removal of viral contamination risk, and the supporting functions of the extracellular matrix and of growth factors released from the tissue explant. It then provides an overall report of technical highlights and molecular events of the explant culture method for isolation of MSCs from human tissues, including adipose tissue, bone marrow, dental pulp, hair follicle, cornea, umbilical cord and placenta. A focused, informative collection of molecular and methodological data about explant methods makes it easy for researchers to choose an optimal method for their experiments/clinical studies and also stimulates them to investigate and optimize more efficient procedures according to clinical and economic benefits. © 2017 John Wiley & Sons Ltd.
Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica
2018-01-01
Non-targeted metabolomics research on the human volatile urinary metabolome can be used to identify potential biomarkers associated with changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace solid-phase microextraction (HS-SPME) procedure were evaluated and optimized. The influence of incubation and extraction temperatures and times, fibre coating material, and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values of 60°C for temperature, 50 min for extraction time, and 35 min for incubation time. The proposed conditions were applied to investigate the stability of urine samples under different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C for up to six months showed a decrease over time, although storage at -80°C resulted in only a slight, non-significant reduction compared with the fresh sample. However, due to the volatile nature of the analysed compounds, more than two freeze/thaw cycles should be avoided whenever possible for samples stored for six months at -80°C. Copyright © 2017 Elsevier B.V. All rights reserved.
Le-Wendling, Linda; Glick, Wesley; Tighe, Patrick
2017-12-01
As newer pharmacologic and procedural interventions, technology, and data on outcomes in pain management become available, effective acute pain management will require a dedicated Acute Pain Service (APS) to help determine the optimal pain management plan for patients. Goals for pain management must take into consideration the side effect profiles of drugs and the potential complications of procedural interventions. Multiple-objective optimization here means balancing several different objectives of acute pain management at once: simple use of opioids, for example, can reduce all pain to minimal levels, but at what cost to the patient, the medical system, and public health as a whole? Many APS models exist, based on personnel skills, knowledge and experience, but effective use of an APS will also require allocation of time, space, financial, and personnel resources, with clear objectives and a feedback mechanism to guide changes in acute pain medicine practice as the medical field evolves. Physician-based practices have the advantage of developing protocols for the management of low-variability, high-occurrence scenarios in addition to tailoring care to individual patients in high-variability, low-occurrence scenarios. Frequent feedback and data collection/assessment on patient outcomes are essential in evaluating the efficacy of the APS's interventions in improving patient outcomes in the acute and perioperative setting.
NASA Astrophysics Data System (ADS)
Li, Leihong
A modular structural design methodology for composite blades is developed. This method can be used to design composite rotor blades with sophisticated cross-sectional geometries. It hierarchically decomposes the highly coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted using multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used to obtain equivalent one-dimensional beam properties. Compared with traditional design methodology, the proposed method is more efficient and accurate. The method is then used to study three design problems that have not been investigated before. The first adds manufacturing constraints to the design optimization. The introduction of manufacturing constraints complicates the optimization process, but the resulting design benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines fatigue analysis with the optimization process; the durability (fatigue) analysis employs a strength-based model, and the design is subject to stiffness, frequency, and durability constraints. Finally, the impact of manufacturing uncertainty on rotor blade aeroelastic behavior is investigated, and a probabilistic design method is proposed to control the impact of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.
Bahrani, Sonia; Ghaedi, Mehrorang; Khoshnood Mansoorkhani, Mohammad Javad; Ostovan, Abbas
2017-01-01
A selective and rapid method was developed for the quantification of curcumin in human plasma and food samples using molecularly imprinted magnetic multiwalled carbon nanotubes (MMWCNTs), which were characterized with EDX and FESEM. The effects of sorbent mass, eluent volume and sonication time on the response of the solid-phase microextraction procedure were optimized by central composite design (CCD) combined with response surface methodology (RSM) using Statistica. Preliminary experiments revealed that, among different solvents, methanol:dimethyl sulfoxide (4:1 v/v) led to efficient and quantitative elution of the analyte. A reversed-phase high-performance liquid chromatographic technique with UV detection (HPLC-UV) was applied for the detection of curcumin. The assay involves chromatographic separation on an analytical Nucleosil C18 column (250 × 4.6 mm I.D., 5 μm particle size) at ambient temperature, with acetonitrile-water adjusted to pH 4.0 (20:80, v/v) as the mobile phase at a flow rate of 1.0 mL min⁻¹, while the UV detector was set at 420 nm. Under optimized conditions, the method demonstrated a linear calibration curve with a good detection limit (0.028 ng mL⁻¹) and R² = 0.9983. The proposed method was successfully applied to biological fluid and food samples including ginger powder, curry powder, and turmeric powder. Copyright © 2016. Published by Elsevier B.V.
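A sketch of the CCD/RSM workflow named above: fit a full quadratic response surface to coded factor settings by least squares and solve for the stationary point. The design matrix and responses are hypothetical placeholders, not the paper's data.

    import numpy as np
    from itertools import combinations

    # Coded settings for three factors (sorbent mass, eluent volume,
    # sonication time): 8 factorial, 2 center, and 6 axial runs.
    X = np.array([[-1,-1,-1],[ 1,-1,-1],[-1, 1,-1],[ 1, 1,-1],
                  [-1,-1, 1],[ 1,-1, 1],[-1, 1, 1],[ 1, 1, 1],
                  [ 0, 0, 0],[ 0, 0, 0],
                  [ 1.68, 0, 0],[-1.68, 0, 0],
                  [ 0, 1.68, 0],[ 0,-1.68, 0],
                  [ 0, 0, 1.68],[ 0, 0,-1.68]])
    y = np.array([60, 72, 65, 80, 66, 79, 70, 85,
                  82, 83, 75, 55, 78, 58, 77, 62.0])   # hypothetical responses

    def quad_features(X):
        # Intercept, linear, two-factor interaction, and squared terms.
        cols = [np.ones(len(X))]
        cols += [X[:, j] for j in range(X.shape[1])]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
        cols += [X[:, j] ** 2 for j in range(X.shape[1])]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

    # Stationary point of the fitted surface: solve grad = b + 2 B x = 0.
    B = np.diag(beta[7:10])
    for k, (i, j) in enumerate(combinations(range(3), 2)):
        B[i, j] = B[j, i] = beta[4 + k] / 2
    x_star = np.linalg.solve(-2 * B, beta[1:4])
    print("candidate optimal settings (coded units):", np.round(x_star, 2))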
NASA Astrophysics Data System (ADS)
Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG
2018-01-01
Mobile applications allow many users to access data without being limited by place and time. Over time the data population of such an application will grow, and data access time becomes a problem once tables reach tens of thousands to millions of records. The objective of this research is to maintain query performance for large record counts. One way to maintain data access time is to apply query optimization, in this case the heuristic query optimization method. The application studied is a mobile-based financial application using a MySQL database with stored procedures. The application serves more than one business entity in a single database, enabling rapid data growth. The stored procedures contain queries optimized with the heuristic method; optimization is performed on SELECT queries that involve more than one table and multiple clauses. Evaluation is done by calculating the average access time of optimized and unoptimized queries, repeated as the data population in the database grows. The evaluation results show that execution with heuristic query optimization is consistently faster than execution without it.
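A small illustration of the selection-pushdown heuristic at the heart of heuristic query optimization: filter a large table before joining rather than after. The schema and data are hypothetical, and SQLite is used here for self-containment (the paper's system uses MySQL stored procedures). Note that modern engines often apply this rewrite automatically, so the measured gap can be small.

    import sqlite3, time

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE entity(id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE txn(id INTEGER PRIMARY KEY, entity_id INT,
                         amount REAL, yr INT);
    """)
    conn.executemany("INSERT INTO entity VALUES (?, ?)",
                     [(i, f"e{i}") for i in range(100)])
    conn.executemany("INSERT INTO txn VALUES (?, ?, ?, ?)",
                     [(i, i % 100, float(i), 2015 + i % 5)
                      for i in range(200_000)])

    # Unoptimized: join first, filter the joined result afterwards.
    q1 = """SELECT e.name, SUM(t.amount) FROM entity e JOIN txn t
            ON e.id = t.entity_id WHERE t.yr = 2018 GROUP BY e.name"""
    # Heuristically optimized: restrict txn in a subquery before joining,
    # so the join sees far fewer rows.
    q2 = """SELECT e.name, SUM(t.amount) FROM entity e
            JOIN (SELECT entity_id, amount FROM txn WHERE yr = 2018) t
            ON e.id = t.entity_id GROUP BY e.name"""

    for q in (q1, q2):
        t0 = time.perf_counter()
        conn.execute(q).fetchall()
        print(f"{time.perf_counter() - t0:.4f} s")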
Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali
2018-05-11
The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. Moreover, the parameters of a PID controller are usually unknown and must be selected by the designer, which is neither straightforward nor guaranteed to be optimal. To cope with these drawbacks, this paper proposes two advanced controllers, a fuzzy PID (FPID) and a fractional-order fuzzy PID (FOFPID), to improve pitch control performance. The controller parameters are found with chaotic evolutionary optimization methods, which not only provide the unknown parameters but also guarantee optimality with respect to the chosen objective function. Chaotic maps are used to improve the performance of the evolutionary algorithms. All optimization procedures are applied to a two-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller achieves better performance and robustness, while incurring less fatigue damage at different wind speeds, than the FPID, fractional-order PID (FOPID) and gain-scheduled PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing
2017-07-19
Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditional WSN routing protocols often distribute network traffic load unevenly across sensor nodes, which is not optimal for network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting a cluster head or a next-hop node; obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm proceeds as follows: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it uses combinatorial optimization theory to establish the feasible routing set for each sensor node; and it applies the maximum-minimum criterion to obtain the optimal route to the base station. Extensive simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR).
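The maximum-minimum criterion mentioned above can be read as: among feasible routes, pick the one whose weakest node is strongest. A toy sketch, with hypothetical routes and residual energies standing in for the protocol's feasible routing sets:

    # Residual energy per node (hypothetical, normalized to [0, 1]).
    residual_energy = {"A": 0.9, "B": 0.4, "C": 0.7, "D": 0.8, "E": 0.6}

    # Candidate routes from a source node to the base station ("D" here).
    feasible_routes = [
        ["A", "B", "D"],
        ["A", "C", "D"],
        ["A", "C", "E", "D"],
    ]

    def bottleneck(route):
        # The route is only as strong as its most depleted node.
        return min(residual_energy[n] for n in route)

    best = max(feasible_routes, key=bottleneck)
    print("optimal route:", best, "bottleneck energy:", bottleneck(best))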
MO-G-18A-01: Radiation Dose Reducing Strategies in CT, Fluoroscopy and Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahesh, M; Gingold, E; Jones, A
2014-06-15
Advances in medical x-ray imaging have provided significant benefits to patient care. According to NCRP 160, more than 400 million x-ray procedures are performed annually in the United States alone, contributing nearly half of all radiation exposure to the US population. Similar growth trends in medical x-ray imaging are observed worldwide. The apparent increase in the number of medical x-ray imaging procedures and new protocols, and the associated radiation dose and risk, have drawn considerable attention. This has led to a number of technological innovations such as tube current modulation, iterative reconstruction algorithms, dose alerts, dose displays, flat panel digital detectors, high-efficiency digital detectors, storage phosphor radiography, and variable filters, which are enabling users to acquire medical x-ray images at much lower radiation dose. Alongside these, there are a number of radiation dose optimization strategies that users can adopt to effectively lower radiation dose in medical x-ray procedures. The main objectives of this SAM course are to provide information on the various radiation dose optimization strategies in CT, fluoroscopy and radiography and on how to implement them. Learning Objectives: To update the impact of technological advances on dose optimization in medical imaging. To identify radiation optimization strategies in computed tomography. To describe strategies for configuring fluoroscopic equipment that yield optimal images at reasonable radiation dose. To assess ways to configure digital radiography systems and recommend ways to improve image quality at optimal dose.
Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra
2018-01-15
The objective of the present work was to establish optimal biological and physicochemical parameters for removing lindane and Cr(VI) simultaneously from soil, at high and/or low pollutant concentrations, using an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. The final aim was to treat real soils from the Northwest of Argentina contaminated with Cr(VI) and/or lindane, employing the optimal biological and physicochemical conditions. After determining the optimal inoculum concentration (2 g kg⁻¹), an experimental design model with four factors (temperature, moisture, initial concentration of Cr(VI), and initial concentration of lindane) was employed to predict the behavior of the system during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature differed by case: for low initial concentrations of both pollutants it was 25°C; for low initial Cr(VI) and high initial lindane concentrations it was 30°C; and for high initial Cr(VI) concentrations it was 35°C. To confirm the adequacy of the model and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples, working at laboratory scale under the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.
Endoscopic hyperspectral imaging: light guide optimization for spectral light source
NASA Astrophysics Data System (ADS)
Browning, Craig M.; Mayes, Samuel; Rich, Thomas C.; Leavesley, Silas J.
2018-02-01
Hyperspectral imaging (HSI) is a technology used in remote sensing, food processing and documentation recovery. Recently, this approach has been applied in the medical field to spectrally interrogate regions of interest within respective substrates. In spectral imaging, a two-dimensional (spatial) image is collected at many different (spectral) wavelengths to sample spectral signatures from different regions and/or components within a sample. Here, we report on the use of hyperspectral imaging for endoscopic applications. Colorectal cancer is the third leading cancer by incidence and deaths in the US. One factor in its severity is the miss rate of precancerous/flat lesions (~65% detection accuracy). Integrating HSI into colonoscopy procedures could minimize misdiagnosis and unnecessary resections. We have previously reported a working prototype light source with 16 high-powered light-emitting diodes (LEDs) capable of high-speed cycling and imaging. In recent testing, we found that our current prototype is limited by transmission loss (~99%) through the multi-furcated solid light guide (lightpipe), so the desired framerate (20-30 fps) could not be achieved. Here, we report on a series of experimental and modeling studies to better optimize the lightpipe and the spectral endoscopy system as a whole. The lightpipe was experimentally evaluated using an integrating sphere and spectrometer (Ocean Optics). Modeling of the lightpipe was performed using Monte Carlo optical ray tracing in TracePro (Lambda Research Corp.). Results of these optimization studies will aid in manufacturing a revised prototype with the newly designed light guide and increased sensitivity. Once the desired optical output (5-10 mW) is achieved, the HSI endoscope system can be implemented without adding to the procedure time.
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains of up to an order of magnitude compared to non-hybrid calculations.
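For readers unfamiliar with the ingredients, the sketch below builds the classical second-order-accurate summation-by-parts (SBP) first-derivative operator D = H⁻¹Q on a uniform grid and checks its defining properties. This is a textbook operator used for illustration, not the paper's coupling procedure itself.

    import numpy as np

    def sbp_first_derivative(n, h):
        """Classical 2nd-order-accurate SBP first-derivative operator
        D = H^{-1} Q on n+1 uniform grid points with spacing h."""
        H = h * np.diag([0.5] + [1.0] * (n - 1) + [0.5])   # diagonal norm (quadrature)
        Q = 0.5 * (np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1))
        Q[0, 0], Q[-1, -1] = -0.5, 0.5
        return H, Q, np.linalg.inv(H) @ Q

    n, L = 20, 1.0
    x = np.linspace(0.0, L, n + 1)
    H, Q, D = sbp_first_derivative(n, L / n)

    print(np.allclose(D @ x, np.ones_like(x)))    # differentiates x exactly
    B = np.zeros_like(Q); B[0, 0], B[-1, -1] = -1.0, 1.0
    print(np.allclose(Q + Q.T, B))                # SBP property: Q + Q^T = B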
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
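A minimal second-moment reliability sketch in the spirit of the abstract above, assuming a linear limit state g = R - S with independent normal resistance and load; the statistics are hypothetical, and the full PFEM would propagate the randomness through the finite element model instead.

    from math import erf, sqrt

    mu_R, sd_R = 60.0, 5.0     # hypothetical fracture resistance statistics
    mu_S, sd_S = 40.0, 8.0     # hypothetical load-effect statistics

    mu_g = mu_R - mu_S                     # mean of the limit state g = R - S
    sd_g = sqrt(sd_R ** 2 + sd_S ** 2)     # R and S assumed independent
    beta = mu_g / sd_g                     # reliability (safety) index

    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    print("beta =", round(beta, 3), " Pf ~", round(Phi(-beta), 5))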
Shirzadi, Zahra; Crane, David E; Robertson, Andrew D; Maralani, Pejman J; Aviv, Richard I; Chappell, Michael A; Goldstein, Benjamin I; Black, Sandra E; MacIntosh, Bradley J
2015-11-01
To evaluate the impact of rejecting intermediate cerebral blood flow (CBF) images that are adversely affected by head motion during an arterial spin labeling (ASL) acquisition. Eighty participants were recruited, representing a wide age range (14-90 years) and heterogeneous cerebrovascular health conditions including bipolar disorder, chronic stroke, and moderate to severe white matter hyperintensities of presumed vascular origin. Pseudocontinuous ASL and T1-weighted anatomical images were acquired on a 3T scanner. Intermediate ASL CBF images were included based on their contribution to the mean estimate, with the goal of maximizing CBF detectability in gray matter (GM). Simulations were conducted to evaluate the performance of the proposed optimization procedure relative to other ASL postprocessing approaches. Clinical CBF images were also assessed visually by two experienced neuroradiologists. Optimized CBF images (CBFopt) had significantly greater agreement with a synthetic ground-truth CBF image and greater CBF detectability relative to the other ASL analysis methods (P < 0.05). Moreover, empirical CBFopt images showed a significantly improved signal-to-noise ratio relative to CBF images obtained from other postprocessing approaches (mean: 12.6%; range: 1% to 56%; P < 0.001), and this improvement was age-dependent (P = 0.03). Differences between CBF images from different analysis procedures were not perceptible by visual inspection, and there was moderate agreement between the ratings (κ = 0.44, P < 0.001). This study developed an automated, head-motion-threshold-free procedure to improve the detection of CBF in GM. The improvement in CBF image quality was larger for older participants. © 2015 Wiley Periodicals, Inc.
A quality of service negotiation procedure for distributed multimedia presentational applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafid, A.; Bochmann, G.V.; Kerherve, B.
Most current approaches to designing and implementing distributed multimedia (MM) presentational applications, e.g. news-on-demand, have concentrated on the performance of continuous media file servers in terms of seek time overhead and real-time disk scheduling; in particular, the QoS negotiation mechanisms they provide are used in a rather static manner, i.e. they are restricted to evaluating the capacity of certain system components, e.g. a file server known a priori to support a specific quality of service (QoS). In contrast to those approaches, we propose a general QoS negotiation framework that supports the dynamic choice of a configuration of system components to meet the QoS requirements of the user of a specific application: we consider different possible system configurations and select an optimal one to provide the appropriate QoS support. In this paper we document the design and implementation of a QoS negotiation procedure for distributed MM presentational applications, such as news-on-demand. The negotiation procedure described here is an instantiation of the general framework for QoS negotiation developed earlier. Our proposal differs in many respects from the negotiation functions provided by existing approaches: (1) the negotiation process uses an optimization approach to find a configuration of system components that supports the user requirements, (2) the negotiation process supports the negotiation of a MM document and not only a single monomedia object, (3) the QoS negotiation takes into account the cost to the user, and (4) the negotiation process may be used to support automatic adaptation in reaction to QoS degradation, without intervention by the user/application.
Lot sizing and unequal-sized shipment policy for an integrated production-inventory system
NASA Astrophysics Data System (ADS)
Giri, B. C.; Sharma, S.
2014-05-01
This article develops a single-manufacturer single-retailer production-inventory model in which the manufacturer delivers the retailer's ordered quantity in unequal shipments. The manufacturer's production process is imperfect and it may produce some defective items during a production run. The retailer performs a screening process immediately after receiving the order from the manufacturer. The expected average total cost of the integrated production-inventory system is derived using renewal theory and a solution procedure is suggested to determine the optimal production and shipment policy. An extensive numerical study based on different sets of parameter values is conducted and the optimal results so obtained are analysed to examine the relative performance of the models under equal and unequal shipment policies.
An enhanced performance through agent-based secure approach for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Bisen, Dhananjay; Sharma, Sanjeev
2018-01-01
This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc networks. In this approach, agent nodes are selected on the basis of an optimal node reliability factor, calculated from node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval. After selection of agent nodes, malicious behaviour detection is performed using a fuzzy-based secure architecture (FBSA). To evaluate the proposed approach, a comparative analysis against conventional schemes is carried out using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.
Sensing a heart infarction marker with surface plasmon resonance spectroscopy
NASA Astrophysics Data System (ADS)
Kunz, Ulrich; Katerkamp, Andreas; Renneberg, Reinhard; Spener, Friedrich; Cammann, Karl
1995-02-01
In this study a direct immunosensor for heart-type fatty acid binding protein (FABP) based on surface plasmon resonance spectroscopy (SPRS) is presented. FABP can be used as a heart infarction marker in clinical diagnostics. The development of a simple and cheap direct optical sensor device is reported, together with immobilization procedures and optimization of the measuring conditions. The correct operation of the SPRS device is verified by comparing the signals with theoretically calculated values. Two different immunoassay techniques were optimized for sensitive FABP analysis. The competitive immunoassay was superior to the sandwich configuration, as it had a lower detection limit (100 ng/ml), needed fewer antibodies and could be carried out in one step.
Simulation technique for modeling flow on floodplains and in coastal wetlands
Schaffranek, Raymond W.; Baltzer, Robert A.
1988-01-01
The system design is premised on a proven, areal two-dimensional, finite-difference flow/transport model which is supported by an operational set of computer programs for input data management and model output interpretation. The purposes of the project are (1) to demonstrate the utility of the model for providing useful highway design information, (2) to develop guidelines and procedures for using the simulation system for evaluation, analysis, and optimal design of highway crossings of floodplain and coastal wetland areas, and (3) to identify improvements which can be effected in the simulation system to better serve the needs of highway design engineers. Two case study model implementations, being conducted to demonstrate the simulation system and modeling procedure, are presented and discussed briefly.
A design procedure and handling quality criteria for lateral directional flight control systems
NASA Technical Reports Server (NTRS)
Stein, G.; Henke, A. H.
1972-01-01
A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.
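The quadratic-optimal-control ingredient of such a procedure can be sketched in a few lines: solve the continuous algebraic Riccati equation for a small state-space model and form the feedback gain. The model matrices and weights below are illustrative, not the F4C lateral-directional model or the handling-quality cost functionals of the paper.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy 2-state model x' = A x + B u; weights encode design goals.
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # state weights (handling-quality emphasis)
    R = np.array([[1.0]])      # control-effort weight

    P = solve_continuous_are(A, B, Q, R)     # Riccati solution
    K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x
    print("feedback gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))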
NASA Astrophysics Data System (ADS)
Nietubyć, Robert; Lorkiewicz, Jerzy; Sekutowicz, Jacek; Smedley, John; Kosińska, Anna
2018-05-01
Superconducting photoinjectors have the potential to be the optimal solution for moderate- and high-current cw-operated free electron lasers. For this application, a superconducting lead (Pb) cathode has been proposed to simplify the integration of the cathode into a 1.3 GHz, TESLA-type, 1.6-cell, purely superconducting gun cavity. In the proposed design, a lead film several micrometres thick is deposited onto a niobium plug attached to the back wall of the cavity. Traditional lead deposition techniques usually produce very non-uniform emission surfaces and often result in poor adhesion of the layer. A pulsed plasma melting procedure that reduces the non-uniformity of the lead photocathodes is presented. To determine the optimal parameters for this procedure, heat transfer from the plasma to the film was first modelled to evaluate the melting front penetration range and the duration of the liquid state. The results were verified by surface inspection of witness samples. The optimal procedure was used to prepare a photocathode plug, which was then tested in an electron gun. The quantum efficiency and the cavity quality factor were found to satisfy the requirements for an injector of the European XFEL facility.
General approach and scope. [rotor blade design optimization
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and cell size. The major conclusions from the statistical sampling tests are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified, or the moisture in only a single layer is of interest, a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
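Conclusion (3) refers to optimal allocation in stratified sampling. A common form is Neyman allocation, sketched below with hypothetical strata; whether the paper used exactly this allocation rule is an assumption.

    import numpy as np

    def neyman_allocation(N_h, S_h, n_total):
        """Optimal (Neyman) allocation: n_h proportional to N_h * S_h."""
        N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
        w = N_h * S_h
        return np.rint(n_total * w / w.sum()).astype(int)

    # Hypothetical strata for one field: stratum sizes and the observed
    # standard deviation of soil moisture within each stratum.
    N_h = [120, 200, 80]
    S_h = [2.5, 1.0, 4.0]
    print(neyman_allocation(N_h, S_h, n_total=30))   # e.g. [11, 7, 12]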
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Fernandez-Alvarez, Maria; Llompart, Maria; Lamas, J Pablo; Lores, Marta; Garcia-Jares, Carmen; Cela, Rafael; Dagnac, Thierry
2008-06-09
A simple and rapid method based on the solid-phase microextraction (SPME) technique followed by gas chromatography with micro-electron-capture detection (GC-microECD) was developed for the simultaneous determination of more than 30 pesticides (pyrethroids and organochlorines, among others) in milk. To our knowledge, this is the first application of SPME to the determination of pyrethroid pesticides in milk. Negative matrix effects due to the complexity and lipophilicity of the studied matrix were reduced by diluting the sample with distilled water. A 2^(5-1) fractional factorial design was performed to assess the influence of several factors (type of fiber coating, sampling mode, stirring, extraction temperature, and addition of sodium chloride) on the SPME procedure and to determine the optimal extraction conditions. After optimization of all the significant variables and interactions, the recommended procedure was established as follows: DSPME (using a polydimethylsiloxane (PDMS)/divinylbenzene (DVB) coating) of 1 mL of milk sample diluted with Milli-Q water (1:10 dilution ratio), at 100 degrees C, under stirring for 30 min. The proposed method showed good linearity and high sensitivity, with limits of detection (LOD) at the sub-ng mL(-1) level. Within-day and between-day precisions were also evaluated (R.S.D. <15%). One of the most important attainments of this work was the use of external calibration with milk-matched standards to quantify the levels of the target analytes. The method was tested with liquid and powdered milk samples with different fat contents covering the whole commercial range. The efficiency of the extraction process was studied at several analyte concentration levels, obtaining high recoveries (>80% in most cases) for different types of full-fat milk. The optimized procedure was validated with a powdered-milk certified reference material, which was quantified using external calibration and standard addition protocols. Finally, the DSPME-GC-microECD methodology was applied to the analysis of milk samples collected in dairy cattle farms in NW Spain.
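For readers unfamiliar with fractional factorials: a 2^(5-1) design screens five two-level factors in 16 runs instead of 32 by generating the fifth column from the other four. The sketch below uses the common generator E = ABCD, which is an assumption (the abstract does not state the generator), and the factor labels are illustrative.

```python
from itertools import product

# Factors studied in the abstract; levels -1/+1 are illustrative codings.
factors = ["fiber", "mode", "stirring", "temperature", "NaCl"]

# Full 2^4 design in the first four factors; the fifth column is
# generated as E = ABCD (a standard 2^(5-1) resolution-V generator).
runs = []
for a, b, c, d in product([-1, 1], repeat=4):
    runs.append((a, b, c, d, a * b * c * d))

for run in runs:
    print(dict(zip(factors, run)))
print(f"{len(runs)} runs instead of 32 for the full factorial")
```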
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil-beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of these profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with large stops and a fixed working distance, so they may not be suitable for the design of f-theta systems working with a small pencil beam over a working distance range for ultra-high-accuracy metrology. If an f-theta system is not well designed, its aberrations will introduce systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
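The essence of the least-squares step is to fit system parameters so that the spot position follows the ideal linear relation x = f·θ, with the residual quantifying the conversion error. In the paper the free parameters are lens configuration parameters evaluated by ray tracing; in this hedged sketch a single effective focal length and synthetic aberrated data stand in:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "ray-traced" beam positions x_i for deflection angles theta_i,
# with a small cubic aberration term (illustrative values).
theta = np.linspace(-0.05, 0.05, 21)       # rad
x_meas = 0.5 * theta + 2e-3 * theta**3     # m

def residuals(p, theta, x):
    f = p[0]                               # effective focal length (m)
    return x - f * theta                   # deviation from ideal f-theta law

fit = least_squares(residuals, x0=[0.4], args=(theta, x_meas))
print("fitted f =", fit.x[0])
print("max conversion error:", np.abs(residuals(fit.x, theta, x_meas)).max())
```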
Optimization of an idealized Y-Shaped Extracardiac Fontan Baffle
NASA Astrophysics Data System (ADS)
Yang, Weiguang; Feinstein, Jeffrey; Mohan Reddy, V.; Marsden, Alison
2008-11-01
Research has shown that vascular geometries can significantly impact hemodynamic performance, particularly in pediatric cardiology, where anatomy varies from one patient to another. In this study we optimize a newly proposed design for the Fontan procedure, a surgery used to treat single-ventricle heart patients. The current Fontan procedure connects the inferior vena cava (IVC) to the pulmonary arteries (PAs) via a straight Gore-Tex tube, forming a T-shaped junction. In the Y-graft design, the IVC is connected to the left and right PAs by two branches. Initial studies on the Y-graft design showed an increase in efficiency and an improvement in flow distribution compared with traditional designs in a single patient-specific model. We now optimize an idealized Y-graft model to refine the design prior to patient testing. A derivative-free optimization algorithm using Kriging surrogate functions and mesh adaptive direct search is coupled to a 3-D finite element Navier-Stokes solver. We will present optimization results for rest and exercise conditions and examine the influence of energy efficiency, wall shear stress, pulsatile flow, and flow distribution on the optimal design.
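A minimal sketch of surrogate-based derivative-free optimization in the spirit described, not the authors' implementation: a Gaussian-process (Kriging) surrogate is refit after each expensive evaluation, random candidate polling stands in for mesh adaptive direct search, and an analytic cost function stands in for the 3-D finite element flow solver.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def energy_loss(x):
    # Stand-in for the flow-solver cost (e.g. energy dissipation).
    return (x[0] - 0.6)**2 + 0.5 * (x[1] - 0.3)**2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (8, 2))        # initial designs (e.g. branch parameters)
y = np.array([energy_loss(x) for x in X])

for it in range(20):
    gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2]),
                                  normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, (512, 2))        # random poll, stand-in for MADS
    best = cand[np.argmin(gp.predict(cand))]  # minimize the surrogate
    X = np.vstack([X, best])
    y = np.append(y, energy_loss(best))       # one expensive evaluation

print("best design:", X[np.argmin(y)], "cost:", y.min())
```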
Principled negotiation and distributed optimization for advanced air traffic management
NASA Astrophysics Data System (ADS)
Wangermann, John Paul
Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents that have different, competing interests but have a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check the proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed to agents whose trajectories change as part of the proposal for approval. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring aircraft safety. This thesis describes how an aircraft/airspace system using principled negotiation operates, and reports simulation results on the concept. The results show safety is maintained while aircraft have freedom to optimize their operations.
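The proposal-approval loop described above can be sketched schematically. All names and methods below are hypothetical, and the separation check and agent acceptance tests are stand-ins for the real 4-D conflict probe and each agent's cost/benefit evaluation:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    proposer: str
    trajectory_change: dict                         # e.g. {"FL": 370}
    affected: list = field(default_factory=list)    # agents whose paths change

class Agent:
    def __init__(self, name):
        self.name = name
    def accepts(self, proposal) -> bool:
        # Stand-in: the agent weighs the proposal against its own
        # interests (fuel, schedule) and agrees only on mutual gain.
        return True

def separation_ok(proposal) -> bool:
    # Stand-in for the traffic manager's strategic 4-D separation check.
    return True

def negotiate(proposal, agents):
    if not separation_ok(proposal):         # safety is verified first
        return "rejected: separation"
    for name in proposal.affected:          # then affected agents weigh in
        if not agents[name].accepts(proposal):
            return f"rejected by {name}"
    return "accepted"

agents = {"UA12": Agent("UA12"), "DL8": Agent("DL8")}
print(negotiate(Proposal("UA12", {"FL": 370}, affected=["DL8"]), agents))
```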
Comparison of VFA titration procedures used for monitoring the biogas process.
Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini
2014-05-01
Titrimetric determination of volatile fatty acid (VFA) content is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, so different titration procedures give varying results. Currently, no standardized procedure is in use, and it is therefore difficult to compare performance among plants. The aim of this study was to evaluate four titration procedures for determining the VFA levels of digested manure samples and to compare the results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in the literature. The results showed that optimal titration results were obtained when 40 mL of four-times-diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed a linear correlation between titration results and GC measurements. However, determination of VFA by titration generally overestimated the VFA content compared with GC measurements when samples had low VFA concentrations, i.e. around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e. around 5 g/L. It was further found that the studied ionisable interfering components had the lowest effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentrations. An extended 5-point titration procedure with pH correction handled the interferences from bicarbonate, phosphate and lactate at low VFA concentrations best. Conversely, the simplest titration procedure, with only two pH end-points, showed the highest accuracy among all titration procedures at high VFA concentrations. All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of choice, due to its simplicity and accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
On Optimizing an Archibald Rubber-Band Heat Engine.
ERIC Educational Resources Information Center
Mullen, J. G.; And Others
1978-01-01
Discusses the criteria and procedure for optimizing the performance of Archibald rubber-band heat engines by using the appropriate choice of dimensions, minimizing frictional torque, maximizing torque and balancing the rubber band system. (GA)
Surgical approach to posterior inferior cerebellar artery aneurysms.
La Pira, Biagia; Sturiale, Carmelo Lucio; Della Pepa, Giuseppe Maria; Albanese, Alessio
2018-02-01
The far-lateral approach is a standardised approach for clipping aneurysms of the posterior inferior cerebellar artery (PICA). Different variants can be adopted to manage aneurysms that differ in morphology, topography, rupture status, cerebellar swelling and surgeon preference. We distinguished five paradigmatic approaches aimed at managing aneurysms that are: proximal unruptured; proximal ruptured requiring posterior fossa decompression (PFD); proximal ruptured not requiring PFD; distal unruptured; and distal ruptured. Preoperative planning in PICA aneurysm surgery is of paramount importance to perform an effective and safe procedure and to ensure adequate PFD and optimal proximal control before aneurysm manipulation.
Introduction: Training in reproductive endocrinology and infertility: meeting worldwide needs.
de Ziegler, Dominique; Meldrum, David R
2015-07-01
Training in reproductive endocrinology and infertility (REI) and its male variant, andrology, has been profoundly influenced by the central role captured by assisted reproductive technologies (ART). The marked differences in financial, regulatory, and societal/ethical restrictions on ART in different countries of the world also prominently influence the clinical management of infertility. Training should strive for comprehensive teaching of all medically indicated procedures, even if only to optimize cross-border care. Better international standardization of infertility practices and training would benefit worldwide infertility care and should be promoted by international societies. Copyright © 2015. Published by Elsevier Inc.
Preparation method and quality control of multigamma volume sources with different matrices.
Listkowska, A; Lech, E; Saganowski, P; Tymiński, Z; Dziel, T; Cacko, D; Ziemek, T; Kołakowska, E; Broda, R
2018-04-01
The aim of this work was to develop new radioactive standard sources based on epoxy resins. The optimal proportions of the components and the homogeneity of the matrices were determined. The activity of multigamma sources prepared in Marinelli beakers was determined with reference to the National Standard of Radionuclide Activity in Poland. The differences between the radionuclide activity values determined with a calibrated gamma spectrometer and the activities of the standard solutions used are in most cases significantly lower than the measurement uncertainty limits. A source production method and a quality control procedure have been developed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U
2015-01-01
Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and from each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
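A hedged sketch of the core idea, not the authors' pipeline: given per-pixel structure labels from a classifier, the reference vector for each structure is its color-space centroid, and slides can be standardized by a linear color transform that maps one slide's reference vectors onto a target's. The data, labels, and 3 x 3 transform below are illustrative assumptions.

```python
import numpy as np

def reference_vectors(pixels, labels, n_classes):
    """Mean color (centroid) per structure in RGB space: the 'reference
    vector' that best characterizes each structure's color attributes."""
    return np.stack([pixels[labels == k].mean(axis=0) for k in range(n_classes)])

def standardize(pixels, src_refs, tgt_refs):
    """Align one slide's reference vectors to a target slide's with a
    least-squares 3x3 linear color transform, applied per pixel."""
    M, *_ = np.linalg.lstsq(src_refs, tgt_refs, rcond=None)
    return np.clip(pixels @ M, 0, 255)

# Hypothetical data: (N, 3) RGB pixels, labels from a trained classifier.
rng = np.random.default_rng(1)
pixels = rng.uniform(0, 255, (1000, 3))
labels = rng.integers(0, 3, 1000)     # 0=nuclei, 1=cytoplasm, 2=stroma
refs = reference_vectors(pixels, labels, 3)
print(refs)                           # one centroid color per structure
```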
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
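A brief sketch of the WCSS criterion and the classic Lloyd-style heuristic for it (one of many heuristics of the kind the article compares); the data here are synthetic:

```python
import numpy as np

def wcss(X, labels, centroids):
    """Within-cluster sum of squared deviations from cluster centroids,
    the partitioning criterion discussed in the abstract."""
    return sum(((X[labels == k] - c)**2).sum() for k, c in enumerate(centroids))

def kmeans_heuristic(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recenter.
        labels = np.argmin(((X[:, None] - centroids)**2).sum(-1), axis=1)
        centroids = np.stack([X[labels == j].mean(0) if (labels == j).any()
                              else centroids[j] for j in range(k)])
    return labels, centroids

X = np.random.default_rng(2).normal(size=(200, 2))
labels, cents = kmeans_heuristic(X, 3)
print("WCSS:", wcss(X, labels, cents))
```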
Montesdeoca-Esponda, Sarah; Mahugo-Santana, Cristina; Sosa-Ferrera, Zoraida; Santana-Rodríguez, José Juan
2015-03-01
A dispersive liquid-liquid micellar microextraction (DLLMME) method coupled with ultra-high-performance liquid chromatography (UHPLC) with diode array detection (DAD) was developed for the analysis of five pharmaceutical compounds of different nature in wastewaters. A micellar solution of a surfactant, polidocanol, as extraction solvent (100 μL) and chloroform as dispersive solvent (200 μL) were used to extract and preconcentrate the target analytes. Samples were heated above the critical temperature and the cloudy solution was centrifuged. After removal of the chloroform, the reduced volume of surfactant was injected into the UHPLC system. To obtain high extraction efficiency, the parameters affecting the liquid-phase microextraction, such as extraction time and temperature, ionic strength, and surfactant and organic solvent volumes, were optimized using an experimental design. Under the optimized conditions, this procedure allows enrichment factors of up to 47-fold. The detection limits of the method ranged from 0.1 to 2.0 µg/L for the different pharmaceuticals. Relative standard deviations were <26% for all compounds. The procedure was applied to final effluent samples collected from wastewater treatment plants in Las Palmas de Gran Canaria (Spain), and two compounds were measured at 67 and 113 µg/L in one of them. Copyright © 2014 John Wiley & Sons, Ltd.
High throughput workflow for coacervate formation and characterization in shampoo systems.
Kalantar, T H; Tucker, C J; Zalusky, A S; Boomgaard, T A; Wilson, B E; Ladika, M; Jordan, S L; Li, W K; Zhang, X; Goh, C G
2007-01-01
Cationic cellulosic polymers find wide utility as benefit agents in shampoo. Deposition of these polymers onto hair has been shown to mend split ends, improve appearance and wet combing, and provide controlled delivery of insoluble actives. The deposition is thought to be enhanced by the formation of a polymer/surfactant complex that phase-separates from the bulk solution upon dilution. A standard method has been developed to characterize coacervate formation upon dilution, but the test is prohibitively time- and material-intensive. We have developed a semi-automated high-throughput workflow to characterize the coacervate-forming behavior of different shampoo formulations. A procedure that allows testing of real-use shampoo dilutions without first formulating a complete shampoo was identified. This procedure was adapted to a Tecan liquid handler by optimizing the parameters for liquid dispensing as well as for mixing. The high-throughput workflow enabled preparation and testing of hundreds of formulations with different types and levels of cationic cellulosic polymers and surfactants, and for each formulation a haze diagram was constructed. Optimal formulations, and the dilutions at which they give substantial coacervate formation (determined by haze measurements), were identified. Results from this high-throughput workflow were shown to reproduce standard haze and bench-top turbidity measurements, and the workflow has the advantages of using less material and allowing more variables to be tested with significant time savings.
Gingival melanin depigmentation by Er:YAG laser: A literature review.
Pavlic, Verica; Brkic, Zlata; Marin, Sasa; Cicmil, Smiljka; Gojkov-Vukelic, Mirjana; Aoki, Akira
2018-04-01
Laser ablation has recently been suggested as one of the most effective and reliable techniques for depigmentation of melanin-hyperpigmented gingiva. To date, different lasers have been used for gingival depigmentation (CO2, diode, Nd:YAG, Er:YAG and Er,Cr:YSGG lasers). The use of the Er:YAG laser for depigmentation of melanin-hyperpigmented gingiva has gained increasing importance in recent years. The purpose of this study was to review the literature on removal of gingival melanin pigmentation using an Er:YAG laser. The main outcomes, such as improvement of signs (clinical parameters of bleeding, erythema, swelling and wound healing), symptoms (pain) and melanin recurrence/repigmentation, were assessed. The literature demonstrated that depigmentation of gingival melanin pigmentation can be performed safely and effectively by Er:YAG laser, resulting in healing and an esthetically significant improvement of gingival discoloration. Thus, the Er:YAG laser seems to be safe and useful for the melanin depigmentation procedure. However, the main obstacle to a final conclusion on optimal Er:YAG laser use in melanin depigmentation is that, to date, studies report widely discrepant Er:YAG laser procedure protocols (complex settings of laser parameters) and different criteria for the assessment of depigmentation and repigmentation (recurrence), hampering comparison of the results. Therefore, further studies are necessary to provide an optimal recommendation on the use of the Er:YAG laser for gingival melanin hyperpigmentation.
Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid
2016-01-01
In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equation into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean-square sense and is formulated as a minimization problem. Optimization of the system parameters is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions demonstrates the worth of the scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices, calculated for a sufficiently large number of independent runs to establish significance.
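A minimal sketch of the discretize-then-optimize idea, under the standard Thomas-Fermi boundary conditions y(0) = 1 and y(b) ≈ 0: central differences convert y'' = y^(3/2)/√x into residual equations, the mean-square residual serves as the fitness, and SciPy's differential evolution (standing in for the GAs) is followed by SLSQP refinement (standing in for SQP). The grid size and interval are illustrative, not the paper's scenarios.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Grid on [0, b]; the unknowns are the interior values of y.
b, N = 10.0, 15
x = np.linspace(0, b, N + 2)
h = x[1] - x[0]

def residual_mse(y_in):
    """Mean-square residual of the central-difference Thomas-Fermi system."""
    y = np.concatenate(([1.0], y_in, [0.0]))        # boundary conditions
    r = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2 \
        - np.abs(y[1:-1])**1.5 / np.sqrt(x[1:-1])
    return np.mean(r**2)

# Global evolutionary search (stand-in for the GA)...
res = differential_evolution(residual_mse, [(0.0, 1.0)] * N,
                             maxiter=200, seed=3, tol=1e-10)
# ...followed by rapid local refinement (stand-in for SQP).
res = minimize(residual_mse, res.x, method="SLSQP")
print("residual MSE:", res.fun)
```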
Arce, Pedro; Lagares, Juan Ignacio
2018-01-25
We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
Kiefl, Johannes; Cordero, Chiara; Nicolotti, Luca; Schieberle, Peter; Reichenbach, Stephen E; Bicchi, Carlo
2012-06-22
The continuous interest in non-targeted profiling has driven the development of tools for automated cross-sample analysis. Such tools were found to be either selective or not comprehensive, thus delivering a biased view of the qualitative/quantitative peak distribution across 2D sample chromatograms. Therefore, the performance of non-targeted approaches needs to be critically evaluated. This study focused on the development of a validation procedure for non-targeted, peak-based GC×GC-MS data profiling. The procedure introduced performance parameters such as specificity, precision, accuracy, and uncertainty for a profiling method known as Comprehensive Template Matching. The performance was assessed by applying a three-week validation protocol based on CITAC/EURACHEM guidelines. Optimized ¹D and ²D retention-time search windows, MS match factor threshold, detection threshold, and template threshold were evolved from two training sets by a semi-automated learning process. The effectiveness of the proposed settings in consistently matching 2D peak patterns was established by evaluating the rate of mismatched peaks and was expressed in terms of results accuracy. The study utilized 23 different 2D peak patterns providing the chemical fingerprints of raw and roasted hazelnuts (Corylus avellana L.) from different geographical origins, of diverse varieties and different roasting degrees. The validation results show that non-targeted peak-based profiling can be reliable, with error rates lower than 10% independent of the degree of analytical variance. The optimized Comprehensive Template Matching procedure was employed to study hazelnut roasting profiles, in particular to find marker compounds strongly dependent on the thermal treatment, to establish the correlation of potential marker compounds with geographical origin and variety/cultivar, and finally to reveal the characteristic release of aroma-active compounds. Copyright © 2012 Elsevier B.V. All rights reserved.
Castro-Gómez, M P; Rodriguez-Alcalá, L M; Calvo, M V; Romero, J; Mendiola, J A; Ibañez, E; Fontecha, J
2014-11-01
Although milk polar lipids such as phospholipids and sphingolipids located in the milk fat globule membrane constitute 0.1 to 1% of the total milk fat, those lipid fractions are gaining increasing interest because of their potential beneficial effects on human health and technological properties. In this context, the accurate quantification of the milk polar lipids is crucial for comparison of different milk species, products, or dairy treatments. Although the official International Organization for Standardization-International Dairy Federation method for milk lipid extraction gives satisfactory results for neutral lipids, it has important disadvantages in terms of polar lipid losses. Other methods using mixtures of solvents such as chloroform:methanol are highly efficient for extracting polar lipids but are also associated with low sample throughput, long time, and large solvent consumption. As an alternative, we have optimized the milk fat extraction yield by using a pressurized liquid extraction (PLE) method at different temperatures and times in comparison with those traditional lipid extraction procedures using 2:1 chloroform:methanol as a mixture of solvents. Comparison of classical extraction methods with the developed PLE procedure were carried out using raw whole milk from different species (cows, ewes, and goats) and considering fat yield, fatty acid methyl ester composition, triacylglyceride species, cholesterol content, and lipid class compositions, with special attention to polar lipids such as phospholipids and sphingolipids. The developed PLE procedure was validated for milk fat extraction and the results show that this method performs a complete or close to complete extraction of all lipid classes and in less time than the official and Folch methods. In conclusion, the PLE method optimized in this study could be an alternative to carry out milk fat extraction as a routine method. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Optimal control in adaptive optics modeling of nonlinear systems
NASA Astrophysics Data System (ADS)
Herrmann, J.
The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open-loop optimal control problem. This treatment gives a limit for the best possible correction. Aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane which minimizes the time-averaged area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point boundary-value problem, which is ill-conditioned in this case. The optimal control problem was therefore solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. In these cases the steady-state correction becomes better than the instantaneous correction and approaches the optimal correction.
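The iterative gradient technique can be illustrated on a toy scalar model, not the paper's propagation model: ascend the numerically estimated gradient of a target-plane fluence whose effective distortion depends nonlinearly on the transmitter-plane control. The fluence model, distortion vector, and step size are all assumptions.

```python
import numpy as np

a = np.linspace(0.2, 0.9, 8)     # distortion the control must undo (assumed)

def fluence(u):
    """Toy target-plane fluence after a blooming-like nonlinearity:
    maximal when the control cancels the state-dependent distortion."""
    err = u - a - 0.3 * np.tanh(np.cumsum(u))
    return np.exp(-np.sum(err**2))

def grad(f, u, eps=1e-6):
    """Central-difference gradient estimate."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        d = np.zeros_like(u); d[i] = eps
        g[i] = (f(u + d) - f(u - d)) / (2 * eps)
    return g

u = np.zeros(8)                  # transmitter-plane control (e.g. phase)
print("initial fluence:", fluence(u))
for _ in range(500):             # iterative gradient ascent on fluence
    u += 0.25 * grad(fluence, u)
print("fluence after optimization:", fluence(u))
```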
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
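Procedure (c), Monte Carlo-plus-energy minimization, is the idea behind what is now commonly called basin hopping: random conformational jumps, each followed by a local minimization, with Metropolis-style acceptance. A minimal sketch on a toy multi-minimum "energy" over a few torsion angles (the energy function is illustrative, not a real force field):

```python
import numpy as np
from scipy.optimize import basinhopping

def energy(phi):
    """Toy conformational energy with many local minima."""
    return np.sum(np.cos(3 * phi) + 0.1 * phi**2)

x0 = np.zeros(4)                 # starting conformation (4 torsion angles)
result = basinhopping(energy, x0, niter=200, seed=1,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print("lowest energy found:", result.fun, "at", result.x)
```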
When teams shift among processes: insights from simulation and optimization.
Kennedy, Deanna M; McComb, Sara A
2014-09-01
This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zaccaro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Bertocci, Francesco; Fort, Ada; Vignoli, Valerio; Mugnaini, Marco; Berni, Rossella
2017-06-10
Eight different types of nanostructured perovskites based on YCoO3 with different chemical compositions are prepared as gas sensor materials, and they are studied with two target gases, NO2 and CO. Moreover, a statistical approach is adopted to optimize their performance. The innovative contribution is carried out through split-plot design planning and modeling, also involving random effects, for studying metal oxide semiconductor (MOX) sensors in a robust design context. The statistical results prove the validity of the proposed approach; in fact, for each material type, the variation of the electrical resistance achieves a satisfactory optimized value conditional on the working temperature and controlling for the gas concentration variability. Just to mention some results, the sensing material YCo0.9Pd0.1O3 (Mt1) achieved excellent solutions during the optimization procedure. In particular, Mt1 proved useful and feasible for the detection of both gases, with an optimal response of +10.23% at a working temperature of 312 °C for CO (284 ppm, from design) and a response of -14.17% at 185 °C for NO2 (16 ppm, from design). Analogously, for NO2 (16 ppm, from design), the material type YCo0.9O2.85+1%Pd (Mt8) allows optimizing the response at -15.39% with a working temperature of 181.0 °C, whereas for YCo0.95Pd0.05O3 (Mt3) the best response, -15.40%, is achieved at 204 °C.
Sathish, Ashik; Marlar, Tyler; Sims, Ronald C
2015-10-01
Methods to convert microalgal biomass to bio-based fuels and chemicals are limited by several processing and economic hurdles. Research conducted in this study modified and optimized a previously published procedure capable of extracting transesterifiable lipids from wet algal biomass. This optimization resulted in the extraction of 77% of the total transesterifiable lipids, while reducing the amount of materials and the temperature required by the procedure. In addition, characterization of the side streams generated demonstrated that: (1) the C/N ratio of the residual, lipid-extracted (LE) biomass increased to 54.6, versus 10.1 for the original biomass; (2) the aqueous phase generated contains nitrogen, phosphorous, and carbon; and (3) the solid precipitate phase was composed of up to 11.2 wt% nitrogen (70% protein). The ability to isolate algal lipids, and the possibility of utilizing the generated side streams as products and/or feedstock for downstream processes, helps promote the algal biorefinery concept. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1983-01-01
An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing the computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed, successful method for single-parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure, in combination with an optimization method, to a blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.
Outcomes for Gestational Carriers Versus Traditional Surrogates in the United States.
Fuchs, Erika L; Berenson, Abbey B
2018-05-01
Little is known about the obstetric and procedural outcomes of traditional surrogates and gestational carriers. Participants included 222 women living in the United States who completed a brief online survey between November 2015 and February 2016. Differences between gestational carriers (n = 204) and traditional surrogates (n = 18) in demographic characteristics, pregnancy outcomes, and procedural outcomes were examined using chi-squared tests, Fisher's exact tests, and t-tests. Out of 248 eligible respondents, 222 surveys were complete, for a response rate of 89.5%. Overall, obstetric outcomes were similar among gestational carriers and traditional surrogates. Traditional surrogates were more likely than gestational carriers to have a Center for Epidemiologic Studies Depression Scale Revised score of 16 or higher (37.5% vs. 4.0%). Gestational carriers reported higher mean compensation ($27,162.80 vs. $17,070.07) and were more likely to travel over 400 miles (46.0% vs. 0.0%) than traditional surrogates. Procedural differences, but not differences in obstetric outcomes, emerged between gestational carriers and traditional surrogates. To ensure that both traditional surrogates and gestational carriers receive optimal medical care, it may be necessary to extend practice guidelines to ensure that traditional surrogates are offered the same level of care offered to gestational carriers.
Use of multilevel modeling for determining optimal parameters of heat supply systems
NASA Astrophysics Data System (ADS)
Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.
2017-07-01
The problem of finding optimal parameters of a heat-supply system (HSS) is in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.
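The stepwise (dynamic programming) optimization of pipeline diameters can be illustrated on a toy serial main: pick one diameter per segment to minimize pipe cost while keeping cumulative head loss within a budget, with the remaining head budget as the DP state. The lumped friction model, costs, and flows below are illustrative assumptions, not values from the paper.

```python
# Candidate diameters (m) with unit costs, and (length m, flow m^3/s)
# per segment; all numbers are illustrative.
DIAMS = [(0.15, 90), (0.20, 120), (0.25, 160), (0.30, 210)]
SEGMENTS = [(500, 0.04), (800, 0.03), (300, 0.05)]
H_BUDGET = 25.0                  # allowed total head loss (m)

def head_loss(L, Q, D, k=0.0012):
    # Darcy-Weisbach-like loss with a lumped friction coefficient.
    return k * L * Q**2 / D**5

# DP over segments; the state is the remaining head budget in 0.1 m steps.
STEP = 0.1
n_h = int(H_BUDGET / STEP) + 1
INF = float("inf")
cost = [INF] * n_h
cost[n_h - 1] = 0.0              # start with the full budget remaining
for L, Q in SEGMENTS:
    new = [INF] * n_h
    for h in range(n_h):
        if cost[h] == INF:
            continue
        for D, c in DIAMS:       # try each diameter for this segment
            dh = int(round(head_loss(L, Q, D) / STEP))
            if h - dh >= 0:
                new[h - dh] = min(new[h - dh], cost[h] + c * L)
    cost = new
print("minimum network cost:", min(c for c in cost if c < INF))
```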
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.
Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping of the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in regression of PSD vs RAK, seven clusters of clinical groups were optimal in regression of PSD vs KAP, and six clusters of clinical groups were optimal in regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method. Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.
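Because PSD is log-normal, the regressions are naturally fit in log space. A hedged sketch of the joint RAK-and-KAP regression on synthetic records (the coefficients and dose distributions are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical per-procedure records: PSD and RAK in Gy, KAP in Gy*cm^2.
rng = np.random.default_rng(4)
rak = rng.lognormal(0.0, 0.6, 200)
kap = rak * rng.lognormal(4.0, 0.3, 200)
psd = 1.3 * rak**0.9 * (kap / 100)**0.1 * rng.lognormal(0, 0.15, 200)

# Log-normal PSD, so regress in log space:
# log PSD = b0 + b1*log RAK + b2*log KAP
X = np.column_stack([np.ones_like(rak), np.log(rak), np.log(kap)])
beta, *_ = np.linalg.lstsq(X, np.log(psd), rcond=None)
pred = np.exp(X @ beta)
print("coefficients:", beta)
print("RMS log error:", np.sqrt(np.mean((np.log(pred) - np.log(psd))**2)))
```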
An expert system for integrated structural analysis and design optimization for aerospace structures
NASA Technical Reports Server (NTRS)
1992-01-01
The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software was then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide the ability to build very large and practical expert systems, and provide an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.
Improvement of the insertion axis for cochlear implantation with a robot-based system.
Torres, Renato; Kazmitcheff, Guillaume; De Seta, Daniele; Ferrary, Evelyne; Sterkers, Olivier; Nguyen, Yann
2017-02-01
It has previously been reported that alignment of the insertion axis along the basal turn of the cochlea depends on the surgeon's experience. In this experimental study, we assessed technological assistance, such as navigation or a robot-based system, to improve the insertion axis during cochlear implantation. A preoperative cone beam CT and a mastoidectomy with a posterior tympanotomy were performed on four temporal bones. The optimal insertion axis was defined as the axis closest to the scala tympani centerline that avoids the facial nerve. A neuronavigation system, a robot assistance prototype, and software allowing semi-automated alignment of the robot were used to align an insertion tool with the optimal insertion axis. Four procedures were performed and repeated three times in each temporal bone: manual, manual navigation-assisted, robot-based navigation-assisted, and robot-based semi-automated. The angle between the optimal axis and the insertion tool axis was measured for the four procedures. The error was 8.3° ± 2.82° for the manual procedure (n = 24), 8.6° ± 2.83° for the manual navigation-assisted procedure (n = 24), 5.4° ± 3.91° for the robot-based navigation-assisted procedure (n = 24), and 3.4° ± 1.56° for the robot-based semi-automated procedure (n = 12). Higher accuracy was observed with the semi-automated robot-based technique than with the manual and manual navigation-assisted techniques (p < 0.01). Combining a navigation system with manual insertion does not improve alignment accuracy, owing to the lack of a user-friendly interface. In contrast, a semi-automated robot-based system reduces both the error and the variability of alignment with a defined optimal axis.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
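A minimal sketch of the reliability side of this setup: Monte Carlo simulation on a second-order response surface for buckling strength, with failure defined as the applied load exceeding strength. The surface coefficients and the random-variable statistics below are hypothetical stand-ins for the fitted model and the report's data.

```python
import numpy as np
from scipy.stats import norm

def buckling_load(E1, D):
    """Second-order response surface for axial buckling strength
    (coefficients are hypothetical stand-ins for the fitted surface)."""
    return 0.002 * E1 * D**2 - 0.5 * D + 40.0

rng = np.random.default_rng(5)
n = 200_000
E1 = rng.normal(20.0e3, 1.2e3, n)   # fiber-direction modulus
D  = rng.normal(15.0, 0.15, n)      # average diameter (in)
P  = rng.normal(8.0e3, 6.0e2, n)    # applied axial load

pf = np.mean(P > buckling_load(E1, D))    # failure: load exceeds strength
print("failure probability:", pf)
print("reliability index:", -norm.ppf(pf))
```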
Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial
Ibrahim, Ahmed; Alfa, Attahiru
2017-01-01
This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several key design aspects considered in the literature on wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and an overview of some of its techniques that could be helpful for design problems in WSNs. In the second part, we present a number of design aspects from the WSN literature in which mathematical optimization methods have been used. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039
NASA Astrophysics Data System (ADS)
Bandaru, Sunith; Deb, Kalyanmoy
2011-09-01
In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.
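A hedged sketch of the core innovization idea (the paper's clustering-based procedure is more general): test whether a product of basis functions stays nearly constant across the Pareto-optimal dataset via a log-linear fit, then report the relationship and how tightly it holds. The dataset below has a planted rule purely for illustration.

```python
import numpy as np

# Hypothetical Pareto-optimal dataset over two design variables, with a
# planted hidden design principle: x2 * sqrt(x1) = 2 (plus small noise).
rng = np.random.default_rng(6)
x1 = rng.uniform(1.0, 5.0, 50)
x2 = 2.0 / np.sqrt(x1) * rng.lognormal(0, 0.01, 50)

# Candidate principle: x1^a * x2 = c  ->  log x2 = log c - a * log x1.
A = np.column_stack([np.ones_like(x1), np.log(x1)])
(logc, neg_a), *_ = np.linalg.lstsq(A, np.log(x2), rcond=None)
c_vals = x2 * x1**(-neg_a)
print(f"rule: x2 * x1^{-neg_a:.2f} = {np.exp(logc):.2f}")
print("constancy across the front (CV of c):",
      np.std(c_vals) / np.mean(c_vals))
```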