Optimal False Discovery Rate Control for Dependent Data
Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe
2013-01-01
This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate, is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls the false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma, where it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
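As an illustration of the plug-in idea, a generic local false discovery rate (lfdr) step-up rule can be sketched: score each z-value under a two-group normal mixture, rank the scores, and reject while the running average of the rejected scores stays below the target level. This is a hedged sketch in the spirit of marginal lfdr thresholding, not the paper's joint oracle procedure; the mixture parameters `pi0`, `mu1`, and `sigma1` are assumed known here, whereas a data-driven version would estimate them.

```python
import math

def lfdr(z, pi0, mu1, sigma1):
    """Local false discovery rate under a two-group normal mixture:
    f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, sigma1^2)."""
    phi = lambda x, m, s: math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    f0 = phi(z, 0.0, 1.0)
    f = pi0 * f0 + (1 - pi0) * phi(z, mu1, sigma1)
    return pi0 * f0 / f

def lfdr_stepup(zs, alpha, pi0=0.9, mu1=3.0, sigma1=1.0):
    """Reject the hypotheses with the smallest lfdr values while the running
    average lfdr of the rejected set stays <= alpha (marginal plug-in rule)."""
    scored = sorted((lfdr(z, pi0, mu1, sigma1), i) for i, z in enumerate(zs))
    rejected, total = [], 0.0
    for k, (l, i) in enumerate(scored, start=1):
        total += l
        if total / k <= alpha:
            rejected.append(i)
        else:
            break
    return rejected
```

For example, `lfdr_stepup([0.1, 4.0, 3.5, -0.2, 5.0], alpha=0.1)` rejects only the three large z-values, since adding any near-null z-value would push the average lfdr of the rejected set above the level.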
NASA Astrophysics Data System (ADS)
Nietubyć, Robert; Lorkiewicz, Jerzy; Sekutowicz, Jacek; Smedley, John; Kosińska, Anna
2018-05-01
Superconducting photoinjectors have a potential to be the optimal solution for moderate and high current cw operating free electron lasers. For this application, a superconducting lead (Pb) cathode has been proposed to simplify the cathode integration into a 1.3 GHz, TESLA-type, 1.6-cell long purely superconducting gun cavity. In the proposed design, a lead film several micrometres thick is deposited onto a niobium plug attached to the cavity back wall. Traditional lead deposition techniques usually produce very non-uniform emission surfaces and often result in a poor adhesion of the layer. A pulsed plasma melting procedure reducing the non-uniformity of the lead photocathodes is presented. In order to determine the parameters optimal for this procedure, heat transfer from plasma to the film was first modelled to evaluate melting front penetration range and liquid state duration. The obtained results were verified by surface inspection of witness samples. The optimal procedure was used to prepare a photocathode plug, which was then tested in an electron gun. The quantum efficiency and the value of cavity quality factor have been found to satisfy the requirements for an injector of the European-XFEL facility.
NASA Astrophysics Data System (ADS)
Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling
2012-10-01
This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. We also assume that the backorder rate depends on the length of lead time through the amount of shortages, and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.
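The minimax distribution-free step relies on Scarf's classical bound on expected shortage when only the mean and variance of the lead time demand are known. The sketch below uses that bound inside a deliberately simplified cost function; the cost coefficients and the grid search are illustrative assumptions, not the article's actual model or algorithm.

```python
import math

def shortage_bound(mu, sigma, L, r):
    """Scarf's distribution-free bound: E[(X - r)+] <= (sqrt(s2 + d^2) - d) / 2,
    where lead-time demand X has mean mu*L and variance sigma^2*L, d = r - mu*L.
    The minimax procedure designs against this worst case."""
    d = r - mu * L
    return 0.5 * (math.sqrt(sigma ** 2 * L + d ** 2) - d)

def annual_cost(Q, r, L, D=600.0, A=200.0, h=20.0, pi=50.0, mu=11.0, sigma=7.0):
    """Illustrative annual cost: ordering + holding + worst-case shortage penalty.
    All coefficients are hypothetical."""
    B = shortage_bound(mu, sigma, L, r)
    return (D / Q) * A + h * (Q / 2 + r - mu * L + B) + (D / Q) * pi * B

# crude grid search over order quantity Q and reorder point r for a fixed lead time
best = min(((Q, r) for Q in range(50, 301, 5) for r in range(30, 121, 2)),
           key=lambda p: annual_cost(p[0], p[1], 4.0))
```

A full solution procedure would also search over lead time (and, in the article's setting, the backorder rate), but the worst-case shortage bound is the piece that makes the model distribution-free.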
Detonation energies of explosives by optimized JCZ3 procedures
NASA Astrophysics Data System (ADS)
Stiel, Leonard I.; Baker, Ernest L.
1998-07-01
Procedures for the detonation properties of explosives have been extended for the calculation of detonation energies at adiabatic expansion conditions. The use of the JCZ3 equation of state with optimized Exp-6 potential parameters leads to lower errors in comparison to JWL detonation energies than for other methods tested.
Beyond the drugs: nonpharmacologic strategies to optimize procedural care in children.
Leroy, Piet L; Costa, Luciane R; Emmanouil, Dimitris; van Beukering, Alice; Franck, Linda S
2016-03-01
Painful and/or stressful medical procedures impose a substantial burden on sick children. There is good evidence that procedural comfort can be optimized by a comprehensive comfort-directed policy containing the triad of nonpharmacological strategies (NPS) in all cases, timely or preventive procedural analgesia if pain is an issue, and procedural sedation. Based on well-established theoretical frameworks as well as an increasing body of scientific evidence, NPS need to be regarded as an inextricable part of procedural comfort care. Procedural comfort care must always start with a child-friendly, nonthreatening environment in which well-being, confidence, and self-efficacy are optimized and maintained. This requires a reconsideration of the medical spaces where we provide care, reduction of sensory stimulation, normalized professional behavior, optimal logistics and coordination, and comfort-directed, age-appropriate verbal and nonverbal expression by professionals. Next, age-appropriate distraction techniques and/or hypnosis should be readily available. NPS are useful for all types of medical and dental procedures and should always precede and accompany procedural sedation. NPS should be embedded into a family-centered, care-directed policy, as it has been shown that family-centered care can lead to safer, more personalized, and more effective care; improved healthcare experiences and patient outcomes; and more responsive organizations.
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria and are applicable to multidimensional phenomena characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function, leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Schmidt, Phillip H.
1993-01-01
A parameter optimization framework has earlier been developed to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing various 'user' selected parameters in the optimization cost function such that the resulting optimized subcontrollers will meet the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the desired subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure is presented that combines a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
Wave drag as the objective function in transonic fighter wing optimization
NASA Technical Reports Server (NTRS)
Phillips, P. S.
1984-01-01
The original computational method for determining wave drag in a three dimensional transonic analysis method was replaced by a wave drag formula based on the loss in momentum across an isentropic shock. This formula was used as the objective function in a numerical optimization procedure to reduce the wave drag of a fighter wing at transonic maneuver conditions. The optimization procedure minimized wave drag through modifications to the wing section contours defined by a wing profile shape function. A significant reduction in wave drag was achieved while maintaining a high lift coefficient. Comparisons of the pressure distributions for the initial and optimized wing geometries showed significant reductions in the leading-edge peaks and shock strength across the span.
Optimized tomography of continuous variable systems using excitation counting
NASA Astrophysics Data System (ADS)
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
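The structure of such a protocol can be illustrated numerically: apply a displacement, count excitations, and stack the resulting outcome functionals into a sensing matrix whose condition number enters the reconstruction error bound. The sketch below works in a small truncated Fock space; the displacement amplitudes are arbitrary choices, not the optimized settings of the paper.

```python
import numpy as np

def displacement(alpha, dim):
    """D(alpha) = exp(alpha*a_dag - conj(alpha)*a) on a dim-level truncated Fock space.
    The generator is anti-Hermitian, so the exponential is taken via an
    eigendecomposition and the result is exactly unitary on the truncation."""
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)          # annihilation operator
    H = -1j * (alpha * a.conj().T - np.conj(alpha) * a)  # Hermitian generator
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

def sensing_matrix(alphas, dim):
    """Each row maps vec(rho) to one outcome probability
    p_n(alpha) = <n| D(-alpha) rho D(alpha) |n>."""
    rows = []
    for al in alphas:
        Dm = displacement(-al, dim)
        for n in range(dim):
            v = Dm[n, :]                                 # row vector <n| D(-alpha)
            rows.append(np.outer(v, v.conj()).flatten())
    return np.array(rows)

# four arbitrary displacement settings on a 3-level truncation (12 outcomes, 9 unknowns)
M = sensing_matrix([0.5, 0.9j, -0.7 + 0.3j, 1.1 - 0.6j], dim=3)
cond = np.linalg.cond(M)   # the quantity entering the reconstruction error bound
```

Lowering this condition number by re-choosing the displacement settings is the kind of optimization the paper formalizes; a rank equal to the number of density-matrix parameters indicates informational completeness.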
Digital adaptive flight controller development
NASA Technical Reports Server (NTRS)
Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.
1974-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
Saljooqi, Asma; Shamspur, Tayebeh; Mohamadi, Maryam; Afzali, Daryoush; Mostafavi, Ali
2015-05-01
First, the extraction and preconcentration of ultratrace amounts of lead(II) ions was performed using microliter volumes of a task-specific ionic liquid, combining the remarkable properties of ionic liquids with the advantages of a microextraction procedure. The ionic liquid used, trioctylmethylammonium thiosalicylate, formed a lead thiolate complex owing to the chelating effect of the carboxylate positioned ortho to the thiol functionality. Trioctylmethylammonium thiosalicylate thus played the roles of both chelating agent and extraction solvent simultaneously, so no additional ligand was needed. The main parameters affecting the efficiency of the method were investigated and optimized. Under optimized conditions, this approach showed a linear range of 2.0-24.0 ng/mL with a detection limit of 0.0010 ng/mL. The proposed method was applied to the extraction and preconcentration of lead from red lipstick and pine leaves samples prior to electrothermal atomic absorption spectroscopic determination. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computational wing optimization and comparisons with experiment for a semi-span wing model
NASA Technical Reports Server (NTRS)
Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.
1978-01-01
A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14 foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had as good or better lift to drag ratios at the design points as the best designs previously tested during an extensive parametric study.
Detonation Energies of Explosives by Optimized JCZ3 Procedures
NASA Astrophysics Data System (ADS)
Stiel, Leonard; Baker, Ernest
1997-07-01
Procedures for the detonation properties of explosives have been extended for the calculation of detonation energies at adiabatic expansion conditions. Advanced variable metric optimization routines developed by ARDEC are utilized to establish chemical reaction equilibrium by the minimization of the Helmholtz free energy of the system. The use of the JCZ3 equation of state with optimized Exp-6 potential parameters leads to lower errors in JWL detonation energies than the TIGER JCZ3 procedure and other methods tested for relative volumes to 7.0. For the principal isentrope with C-J parameters and freeze conditions established at elevated pressures with the JCZ3 equation of state, best results are obtained if an alternate volumetric relationship is utilized at the highest expansions. Efficient subroutines (designated JAGUAR) have been developed which incorporate the ability to automatically generate JWL and JWLB equation of state parameters.
Liang, Yanchun; Yu, Haibo; Zhou, Weiwei; Xu, Guoqing; Sun, Y I; Liu, Rong; Wang, Zulu; Han, Yaling
2015-12-01
Electrophysiological mapping (EPM) in coronary sinus (CS) branches is feasible for guiding LV lead placement to the optimal, latest activated site during cardiac resynchronization therapy (CRT) procedures. However, whether this procedure optimizes the response to CRT has not been demonstrated. This study evaluated the effects of targeting the LV lead at the latest activated site guided by EPM during CRT. Seventy-six consecutive patients with advanced heart failure who were referred for CRT were divided into a mapping group (MG) and a control group (CG). In MG, the LV lead, also used as a mapping bipolar electrode, was placed at the latest activated site determined by EPM in CS branches. In CG, a conventional CRT procedure was performed. Patients were followed for 6 months after CRT. Baseline characteristics were comparable between the two groups. In MG (n = 29), EPM was successfully performed in 85 of 91 CS branches during CRT. The LV lead was successfully placed at the latest activated site guided by EPM in 27 (93.1%) patients. Compared with CG (n = 47), MG had a significantly higher rate (86.2% vs. 63.8%, P = 0.039) of response (>15% reduction in LV end-systolic volume) to CRT, a higher percentage of patients with clinical improvement of ≥2 NYHA functional classes (72.4% vs. 44.7%, P = 0.032), and a shorter QRS duration (P = 0.004). An LV lead placed at the latest activated site guided by EPM resulted in a significantly greater CRT response and a shorter QRS duration. © 2015 Wiley Periodicals, Inc.
Optimal procedures for quality assurance specifications
DOT National Transportation Integrated Search
2003-04-01
This manual is a comprehensive guide that a highway agency can use when developing new, or modifying existing, acceptance plans and quality assurance specifications. It provides necessary instruction and illustrative examples to lead the agency throu...
NASA Astrophysics Data System (ADS)
Salmin, Vadim V.
2017-01-01
Low-thrust flight mechanics is a relatively new chapter of space flight mechanics, covering the full set of problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks associated with accounting for additional factors in the mathematical models of spacecraft motion are becoming increasingly important, as are additional restrictions on the possibilities of thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving optimization problems. The author proposes methods of finding approximately optimal controls and evaluating their optimality based on analytical solutions. These methods are based on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. The estimation procedures developed make it possible to determine how close a found solution is to the optimal one and indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between circular non-coplanar orbits, optimizing the control angle and trajectory of spacecraft motion during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth satellite orbits.
NASA Astrophysics Data System (ADS)
Vijayashree, M.; Uthayakumar, R.
2017-09-01
Lead time is one of the major factors that affect planning at every stage of the supply chain. In this paper, we study a continuous review inventory model in which ordering cost reductions depend on lead time. The study addresses a two-echelon supply chain problem consisting of a single vendor and a single buyer. Its main contribution is that the integrated total cost of the vendor-buyer system is analyzed under two different types of lead-time-dependent ordering cost reduction, linear and logarithmic. For each case, we develop an effective solution procedure, implemented as an algorithm, for finding the optimal solution. The procedure determines the optimal order quantity, ordering cost, lead time, and number of deliveries from the vendor to the buyer in one production run, so that the integrated total cost is minimized; the mathematical model is solved analytically by minimizing this cost. Numerical examples, solved using Matlab, validate the model and illustrate the results, and a sensitivity analysis with respect to the major parameters of the system is included. The results reveal that the proposed integrated inventory model is well suited to supply chain manufacturing systems. Finally, graphical representations and a computer flowchart illustrate each model.
Supercritical tests of a self-optimizing, variable-Camber wind tunnel model
NASA Technical Reports Server (NTRS)
Levinsky, E. S.; Palko, R. L.
1979-01-01
A testing procedure was used in a 16-foot Transonic Propulsion Wind Tunnel that leads to optimum wing airfoil sections without stopping the tunnel for model changes. Because the method is experimental, the optimum shapes obtained incorporate various three-dimensional, nonlinear viscous, and transonic effects not included in analytical optimization methods. The method is a closed-loop, computer-controlled, interactive procedure and employs a Self-Optimizing Flexible Technology wing semispan model that conformally adapts the airfoil section at two spanwise control stations to maximize or minimize various prescribed merit functions subject to both equality and inequality constraints. The model, which employed twelve independent hydraulic actuator systems and flexible skins, was also used for conventional testing. Although six of the seven optimizations attempted were at least partially convergent, further improvements in model skin smoothness and hydraulic reliability are required to make the technique fully operational.
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimization designed to control both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but overestimates the mixing process in that case.
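The optimal estimator concept itself is easy to demonstrate: the best model of a subgrid quantity y built on a set of resolved parameters x is the conditional expectation E[y|x], so the residual variance after conditional averaging lower-bounds the error of any model using those inputs. Below is a minimal one-feature sketch with synthetic data (a histogram estimate of E[y|x]; real applications would use filtered DNS fields):

```python
import numpy as np

def irreducible_error(x, y, bins=32):
    """Normalized irreducible error of predicting y from feature x:
    estimate E[y|x] by conditional averaging over quantile bins of x,
    then return the residual mean-squared error divided by Var(y).
    Values near 0 mean x is a good parameter set; near 1, useless."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_mean = np.zeros(bins)
    for b in range(bins):
        mask = idx == b
        cond_mean[b] = y[mask].mean() if mask.any() else 0.0
    return np.mean((y - cond_mean[idx]) ** 2) / np.var(y)
```

Comparing this quantity across candidate parameter sets is what identifies the most accurate inputs for a model, before any ANN is trained on them.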
Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System
NASA Astrophysics Data System (ADS)
Huang, Long; Feng, Xiao; Chu, Khim H.
2010-11-01
Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach also leads to cost savings that accrue from reduced freshwater usage and wastewater discharge. However, the resulting savings are offset to varying degrees by the costs incurred in regenerating wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits of any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that yield the minimum total cost.
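The trade-off described above can be made concrete with a deliberately simple one-variable cost model: freshwater and discharge costs fall with the regeneration flowrate while regeneration cost rises. All coefficients below are hypothetical; the paper's model is a full total cost accounting formulation optimized by mathematical programming.

```python
def total_cost(f, F0=100.0, c_fresh=1.2, c_discharge=0.8, c_regen=0.9, k=2.0):
    """Hypothetical annualized cost (arbitrary units) for a water network that
    recycles a regenerated flowrate f (out of F0 total demand). Freshwater and
    discharge charges shrink with f; the regeneration charge grows superlinearly
    to reflect tighter outlet-concentration targets at high recycle rates."""
    freshwater = c_fresh * (F0 - f)
    discharge = c_discharge * (F0 - f)
    regen = c_regen * f + k * (f / F0) ** 2 * F0
    return freshwater + discharge + regen

# one-dimensional search for the cost-minimizing regeneration flowrate
best_f = min((f * 0.1 for f in range(0, 1001)), key=total_cost)
```

With these toy coefficients the optimum is interior (recycling some, but not all, of the water), which is exactly why a systematic optimization rather than a rule of thumb is needed.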
Optimization of the magnetic dynamo.
Willis, Ashley P
2012-12-21
In stars and planets, magnetic fields are believed to originate from the motion of electrically conducting fluids in their interior, through a process known as the dynamo mechanism. In this Letter, an optimization procedure is used to simultaneously address two fundamental questions of dynamo theory: "Which velocity field leads to the most magnetic energy growth?" and "How large does the velocity need to be relative to magnetic diffusion?" In general, this requires optimization over the full space of continuous solenoidal velocity fields possible within the geometry. Here the case of a periodic box is considered. Measuring the strength of the flow with the root-mean-square amplitude, an optimal velocity field is shown to exist, but without limitation on the strain rate, optimization is prone to divergence. Measuring the flow in terms of its associated dissipation leads to the identification of a single optimum at the critical magnetic Reynolds number necessary for a dynamo. This magnetic Reynolds number is found to be only 15% higher than that necessary for transient growth of the magnetic field.
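The first question, which input maximizes energy growth, becomes a largest-singular-value problem once the dynamics are discretized: the optimal unit-norm input of a linear propagator A is its leading right singular vector. The toy sketch below finds it by power iteration on A^T A; the actual problem optimizes over continuous solenoidal velocity fields driving the induction equation, which this does not attempt.

```python
import numpy as np

def optimal_input(A, iters=500):
    """Unit-norm input maximizing the energy gain ||A x||^2 / ||x||^2, found by
    power iteration on A^T A. This is the finite-dimensional analogue of seeking
    the configuration with the largest transient energy growth."""
    x = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    for _ in range(iters):
        x = A.T @ (A @ x)       # one step of power iteration on A^T A
        x /= np.linalg.norm(x)  # renormalize to keep x on the unit sphere
    gain = np.linalg.norm(A @ x) ** 2
    return x, gain
```

The divergence issue mentioned in the abstract has a simple analogue here: if the admissible set of inputs is not bounded in an appropriate norm, the "optimal" gain can grow without limit, which is why the dissipation-based measure yields a well-posed optimum.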
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
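The stratified recommendation in (3) refers to optimal (Neyman) allocation, which, for a fixed total sample size, samples each stratum in proportion to N_h * sigma_h and thereby minimizes the variance of the stratified mean. A minimal sketch (the stratum sizes and standard deviations below are invented for illustration):

```python
def neyman_allocation(n_total, strata):
    """Optimal (Neyman) allocation of n_total samples across strata given as
    [(N_h, sigma_h), ...]: sample stratum h in proportion to N_h * sigma_h,
    rounding with the largest-remainder method so the counts sum to n_total."""
    weights = [N * s for N, s in strata]
    W = sum(weights)
    raw = [n_total * w / W for w in weights]
    n = [int(r) for r in raw]                 # floor of each ideal allocation
    order = sorted(range(len(raw)), key=lambda i: raw[i] - n[i], reverse=True)
    for i in order[: n_total - sum(n)]:       # hand out the leftover samples
        n[i] += 1
    return n
```

For instance, a highly variable stratum (large sigma_h) receives proportionally more of the sampling budget than an equally sized but more homogeneous one, which is the mechanism behind conclusion (4).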
Kydd, Anna C; Khan, Fakhar Z; Watson, William D; Pugh, Peter J; Virdee, Munmohan S; Dutka, David P
2014-06-01
This study was conducted to assess the impact of left ventricular (LV) lead position on longer-term survival after cardiac resynchronization therapy (CRT). An optimal LV lead position in CRT is associated with improved clinical outcome. A strategy of speckle-tracking echocardiography can be used to guide the implanter to the site of latest activation and away from segments of low strain amplitude (scar). Long-term, prospective survival data according to LV lead position in CRT are limited. Data from a follow-up registry of 250 consecutive patients receiving CRT between June 2008 and July 2010 were studied. The study population comprised patients recruited to the derivation group and the subsequent TARGET (Targeted Left Ventricular Lead Placement to guide Cardiac Resynchronization Therapy) randomized, controlled trial. Final LV lead position was described, in relation to the pacing site determined by pre-procedure speckle-tracking echocardiography, as optimal (concordant/adjacent) or suboptimal (remote). All-cause mortality was recorded at follow-up. An optimal LV lead position (n = 202) conferred LV remodeling response superior to that of a suboptimal lead position (change in LV end-systolic volume: -24 ± 15% vs. -12 ± 17% [p < 0.001]; change in ejection fraction: +7 ± 8% vs. +4 ± 7% [p = 0.02]). During long-term follow-up (median: 39 months; range: <1 to 61 months), an optimal LV lead position was associated with improved survival (log-rank p = 0.003). A suboptimal LV lead placement independently predicted all-cause mortality (hazard ratio: 1.8; p = 0.024). An optimal LV lead position at the site of latest mechanical activation, avoiding low strain amplitude (scar), was associated with superior CRT response and improved survival that persisted during follow-up. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Optimization of multi-element airfoils for maximum lift
NASA Technical Reports Server (NTRS)
Olsen, L. E.
1979-01-01
Two theoretical methods are presented for optimizing multi-element airfoils to obtain maximum lift. The analyses assume that the shapes of the various high lift elements are fixed. The objective of the design procedures is then to determine the optimum location and/or deflection of the leading and trailing edge devices. The first analysis determines the optimum horizontal and vertical location and the deflection of a leading edge slat. The structure of the flow field is calculated by iteratively coupling potential flow and boundary layer analysis. This design procedure does not require that flow separation effects be modeled. The second analysis determines the slat and flap deflection required to maximize the lift of a three element airfoil. This approach requires that the effects of flow separation from one or more of the airfoil elements be taken into account. The theoretical results are in good agreement with results of a wind tunnel test used to corroborate the predicted optimum slat and flap positions.
Optimizing chirped laser pulse parameters for electron acceleration in vacuum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza
2015-11-14
Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.
Electric and hybrid vehicles charge efficiency tests of ESB EV-106 lead acid batteries
NASA Technical Reports Server (NTRS)
Rowlette, J. J.
1981-01-01
Charge efficiencies were determined by measurements made under widely differing conditions of temperature, charge procedure, and battery age. The measurements were used to optimize charge procedures and to evaluate the concept of a modified, coulometric state of charge indicator. Charge efficiency determinations were made by measuring gassing rates and oxygen fractions. A novel, positive displacement gas flow meter which proved to be both simple and highly accurate is described and illustrated.
Mühlebach, Anneke; Adam, Joachim; Schön, Uwe
2011-11-01
Automated medicinal chemistry (parallel chemistry) has become an integral part of the drug-discovery process in almost every large pharmaceutical company. Parallel array synthesis of individual organic compounds has been used extensively to generate diverse structural libraries to support different phases of the drug-discovery process, such as hit-to-lead, lead finding, or lead optimization. To guarantee effective project support, efficiency in the production of compound libraries has been maximized; as a consequence, throughput in chromatographic purification and analysis has been adapted accordingly. As a recent trend, more laboratories are preparing smaller yet more focused libraries, with ever-increasing demands on quality, i.e. optimal purity and unambiguous confirmation of identity. This paper presents an automated approach that combines effective purification and structural confirmation of a lead optimization library created by microwave-assisted organic synthesis. The results of complementary analytical techniques such as UHPLC-HRMS and NMR are not only considered individually but merged for fast and easy decision making, providing optimal quality of the compound stock. Compared with the previous procedures, throughput times are at least four times faster, while compound consumption was decreased more than threefold. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Parameter learning for performance adaptation
NASA Technical Reports Server (NTRS)
Peek, Mark D.; Antsaklis, Panos J.
1990-01-01
A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
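The Hooke and Jeeves algorithm mentioned above is a derivative-free pattern search: exploratory moves along each coordinate are followed by a "pattern move" in the improving direction, and the step size shrinks when no improvement is found. The following is a minimal sketch of that idea, not the paper's actual implementation; the quadratic objective and all tuning constants are invented for illustration.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimize f over R^n by Hooke & Jeeves pattern search (no derivatives)."""
    def explore(base, s):
        # Try +/- s along each coordinate, keeping any improving move.
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        y = explore(x, step)
        if f(y) < f(x):
            # Pattern move: jump past y in the improving direction, re-explore.
            pattern = [2 * yi - xi for xi, yi in zip(x, y)]
            z = explore(pattern, step)
            x = z if f(z) < f(y) else y
        else:
            step *= shrink  # no improvement: refine the mesh
    return x

# Toy objective with known minimum at (1, -2).
best = hooke_jeeves(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

As in the abstract, no mathematical model of the system is needed: `f` may be any black-box performance measurement obtained from simulation or experiment.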
Optimal Design and Operation of Permanent Irrigation Systems
NASA Astrophysics Data System (ADS)
Oron, Gideon; Walker, Wynn R.
1981-01-01
Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are selected at the minimum cost point. The optimization procedure involves a nonlinear, mixed integer approach capable of achieving a variety of optimal solutions leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, a smaller flow rate in the pipe system, and outlet (sprinkler or emitter) discharge.
Teleportation of squeezing: Optimization using non-Gaussian resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio
2010-12-15
We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.
De Kleijn, P; Fischer, K; Vogely, H Ch; Hendriks, C; Lindeman, E
2011-11-01
This project aimed to develop guidelines for use during in-hospital rehabilitation after combinations of multiple joint procedures (MJP) of the lower extremities in persons with haemophilia (PWH). MJP are defined as surgical procedures on the ankles, knees and hips, performed in any combination, staged, or during a single session. MJP that we studied included total knee arthroplasty, total hip arthroplasty and ankle arthrodesis. Literature on rheumatoid arthritis demonstrated promising functional results, fewer hospitalization days and days lost from work. However, the complication rate is higher and rehabilitation needs optimal conditions. Since 1995, at the Van Creveldkliniek, 54 PWH have undergone MJP. During the rehabilitation in our hospital performed by experienced physical therapists, regular guidelines seemed useless. Guidelines will guarantee an optimal physical recovery and maximum benefit from this enormous investment. This will lead to an optimal functional capability and optimal quality of life for this elderly group of PWH. There are no existing guidelines for MJP, in haemophilia, revealed through a review of the literature. Therefore, a working group was formed to develop and implement such guidelines and the procedure is explained. The total group of PWH who underwent MJP is described, subdivided into combinations of joints. For these subgroups, the number of days in hospital, complications and profile at discharge, as well as a guideline on the clinical rehabilitation, are given. It contains a general part and a part for each specific subgroup. © 2011 Blackwell Publishing Ltd.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties when conducted with existing experimental procedures, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial.
Directly addressing these challenges, function estimation and variable selection are performed by data-driven modeling methods that generate a predictive model from data collected during the course of an experiment, removing the requirement of a parametric model at the beginning of an experiment; design optimization selects experimental designs on the fly during an experiment based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as a proxy for the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tanghao; Zhou, Yuanyuan; Hu, Qin
The fast-growing procedure (FGP) provides a simple, high-yield and lead (Pb)-release free method to prepare perovskite films. In the FGP, the ultra-dilute perovskite precursor solution is drop-cast onto a hot (~240 degrees C) substrate, where a perovskite film grows immediately accompanied by the rapid evaporation of the host solvent. In this process, all the raw materials in the precursor solution are deposited into the final perovskite film. The potential pollution caused by Pb can be significantly reduced. Properties of the FGP-processed perovskite films can be modulated by the precursor composition. While CH3NH3Cl (MACl) affects the crystallization process and leads to full surface coverage, CH(NH2)2I (FAI) enhances the thermal stability of the film. Based on the optimized precursor composition of PbI2(1-x)FAI xMACl, x=0.75, FGP-processed planar heterojunction perovskite solar cells exhibit power conversion efficiencies (PCEs) exceeding 15% with suppressed hysteresis and excellent reproducibility.
Dekker, A L A J; Phelps, B; Dijkman, B; van der Nagel, T; van der Veen, F H; Geskes, G G; Maessen, J G
2004-06-01
Patients in heart failure with left bundle branch block benefit from cardiac resynchronization therapy. Usually the left ventricular pacing lead is placed by coronary sinus catheterization; however, this procedure is not always successful, and patients may be referred for surgical epicardial lead placement. The objective of this study was to develop a method to guide epicardial lead placement in cardiac resynchronization therapy. Eleven patients in heart failure who were eligible for cardiac resynchronization therapy were referred for surgery because of failed coronary sinus left ventricular lead implantation. Minithoracotomy or thoracoscopy was performed, and a temporary epicardial electrode was used for biventricular pacing at various sites on the left ventricle. Pressure-volume loops with the conductance catheter were used to select the best site for each individual patient. Relative to the baseline situation, biventricular pacing with an optimal left ventricular lead position significantly increased stroke volume (+39%, P =.01), maximal left ventricular pressure derivative (+20%, P =.02), ejection fraction (+30%, P =.007), and stroke work (+66%, P =.006) and reduced end-systolic volume (-6%, P =.04). In contrast, biventricular pacing at a suboptimal site did not significantly change left ventricular function and even worsened it in some cases. To optimize cardiac resynchronization therapy with epicardial leads, mapping to determine the best pace site is a prerequisite. Pressure-volume loops offer real-time guidance for targeting epicardial lead placement during minimal invasive surgery.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.
NASA Astrophysics Data System (ADS)
Bortolotti, P.; Adolphs, G.; Bottasso, C. L.
2016-09-01
This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in the selection of the different materials of the blade, while providing indications to composite manufacturers on optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters among the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, with its spar cap and shell skin laminates subjected to optimization. The procedure identifies a blade optimum with a new spar cap laminate characterized by a higher longitudinal Young's modulus and higher cost than the initial one, which, however, in turn induces both cost and mass savings in the blade as a whole. In terms of the shell skin, the adoption of a laminate with properties intermediate between a bi-axial and a tri-axial one also leads to slight structural improvements.
Topology optimization of a gas-turbine engine part
NASA Astrophysics Data System (ADS)
Faskhutdinov, R. N.; Dubrovskaya, A. S.; Dongauzer, K. A.; Maksimov, P. V.; Trufanov, N. A.
2017-02-01
One of the key goals of aerospace industry is a reduction of the gas turbine engine weight. The solution of this task consists in the design of gas turbine engine components with reduced weight retaining their functional capabilities. Topology optimization of the part geometry leads to an efficient weight reduction. A complex geometry can be achieved in a single operation with the Selective Laser Melting technology. It should be noted that the complexity of structural features design does not affect the product cost in this case. Let us consider a step-by-step procedure of topology optimization by an example of a gas turbine engine part.
NASA Astrophysics Data System (ADS)
Kneringer, Philipp; Dietz, Sebastian J.; Mayr, Georg J.; Zeileis, Achim
2018-04-01
Airport operations are sensitive to visibility conditions. Low-visibility events may lead to capacity reductions, delays and economic losses. Different levels of low-visibility procedures (lvp) are enacted to ensure aviation safety. A nowcast of the probabilities for each of the lvp categories helps decision makers to optimally schedule their operations. An ordered logistic regression (OLR) model is used to forecast these probabilities directly. It is applied to cold-season forecasts at Vienna International Airport for lead times of 30 min out to 2 h. Model inputs are standard meteorological measurements. The skill of the forecasts is assessed by the ranked probability score. OLR outperforms persistence, which is a strong contender at the shortest lead times. The ranked probability score of the OLR is even better than that of nowcasts from human forecasters. The OLR-based nowcasting system is computationally fast and can be updated instantaneously when new data become available.
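An ordered logistic (proportional-odds) model of the kind used above turns a single linear predictor into probabilities over ordered categories via cumulative logits. The sketch below shows only that mechanism; the linear-predictor value and cutpoints are invented, not fitted coefficients from the Vienna study.

```python
import math

def olr_probs(eta, cutpoints):
    """Category probabilities under a proportional-odds (ordered logit) model:
    P(Y <= k) = logistic(c_k - eta); successive differences give P(Y = k)."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]  # cumulative probs
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical linear predictor built from meteorological inputs
# (e.g. visibility, relative humidity); values are illustrative only.
eta = 0.8
cutpoints = [-1.0, 0.5, 2.0]  # three thresholds -> four ordered lvp categories
p = olr_probs(eta, cutpoints)
```

Because the cutpoints are ordered, the cumulative probabilities are monotone and every category probability is non-negative; the four values sum to one and can be scored directly with the ranked probability score.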
Pourmortazavi, Seied Mahdi; Taghdiri, Mehdi; Makari, Vajihe; Rahimi-Nasrabadi, Mehdi
2015-02-05
The present study deals with the green synthesis of silver nanoparticles using the aqueous extract of Eucalyptus oleosa, without any catalyst, template or surfactant. Colloidal silver nanoparticles were synthesized by reacting aqueous AgNO3 with E. oleosa leaf extract under non-photomediated conditions. The significance of several synthesis conditions, such as silver nitrate concentration, concentration of the plant extract, time of the synthesis reaction and temperature of the plant extraction procedure, for the particle size of the synthesized silver particles was investigated and optimized. The contributions of the studied factors to controlling the particle size of the reduced silver were quantitatively evaluated via analysis of variance (ANOVA). The results of this investigation showed that silver nanoparticles can be synthesized by tuning the significant parameters; performing the synthesis procedure at optimum conditions leads to silver nanoparticles with an average size of 21 nm. Ultraviolet-visible spectroscopy was used to monitor the formation of silver nanoparticles. The produced silver nanoparticles were further characterized by scanning electron microscopy, energy-dispersive X-ray, and FT-IR techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
Cognitive Fatigue Facilitates Procedural Sequence Learning.
Borragán, Guillermo; Slama, Hichem; Destrebecqz, Arnaud; Peigneux, Philippe
2016-01-01
Enhanced procedural learning has been evidenced in conditions where cognitive control is diminished, including hypnosis, disruption of prefrontal activity and non-optimal time of the day. Another condition depleting the availability of controlled resources is cognitive fatigue (CF). We tested the hypothesis that CF, eventually leading to diminished cognitive control, facilitates procedural sequence learning. In a two-day experiment, 23 young healthy adults were administered a serial reaction time task (SRTT) following the induction of high or low levels of CF, in a counterbalanced order. CF was induced using the Time load Dual-back (TloadDback) paradigm, a dual working memory task that allows tailoring cognitive load levels to the individual's optimal performance capacity. In line with our hypothesis, reaction times (RT) in the SRTT were faster in the high- than in the low-level fatigue condition, and performance improvement was higher for the sequential than the motor components. Altogether, our results suggest a paradoxical, facilitating impact of CF on procedural motor sequence learning. We propose that facilitated learning in the high-level fatigue condition stems from a reduction in the cognitive resources devoted to cognitive control processes that normally oppose automatic procedural acquisition mechanisms.
Grotti, Marco; Abelmoschi, Maria Luisa; Dalla Riva, Simona; Soggia, Francesco; Frache, Roberto
2005-04-01
A new procedure for determining low levels of lead in bone tissues has been developed. After wet acid digestion in a pressurized microwave-heated system, the solution was analyzed by inductively coupled plasma multichannel-based emission spectrometry. Internal standardization using the Co 228.615 nm reference line was chosen as the optimal method to compensate for the matrix effects from the presence of calcium and nitric acid at high concentration levels. The detection limit of the procedure was 0.11 microg Pb g(-1) dry mass. Instrumental precision at the analytical concentration of approximately 10 microg l(-1) ranged from 6.1 to 9.4%. Precision of the sample preparation step was 5.4%. The concentration of lead in SRM 1486 (1.32+/-0.04 microg g(-1)) found using the new procedure was in excellent agreement with the certified level (1.335+/-0.014 microg g(-1)). Finally, the method was applied to determine the lead in various fish bone tissues, and the analytical results were found to be in good agreement with those obtained through differential pulse anodic stripping voltammetry. The method is therefore suitable for the reliable determination of lead at concentration levels of below 1 microg g(-1) in bone samples. Moreover, the multi-element capability of the technique allows us to simultaneously determine other major or trace elements in order to investigate inter-element correlation and to compute enrichment factors, making the proposed procedure particularly useful for investigating lead occurrence and pathways in fish bone tissues in order to find suitable biomarkers for the Antarctic marine environment.
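The internal-standardization step described above compensates for matrix effects by ratioing the analyte signal to a reference-line signal (here Co 228.615 nm) before calibration. The sketch below shows that ratio-based calibration arithmetic only; the intensity ratios and concentrations are invented numbers, not data from the study.

```python
def internal_standard_calibration(standards, sample_ratio):
    """Fit a least-squares line through (concentration, analyte/IS intensity
    ratio) calibration points, then invert it for an unknown sample's ratio."""
    xs = [c for c, _ in standards]
    ys = [r for _, r in standards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return (sample_ratio - intercept) / slope

# Hypothetical Pb calibration: (concentration in ug/L, Pb/Co intensity ratio).
standards = [(0.0, 0.01), (5.0, 0.51), (10.0, 1.01), (20.0, 2.01)]
conc = internal_standard_calibration(standards, 1.06)  # unknown sample ratio
```

Because matrix constituents (Ca, concentrated nitric acid) suppress or enhance the analyte and the internal-standard lines similarly, the ratio is far less matrix-sensitive than the raw analyte intensity.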
NASA Technical Reports Server (NTRS)
Ghaffari, F.; Chaturvedi, S. K.
1984-01-01
An analytical design procedure for leading edge extensions (LEE) was developed for thick delta wings. This LEE device is designed to be mounted to a wing along the pseudo-stagnation stream surface associated with the attached flow design lift coefficient of greater than zero. The intended purpose of this device is to improve the aerodynamic performance of high subsonic and low supersonic aircraft at incidences above that of attached flow design lift coefficient, by using a vortex system emanating along the leading edges of the device. The low pressure associated with these vortices would act on the LEE upper surface and the forward facing area at the wing leading edges, providing an additional lift and effective leading edge thrust recovery. The first application of this technique was to a thick, round edged, twisted and cambered wing of approximately triangular planform having a sweep of 58 deg and aspect ratio of 2.30. The panel aerodynamics and vortex lattice method with suction analogy computer codes were employed to determine the pseudo-stagnation stream surface and an optimized LEE planform shape.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof-of-concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
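The screening step of a DoE approach, deciding which parameters are crucial, can be illustrated with a two-level full-factorial design: run every combination of low/high settings and estimate each parameter's main effect on the response. This is a generic sketch of that first step, not the software or settings of the study; the two parameter names and the response function are hypothetical.

```python
from itertools import product

def main_effects(factors, response):
    """Two-level full-factorial screening: for each factor, the average change
    in the response when switching it from its low to its high level,
    averaged over all settings of the other factors."""
    names = list(factors)
    runs = [dict(zip(names, combo)) for combo in product(*factors.values())]
    effects = {}
    for name in names:
        lo, hi = factors[name]
        hi_mean = sum(response(r) for r in runs if r[name] == hi) / (len(runs) / 2)
        lo_mean = sum(response(r) for r in runs if r[name] == lo) / (len(runs) / 2)
        effects[name] = hi_mean - lo_mean
    return effects

# Hypothetical feature-detection parameters and a made-up hit-count response
# (the quantity to be reduced without losing analytes).
factors = {"intensity_threshold": (100, 1000), "mass_tol_ppm": (2, 10)}
hits = lambda r: 5000 / r["intensity_threshold"] + 300 * r["mass_tol_ppm"]
effects = main_effects(factors, hits)
```

Parameters with large main effects would be carried forward as "crucial" into the second, optimization stage; the rest can be fixed at convenient defaults.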
Enhancing the method for extracting social networks by relation existence
NASA Astrophysics Data System (ADS)
Elfida, Maria; Matyuso Nasution, M. K.; Sitompul, O. S.
2018-01-01
Obtaining trustworthy information about a social network extracted from the Web requires a reliable method, and achieving optimal results requires a method that can cope with the complexity of the information resources. This paper aims to show how to overcome the constraints of social network extraction that lead to high complexity by identifying relationships among social actors. By changing the treatment in the procedure used, we obtain a complexity smaller than that of the previous procedure. This is also demonstrated in an experiment using the denial sample.
The PDB_REDO server for macromolecular structure model optimization.
Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis
2014-07-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.
Evaluating the effects of real power losses in optimal power flow based storage integration
Castillo, Anya; Gayme, Dennice
2017-03-27
This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second-order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.
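The core of any DCOPF model is the linear DC power-flow approximation, to which the ℓ-DCOPF+S formulation adds quadratic real-power loss terms of the form r·f². The toy 3-bus network below sketches that idea in plain Python; all susceptances, injections and the resistance are illustrative values, not taken from the study.

```python
# DC power flow on a 3-bus toy network (bus 3 is the slack bus),
# plus a quadratic branch-loss term r*f^2 of the kind the
# l-DCOPF+S formulation adds to a standard DCOPF.
# All numbers are illustrative, not from the paper.

b = 10.0          # per-unit susceptance of every line
r = 0.01          # per-unit resistance used in the loss term
P = [0.5, -1.0]   # net injections at buses 1 and 2 (bus 3 is slack)

# Reduced bus susceptance matrix B' for buses 1 and 2
# (lines 1-2, 1-3 and 2-3, all with susceptance b):
B = [[2 * b, -b],
     [-b, 2 * b]]

# Solve B' * theta = P for the 2x2 case by explicit inversion (Cramer).
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta1 = (B[1][1] * P[0] - B[0][1] * P[1]) / det
theta2 = (B[0][0] * P[1] - B[1][0] * P[0]) / det

# Line flows from angle differences (slack angle = 0).
f12 = b * (theta1 - theta2)
f13 = b * theta1
f23 = b * theta2

# Quadratic real-power loss approximation, summed over lines.
losses = r * (f12 ** 2 + f13 ** 2 + f23 ** 2)
```

The bus power balances recover the injections exactly, and the loss term is the piece a plain DCOPF drops.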
Multispectral tissue characterization for intestinal anastomosis optimization.
Cha, Jaepyeong; Shademan, Azad; Le, Hanh N D; Decker, Ryan; Kim, Peter C W; Kang, Jin U; Krieger, Axel
2015-10-01
Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement. PMID:26440616
Chen, Jan-Yow; Lin, Kuo-Hung; Chang, Kuan-Cheng; Chou, Che-Yi
2017-08-03
QRS duration has been associated with the response to cardiac resynchronization therapy (CRT). However, the methods for defining QRS duration to predict the outcome of CRT have discrepancies in previous reports. The aim of this study was to determine an optimal measurement of QRS duration to predict the response to CRT.Sixty-one patients who received CRT were analyzed. All patients had class III-IV heart failure, left ventricular ejection fraction not more than 35%, and complete left bundle branch block. The shortest, longest, and average QRS durations from the 12 leads of each electrocardiogram (ECG) were measured. The responses to CRT were determined using the changes in echocardiography after 6 months. Thirty-five (57.4%) patients were responders and 26 (42.6%) patients were non-responders. The pre-procedure shortest, average, and longest QRS durations and the QRS shortening (ΔQRS) of the shortest QRS duration were significantly associated with the response to CRT in a univariate logistic regression analysis (P = 0.002, P = 0.03, P = 0.04 and P = 0.04, respectively). Based on the measurement of the area under curve of the receiver operating characteristic curve, only the pre-procedure shortest QRS duration and the ΔQRS of the shortest QRS duration showed significant discrimination for the response to CRT (P = 0.002 and P = 0.038, respectively). Multivariable logistic regression showed the pre-procedure shortest QRS duration is an independent predictor for the response to CRT.The shortest QRS duration from the 12 leads of the electrocardiogram might be an optimal measurement to predict the response to CRT.
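The discrimination measure used in studies like this one, the area under the ROC curve, equals the probability that a randomly chosen responder's predictor value outranks a randomly chosen non-responder's (the Mann-Whitney statistic, with ties counting one half). A minimal sketch with made-up values:

```python
def roc_auc(responders, non_responders):
    """AUC as the Mann-Whitney probability that a responder's
    predictor value exceeds a non-responder's (ties count 0.5)."""
    wins = 0.0
    for x in responders:
        for y in non_responders:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(responders) * len(non_responders))

# Hypothetical Delta-QRS values in ms, NOT the study's data:
auc = roc_auc([30, 42, 55], [5, 12, 30])
```

An AUC of 0.5 means no discrimination and 1.0 means perfect separation of responders from non-responders.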
Optimization of Milling Parameters Employing Desirability Functions
NASA Astrophysics Data System (ADS)
Ribeiro, J. L. S.; Rubio, J. C. Campos; Abrão, A. M.
2011-01-01
The principal aim of this paper is to investigate the influence of tool material (one cermet and two coated carbide grades), cutting speed and feed rate on the machinability of hardened AISI H13 hot work steel, in order to identify the cutting conditions which lead to optimal performance. A multiple response optimization procedure based on tool life, surface roughness, milling forces and the machining time (required to produce a sample cavity) was employed. The results indicated that the TiCN-TiN coated carbide and cermet presented similar results concerning the global optimum values for cutting speed and feed rate per tooth, outperforming the TiN-TiCN-Al2O3 coated carbide tool.
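A multiple-response optimization of the kind described above typically maps each response (tool life, roughness, forces, machining time) onto a 0-1 desirability and combines them with a geometric mean; the cutting conditions maximizing the composite are taken as the global optimum. A hedged sketch of Derringer-style desirability functions, with all target and limit values invented for illustration:

```python
def desirability_smaller_is_better(y, target, upper, weight=1.0):
    """Derringer-type desirability for a response to be minimized
    (e.g. surface roughness or machining time): 1 at or below the
    target, 0 at or above the upper limit, a power curve between."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def composite_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative responses with made-up target/limit values:
d_rough = desirability_smaller_is_better(0.8, 0.5, 2.0)   # roughness, um
d_force = desirability_smaller_is_better(120, 100, 300)   # milling force, N
D = composite_desirability([d_rough, d_force])
```

Because the composite is a geometric mean, any single fully undesirable response (d = 0) drives the overall desirability to zero.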
NASA Astrophysics Data System (ADS)
Jha, Ratneshwar
Multidisciplinary design optimization (MDO) procedures have been developed for smart composite wings and turbomachinery blades. The analysis and optimization methods used are computationally efficient and sufficiently rigorous. Therefore, the developed MDO procedures are well suited for actual design applications. The optimization procedure for the conceptual design of composite aircraft wings with surface bonded piezoelectric actuators involves the coupling of structural mechanics, aeroelasticity, aerodynamics and controls. The load carrying member of the wing is represented as a single-celled composite box beam. Each wall of the box beam is analyzed as a composite laminate using a refined higher-order displacement field to account for the variations in transverse shear stresses through the thickness. Therefore, the model is applicable for the analysis of composite wings of arbitrary thickness. Detailed structural modeling issues associated with piezoelectric actuation of composite structures are considered. The governing equations of motion are solved using the finite element method to analyze practical wing geometries. Three-dimensional aerodynamic computations are performed using a panel code based on the constant-pressure lifting surface method to obtain steady and unsteady forces. The Laplace domain method of aeroelastic analysis produces root-loci of the system which gives an insight into the physical phenomena leading to flutter/divergence and can be efficiently integrated within an optimization procedure. The significance of the refined higher-order displacement field on the aeroelastic stability of composite wings has been established. The effect of composite ply orientations on flutter and divergence speeds has been studied. The Kreisselmeier-Steinhauser (K-S) function approach is used to efficiently integrate the objective functions and constraints into a single envelope function. 
The resulting unconstrained optimization problem is solved using the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimization problem is formulated with the objective of simultaneously minimizing wing weight and maximizing its aerodynamic efficiency. Design variables include composite ply orientations, ply thicknesses, wing sweep, piezoelectric actuator thickness and actuator voltage. Constraints are placed on the flutter/divergence dynamic pressure, wing root stresses and the maximum electric field applied to the actuators. Numerical results are presented showing significant improvements, after optimization, compared to reference designs. The multidisciplinary optimization procedure for the design of turbomachinery blades integrates aerodynamic and heat transfer design objective criteria along with various mechanical and geometric constraints on the blade geometry. The airfoil shape is represented by Bezier-Bernstein polynomials, which results in a relatively small number of design variables for the optimization. Thin shear layer approximation of the Navier-Stokes equation is used for the viscous flow calculations. Grid generation is accomplished by solving Poisson equations. The maximum and average blade temperatures are obtained through a finite element analysis. Total pressure and exit kinetic energy losses are minimized, with constraints on blade temperatures and geometry. The constrained multiobjective optimization problem is solved using the K-S function approach. The results for the numerical example show significant improvements after optimization.
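The Kreisselmeier-Steinhauser (K-S) function used above folds many objectives/constraints g_i into one smooth envelope, KS(g) = g_max + (1/ρ) ln Σ exp(ρ(g_i − g_max)), which overestimates max(g_i) by at most ln(n)/ρ. A minimal sketch (the values in g are illustrative):

```python
import math

def ks_envelope(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of the values in g.
    Written in max-shifted form for numerical stability; it
    approaches max(g) from above as the draw-down factor rho grows."""
    gmax = max(g)
    return gmax + math.log(sum(math.exp(rho * (gi - gmax)) for gi in g)) / rho

g = [0.2, -0.5, 0.9]          # illustrative constraint values
ks = ks_envelope(g, rho=50.0)
```

The envelope is differentiable everywhere, unlike max(g_i), which is why it suits gradient-based optimizers such as BFGS.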
Amorim, Fábio A C; Ferreira, Sérgio L C
2005-02-28
In the present paper, a simultaneous pre-concentration procedure for the sequential determination of cadmium and lead in table salt samples using flame atomic absorption spectrometry is proposed. The method is based on the liquid-liquid extraction of cadmium(II) and lead(II) ions as dithizone complexes and direct aspiration of the organic phase into the spectrometer. The sequential determination of cadmium and lead is possible using a computer program. The optimization step was performed with a two-level fractional factorial design involving the variables pH, dithizone mass, shaking time after addition of dithizone, and shaking time after addition of solvent. Within the studied levels, these variables were not significant. The established experimental conditions use a sample volume of 250 mL and extraction with 4.0 mL of methyl isobutyl ketone. In this way, the procedure allows the determination of cadmium and lead in table salt samples with a pre-concentration factor higher than 80, and detection limits of 0.3 ng g(-1) for cadmium and 4.2 ng g(-1) for lead. The precision, expressed as relative standard deviation (n = 10), was 5.6 and 2.6% for cadmium concentrations of 2 and 20 ng g(-1), respectively, and 3.2 and 1.1% for lead concentrations of 20 and 200 ng g(-1), respectively. Recoveries of cadmium and lead in several samples, measured by the standard addition technique, also proved that this procedure is not affected by the matrix and can be applied satisfactorily to the determination of cadmium and lead in saline samples. The method was applied to evaluate the concentrations of cadmium and lead in table salt samples consumed in Salvador City, Bahia, Brazil.
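A two-level fractional factorial screening of four variables, as described above, can be laid out as a 2^(4-1) design: a full factorial in three factors with the fourth aliased as their product (defining relation D = ABC). A sketch with coded −1/+1 levels; the column-to-variable mapping (pH, dithizone mass, the two shaking times) is the obvious one but is our assumption, not stated in the abstract:

```python
from itertools import product

# 2^(4-1) fractional factorial design: full factorial in factors
# A, B, C, with the fourth factor aliased as D = A*B*C.
# Columns could map to pH, dithizone mass and the two shaking
# times (coded levels -1/+1); 8 runs instead of the full 16.
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append((a, b, c, a * b * c))
```

Each column is balanced (equal numbers of −1 and +1), which is what lets main effects be estimated from only half the runs of the full design.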
NASA Astrophysics Data System (ADS)
Cocozzella, N.; Lebeau, M.; Majni, G.; Paone, N.; Rinaldi, D.
2001-08-01
Scintillating crystals are widely used as detectors in radiographic systems, computerized axial tomography devices and in calorimeters employed in high-energy physics. This paper results from a project motivated by the development of the CMS calorimeter at CERN, which will make use of a large number of scintillating crystals. In order to prevent crystals from breaking because of internal residual stress, a quality control system based on optical inspection of interference fringe patterns was developed. The measurement procedure was first modelled theoretically, and a dedicated polariscope was then designed and built in order to observe the crystals under induced stresses and to evaluate the residual internal stresses. The results are innovative and open a new perspective for scintillating crystal quality control: the photoelastic constant normal to the optic axis of lead tungstate (PbWO4) crystals was measured, and the inspection procedure developed is applicable to mass production, not only to optimize crystal processing but also to establish a quality inspection procedure.
An integrated platform for image-guided cardiac resynchronization therapy
NASA Astrophysics Data System (ADS)
Ma, Ying Liang; Shetty, Anoop K.; Duckett, Simon; Etyngier, Patrick; Gijsbers, Geert; Bullens, Roland; Schaeffter, Tobias; Razavi, Reza; Rinaldi, Christopher A.; Rhode, Kawal S.
2012-05-01
Cardiac resynchronization therapy (CRT) is an effective procedure for patients with heart failure but 30% of patients do not respond. This may be due to sub-optimal placement of the left ventricular (LV) lead. It is hypothesized that the use of cardiac anatomy, myocardial scar distribution and dyssynchrony information, derived from cardiac magnetic resonance imaging (MRI), may improve outcome by guiding the physician for optimal LV lead positioning. Whole heart MR data can be processed to yield detailed anatomical models including the coronary veins. Cine MR data can be used to measure the motion of the LV to determine which regions are late-activating. Finally, delayed Gadolinium enhancement imaging can be used to detect regions of scarring. This paper presents a complete platform for the guidance of CRT using pre-procedural MR data combined with live x-ray fluoroscopy. The platform was used for 21 patients undergoing CRT in a standard catheterization laboratory. The patients underwent cardiac MRI prior to their procedure. For each patient, a MRI-derived cardiac model, showing the LV lead targets, was registered to x-ray fluoroscopy using multiple views of a catheter looped in the right atrium. Registration was maintained throughout the procedure by a combination of C-arm/x-ray table tracking and respiratory motion compensation. Validation of the registration between the three-dimensional (3D) roadmap and the 2D x-ray images was performed using balloon occlusion coronary venograms. A 2D registration error of 1.2 ± 0.7 mm was achieved. In addition, a novel navigation technique was developed, called Cardiac Unfold, where an entire cardiac chamber is unfolded from 3D to 2D along with all relevant anatomical and functional information and coupled to real-time device detection. This allowed more intuitive navigation as the entire 3D scene was displayed simultaneously on a 2D plot. 
The accuracy of the unfold navigation was assessed off-line using 13 patient data sets by computing the registration error of the LV pacing lead electrodes which was found to be 2.2 ± 0.9 mm. Furthermore, the use of Unfold Navigation was demonstrated in real-time for four clinical cases.
False Discovery Control in Large-Scale Spatial Multiple Testing
Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin
2014-01-01
This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
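For contrast with the oracle procedures above, the most common p-value based FDR controller is the Benjamini-Hochberg step-up rule: sort the p-values and reject every hypothesis up to the largest rank k with p(k) ≤ kq/m. A minimal sketch (not the paper's spatial procedure):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Step-up Benjamini-Hochberg procedure: returns a reject/keep
    flag per hypothesis, controlling the FDR at level q (valid
    under independence or positive regression dependence)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank               # largest rank passing the threshold
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

flags = benjamini_hochberg([0.01, 0.02, 0.03, 0.5], q=0.05)
```

Note the step-up character: a p-value may exceed its own threshold and still be rejected if a larger-ranked p-value passes.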
A Hamiltonian approach to the planar optimization of mid-course corrections
NASA Astrophysics Data System (ADS)
Iorfida, E.; Palmer, P. L.; Roberts, M.
2016-04-01
Lawden's primer vector theory gives a set of necessary conditions that characterize the optimality of a transfer orbit, defined according to the possibility of adding mid-course corrections. In this paper a novel approach is proposed in which, through a polar coordinate transformation, the primer vector components decouple. Furthermore, the case when the transfer, departure and arrival orbits are coplanar is analyzed using a Hamiltonian approach. This procedure leads to approximate analytic solutions for the in-plane components of the primer vector. Moreover, the solution for the circular transfer case is proven to be Hill's solution. The novel procedure reduces the mathematical and computational complexity of the original case study. It is shown that the primer vector is independent of the semi-major axis of the transfer orbit. The case with a fixed transfer trajectory and variable initial and final thrust impulses is studied. The resulting optimality maps, which express the likelihood of a set of trajectories being optimal, are presented and analyzed. Furthermore, the requirements that a set of departure and arrival orbits must fulfil in order to share the same primer vector profile are presented.
Fundamental principles in periodontal plastic surgery and mucosal augmentation--a narrative review.
Burkhardt, Rino; Lang, Niklaus P
2014-04-01
To provide a narrative review of the current literature elaborating on fundamental principles of periodontal plastic surgical procedures. Based on a presumptive outline of the narrative review, MESH terms have been used to search the relevant literature electronically in the PubMed and Cochrane Collaboration databases. If possible, systematic reviews were included. The review is divided into three phases associated with periodontal plastic surgery: a) pre-operative phase, b) surgical procedures and c) post-surgical care. The surgical procedures were discussed in the light of a) flap design and preparation, b) flap mobilization and c) flap adaptation and stabilization. Pre-operative paradigms include the optimal plaque control and smoking counselling. Fundamental principles in surgical procedures address basic knowledge in anatomy and vascularity, leading to novel appropriate flap designs with papilla preservation. Flap mobilization based on releasing incisions can be performed up to 5 mm. Flap adaptation and stabilization depend on appropriate wound bed characteristics, undisturbed blood clot formation, revascularization and wound stability through adequate suturing. Delicate tissue handling and tension free wound closure represent prerequisites for optimal healing outcomes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Jaiswal, Rohit; Pu, Lee L Q
2013-04-01
Major facial trauma often requires complex repair. Traditionally, the reconstruction of such injuries has relied primarily on free tissue transfer alone. However, the advent of newer, contemporary procedures may allow further reconstructive improvement through complementary procedures after free flap reconstruction. An 18-year-old male patient suffered a major left facial degloving injury resulting in a soft-tissue defect with exposed zygoma and parietal bone. Multiple operations were undertaken in a staged manner for reconstruction. A state-of-the-art free anterolateral thigh (ALT) perforator flap and Medpor implant reconstruction of the midface were performed initially, followed by flap debulking, lateral canthopexy, midface lift with redo canthopexy, scalp tissue expansion for hairline reconstruction, and epidermal skin grafting for optimal skin color matching. Over a follow-up period of 2 years, an excellent reconstructive result was achieved through the use of multiple contemporary reconstructive procedures following the initial free ALT flap reconstruction. Multiple staged reconstructions were essential in producing an optimal outcome in this complex facial injury, which would likely not have been achieved through a single-stage traditional free flap reconstruction. Utilizing multiple, sequential contemporary surgeries may substantially improve outcome through the enhancement and refinement of results built on the best possible initial soft-tissue reconstruction.
Vortex assisted solid-phase extraction of lead(II) using orthorhombic nanosized Bi2WO6 as a sorbent.
Baghban, Neda; Yilmaz, Erkan; Soylak, Mustafa
2017-12-07
Nanosized single-crystal orthorhombic Bi2WO6 was synthesized by a hydrothermal method and used as a sorbent for vortex assisted solid-phase extraction of lead(II). The crystal and molecular structure of the sorbent was examined using XRD, Raman, SEM and SEM-EDX analysis. Various parameters affecting extraction efficiency were optimized by using a multivariate design. The effect of diverse ions on the extraction was also studied. Lead was quantified by flame atomic absorption spectrometry (FAAS). The recoveries of lead(II) from spiked samples (at a typical spiking level of 200-400 ng·mL(-1)) are >95%. Other figures of merit include (a) a detection limit of 6 ng·mL(-1), (b) a preconcentration factor of 50, (c) a relative standard deviation of 1.6%, and (d) an adsorption capacity of 6.6 mg·g(-1). The procedure was successfully applied to the accurate determination of lead in (spiked) pomegranate and water samples.
On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal
2011-05-01
In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst inadequate feeding may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of determining adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general-purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the forming of defect-free components in industry [1].
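A bare-bones rand/1/bin differential evolution loop of the kind underlying HDEPSO can be sketched as follows. A stand-in sphere objective replaces the paper's wrinkling/thinning indicators, which require finite element runs; the control settings (F, CR, population size) are conventional defaults, not the paper's:

```python
import random

def sphere(x):
    # Stand-in objective; the paper evaluates FE-based defect indicators.
    return sum(xi * xi for xi in x)

def differential_evolution(f, dim, bounds=(-5.0, 5.0),
                           pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=1):
    """Minimal DE/rand/1/bin minimizer with greedy selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Three mutually distinct donors, none equal to i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)          # guaranteed crossover index
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jr) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

x_best, f_best = differential_evolution(sphere, dim=3)
```

In the paper's setting, each objective evaluation would trigger a forming simulation, and detected wrinkling or thinning would penalize the candidate parameter set.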
Aydın Urucu, Oya; Dönmez, Şeyda; Kök Yetimoğlu, Ece
2017-01-01
A novel method was developed for the determination of trace amounts of lead in water and food samples. Solidified floating organic drop microextraction was used to preconcentrate the lead ion. After the analyte was complexed with 1-(2-pyridylazo)-2-naphthol, undecanol and acetonitrile were added as the extraction and dispersive solvents, respectively. Variables such as pH, volumes of extraction and dispersive solvents, and concentration of chelating agent were optimized. Under the optimum conditions, the detection limit for Pb(II) was 0.042 µg L(-1) with an enrichment factor of 300. The relative standard deviation is <10%. The accuracy of the developed procedure was evaluated by the analysis of certified reference materials of human hair (NCS DC 73347) and wastewater (SPS-WW2) with satisfactory results. The developed procedure was then successfully applied to biscuit and water samples for the detection of Pb(II) ions.
Shape Optimization and Modular Discretization for the Development of a Morphing Wingtip
NASA Astrophysics Data System (ADS)
Morley, Joshua
Advances in aerodynamics and optimization have allowed designers to develop efficient wingtip structures in recent years. However, the requirements faced by wingtip devices can differ considerably across an aircraft's flight regimes. Traditional static wingtip devices are therefore a compromise between conflicting requirements, resulting in less than optimal performance within each regime. Alternatively, a morphing wingtip can reconfigure itself, leading to improved performance over a range of dissimilar flight conditions. Developed within this thesis is a modular morphing wingtip concept centered on the use of variable geometry truss mechanisms to permit morphing. A conceptual design framework is established to aid in the development of the concept. The framework uses a metaheuristic optimization procedure to determine optimal continuous wingtip configurations, which are then discretized for the modular concept. The functionality of the framework is demonstrated through a design study on a hypothetical wing/winglet.
Pricing strategy in a dual-channel and remanufacturing supply chain system
NASA Astrophysics Data System (ADS)
Jiang, Chengzhi; Xu, Feng; Sheng, Zhaohan
2010-07-01
This article addresses pricing strategy problems in a supply chain system where the manufacturer sells original and remanufactured products via indirect retailer channels and direct Internet channels. Due to the complexity of that system, agent technologies, which provide a new way of analysing complex systems, are used for modelling. Meanwhile, in order to reduce the computational load of the search procedure for optimal prices and profits, a learning search algorithm is designed and implemented within the multi-agent supply chain model. The simulation results show that the proposed model can determine the optimal prices of original and remanufactured products in both channels, which lead to optimal profits for the manufacturer and the retailer. It is also found that the optimal profits are increased by introducing the direct channel and remanufacturing. Furthermore, the effects of customer preference, direct channel cost and remanufactured unit cost on optimal prices and profits are examined.
Vorticity Dynamics in Axial Compressor Flow Diagnosis and Design.
NASA Astrophysics Data System (ADS)
Wu, Jie-Zhi; Yang, Yan-Tao; Wu, Hong; Li, Qiu-Shi; Mao, Feng; Zhou, Sheng
2007-11-01
It is well recognized that vorticity and vortical structures appear inevitably in viscous compressor flows and strongly influence compressor performance. Conventional analysis and design procedures, however, cannot pinpoint the quantitative contribution of each individual vortical structure to the integrated performance of a compressor, such as the stagnation-pressure ratio and efficiency. We fill this gap by using the so-called derivative-moment transformation, which has been successfully applied to external aerodynamics. We show that the compressor performance is mainly controlled by the radial distribution of azimuthal vorticity, whose optimization in the through-flow design stage leads to a simple Abel equation of the second kind. Solving the equation yields the desired circulation distribution that optimizes the blade geometry. The advantage of this new procedure is demonstrated by numerical examples, including an a posteriori performance check by 3-D Navier-Stokes simulation.
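An Abel equation of the second kind has the canonical form y y' = f(x) y + g(x). Lacking the paper's specific coefficients, the sketch below integrates an illustrative instance (f(x) = e^x, g = 0, whose exact solution through (0, 1) is y = e^x) with classical fourth-order Runge-Kutta:

```python
import math

def abel_rhs(x, y, f, g):
    # Abel equation of the second kind: y*y' = f(x)*y + g(x),
    # rewritten as y' = (f(x)*y + g(x)) / y (valid while y != 0).
    return (f(x) * y + g(x)) / y

def rk4(x0, y0, x1, n, f, g):
    """Classical RK4 integration of the Abel equation from x0 to x1."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = abel_rhs(x, y, f, g)
        k2 = abel_rhs(x + h / 2, y + h * k1 / 2, f, g)
        k3 = abel_rhs(x + h / 2, y + h * k2 / 2, f, g)
        k4 = abel_rhs(x + h, y + h * k3, f, g)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Illustrative coefficients (not the paper's): f(x) = e^x, g(x) = 0,
# so y' = e^x and the exact solution through (0, 1) is y = e^x.
y_end = rk4(0.0, 1.0, 1.0, 100, math.exp, lambda x: 0.0)
```

In the design procedure, the recovered y(x) would play the role of the optimized circulation distribution along the blade span.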
Application of modern control theory to the design of optimum aircraft controllers
NASA Technical Reports Server (NTRS)
Power, L. J.
1973-01-01
The synthesis procedure presented is based on the solution of the output regulator problem of linear optimal control theory for time-invariant systems. By this technique, solution of the matrix Riccati equation leads to a constant linear feedback control law for an output regulator which will maintain a plant in a particular equilibrium condition in the presence of impulse disturbances. Two simple algorithms are presented that can be used in an automatic synthesis procedure for the design of maneuverable output regulators requiring only selected state variables for feedback. The first algorithm is for the construction of optimal feedforward control laws that can be superimposed upon a Kalman output regulator and that will drive the output of a plant to a desired constant value on command. The second algorithm is for the construction of optimal Luenberger observers that can be used to obtain feedback control laws for the output regulator requiring measurement of only part of the state vector. This algorithm constructs observers which have minimum response time under the constraint that the magnitude of the gains in the observer filter be less than some arbitrary limit.
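For the scalar analogue of the regulator synthesis above, with plant ẋ = a x + b u and cost ∫(q x² + r u²) dt, the algebraic Riccati equation 2 a p − p² b²/r + q = 0 has a closed-form positive root, and the constant feedback gain is k = b p / r; the matrix case the paper treats is solved numerically instead. A one-dimensional sketch:

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar algebraic Riccati equation
    2*a*p - (b**2/r)*p**2 + q = 0 for its positive root and
    return (p, k) with the optimal state-feedback gain k = b*p/r."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r
    return p, k

p, k = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
closed_loop = 1.0 - 1.0 * k   # a - b*k: negative, hence stable
```

The closed-loop pole is a − bk = −sqrt(a² + q b²/r), so the regulator is always stabilizing for q, r > 0, mirroring the guarantee of the matrix Riccati solution.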
Rani R, Hannah Jessie; Victoire T, Aruldoss Albert
2018-01-01
This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks remains a challenging exercise in the machine learning domain. Traditional training algorithms tend to become trapped in local optima, leading to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computation are becoming popular due to their robustness in overcoming the drawbacks of traditional algorithms. Accordingly, this paper proposes a hybrid training procedure with a differential search (DS) algorithm functionally integrated with particle swarm optimization (PSO). To surmount local trapping of the search procedure, a new population initialization scheme is proposed using a logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses on 7 publicly available benchmark datasets are performed. Subsequently, experiments were conducted on a practical application case of wind speed prediction to demonstrate the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463
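The logistic chaotic initialization mentioned above iterates x_{n+1} = μ x_n (1 − x_n) with μ = 4 and maps the resulting sequence onto the search bounds. A hedged sketch of one plausible form of the scheme (the paper's exact mapping may differ):

```python
def logistic_chaotic_population(pop_size, dim, lo, hi, x0=0.7):
    """Initialize a population from the logistic map
    x_{n+1} = 4*x*(1-x), scaled to [lo, hi]. The seed x0 must
    avoid the map's fixed points and short cycles (0, 0.25,
    0.5, 0.75, 1) or the sequence degenerates."""
    x = x0
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)
            individual.append(lo + (hi - lo) * x)
        population.append(individual)
    return population

pop = logistic_chaotic_population(pop_size=10, dim=3, lo=-5.0, hi=5.0)
```

Compared with uniform random initialization, the chaotic sequence is deterministic and ergodic over (0, 1), which is the diversity-enhancing property the paper appeals to.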
Statistically optimal perception and learning: from behavior to neural representations
Fiser, József; Berkes, Pietro; Orbán, Gergő; Lengyel, Máté
2010-01-01
Human perception has recently been characterized as statistical inference based on noisy and ambiguous sensory inputs. Moreover, suitable neural representations of uncertainty have been identified that could underlie such probabilistic computations. In this review, we argue that learning an internal model of the sensory environment is another key aspect of the same statistical inference procedure and thus perception and learning need to be treated jointly. We review evidence for statistically optimal learning in humans and animals, and reevaluate possible neural representations of uncertainty based on their potential to support statistically optimal learning. We propose that spontaneous activity can have a functional role in such representations leading to a new, sampling-based, framework of how the cortex represents information and uncertainty. PMID:20153683
Optimization and fabrication of porous carbon electrodes for Fe/Cr redox flow cells
NASA Technical Reports Server (NTRS)
Jalan, V.; Morriseau, B.; Swette, L.
1982-01-01
Negative electrode development for the NASA chromous/ferric Redox battery is reported. The effects of substrate material, gold/lead catalyst composition and loading, and catalyzation procedures on the performance of the chromium electrode were investigated. Three alternative catalyst systems were also examined, and 1/3 square foot size electrodes were fabricated and delivered to NASA at the conclusion of the program.
NASA Astrophysics Data System (ADS)
Manzke, R.; Bornstedt, A.; Lutz, A.; Schenderlein, M.; Hombach, V.; Binner, L.; Rasche, V.
2010-02-01
Various multi-center trials have shown that cardiac resynchronization therapy (CRT) is an effective procedure for patients with end-stage, drug-refractory heart failure (HF). Despite the encouraging results of CRT, at least 30% of patients do not respond to the treatment. Detailed knowledge of the cardiac anatomy (coronary venous tree, left ventricle) and of functional parameters (e.g. ventricular synchronicity) is expected to improve CRT patient selection and interventional lead placement, reducing the number of non-responders. As a pre-interventional imaging modality, cardiac magnetic resonance (CMR) imaging has the potential to provide all relevant information. With functional information from CMR, optimal implantation target sites may be better identified. Pre-operative CMR could also help to determine whether useful vein target segments are available for lead placement. Fused with X-ray, the mainstay interventional modality, improved interventional guidance for lead placement could further improve procedure outcome. In this contribution, we present novel and practicable methods for (a) pre-operative functional and anatomical imaging of cardiac structures relevant to CRT using CMR, (b) 2D-3D registration of CMR anatomy and functional meshes with X-ray vein angiograms, and (c) real-time capable breathing motion compensation for improved fluoroscopy mesh overlay during the intervention, based on right ventricular pacer lead tracking. With these methods, enhanced interventional guidance for left ventricular lead placement is provided.
Bot, Maarten; van den Munckhof, Pepijn; Bakay, Roy; Stebbins, Glenn; Verhagen Metman, Leo
2017-01-01
Objective To determine the accuracy of intraoperative computed tomography (iCT) in localizing deep brain stimulation (DBS) electrodes by comparing this modality with postoperative magnetic resonance imaging (MRI). Background Optimal lead placement is a critical factor for the outcome of DBS procedures and preferably confirmed during surgery. iCT offers 3-dimensional verification of both microelectrode and lead location during DBS surgery. However, accurate electrode representation on iCT has not been extensively studied. Methods DBS surgery was performed using the Leksell stereotactic G frame. Stereotactic coordinates of 52 DBS leads were determined on both iCT and postoperative MRI and compared with intended final target coordinates. The resulting absolute differences in X (medial-lateral), Y (anterior-posterior), and Z (dorsal-ventral) coordinates (ΔX, ΔY, and ΔZ) for both modalities were then used to calculate the Euclidean distance. Results Euclidean distances were 2.7 ± 1.1 and 2.5 ± 1.2 mm for MRI and iCT, respectively (p = 0.2). Conclusion Postoperative MRI and iCT show equivalent DBS lead representation. Intraoperative localization of both microelectrode and DBS lead in stereotactic space enables direct adjustments. Verification of lead placement with postoperative MRI, considered to be the gold standard, is unnecessary. PMID:28601874
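The accuracy metric in the DBS study above, a Euclidean distance built from per-axis absolute coordinate differences, is straightforward to compute. The coordinates below are hypothetical, not values from the study.

```python
import math

def lead_placement_error(planned, measured):
    """Euclidean distance between the planned target and the measured
    lead tip, from stereotactic (X, Y, Z) coordinates in mm:
    sqrt(dX^2 + dY^2 + dZ^2)."""
    dx, dy, dz = (abs(p - m) for p, m in zip(planned, measured))
    return math.sqrt(dx**2 + dy**2 + dz**2)

# hypothetical coordinates (mm) in frame space
planned = (12.0, -3.5, 4.0)
measured_ict = (13.1, -2.2, 5.6)
err = lead_placement_error(planned, measured_ict)
```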
Levoin, Nicolas; Calmels, Thierry; Poupardin-Olivier, Olivia; Labeeuw, Olivier; Danvy, Denis; Robert, Philippe; Berrebi-Bertrand, Isabelle; Ganellin, C Robin; Schunack, Walter; Stark, Holger; Capet, Marc
2008-10-01
Drug-discovery projects frequently employ structure-based information through protein modeling and ligand docking, and there is a plethora of reports relating their successful use in virtual screening. Hit/lead optimization, which represents the next step and the longest for the medicinal chemist, is very rarely considered. This is not surprising, because lead optimization is a much more complex task. Here, a homology model of the histamine H3 receptor was built and tested for its ability to discriminate ligands above a defined threshold of affinity. In addition, drug safety is also evaluated during lead optimization, and "antitargets" are studied. We therefore used the same benchmarking procedure with the hERG channel and the CYP2D6 enzyme, for which minimal affinity is strongly desired. For targets and antitargets, we report here an accuracy of at least 70% for ligands classified above or below the chosen threshold. Such a good result is beyond what could have been predicted, especially since our test conditions were particularly stringent. First, we measured the accuracy by means of the AUC of ROC plots, i.e. considering both false positives and false negatives. Second, we used extensive chemical libraries as datasets (nearly a thousand ligands for H3). All molecules considered were true H3 receptor ligands with moderate to high affinity (from the μM to the nM range). Third, the database is derived from concrete SAR (the Bioprojet H3 BF2.649 library) and is not simply constituted by a few active ligands buried in a chemical catalogue.
Sulcal set optimization for cortical surface registration.
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M
2010-04-15
Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance of the unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
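Under the Gaussian model described in the abstract above, the selection criterion reduces to minimizing the trace of a Schur complement: the residual variance of the unconstrained coordinates given the constrained ones. A minimal sketch with a toy covariance matrix (the exhaustive search is only feasible for small N):

```python
import itertools
import numpy as np

def conditional_total_variance(Sigma, constrained):
    """Total variance of the unconstrained coordinates conditioned on the
    constrained ones under a joint Gaussian: the trace of the Schur
    complement Sigma_uu - Sigma_uc Sigma_cc^{-1} Sigma_cu."""
    n = Sigma.shape[0]
    u = [i for i in range(n) if i not in constrained]
    c = list(constrained)
    S_uu = Sigma[np.ix_(u, u)]
    S_uc = Sigma[np.ix_(u, c)]
    S_cc = Sigma[np.ix_(c, c)]
    schur = S_uu - S_uc @ np.linalg.solve(S_cc, S_uc.T)
    return float(np.trace(schur))

def best_subset(Sigma, k):
    """Exhaustively pick the size-k subset minimizing the criterion."""
    n = Sigma.shape[0]
    return min(itertools.combinations(range(n), k),
               key=lambda c: conditional_total_variance(Sigma, c))

# toy 4x4 positive-definite error covariance standing in for sulcal errors
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 0.1 * np.eye(4)
subset = best_subset(Sigma, 2)
```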
Alventosa-deLara, E; Barredo-Damas, S; Alcaina-Miranda, M I; Iborra-Clar, M I
2014-05-01
Membrane fouling is one of the main drawbacks of ultrafiltration technology in the treatment of dye-containing effluents. The optimization of the membrane cleaning procedure is therefore essential to improve overall efficiency. In this work, a study of the factors affecting the ultrasound-assisted cleaning of an ultrafiltration ceramic membrane fouled by dye particles was carried out. The effects of transmembrane pressure (0.5, 1.5, 2.5 bar), cross-flow velocity (1, 2, 3 m/s), ultrasound power level (40%, 70%, 100%) and ultrasound frequency mode (37 kHz, 80 kHz and mixed wave) on the cleaning efficiency were evaluated. The lowest frequency showed better results, although the best cleaning performance was obtained using the mixed wave mode. A Box-Behnken design was used to find the optimal conditions for the cleaning procedure through a response surface study. The optimal operating conditions leading to the maximum predicted cleaning efficiency (32.19%) were found to be 1.1 bar, 3 m/s and 100% power level. Finally, the optimized response was compared to the efficiency of chemical cleaning with NaOH solution, with and without the use of ultrasound. By using NaOH, cleaning efficiency nearly triples, and it improves by up to a further 25% when ultrasound is added. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, C. H.; Cho, J. H.; Park, S. J.; Kim, J. S.; On, Y. K.; Huh, J.
2015-10-01
The purpose of this study was to measure the radiation exposure of operator and patient during cardiac electrophysiology study, radiofrequency catheter ablation and cardiac device implantation procedures, and to calculate the allowable number of cases per year. We carried out 9 electrophysiology studies, 40 radiofrequency catheter ablations and 11 cardiac device implantation procedures. To measure occupational radiation dose and dose-area product (DAP), 13 photoluminescence glass dosimeters were placed at the eyes (inside and outside lead glass), thyroid (inside and outside thyroid collar), chest (inside and outside lead apron), wrists and genitals of the operator (inside lead apron), and 6 photoluminescence glass dosimeters were placed at the eyes, thyroid, chest and genitals of the patient. Exposure time and DAP values were 11.7 ± 11.8 min and 23.2 ± 26.2 Gy cm2 for electrophysiology study; 36.5 ± 42.1 min and 822.4 ± 125.5 Gy cm2 for radiofrequency catheter ablation; and 16.2 ± 9.3 min and 27.8 ± 16.5 Gy cm2 for cardiac device implantation procedures, respectively. 4591 electrophysiology studies can be conducted within the occupational exposure limit for the eyes (150 mSv), and 658 electrophysiology studies with radiofrequency catheter ablation can be carried out within the occupational exposure limit for the hands (500 mSv). 1654 cardiac device implantation procedures can be conducted within the occupational exposure limit for the eyes (150 mSv). The radiation exposure of operator and patient was comparatively small, so electrophysiology study, radiofrequency catheter ablation and cardiac device implantation procedures are safe when performed with modern equipment and optimized radiation protection equipment.
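The allowable-cases figures in the abstract above come from simple arithmetic: the annual occupational dose limit divided by the measured per-case dose at the limiting site. A sketch with hypothetical per-case doses (the study's per-case dose values are not reproduced here):

```python
def allowable_cases(annual_limit_mSv, dose_per_case_mSv):
    """Number of procedures per year that stays within an annual
    occupational dose limit: floor(limit / per-case dose)."""
    return int(annual_limit_mSv // dose_per_case_mSv)

# hypothetical per-case eye doses, against the 150 mSv eye limit
n_cases = allowable_cases(annual_limit_mSv=150.0, dose_per_case_mSv=0.5)
```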
Madan, Rachna; Laur, Olga; Crudup, Breland; Peavy, Latia; Carter, Brett W
2018-02-01
Iatrogenic injury to the oesophagus is a serious complication which is increasingly seen in clinical practice, secondary to the expansion and greater acceptability of surgical and endoscopic oesophageal procedures. Morbidity and mortality following such injury are high. This is mostly due to an inflammatory response to gastric contents in the mediastinum, and to the negative intrathoracic pressures that may further draw oesophageal contents into the mediastinum, leading to mediastinitis. Subsequently, pulmonary complications such as pneumonia or abscess may ensue, leading to rapid clinical deterioration. Optimized and timely cross-sectional imaging evaluation is necessary for early and aggressive management of these complications. The goal of this review is to make the radiologist aware of the importance of early and accurate identification of postoperative oesophageal injury using optimized CT imaging protocols and oral contrast. Specifically, it is critical to differentiate benign post-operative findings, such as a herniated viscus or a redundant anastomosis, from clinically significant postoperative complications, as this helps guide appropriate management. Advantages and drawbacks of other diagnostic methods, such as the contrast oesophagogram, are also discussed.
NASA Astrophysics Data System (ADS)
Khajeh, M.; Pourkarami, A.; Arefnejad, E.; Bohlooli, M.; Khatibi, A.; Ghaffari-Moghaddam, M.; Zareian-Jahromi, S.
2017-09-01
Chitosan-zinc oxide nanoparticles (CZPs) were developed for solid-phase extraction. Combined artificial neural network-ant colony optimization (ANN-ACO) was used for the simultaneous preconcentration and determination of lead (Pb2+) ions in water samples prior to graphite furnace atomic absorption spectrometry (GF AAS). The solution pH, mass of adsorbent CZPs, amount of 1-(2-pyridylazo)-2-naphthol (PAN), which was used as a complexing agent, eluent volume, eluent concentration, and flow rates of sample and eluent were used as input parameters of the ANN model, and the percentage of extracted Pb2+ ions was used as the output variable of the model. A multilayer perception network with a back-propagation learning algorithm was used to fit the experimental data. The optimum conditions were obtained based on the ACO. Under the optimized conditions, the limit of detection for Pb2+ ions was found to be 0.078 μg/L. This procedure was also successfully used to determine the amounts of Pb2+ ions in various natural water samples.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
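For reference alongside the abstract above, the standard p-value based FDR controller it contrasts with is the Benjamini-Hochberg step-up procedure. This is not the paper's method; the p-values and the alpha level below are made up for illustration.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up: with sorted p-values p_(1) <= ... <= p_(m),
    reject the k smallest where k = max{ i : p_(i) <= alpha * i / m }.
    Returns a rejection mask in the original input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

# illustrative p-values (already sorted for readability)
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216,
         0.222, 0.251, 0.269, 0.275, 0.34, 0.341, 0.384, 0.569, 0.594, 0.696]
rejected = benjamini_hochberg(pvals, alpha=0.25)
```

Note the step-up character: rank 3 (p = 0.039) fails its own threshold 0.0375 but is still rejected because rank 7 passes.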
Combinatorial approaches to gene recognition.
Roytberg, M A; Astakhova, T V; Gelfand, M S
1997-01-01
Recognition of genes via exon assembly approaches leads naturally to the use of dynamic programming. We consider the general graph-theoretical formulation of the exon assembly problem and analyze in detail some specific variants: multicriterial optimization in the case of non-linear gene-scoring functions; context-dependent schemes for scoring exons and related procedures for exon filtering; and highly specific recognition of arbitrary gene segments, oligonucleotide probes and polymerase chain reaction (PCR) primers.
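The dynamic programming the abstract above refers to can be sketched in its simplest single-criterion form: scoring a chain of non-overlapping candidate exons, which is weighted interval scheduling. The exon coordinates and scores below are hypothetical.

```python
import bisect

def best_exon_chain(exons):
    """Exons are (start, end, score) on genomic coordinates. Return the
    maximum total score of a chain of non-overlapping exons, by the
    classic weighted-interval-scheduling recurrence."""
    exons = sorted(exons, key=lambda e: e[1])  # order by end coordinate
    ends = [e[1] for e in exons]
    best = [0.0] * (len(exons) + 1)  # best[i]: optimum over first i exons
    for i, (s, e, w) in enumerate(exons, start=1):
        # index of the last exon ending at or before this exon's start
        j = bisect.bisect_right(ends, s, 0, i - 1)
        best[i] = max(best[i - 1], best[j] + w)  # skip exon i, or take it
    return best[-1]

exons = [(10, 50, 3.0), (40, 90, 5.0), (100, 150, 4.0), (60, 120, 6.0)]
score = best_exon_chain(exons)
```

A real exon assembler would additionally enforce splice-site compatibility and reading-frame consistency between chained exons, as the graph-theoretical formulation in the paper allows.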
Robust Portfolio Optimization Using Pseudodistances.
Toma, Aida; Leoni-Aubin, Samuela
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948
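The unbounded-influence point in the abstract above is easy to demonstrate numerically. The sketch below uses a trimmed mean as a generic bounded-influence estimator, not the paper's pseudodistance-based estimator; the return series is simulated.

```python
import numpy as np

def trimmed_mean(returns, trim=0.1):
    """Symmetric trimmed mean: drop the trim fraction from each tail
    before averaging. Its influence function is bounded, unlike the
    sample mean's."""
    x = np.sort(np.asarray(returns))
    k = int(len(x) * trim)
    return x[k:len(x) - k].mean()

rng = np.random.default_rng(1)
clean = rng.normal(0.001, 0.02, size=250)   # simulated daily returns
dirty = np.append(clean, -0.80)             # one crash-like outlier

# how much one outlier moves each location estimate
shift_mean = abs(dirty.mean() - clean.mean())
shift_trim = abs(trimmed_mean(dirty) - trimmed_mean(clean))
```

A single contaminated observation moves the sample mean by roughly outlier/n, which feeds directly into the mean-variance weights, while the trimmed estimate barely moves.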
Optimal design of a microgripper-type actuator based on AlN/Si heterogeneous bimorph
NASA Astrophysics Data System (ADS)
Ruiz, D.; Díaz-Molina, A.; Sigmund, O.; Donoso, A.; Bellido, J. C.; Sánchez-Rojas, J. L.
2017-05-01
This work presents a systematic procedure for designing piezoelectrically actuated microgrippers. Topology optimization combined with optimal design of the electrodes is used to maximize the displacement at the output port of the gripper. Fabrication at the microscale raises an important issue: the difficulty of placing a piezoelectric film on both the top and the bottom of the host layer. Due to the non-symmetric lamination of the structure, an out-of-plane bending spoils the behaviour of the gripper. Suppression of this out-of-plane deformation is the main novelty introduced. In addition, a robust formulation approach is used in order to control the length scale in the whole domain and to reduce the sensitivity of the designs to small manufacturing errors.
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e. loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial perspective. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
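The key step named in the abstract above, converting a chance constraint into a deterministic counterpart, can be sketched for the simplest case. The Gaussian assumption and all numbers below are illustrative; the paper's counterpart comes from its modified CreditMetrics model, not from this normal approximation.

```python
from statistics import NormalDist

def chance_constraint_ok(mu, sigma, floor, confidence=0.95):
    """Deterministic counterpart of P(ratio >= floor) >= confidence when
    the capital-to-risk-asset ratio is modeled as Normal(mu, sigma^2):
    the constraint holds iff mu - z_confidence * sigma >= floor."""
    z = NormalDist().inv_cdf(confidence)
    return mu - z * sigma >= floor

# hypothetical ratio: mean 11%, sd 1%, against a hypothetical 8% floor
ok = chance_constraint_ok(mu=0.11, sigma=0.01, floor=0.08, confidence=0.95)
```

The point of the deterministic form is that it is convex in the decision variables, so it can be handed to a standard convex solver.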
Macyszyn, Luke; Attiah, Mark; Ma, Tracy S; Ali, Zarina; Faught, Ryan; Hossain, Alisha; Man, Karen; Patel, Hiren; Sobota, Rosanna; Zager, Eric L; Stein, Sherman C
2017-05-01
OBJECTIVE Moyamoya disease (MMD) is a chronic cerebrovascular disease that can lead to devastating neurological outcomes. Surgical intervention is the definitive treatment, with direct, indirect, and combined revascularization procedures currently employed by surgeons. The optimal surgical approach, however, remains unclear. In this decision analysis, the authors compared the effectiveness of revascularization procedures in both adult and pediatric patients with MMD. METHODS A comprehensive literature search was performed for studies of MMD. Using complication and success rates from the literature, the authors constructed a decision analysis model for treatment using a direct and indirect revascularization technique. Utility values for the various outcomes and complications were extracted from the literature examining preferences in similar clinical conditions. Sensitivity analysis was performed. RESULTS A structured literature search yielded 33 studies involving 4197 cases. Cases were divided into adult and pediatric populations. These were further subdivided into 3 different treatment groups: indirect, direct, and combined revascularization procedures. In the pediatric population at 5- and 10-year follow-up, there was no significant difference between indirect and combination procedures, but both were superior to direct revascularization. In adults at 4-year follow-up, indirect was superior to direct revascularization. CONCLUSIONS In the absence of factors that dictate a specific approach, the present decision analysis suggests that direct revascularization procedures are inferior in terms of quality-adjusted life years in both adults at 4 years and children at 5 and 10 years postoperatively, respectively. These findings were statistically significant (p < 0.001 in all cases), suggesting that indirect and combination procedures may offer optimal results at long-term follow-up.
Rendon, Marta I; Effron, Cheryl; Edison, Brenda L
2007-01-01
There are many procedures that a physician may utilize to improve the appearance and quality of the skin. Combining procedures can enhance the overall result and lead to increased patient satisfaction. Thus, it is important to choose procedures that will complement each other. Fillers or botulinum toxin type A (BTX-A) can plump the skin and smooth lines and wrinkles but will do little for uneven tone, skin laxity, or radiance and clarity. These signs of aging can be addressed with superficial glycolic acid peels. Methods of combining injectable compounds with superficial glycolic acid peels were discussed at a dermatologist roundtable event and are summarized in this article.
Improving stability and strength characteristics of framed structures with nonlinear behavior
NASA Technical Reports Server (NTRS)
Pezeshk, Shahram
1990-01-01
In this paper an optimal design procedure is introduced to improve the overall performance of nonlinear framed structures. The design methodology presented here is a multiple-objective optimization procedure whose objective functions involve the buckling eigenvalues and eigenvectors of the structure. A constant volume with bounds on the design variables is used in conjunction with an optimality criterion approach. The method provides a general tool for solving complex design problems and generally leads to structures with better limit strength and stability. Many algorithms have been developed to improve the limit strength of structures. In most applications geometrically linear analysis is employed, with the consequence that the overall strength of the design is overestimated. Directly optimizing the limit load of the structure would require a full nonlinear analysis at each iteration, which would be prohibitively expensive. The objective of this paper is to develop an algorithm that can improve the limit load of geometrically nonlinear framed structures while avoiding the nonlinear analysis. One of the novelties of the new design methodology is its ability to efficiently model and design structures under multiple loading conditions. These loading conditions can be different factored loads or any kind of loads that can be applied to the structure simultaneously or independently. Attention is focused on optimal design of space framed structures. Three-dimensional design problems are more complicated to carry out, but they yield insight into the real behavior of the structure and can help avoid some of the problems that might appear in a planar design procedure, such as the need for an out-of-plane buckling constraint.
Most of the research in this area has dealt with the optimization of truss and plane frame structures.
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherited in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
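The data perturbation idea in the abstract above can be sketched with a toy selector: perturb the response, rerun the selection, and read off how stable each variable's inclusion is. The selector, the perturbation size tau, and the simulated data below are all illustrative stand-ins, not the paper's adaptive derivation of the optimal tau.

```python
import numpy as np

def select(X, y, thresh=0.2):
    """Toy selection rule: keep variables whose absolute sample
    correlation with y exceeds a threshold."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.abs(r) > thresh

def perturbation_stability(X, y, tau=0.5, reps=200, seed=0):
    """Data perturbation: add Normal(0, (tau * sd(y))^2) noise to y,
    redo selection, and report per-variable selection frequency.
    tau plays the role of the perturbation size discussed above."""
    rng = np.random.default_rng(seed)
    scale = tau * y.std()
    freq = np.zeros(X.shape[1])
    for _ in range(reps):
        freq += select(X, y + rng.normal(0.0, scale, size=y.shape))
    return freq / reps

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)  # only variable 0 is truly relevant
freq = perturbation_stability(X, y)
```

Variables selected consistently across perturbed copies of the data are the ones the selection process is genuinely confident about.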
Optimal generalized multistep integration formulae for real-time digital simulation
NASA Technical Reports Server (NTRS)
Moerder, D. D.; Halyo, N.
1985-01-01
The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
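As a concrete member of the explicit linear multistep family the abstract above generalizes, here is the two-step Adams-Bashforth formula applied to the scalar test system dx/dt = a*x. These are the textbook AB2 coefficients, not the paper's fidelity-optimized ones.

```python
import math

def ab2_simulate(a, x0, dt, steps):
    """Two-step Adams-Bashforth for dx/dt = a*x:
    x_{n+1} = x_n + dt * (3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with a single explicit Euler step."""
    f = lambda x: a * x
    xs = [x0, x0 + dt * f(x0)]  # Euler bootstrap for the first step
    for n in range(1, steps):
        xs.append(xs[n] + dt * (1.5 * f(xs[n]) - 0.5 * f(xs[n - 1])))
    return xs

a, x0, dt, steps = -1.0, 1.0, 0.01, 100
xs = ab2_simulate(a, x0, dt, steps)
exact = x0 * math.exp(a * dt * steps)   # true solution at t = 1
err = abs(xs[steps] - exact)
```

AB2 is second-order accurate, so halving dt should cut the error by roughly a factor of four; the paper's statistical framework would instead tune the multistep coefficients against a fidelity criterion.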
A robust active control system for shimmy damping in the presence of free play and uncertainties
NASA Astrophysics Data System (ADS)
Orlando, Calogero; Alaimo, Andrea
2017-02-01
Shimmy vibration is the oscillatory motion of the fork-wheel assembly about the steering axis. It represents one of the major problems of aircraft landing gear because it can lead to excessive wear and discomfort as well as safety concerns. Based on a nonlinear model of the mechanics of a single-wheel nose landing gear (NLG), an electromechanical actuator and tire elasticity, a robust active controller capable of damping shimmy vibration is designed and investigated in this study. A novel Decline Population Swarm Optimization (PDSO) procedure is introduced and used to select the optimal parameters for the controller. The PDSO procedure is based on a decline demographic model and shows high global search capability with reduced computational cost. The open- and closed-loop system behavior is analyzed for different case studies of aeronautical interest, and the effects of torsional free play on the nose landing gear response are also studied. Probabilistic uncertainties in the plant parameters are then taken into account to assess the robustness of the active controller using a stochastic approach.
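The abstract above does not spell out the PDSO update rules, so the sketch below is only a simplified stand-in: a standard PSO in which the swarm shrinks linearly over the run by dropping the worst particles, echoing the decline demographic idea. All coefficients and the test objective are assumptions.

```python
import numpy as np

def declining_pso(f, dim, pop0=40, pop_min=8, iters=150, seed=3):
    """Minimal PSO whose population declines over the run: at each
    iteration the swarm shrinks toward pop_min, keeping the particles
    with the best personal-best values."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(pop0, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    for t in range(iters):
        g = pbest[pbest_val.argmin()]                 # global best
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        # linear decline of the population toward pop_min
        target = max(pop_min, int(pop0 - (pop0 - pop_min) * (t + 1) / iters))
        if target < len(x):
            keep = np.argsort(pbest_val)[:target]
            x, v = x[keep], v[keep]
            pbest, pbest_val = pbest[keep], pbest_val[keep]
    return float(pbest_val.min())

# sphere function as a stand-in for the controller-tuning objective
best = declining_pso(lambda z: float(np.sum(z * z)), dim=3)
```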
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. 
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
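The Kreisselmeier-Steinhauser function named in the abstract above aggregates multiple objective or constraint values into one smooth envelope of their maximum, which is what makes it usable with gradient-based nonlinear programming. A minimal sketch (rho and the objective values are illustrative):

```python
import math

def ks_aggregate(values, rho=50.0):
    """Kreisselmeier-Steinhauser function: a smooth upper envelope of
    max(values), KS = (1/rho) * ln(sum_i exp(rho * f_i)), computed in
    max-shifted form for numerical stability. Larger rho tracks the
    true maximum more tightly but is less smooth."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

objs = [0.30, 0.90, 0.85]   # hypothetical normalized objective values
ks = ks_aggregate(objs)
```

A useful property to check is the sandwich bound max(f) <= KS <= max(f) + ln(n)/rho, which quantifies how conservative the envelope is.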
Optimization and surgical design for applications in pediatric cardiology
NASA Astrophysics Data System (ADS)
Marsden, Alison; Bernstein, Adam; Taylor, Charles; Feinstein, Jeffrey
2007-11-01
The coupling of shape optimization to cardiovascular blood flow simulations has the potential to improve the design of current surgeries and eventually to allow optimization of surgical designs for individual patients. This is particularly true in pediatric cardiology, where geometries vary dramatically between patients, and unusual geometries can lead to unfavorable hemodynamic conditions. Interfacing shape optimization with three-dimensional, time-dependent fluid mechanics problems is particularly challenging because of the large computational cost and the difficulty of computing objective function gradients. In this work a derivative-free optimization algorithm is coupled to a three-dimensional Navier-Stokes solver that has been tailored for cardiovascular applications. The optimization code employs mesh adaptive direct search in conjunction with a Kriging surrogate. This framework is successfully demonstrated on several geometries representative of cardiovascular surgical applications. We will discuss issues of cost function choice for surgical applications, including energy loss and wall shear stress distribution. In particular, we will discuss the creation of new designs for the Fontan procedure, a surgery performed in pediatric cardiology to treat single-ventricle heart defects.
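The derivative-free polling idea can be sketched as a bare compass search; the actual study uses mesh adaptive direct search with a Kriging surrogate, which is omitted here, and the objective below is a made-up stand-in for a simulation-based cost such as energy loss.

```python
# Minimal compass search: a simplified stand-in for mesh adaptive direct
# search (MADS). No gradients are needed; only function values are polled.
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):          # poll along each coordinate direction
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:              # accept the first improving poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                  # refine the mesh when polling fails
            if step < tol:
                break
    return x, fx

# Hypothetical smooth objective standing in for a hemodynamic cost function
best, val = compass_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                           [0.0, 0.0])
```

In the surrogate-assisted variant, a Kriging model fitted to past evaluations proposes promising poll points first, so far fewer expensive flow solves are needed.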
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably, and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here.
The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
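A minimal differential evolution loop of the DE/rand/1/bin type conveys the flavor of the evolutionary method referenced above; the population size, F, and CR below are generic textbook settings rather than those of the lecture, and the sphere function stands in for a real design objective.

```python
import random

# Bare-bones differential evolution (DE/rand/1/bin). Each trial vector is a
# mutated difference of three random population members, crossed over with
# the current member and kept only if it is at least as good.
def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # ensure at least one mutated gene
            trial = [
                pop[a][k] + F * (pop[b][k] - pop[c][k])
                if (rng.random() < CR or k == j_rand) else pop[i][k]
                for k in range(dim)
            ]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Hypothetical objective: minimize a 3-D sphere function
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

For the multi-objective problems in the lecture, the same mutation/crossover machinery is retained and only the selection step changes (e.g., Pareto dominance instead of a scalar comparison).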
Optimal coupling and feasibility of a solar-powered year-round ejector air conditioner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolov, M.; Hershgal, D.
1993-06-01
An ejector refrigeration system that uses a conventional refrigerant (R-114) is introduced as a possible mechanism for providing solar-based air-conditioning. Optimal coupling conditions between the collectors' energy output and the energy requirements of the cooling system are investigated. Operation at such optimal conditions assures maximized overall efficiency. Procedures leading to the evaluation of the performance of a real system are disclosed. Design curves for such a system with R-114 as refrigerant are provided. A multi-ejector arrangement that provides efficient adjustment for variations of ambient conditions is described. Year-round air-conditioning is facilitated by rerouting the refrigerant flow through a heating mode of the system. Calculations are carried out for illustrative configurations in which relatively low condensing temperatures (water reservoirs, cooling towers, or moderate climate) can be maintained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early and leads positively to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
Treatment of Periprosthetic Infections: An Economic Analysis
Hernández-Vaquero, Daniel; Fernández-Fairen, Mariano; Torres, Ana; Menzie, Ann M.; Fernández-Carreira, José Manuel; Murcia-Mazon, Antonio; Merzthal, Luis
2013-01-01
This review summarizes the existing economic literature, assesses the value of current data, and presents the procedures that are the least costly and most effective options for the treatment of periprosthetic infections of the knee and hip. Optimizing antibiotic use in the prevention and treatment of periprosthetic infection, combined with systemic and behavioral changes in the operating room, the detection and treatment of high-risk patient groups, and the rational management of existing infection by using the different procedures according to each particular case, could improve outcomes and lead to the highest quality of life for patients and the lowest economic impact. Nevertheless, the cost-effectiveness of different interventions to treat periprosthetic infections remains unclear. PMID:23781163
Complications of Bariatric Surgery: What You Can Expect to See in Your GI Practice.
Schulman, Allison R; Thompson, Christopher C
2017-11-01
Obesity is one of the most significant health problems worldwide. Bariatric surgery has become one of the fastest-growing operative procedures and has gained acceptance as the leading option for weight loss. Despite improvement in the performance of bariatric surgical procedures, complications are not uncommon. There are a number of unique complications that arise in this patient population and require specific knowledge for proper management. Furthermore, conditions unrelated to the altered anatomy typically require a different management strategy. As such, a basic understanding of surgical anatomy, potential complications, and endoscopic tools and techniques for optimal management is essential for the practicing gastroenterologist. Gastroenterologists should be familiar with these procedures and complication management strategies. This review will cover these topics and focus on major complications that gastroenterologists will be most likely to see in their practice.
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to exploit advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and thus not generally available. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a-priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
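The core of the approach, fitting an exponential point-scatterer model to measured samples by minimizing the total squared deviation, can be sketched for a single scatterer. The frequencies, amplitude, and delay below are invented, and a grid search stands in for the paper's full-range optimization over pole parameters.

```python
import cmath
import math

# Single point-scatterer model s(f) = a * exp(-j*2*pi*f*tau): estimate the
# range delay tau from noiseless "subband" samples by minimizing the total
# squared deviation between data and model over a grid of candidate delays.
def synth(freqs, a, tau):
    return [a * cmath.exp(-2j * math.pi * f * tau) for f in freqs]

def fit_tau(freqs, data, taus):
    def cost(tau):
        model = synth(freqs, 1.0, tau)
        # closed-form complex amplitude minimizing the squared error
        # (|model samples| = 1, so the normalizer is just the sample count)
        a = sum(d * m.conjugate() for d, m in zip(data, model)) / len(freqs)
        return sum(abs(d - a * m) ** 2 for d, m in zip(data, model))
    return min(taus, key=cost)

freqs = [1.0 + 0.01 * k for k in range(32)]       # one subband (units arbitrary)
data = synth(freqs, 2.0, 5.0)                     # true delay tau = 5.0
tau_hat = fit_tau(freqs, data, [4.0 + 0.05 * k for k in range(41)])
```

In the paper's setting two such subbands share the scattering centers, so a common set of poles is fitted jointly and the unknown inter-band phase offset is absorbed into per-band amplitudes; the sketch above shows only the single-band building block.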
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical model represents the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, where geometry shape variables are incorporated by carrying out morphing of the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Simulations performed using this updated model, with its more accurate geometry, will therefore yield more reliable results.
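The diagonal MAC terms that the GRSM optimization maximizes are straightforward to compute; the mode shapes below are made-up three-DOF vectors standing in for measured and simulated modes.

```python
# Modal Assurance Criterion (MAC) between two sets of mode shapes:
# MAC[i][j] = (phi_a_i . phi_b_j)^2 / ((phi_a_i . phi_a_i)(phi_b_j . phi_b_j)).
# Values near 1 on the diagonal indicate well-correlated mode pairs.
def mac(phi_a, phi_b):
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    return [
        [dot(a, b) ** 2 / (dot(a, a) * dot(b, b)) for b in phi_b]
        for a in phi_a
    ]

# Hypothetical test modes vs. simulated modes (3 degrees of freedom each)
test = [[1.0, 2.0, 3.0], [3.0, -1.0, 0.5]]
sim = [[1.1, 1.9, 3.2], [2.8, -1.2, 0.4]]
m = mac(test, sim)
```

In a model update, an optimizer perturbs the model (here, morphing shape variables) and re-solves the eigenproblem, scoring each candidate by how close the diagonal of this matrix is to 1.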
Ranganathan, Sridhar; Suthers, Patrick F.; Maranas, Costas D.
2010-01-01
Computational procedures for predicting metabolic interventions leading to the overproduction of biochemicals in microbial strains are widely in use. However, these methods rely on surrogate biological objectives (e.g., maximize growth rate or minimize metabolic adjustments) and do not make use of flux measurements often available for the wild-type strain. In this work, we introduce the OptForce procedure that identifies all possible engineering interventions by classifying reactions in the metabolic model depending upon whether their flux values must increase, decrease or become equal to zero to meet a pre-specified overproduction target. We hierarchically apply this classification rule for pairs, triples, quadruples, etc. of reactions. This leads to the identification of a sufficient and non-redundant set of fluxes that must change (i.e., MUST set) to meet a pre-specified overproduction target. Starting with this set we subsequently extract a minimal set of fluxes that must actively be forced through genetic manipulations (i.e., FORCE set) to ensure that all fluxes in the network are consistent with the overproduction objective. We demonstrate our OptForce framework for succinate production in Escherichia coli using the most recent in silico E. coli model, iAF1260. The method not only recapitulates existing engineering strategies but also reveals non-intuitive ones that boost succinate production by performing coordinated changes on pathways distant from the last steps of succinate synthesis. PMID:20419153
NASA Astrophysics Data System (ADS)
Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr
2016-10-01
The paper is devoted to the development of a methodology for optimizing the external aerodynamics of the engine. The optimization procedure is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations. A surrogate-based method is used for optimization. Optimal shape design of a turbofan nacelle is considered as a test problem. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of the 3rd-generation multidisciplinary optimization developed in the AGILE project.
The aerodynamic design of an advanced rotor airfoil
NASA Technical Reports Server (NTRS)
Blackwell, J. A., Jr.; Hinson, B. L.
1978-01-01
An advanced rotor airfoil, designed utilizing supercritical airfoil technology and advanced design and analysis methodology is described. The airfoil was designed subject to stringent aerodynamic design criteria for improving the performance over the entire rotor operating regime. The design criteria are discussed. The design was accomplished using a physical plane, viscous, transonic inverse design procedure, and a constrained function minimization technique for optimizing the airfoil leading edge shape. The aerodynamic performance objectives of the airfoil are discussed.
von Eckardstein, Kajetan L; Sixel-Döring, Friederike; Kazmaier, Stephan; Trenkwalder, Claudia; Hoover, Jason M; Rohde, Veit
2016-11-08
In accordance with German neurosurgical and neurological consensus recommendations, lead placements for deep brain stimulation (DBS) in patients with Parkinson's disease (PD) are usually performed with the patient awake and in the "medication off" state. This allows for optimal lead position adjustment according to the clinical response to intraoperative test stimulation. However, exacerbation of Parkinsonian symptoms after withdrawal of dopaminergic medication may endanger the patient by inducing severe "off" state motor phenomena. In particular, this can be a problem in awake craniotomies utilizing intraoperative airway management and resuscitation. We report the case of a PD patient with progressive orofacial and neck muscle dystonia resulting in laryngeal spasm during DBS lead placement. This led to upper airway compromise and asphyxia, requiring resuscitation. Laryngeal spasms may occur as a rare "off" state motor complication in patients with PD. Other potential causes of intraoperative breathing difficulties include bilateral vocal cord palsy, positional asphyxia, and silent aspiration. In our practice, we have adjusted our medication regimen and now allow patients to receive their standard dopaminergic medication until the morning of surgery. Neurologists and neurosurgeons performing lead placement procedures for PD should be aware of this rare but dangerous condition to ensure optimal treatment.
Simple procedure for phase-space measurement and entanglement validation
NASA Astrophysics Data System (ADS)
Rundle, R. P.; Mills, P. W.; Tilma, Todd; Samson, J. H.; Everitt, M. J.
2017-08-01
It has recently been shown that it is possible to represent the complete quantum state of any system as a phase-space quasiprobability distribution (Wigner function) [Phys. Rev. Lett. 117, 180401 (2016), 10.1103/PhysRevLett.117.180401]. Such functions take the form of expectation values of an observable that has a direct analogy to displaced parity operators. In this work we give a procedure for the measurement of the Wigner function that should be applicable to any quantum system. We have applied our procedure to IBM's Quantum Experience five-qubit quantum processor to demonstrate that we can measure and generate the Wigner functions of two different Bell states as well as the five-qubit Greenberger-Horne-Zeilinger state. Because Wigner functions for spin systems are not unique, we define, compare, and contrast two distinct examples. We show how the use of these Wigner functions leads to an optimal method for quantum state analysis especially in the situation where specific characteristic features are of particular interest (such as for spin Schrödinger cat states). Furthermore we show that this analysis leads to straightforward, and potentially very efficient, entanglement test and state characterization methods.
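For a single qubit, the displaced-parity construction reduces to taking the expectation of a rotated parity-like operator. The sketch below uses Pi = (I + sqrt(3)*sigma_z)/2, one common spin-Wigner convention; the normalization and parameter choices are illustrative, not those of the cited experiment.

```python
import cmath
import math

# 2x2 complex matrix helpers (kept explicit so the sketch is dependency-free)
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def u_rot(theta, phi):
    # SU(2) rotation taking the +z axis to the direction (theta, phi)
    return [[cmath.exp(-1j * phi / 2) * math.cos(theta / 2),
             -cmath.exp(-1j * phi / 2) * math.sin(theta / 2)],
            [cmath.exp(1j * phi / 2) * math.sin(theta / 2),
             cmath.exp(1j * phi / 2) * math.cos(theta / 2)]]

# Parity-like kernel Pi = (I + sqrt(3) * sigma_z) / 2 (a spin-Wigner convention)
PI = [[(1 + math.sqrt(3)) / 2, 0], [0, (1 - math.sqrt(3)) / 2]]

def wigner(rho, theta, phi):
    # W(theta, phi) = Tr(rho * U Pi U^dagger): expectation of rotated parity
    U = u_rot(theta, phi)
    A = mat_mul(mat_mul(U, PI), dagger(U))
    return sum(mat_mul(rho, A)[i][i] for i in range(2)).real

rho_up = [[1, 0], [0, 0]]               # the state |0><0|
w_north = wigner(rho_up, 0.0, 0.0)      # large positive at the north pole
w_south = wigner(rho_up, math.pi, 0.0)  # negative at the south pole
```

On hardware, the rotation is applied to the qubit and the fixed parity observable is measured, so each phase-space point costs one expectation-value estimate, which is what makes the procedure practical on devices like the one used in the paper.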
Dish layouts analysis method for concentrative solar power plant.
Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin
2016-01-01
Designs that maximize the use of solar radiation for a given reflective area without increasing the investment are important to solar power plant construction. We here provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade the dish when in tracking mode. This reduces the required computational resources, simplifies the calculation, and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting, and rotation. Computer simulations, done with information on a dish Stirling system as well as DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating columns by certain amounts can bring more benefits.
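One building block of such a shade-area computation is the projected overlap of two dish apertures. Assuming both dishes project to equal circles of radius r in the plane normal to the sun direction (a simplification of the paraboloid geometry), the standard circle-circle lens formula gives the shaded area as a function of the projected center distance d.

```python
import math

# Overlap area of two equal circles of radius r whose projected centers are
# a distance d apart in the sun-aligned plane: the shaded area contributed
# by one neighboring dish in tracking mode (equal-circle simplification).
def shaded_area(r, d):
    if d >= 2 * r:
        return 0.0                      # apertures clear each other: no shade
    if d == 0:
        return math.pi * r * r          # perfectly aligned: fully shaded
    # lens area of two intersecting equal circles
    return (2 * r * r * math.acos(d / (2 * r))
            - 0.5 * d * math.sqrt(4 * r * r - d * d))

full = shaded_area(5.0, 0.0)
none = shaded_area(5.0, 12.0)
partial = shaded_area(5.0, 5.0)
```

Summing this quantity over the neighbors that satisfy the paper's proximity condition, at each time step of the day, yields the total daily shading effect used to compare layouts.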
Döring, Michael; Sommer, Philipp; Rolf, Sascha; Lucas, Johannes; Breithardt, Ole A; Hindricks, Gerhard; Richter, Sergio
2015-02-01
Implantation of cardiac resynchronization therapy (CRT) devices can be challenging, time consuming, and fluoroscopy intense. To facilitate placement of left ventricular (LV) leads, a novel electromagnetic navigation system (MediGuide™, St. Jude Medical, St. Paul, MN, USA) has been developed, displaying real-time 3-D location of sensor-embedded delivery tools superimposed on prerecorded X-ray cine-loops of coronary sinus venograms. We report our experience and advanced progress in the use of this new electromagnetic tracking system to guide LV lead implantation. Between January 2012 and December 2013, 71 consecutive patients (69 ± 9 years, 76% male) were implanted with a CRT device using the new electromagnetic tracking system. Demographics, procedural data, and periprocedural adverse events were gathered. The impact of the operator's experience, optimized workflow, and improved software technology on procedural data were analyzed. LV lead implantation was successfully achieved in all patients without severe adverse events. Total procedure time (skin-to-skin) measured 87 ± 37 minutes and the median total fluoroscopy time was 4.9 (2.5-7.8) minutes with a median dose-area product of 476 (260-1056) cGy*cm(2). An additional comparison with conventional CRT device implantations showed a significant reduction in fluoroscopy time from 8.0 (5.8; 11.5) to 4.5 (2.8; 7.3) minutes (P = 0.016) and radiation dose from 603 (330; 969) to 338 (176; 680) cGy*cm(2), respectively (P = 0.044). Use of the new navigation system enables safe and successful LV lead placement with improved orientation and significantly reduced radiation exposure during CRT implantation. © 2014 Wiley Periodicals, Inc.
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges so that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Others use criteria such as strain energy density variation and stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a-priori methods available until now use geometrical parameters, for example the element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a-priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements, which in turn leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
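The trace-minimization criterion can be checked on the smallest possible example: a two-element 1-D bar with unit EA, where each element contributes k_e = (1/l_e) * [[1, -1], [-1, 1]] to the global stiffness matrix. Minimizing the assembled trace over the interior node position recovers the uniform mesh, as the criterion predicts.

```python
# Trace of the assembled global stiffness matrix for a two-element 1-D bar
# (unit EA) as a function of the interior node position x in (0, length).
# Diagonal contributions: node 0 gets 1/l1, node 1 gets 1/l1 + 1/l2,
# node 2 gets 1/l2, so trace = 2*(1/l1 + 1/l2).
def stiffness_trace(x, length=1.0):
    l1, l2 = x, length - x              # element lengths on each side of x
    return (1.0 / l1) + (1.0 / l1 + 1.0 / l2) + (1.0 / l2)

# Crude minimization by grid search over candidate interior node positions
candidates = [0.01 * k for k in range(1, 100)]
x_opt = min(candidates, key=stiffness_trace)
```

By symmetry the trace 2*(1/l1 + 1/l2) is smallest when l1 = l2, so x_opt sits at the midpoint; for a uniform bar the uniform mesh is indeed the optimal starting mesh.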
Es’haghi, Zarrin; Hoseini, Hasan Ali; Mohammadi-Nokhandani, Saeed; Ebrahimi, Javad
2013-01-01
A new procedure is presented for the determination of low concentrations of lead and cadmium in water samples. Ligand-assisted pseudo-stir-bar hollow fiber solid/liquid phase microextraction using a sol–gel sorbent reinforced with carbon nanotubes was combined with differential pulse anodic stripping voltammetry for the simultaneous determination of cadmium and lead in tap water and Darongar river water samples. In the present work, differential pulse anodic stripping voltammetry (DPASV) using a hanging mercury drop electrode (HMDE) was used to determine ultra-trace levels of lead and cadmium ions in real samples. This method is based on accumulation of lead and cadmium ions on the electrode using different complexing agents: Quinolin-8-ol, 5,7-diiodo quinoline-8-ol, 4,5-diphenyl-1H-imidazole-2(3H)-one and 2-{[2-(2-Hydroxy-ethylamino)-ethylamino]-methyl}-phenol. Extraction and detection conditions were optimized. The relationship between peak current and concentration was linear over the range 0.05–500 ng mL−1 for Cd (II) and Pb (II). The limits of detection for lead and cadmium were 0.015 ng mL−1 and 0.012 ng mL−1, respectively. Under the optimized conditions, the preconcentration factors are 2440 and 3710 for Cd (II) and Pb (II), respectively, in 5 mL of water sample. PMID:25685537
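The linear-range and detection-limit figures quoted above come from a standard calibration-line calculation: fit peak current against concentration by least squares, then take LOD = 3*sigma_blank/slope. The peak-current data and blank noise below are invented for illustration, not the paper's measurements.

```python
# Least-squares calibration line and 3-sigma detection limit, the standard
# quantities behind linear-range and LOD figures in stripping voltammetry.
def linfit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx       # slope and intercept

conc = [0.05, 0.5, 5.0, 50.0, 500.0]    # standards, ng/mL (illustrative)
peak = [0.6, 6.1, 59.8, 601.0, 5999.0]  # hypothetical peak currents

slope, intercept = linfit(conc, peak)
sigma_blank = 0.06                      # hypothetical blank-signal noise
lod = 3 * sigma_blank / slope           # detection limit, ng/mL
```

The very large preconcentration factors reported in the paper enter this picture by steepening the effective slope, which is what pushes the LOD down to the hundredths-of-ng/mL level.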
Curtailing Perovskite Processing Limitations via Lamination at the Perovskite/Perovskite Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hest, Marinus F; Moore, David; Klein, Talysa
Standard layer-by-layer solution processing methods constrain lead-halide perovskite device architectures. The layer below the perovskite must be robust to the strong organic solvents used to form the perovskite, while the layer above has a limited thermal budget and must be processed in nonpolar solvents to prevent perovskite degradation. To circumvent these limitations, we developed a procedure in which two transparent conductive oxide/transport material/perovskite half stacks are independently fabricated and then laminated together at the perovskite/perovskite interface. Using ultraviolet-visible absorption spectroscopy, external quantum efficiency, X-ray diffraction, and time-resolved photoluminescence spectroscopy, we show that this procedure improves the photovoltaic properties of the perovskite layer. Applying this procedure, semitransparent devices employing two high-temperature oxide transport layers were fabricated, which realized an average efficiency of 9.6% (maximum: 10.6%) despite series resistance limitations from the substrate design. Overall, the developed lamination procedure curtails processing constraints, enables new device designs, and affords new opportunities for optimization.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
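The simultaneous-update idea can be seen on a toy problem: one relaxation sweep of the "flow" variable alternates with a design-parameter step computed from the still-unconverged iterate, instead of nesting a fully converged flow solve inside every optimizer step. The model equation, objective, and step sizes below are invented for illustration.

```python
# One-shot ("simultaneous") iteration: the flow variable q relaxes toward
# its converged value q = a while the design parameter a is updated from
# the unconverged iterate, so both solutions evolve together.
def simultaneous_design(a=0.0, q=0.0, lr=0.1, iters=500):
    for _ in range(iters):
        q = 0.5 * (q + a)            # one relaxation sweep of the "flow" eq.
        grad = 2.0 * (q - 3.0)       # dJ/da for J = (q - 3)^2, using current q
        a -= lr * grad               # design update interleaved with the flow
    return a, q

a_opt, q_opt = simultaneous_design()  # both drift to the optimum q = a = 3
```

The coupled iteration converges to the same optimum as a nested inner-outer loop would, but the flow residual is never driven to zero at non-final designs, which is the source of the cost savings the abstract describes.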
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model developed within the commercial finite element package ABAQUS™ (a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to process productivity and product quality improvements.
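The structure of the methodology, an objective on cooling rate plus a constraint enforcing directional solidification, can be mimicked with a simple penalty formulation. The cooling-rate model, the constraint, and all numbers below are hypothetical surrogates for the ABAQUS process model and the SciPy optimizer used in the paper.

```python
# Penalty-method sketch: choose two cooling-channel timings t1, t2 to hit a
# target cooling rate, while a penalty enforces t1 <= t2 as a stand-in for
# the directional-solidification constraint (all quantities hypothetical).
def cooling_rate(t1, t2):
    # toy surrogate: earlier channel activation gives faster cooling
    return 10.0 / (1.0 + t1) + 5.0 / (1.0 + t2)

def objective(t1, t2, target=8.0, penalty=1e3):
    violation = max(0.0, t1 - t2)        # require t1 <= t2
    return (cooling_rate(t1, t2) - target) ** 2 + penalty * violation ** 2

# crude grid search standing in for scipy.optimize on the real process model
grid = [0.1 * k for k in range(0, 51)]
t1_opt, t2_opt = min(((a, b) for a in grid for b in grid),
                     key=lambda p: objective(*p))
```

In the real workflow each objective evaluation is a full thermal simulation, so the extraction script and optimizer are wrapped around the finite element run rather than an analytic function like this one.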
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not execute MRN optimally in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing expected utilities between all possible policies to conduct MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario, and avoid unnecessary losses in an unsuccessful-end scenario.
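The joint quantities described above can be sketched directly: assuming the related negotiations succeed independently and the global outcome requires all of them to succeed, the expected utility of a policy is the product of the chosen success rates times the summed utilities, and the optimal policy is found by comparing all policies. The numbers are illustrative, not from the paper's experiments.

```python
import itertools

# Expected global utility of a policy over several related negotiations:
# joint success rate is the product of the per-negotiation success rates
# selected by the policy (independence assumed), and the joint utility is
# the sum of the selected utilities.
def expected_utility(policy, options):
    p_joint, u_joint = 1.0, 0.0
    for negotiation, choice in zip(options, policy):
        p, u = negotiation[choice]    # (success rate, utility) of that choice
        p_joint *= p
        u_joint += u
    return p_joint * u_joint          # all related negotiations must succeed

# Two negotiations, each with two strategies: (success rate, utility)
options = [[(0.9, 4.0), (0.6, 7.0)],
           [(0.8, 5.0), (0.5, 9.0)]]

best = max(itertools.product(range(2), repeat=2),
           key=lambda pol: expected_utility(pol, options))
```

Note how the safe choices win here even though the risky ones carry higher raw utilities; a sequential procedure that greedily maximizes each negotiation's utility in isolation would pick the risky options and end up with a lower expected global outcome.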
Treder, Krzysztof; Chołuj, Joanna; Zacharzewska, Bogumiła; Babujee, Lavanya; Mielczarek, Mateusz; Burzyński, Adam; Rakotondrafara, Aurélie M
2018-02-01
Potato virus Y (PVY) infection has been a global challenge for potato production and the leading cause of downgrading and rejection of seed crops for certification. Accurate and timely diagnosis is key to effective disease control. Here, we have optimized a reverse transcription loop-mediated amplification (RT-LAMP) assay to differentiate the PVY O and N serotypes. The RT-LAMP assay is based on isothermal autocyclic strand displacement during DNA synthesis. The high specificity of this method relies heavily on the primer sets designed for amplification of the targeted regions. We designed specific primer sets targeting a region within the coat protein gene that contains nucleotide signatures typical of O and N coat protein types; these primer sets differ in their annealing temperatures. Combining this assay with total RNA extraction by magnetic capture, we have established a highly sensitive, simplified and shortened RT-LAMP procedure as an alternative to conventional nucleic acid assays for diagnosis. This optimized procedure for virus detection may be used as a preliminary test for identifying the viral serotype before investing time and effort in multiplex RT-PCR tests when a specific strain is needed.
PLA realizations for VLSI state machines
NASA Technical Reports Server (NTRS)
Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.
1990-01-01
A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.
A multiple-objective optimal exploration strategy
Christakos, G.; Olea, R.A.
1988-01-01
Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity and leads to a step-by-step procedure.
Tang, Dalin; Yang, Chun; Geva, Tal; del Nido, Pedro J.
2010-01-01
Recent advances in medical imaging technology and computational modeling techniques are making it possible to construct patient-specific computational ventricle models and use them to test surgical hypotheses, replacing empirical and often risky clinical experimentation when examining the efficiency and suitability of various reconstructive procedures in diseased hearts. In this paper, we provide a brief review of recent developments in ventricle modeling and its potential application in surgical planning and management of tetralogy of Fallot (ToF) patients. Aspects of data acquisition, model selection and construction, tissue material properties, ventricle layer structure and tissue fiber orientations, pressure conditions, model validation and virtual surgery procedures (changing patient-specific ventricle data and performing computer simulations) are reviewed. Results from a case study using patient-specific cardiac magnetic resonance (CMR) imaging and a right/left ventricle and patch (RV/LV/Patch) combination model with fluid-structure interactions (FSI) are reported. The models were used to evaluate and optimize the human pulmonary valve replacement/insertion (PVR) surgical procedure and patch design, and to test the surgical hypothesis that PVR with a small patch and aggressive scar tissue trimming may lead to improved recovery of RV function and reduced stress/strain conditions in the patch area. PMID:21344066
Al Rakan, Mohammed; Shores, Jaimie T.; Bonawitz, Steve; Santiago, Gabriel; Christensen, Joani M.; Grant, Gerald; Murphy, Ryan J.; Basafa, Ehsan; Armand, Mehran; Otovic, Pete; Eller, Sue; Brandacher, Gerald; Gordon, Chad R.
2014-01-01
Introduction: Swine are often regarded as having facial skeletons analogous to humans' and therefore serve as an ideal animal model for translational investigation. However, there is a dearth of literature describing the pertinent ancillary procedures required for craniomaxillofacial research. With this in mind, our objective was to evaluate all procedures required for peri-operative management and animal safety related to experimental craniomaxillofacial surgical procedures such as orthotopic maxillofacial transplantation. Methods: Miniature swine (n=9) were used to investigate peri-operative airway management, methods for providing nutrition, and long-dwelling intravenous access. Flap perfusion using near-infrared laser angiography and facial nerve assessment with EMG were explored. Results: The Bivona® tracheostomy tube was deemed more appropriate than the Shiley, since its soft, wire-reinforced tubing reduced the incidence of tracheal necrosis. A PEG tube, as opposed to esophagostomy, provided a reliable route for post-operative feeding. Femoral venous access with dorsal tunneling proved to be an ideal option, as it is far from the pertinent neck vessels. Laser angiography was beneficial for real-time evaluation of graft perfusion. Facial EMG tracing capture was found to be optimal using percutaneous leads near the oral commissure. Experience shows that ancillary procedures are critical and that malpositioning of devices may lead to irreversible sequelae and premature animal death. Conclusion: Face-jaw-teeth transplantation in swine is a complicated procedure which demands special attention to airway, feeding, and intravascular access. It is critical that each ancillary procedure be performed by a dedicated team familiar with the relevant anatomy and protocol. Emphasis should be placed on secure skin-level fixation of all tubes/lines to minimize the risk of dislodgement. A reliable veterinarian team is invaluable and critical for long-term success. PMID:25377964
Procedures for shape optimization of gas turbine disks
NASA Technical Reports Server (NTRS)
Cheu, Tsu-Chien
1989-01-01
Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stress and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.
The application of click chemistry in the synthesis of agents with anticancer activity
Ma, Nan; Wang, Ying; Zhao, Bing-Xin; Ye, Wen-Cai; Jiang, Sheng
2015-01-01
The copper(I)-catalyzed 1,3-dipolar cycloaddition between alkynes and azides (click chemistry) to form 1,2,3-triazoles is the most popular such reaction due to its reliability, specificity, and biocompatibility. As a powerful modular synthetic approach toward the assembly of new molecular entities, this reaction has the potential to shorten procedures and to make lead identification and optimization in medicinal chemistry more efficient, and it has been applied increasingly in anticancer drug discovery. The present review focuses mainly on the applications of this reaction in the synthesis of agents with anticancer activity, which are divided into four groups: topoisomerase II inhibitors, histone deacetylase inhibitors, protein tyrosine kinase inhibitors, and antimicrotubule agents. PMID:25792812
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained problem with multiple objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
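The K-S envelope described above can be sketched in a few lines. The toy example below is not the paper's CFD problem: the two quadratic objectives and the weight factors are illustrative assumptions. It combines two objectives into one smooth unconstrained function and minimizes it with BFGS:

```python
import numpy as np
from scipy.optimize import minimize

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth upper bound on
    max(values); larger rho hugs the maximum more tightly."""
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

# Two illustrative quadratic objectives standing in for the paper's
# aerodynamic-performance and sonic-boom measures (pure assumptions):
def f1(x):
    return (x[0] - 1.0)**2 + 0.5 * x[1]**2

def f2(x):
    return 0.5 * (x[0] + 0.5)**2 + (x[1] - 1.0)**2

def composite(x, w=(1.0, 1.0)):
    # Weight factors let the designer emphasize individual objectives.
    return ks(np.array([w[0] * f1(x), w[1] * f2(x)]))

res = minimize(composite, x0=np.zeros(2), method="BFGS")
print(res.x, res.fun)
```

Because the envelope is differentiable everywhere, an unconstrained quasi-Newton solver such as BFGS can be applied directly, which is the practical point of the K-S transformation.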
Virtual reality simulation for the optimization of endovascular procedures: current perspectives.
Rudarakanchana, Nung; Van Herzeele, Isabelle; Desender, Liesbeth; Cheshire, Nicholas J W
2015-01-01
Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics and dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
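The Fletcher-Reeves update can be demonstrated on a small quadratic model problem (a generic illustration, not TRiP98's RBE-weighted objective), comparing its iteration count against plain steepest descent:

```python
import numpy as np

def solve_quadratic(A, b, x0, method="fr", tol=1e-10, max_iter=10_000):
    """Minimize 0.5 x'Ax - b'x (A symmetric positive definite) by
    steepest descent or by conjugate gradients with the Fletcher-Reeves
    beta; returns the solution and the iteration count."""
    x = x0.copy()
    g = A @ x - b                        # gradient
    d = -g
    for k in range(1, max_iter + 1):
        alpha = -(g @ d) / (d @ A @ d)   # exact line search (quadratic)
        x = x + alpha * d
        g_new = A @ x - b
        if np.linalg.norm(g_new) < tol:
            return x, k
        if method == "fr":
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update
            d = -g_new + beta * d
        else:                                 # steepest descent
            d = -g_new
        g = g_new
    return x, max_iter

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 30))
A = M @ M.T + 5.0 * np.eye(30)           # random SPD test matrix
b = rng.normal(size=30)
x_fr, it_fr = solve_quadratic(A, b, np.zeros(30), "fr")
x_sd, it_sd = solve_quadratic(A, b, np.zeros(30), "sd")
print(it_fr, it_sd)
```

On such problems the conjugate-direction recurrence converges in far fewer iterations than steepest descent, which is the behavior the abstract reports (a factor-of-four speedup in their setting).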
Optimization of the resources management in fighting wildfires.
Martin-Fernández, Susana; Martínez-Falero, Eugenio; Pérez-González, J Manuel
2002-09-01
Wildfires lead to important economic, social, and environmental losses, especially in areas of Mediterranean climate where they are of a high intensity and frequency. Over the past 30 years there has been a dramatic surge in the development and use of fire spread models. However, given the chaotic nature of environmental systems, it is very difficult to develop real-time fire-extinguishing models. This article proposes a method of optimizing the performance of wildfire fighting resources such that losses are kept to a minimum. The optimization procedure includes discrete simulation algorithms and Bayesian optimization methods for discrete and continuous problems (simulated annealing and Bayesian global optimization). Fast calculus algorithms are applied to provide optimization outcomes in short periods of time such that the predictions of the model and the real behavior of the fire, combat resources, and meteorological conditions are similar. In addition, adaptive algorithms take into account the chaotic behavior of wildfire so that the system can be updated with data corresponding to the real situation to obtain a new optimum solution. The application of this method to the Northwest Forest of Madrid (Spain) is also described. This application allowed us to check that it is a helpful tool in the decision-making process.
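The simulated-annealing ingredient can be sketched on a toy allocation problem. The loss model, front intensities, and cooling schedule below are invented for illustration and have no connection to the article's fire model:

```python
import math
import random

random.seed(1)

# Hypothetical per-front loss model: loss falls off with the number of
# crews assigned; the intensities are illustrative values.
intensity = [9.0, 4.0, 6.0, 2.0]
N_CREWS = 10

def total_loss(alloc):
    return sum(I / (1.0 + a) for I, a in zip(intensity, alloc))

def neighbour(alloc):
    """Move one crew from a staffed front to another front."""
    a = alloc[:]
    src = random.choice([i for i in range(len(a)) if a[i] > 0])
    dst = random.randrange(len(a))
    a[src] -= 1
    a[dst] += 1
    return a

def anneal(alloc, t0=5.0, cooling=0.995, steps=4000):
    best = cur = alloc
    t = t0
    for _ in range(steps):
        cand = neighbour(cur)
        delta = total_loss(cand) - total_loss(cur)
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if total_loss(cur) < total_loss(best):
                best = cur
        t *= cooling
    return best

best = anneal([N_CREWS, 0, 0, 0])
print(best, round(total_loss(best), 3))
```

The acceptance of occasional uphill moves is what lets the search escape local optima, and the adaptive re-optimization the article describes amounts to re-running such a loop from the current state whenever fresh field data arrive.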
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general-purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
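The forward finite differencing of objective and constraints that feeds the optimizer can be sketched generically. The response functions below are illustrative stand-ins (the real procedure calls HOVT and CAMRAD); the design variables and coefficients are assumptions for the demo:

```python
import numpy as np

def forward_diff_jacobian(f, x, h=1e-6):
    """Forward finite-difference derivatives of a vector of responses
    f at design point x, as used to drive a gradient-based optimizer."""
    f0 = np.asarray(f(x), dtype=float)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(f(xp)) - f0) / h
    return J

# Hypothetical stand-ins for hover power and a forward-flight constraint:
def responses(x):          # x = [taper ratio, root chord] (assumed)
    hover_hp = 100.0 + 20.0 * (x[0] - 0.6)**2 + 15.0 * (x[1] - 1.2)**2
    ff_margin = 1.0 - 0.3 * x[0] - 0.2 * x[1]
    return np.array([hover_hp, ff_margin])

J = forward_diff_jacobian(responses, np.array([0.8, 1.0]))
print(J)
```

One forward difference costs one extra analysis per design variable, which is why approximate analyses are paired with it in the abstract's procedure.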
Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles
NASA Technical Reports Server (NTRS)
Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.
1991-01-01
A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of the changes in the nozzle wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and a Mach 6 and a Mach 15 contoured nozzle.
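The least-squares design loop can be illustrated with the flow solver replaced by a cheap synthetic model. Everything below except the target-Mach idea is a hypothetical stand-in for the parabolized Navier-Stokes evaluation:

```python
import numpy as np
from scipy.optimize import least_squares

# Target: uniform Mach 6 across the test section.
M_TARGET = 6.0
stations = np.linspace(0.0, 1.0, 20)   # survey stations in the exit plane

def exit_mach(params):
    """Stand-in for the PNS evaluation: maps contour parameters to the
    Mach-number profile at the exit plane (an invented toy model)."""
    a, b, c = params
    return M_TARGET + (a - 0.5) * stations + b * stations**2 + (c + 0.1)

def residuals(params):
    # Deviation from the desired test-section flow-field conditions.
    return exit_mach(params) - M_TARGET

sol = least_squares(residuals, x0=np.array([0.0, 0.2, 0.0]))
print(sol.x, np.abs(residuals(sol.x)).max())
```

The structure mirrors the abstract's procedure: the contour parameters are adjusted until the residual vector of flow-field deviations is minimized in the least-squares sense, with the expensive solver sitting where `exit_mach` is here.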
Patient-specific rehearsal prior to EVAR: a pilot study.
Desender, L; Rancic, Z; Aggarwal, R; Duchateau, J; Glenck, M; Lachat, M; Vermassen, F; Van Herzeele, I
2013-06-01
This study aims to evaluate feasibility, face validity, influence on technical factors and subjective sense of utility of patient-specific rehearsal (PsR) prior to endovascular aortic aneurysm repair (EVAR). A prospective, multicentre pilot study. Patients suitable for EVAR were enrolled and a three-dimensional (3D) model of the patient's anatomy was generated. Less than 24 h prior to the real case, rehearsals were conducted in the laboratory or clinical angiosuite. Technical metrics were recorded during both procedures. A subjective questionnaire was used to evaluate realism, technical and human factor aspects (scale 1-5). Ten patients were enrolled. In one case, the treatment plan was altered based on PsR. In 7/9 patients, the rehearsal significantly altered the optimal C-arm position for the proximal landing zone and an identical fluoroscopy angle was chosen in the real procedure. All team members found the rehearsal useful for selecting the optimal fluoroscopy angle (median 4). The realism of the EVAR procedure simulation was rated highly (median 4). All team members found the PsR useful to prepare the individual team members and the entire team (median 4). PsR for EVAR permits creation of realistic case studies. Subjective evaluation indicates that it may influence optimal C-arm angles and be valuable to prepare the entire team. A randomised controlled trial (RCT) is planned to evaluate how this technology may influence technical and team performance, ultimately leading to improved patient safety. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Kanagaraju, Vijayanth; Chhabra, H S; Srivastava, Abhishek; Mahajan, Rajat; Kaul, Rahul; Bhatia, Pallav; Tandon, Vikas; Nanda, Ankur; Sangondimath, Gururaj; Patel, Nishit
2016-10-01
Congenital lordoscoliosis is an uncommon pathology and its management poses a formidable challenge, especially in the presence of type 2 respiratory failure and intraspinal anomalies. In such patients standard management protocols are not applicable, and a multistage procedure may be required to minimize risk and optimize results. A 15-year-old girl presented to our hospital emergency services with severe breathing difficulty. She had a severe and rapidly progressing deformity of her back, noted since 6 years of age, associated with severe respiratory distress requiring oxygen and BiPAP support. She was diagnosed with a severe and rigid congenital right thoracolumbar lordoscoliosis (coronal Cobb angle: 105° and thoracic lordosis -10°) with type 1 split cord malformation and a bony septum extending from T11 to L3. This led to a presentation of restrictive lung disease with type 2 respiratory failure. As her lung condition did not allow for any major procedure, we performed a staged procedure rather than a single-stage correction. Controlled axial traction by halo-gravity was applied initially, followed by halo-femoral traction. Four weeks later, this was replaced by a halo-pelvic distraction device after a posterior release procedure with asymmetric pedicle subtraction osteotomies at T7 and T10. Halo-pelvic distraction continued for 4 more weeks to optimize and correct the deformity. Subsequently, definitive posterior stabilization and fusion were done. The detrimental effect of diastematomyelia resection in such cases is clearly evident from the literature, so it was left unresected. A good scoliotic correction with improved respiratory function was achieved. Three-year follow-up showed no loss of deformity correction, no evidence of pseudarthrosis, and a good clinical outcome with a reasonably balanced spine. The management of severe and rigid congenital lordoscoliotic deformities with intraspinal anomalies is challenging.
Progressive reduction in respiratory volume in untreated cases can lead to acute respiratory failure. Such patients have a high rate of intraoperative and postoperative morbidity and mortality, hence a staged procedure is recommended. Initially, a less invasive procedure such as halo traction helps to improve respiratory function with simultaneous correction of the deformity, while allowing monitoring for neurological deficit. Subsequently, spinal osteotomies combined with halo traction further improve the correction, after which definitive instrumented fusion can be done.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
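The nested arrangement can be sketched on a scalar toy problem. Everything here is an illustrative assumption, not the bimetallic-beam model: the inner quadratic objective, the "weight" term `k`, and the weighting `LAM` are invented so the analytic answer is known.

```python
import numpy as np
from scipy.optimize import minimize_scalar

LAM = 32.0   # weight on the optimum-sensitivity penalty (assumed)
P0 = 1.0     # nominal value of the uncertain parameter

def inner_optimum(k, p):
    """Inner problem: optimal response x for design k and parameter p.
    For this toy objective the minimizer is x* = p / (1 + k)."""
    return minimize_scalar(lambda x: (x - p)**2 + k * x**2).x

def outer_objective(k, dp=1e-3):
    # Sensitivity of the inner optimum, dx*/dp, by forward differencing.
    sens = (inner_optimum(k, P0 + dp) - inner_optimum(k, P0)) / dp
    return k + LAM * sens**2          # "weight" plus penalized sensitivity

res = minimize_scalar(outer_objective, bounds=(0.0, 10.0), method="bounded")
print(res.x)
```

Since x*(p) = p/(1+k), the sensitivity is 1/(1+k) and the outer optimum satisfies (1+k)^3 = 2·LAM, i.e. k = 3 for LAM = 32, which the numeric nested loop recovers.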
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Biao; Yamaguchi, Keiichi; Fukuoka, Mayuko
To accelerate the logical drug design procedure, we created the program “NAGARA,” a plugin for PyMOL, and applied it to the discovery of small compounds called medical chaperones (MCs) that stabilize the cellular form of a prion protein (PrP^C). In NAGARA, we constructed a single platform to unify docking simulation (DS), free energy calculation by molecular dynamics (MD) simulation, and interfragment interaction energy (IFIE) calculation by quantum chemistry (QC) calculation. NAGARA also enables large-scale parallel computing via a convenient graphical user interface. Here, we demonstrated its performance and its broad applicability from drug discovery to lead optimization with full compatibility with various experimental methods including Western blotting (WB) analysis, surface plasmon resonance (SPR), and nuclear magnetic resonance (NMR) measurements. Combining DS and WB, we discovered anti-prion activities for two compounds and tegobuvir (TGV), a non-nucleoside non-structural protein NS5B polymerase inhibitor showing activity against hepatitis C virus genotype 1. Binding profiles predicted by MD and QC are consistent with those obtained by SPR and NMR. Free energy analyses showed that these compounds stabilize the PrP^C conformation by decreasing its conformational fluctuation. Because TGV has already been approved as a medicine, its extension to prion diseases is straightforward. Finally, we evaluated the affinities of the fragmented regions of TGV using QC and found a clue for its further optimization. By repeating WB, MD, and QC recursively, we were able to obtain the optimum lead structure.
Highlights: • NAGARA integrates docking simulation, molecular dynamics, and quantum chemistry. • We found many compounds, e.g., tegobuvir (TGV), that exhibit anti-prion activities. • We obtained insights into the action mechanism of TGV as a medical chaperone. • Using QC, we obtained useful information for optimization of the lead compound, TGV. • NAGARA is a convenient platform for drug discovery and lead optimization.
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
Krüger, Marie T; Coenen, Volker A; Egger, Karl; Shah, Mukesch; Reinacher, Peter C
2018-06-13
In recent years, simulations based on phantom models have become increasingly popular in the medical field. In the field of functional and stereotactic neurosurgery, a cranial phantom would be useful to train operative techniques, such as stereo-electroencephalography (SEEG), to establish new methods as well as to develop and modify radiological techniques. In this study, we describe the construction of a cranial phantom and show examples for it in stereotactic and functional neurosurgery and its applicability with different radiological modalities. We prepared a plaster skull filled with agar. A complete operation for deep brain stimulation (DBS) was simulated using directional leads. Moreover, a complete SEEG operation including planning, implantation of the electrodes, and intraoperative and postoperative imaging was simulated. An optimally customized cranial phantom is filled with 10% agar. At 7°C, it can be stored for approximately 4 months. A DBS and an SEEG procedure could be realistically simulated. Lead artifacts can be studied in CT, X-ray, rotational fluoroscopy, and MRI. This cranial phantom is a simple and effective model to simulate functional and stereotactic neurosurgical operations. This might be useful for teaching and training of neurosurgeons, establishing operations in a new center and for optimization of radiological examinations. © 2018 S. Karger AG, Basel.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A
2016-11-16
A new strategy for multiresponse optimization, in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors, is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables while reducing the experimental effort needed by a factor of 67, yet guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) for the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the large and amine-dependent effect of the pretreatment on the responses. Percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L⁻¹ for cadaverine and 497 μg L⁻¹ for histamine in solvent, and 0.07 mg kg⁻¹ and 14.81 mg kg⁻¹, respectively, in fish (probability of false positive equal to 0.05). Copyright © 2016 Elsevier B.V. All rights reserved.
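The idea of trading a full factorial for a small D-optimal subset can be sketched with a point-exchange search. The candidate set below is a tiny 2³ factorial with an intercept-plus-main-effects model, far smaller than the paper's 3456-run design, and every detail of the search is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate set: full 2^3 factorial expanded to a model matrix with an
# intercept and three main effects.
levels = np.array([[x1, x2, x3] for x1 in (-1, 1)
                                for x2 in (-1, 1)
                                for x3 in (-1, 1)], dtype=float)
X_cand = np.column_stack([np.ones(len(levels)), levels])

def d_criterion(idx):
    # D-criterion: determinant of the information matrix X'X.
    X = X_cand[idx]
    return np.linalg.det(X.T @ X)

def exchange(n_runs, n_starts=20):
    """Greedy point-exchange search for a D-optimal n-run design."""
    best_idx, best_det = None, -np.inf
    for _ in range(n_starts):
        idx = list(rng.choice(len(X_cand), size=n_runs, replace=False))
        improved = True
        while improved:
            improved = False
            for i in range(n_runs):
                for cand in range(len(X_cand)):
                    trial = idx[:i] + [cand] + idx[i + 1:]
                    if d_criterion(trial) > d_criterion(idx) + 1e-9:
                        idx, improved = trial, True
        score = d_criterion(idx)
        if score > best_det:
            best_idx, best_det = idx, score
    return best_idx, best_det

idx, det_val = exchange(n_runs=4)
print(sorted(idx), det_val)
```

For four runs and four model terms the search recovers an orthogonal half-fraction (det(X'X) = 4⁴ = 256), the same determinant-maximizing principle that lets the paper's design shrink the experimental effort by a factor of 67.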
New Method of Calibrating IRT Models.
ERIC Educational Resources Information Center
Jiang, Hai; Tang, K. Linda
This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that this kind of procedure is not easily trapped by local optima and…
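A minimal sketch of why a population-based search such as a GA resists local optima, on a made-up bimodal fitness function (none of this is the article's calibration code):

```python
# Illustrative toy GA: maximize a function with a minor peak near x = 0 and
# the global peak near x = 4; a local hill-climber started near 0 would stall
# at the minor peak, while the population-wide search does not.
import random
from math import exp

def fitness(x):
    # global maximum near x = 4 (value ~2), local peak near x = 0 (value ~1)
    return 2.0 * exp(-(x - 4.0) ** 2) + exp(-x ** 2)

def ga_maximize(fitness, lo=-2.0, hi=6.0, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                 # arithmetic crossover
            child += rng.gauss(0.0, 0.3)          # mutation keeps diversity
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

print(round(ga_maximize(fitness), 1))  # converges close to 4.0
```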
Leistritz, L; Suesse, T; Haueisen, J; Hilgenfeld, B; Witte, H
2006-01-01
Directed information transfer in the human brain presumably occurs via oscillations. As yet, most approaches for analyzing these oscillations are based on time-frequency or coherence analysis. The present work concerns the modeling of cortical 600 Hz oscillations, localized within Brodmann areas 3b and 1 after stimulation of the median nerve (nervus medianus), by means of coupled differential equations. This approach leads to the so-called parameter identification problem, in which, given a data set, a set of unknown parameters of a system of ordinary differential equations is determined by special optimization procedures. Some suitable algorithms for this task are presented in this paper. Finally, an oscillatory network model is optimally fitted to data taken from ten volunteers.
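The parameter-identification idea can be sketched on a toy system; the damped oscillator and grid search below are illustrative stand-ins for the authors' coupled-oscillator model and specialized optimizers:

```python
# Minimal parameter-identification sketch: simulate a damped oscillator with
# forward Euler and recover its parameters by least-squares search over
# candidate simulations (an illustrative stand-in, not the paper's model).
def simulate(omega, gamma, n=200, dt=0.005, x0=1.0, v0=0.0):
    x, v, xs = x0, v0, []
    for _ in range(n):
        x, v = x + dt * v, v + dt * (-omega ** 2 * x - 2 * gamma * v)
        xs.append(x)
    return xs

def sse(a, b):
    return sum((u - w) ** 2 for u, w in zip(a, b))

# "Measured" data generated with known parameters omega=25, gamma=2
data = simulate(25.0, 2.0)

# Identification step: pick the candidate simulation closest to the data
candidates = [(w, g) for w in (15.0, 20.0, 25.0, 30.0) for g in (1.0, 2.0, 4.0)]
best = min(candidates, key=lambda p: sse(simulate(*p), data))
print(best)  # -> (25.0, 2.0)
```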
Weak-value amplification as an optimal metrological protocol
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Escher, B. M.; de Matos Filho, R. L.; Zagury, N.; Davidovich, L.
2015-06-01
The implementation of weak-value amplification requires the pre- and postselection of states of a quantum system, followed by the observation of the response of the meter, which interacts weakly with the system. Data acquisition from the meter is conditioned to successful postselection events. Here we derive an optimal postselection procedure for estimating the coupling constant between system and meter and show that it leads both to weak-value amplification and to the saturation of the quantum Fisher information, under conditions fulfilled by all previously reported experiments on the amplification of weak signals. For most of the preselected states, full information on the coupling constant can be extracted from the meter data set alone, while for a small fraction of the space of preselected states, it must be obtained from the postselection statistics.
Singular perturbation techniques for real time aircraft trajectory optimization and control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1982-01-01
The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.
47 CFR 1.2202 - Competitive bidding design options.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
Treatment Options: Biological Basis of Regenerative Endodontic Procedures
Hargreaves, Kenneth M.; Diogenes, Anibal; Teixeira, Fabricio B.
2013-01-01
Dental trauma occurs frequently in children and often can lead to pulpal necrosis. The occurrence of pulpal necrosis in the permanent but immature tooth represents a challenging clinical situation since the thin and often short roots increase the risk of subsequent fracture. Current approaches for treating the traumatized immature tooth with pulpal necrosis do not reliably achieve the desired clinical outcomes, consisting of healing of apical periodontitis, promotion of continued root development and restoration of the functional competence of pulpal tissue. An optimal approach for treating the immature permanent tooth with a necrotic pulp would be to regenerate functional pulpal tissue. This review summarizes the current literature supporting a biological rationale for considering regenerative endodontic treatment procedures in treating the immature permanent tooth with pulp necrosis. PMID:23439043
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach
Guan, Biing T.; Gertner, George Z.; Alan B…
1998-05-01
…coverage based on past coverage. A literature survey was conducted to identify artificial neural network analysis techniques applicable for…
Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.
Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A
2012-01-03
Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) have rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge for the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage, particularly across AT- and GC-rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low-quantity starting material and tolerant of extremely AT-rich sequences. We have used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of the sequence data generated, we show that our optimized conditions, which involve a PCR additive (TMAC), produce amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC-neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain sequence complexity at either extreme of base composition.
This development will greatly benefit sequencing clinical samples that often require amplification due to low mass of DNA starting material.
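The base-composition stratification underlying such bias analyses can be sketched as follows (the sequence is invented; real pipelines compute this over genome windows to relate read coverage to AT content):

```python
# Sketch of per-window AT-content measurement, the quantity against which
# read coverage is stratified when comparing library preparations.
def at_content(seq):
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def windows(seq, size):
    # non-overlapping windows of the given size
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

# Made-up genome fragment: AT-rich, then GC-rich, then AT-rich again
genome = "ATATATATTA" + "GCGCGCACGT" + "AATTTTAAAT"
print([round(at_content(w), 1) for w in windows(genome, 10)])  # -> [1.0, 0.2, 1.0]
```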
An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.
Zhang, Ye; Yu, Tenglong; Wang, Wenwu
2014-01-01
Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first is that the original clean signals for learning the dictionary are assumed to be known, whereas in practice they need to be estimated from noisy measurements. This, however, leads to a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example the null dictionary matrix, that may be given by a dictionary learning algorithm, as discussed for the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, in which we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm compared with three baselines, namely the AK-SVD, LOST, and NAAOLA algorithms.
NASA Astrophysics Data System (ADS)
Fusillo, G.; Rosestolato, D.; Scura, F.; Cattarin, S.; Mattarozzi, L.; Guerriero, P.; Gambirasi, A.; Brianese, N.; Staiti, P.; Guerriero, R.; La Sala, G.
2018-03-01
We present the preparation and characterization of pure lead monoxide obtained by recycling the lead paste recovered from exhausted lead-acid batteries. The recycling is based on a hydrometallurgical procedure reported in an STC patent, which includes simple chemical operations (desulphurisation, leaching, precipitation, filtration) and a final thermal conversion. Materials obtained by treatment at 600 °C consist predominantly of β-PbO. The electrochemical behaviour of Positive Active Mass (PAM) prepared from different materials (or mixtures) is then investigated and compared. An optimized oxide material, obtained by prolonged (8 h) thermal treatment at 600 °C, consists of pure β-PbO and appears suitable for the preparation of battery elements, alone or in mixture with a small fraction (10%-30%) of traditional industrial leady oxide. The resulting battery performance is similar to that obtained from pure leady oxide. In comparison with traditional recycling processes, the proposed method guarantees lower energy consumption, limited environmental impact and reduced operating risk for industry workers.
Gallei, Markus; Tockner, Stefan; Klein, Roland; Rehahn, Matthias
2010-05-12
Well-defined diblock copolymers have been prepared in which three different ferrocene-based monomers are combined with 1,1-dimethylsilacyclobutane (DMSB) and 1-methylsilacyclobutane, respectively, as their carbosilane counterparts. Optimized procedures are reported for the living anionic chain growth following sequential monomer addition protocols, ensuring narrow polydispersities and high blocking efficiencies. The DMSB-containing copolymers show phase segregation in the bulk state, leading to micromorphologies composed of crystalline DMSB phases and amorphous polymetallocene phases. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal control in adaptive optics modeling of nonlinear systems
NASA Astrophysics Data System (ADS)
Herrmann, J.
The problem of using an adaptive optics system to correct for nonlinear effects such as thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open-loop optimal control problem. This treatment gives a limit for the best possible correction; aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane that minimizes the time-averaged beam area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point boundary-value problem, which is ill-conditioned in this case. The optimal control problem was therefore solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. In these cases the steady-state correction becomes better than the instantaneous correction and approaches the optimal correction.
Transonic airfoil design for helicopter rotor applications
NASA Technical Reports Server (NTRS)
Hassan, Ahmed A.; Jackson, B.
1989-01-01
Despite the fact that the flow over a rotor blade is strongly influenced by locally three-dimensional and unsteady effects, practical experience has always demonstrated that substantial improvements in aerodynamic performance can be gained by improving the steady two-dimensional characteristics of the airfoil(s) employed. The two phenomena known to have the greatest impact on overall rotor performance are: (1) retreating-blade stall, with the associated large pressure drag, and (2) compressibility effects on the advancing blade, leading to shock formation and the associated wave drag and boundary-layer separation losses. It was concluded that: optimization routines are a powerful tool for finding solutions to multiple-design-point problems; the optimization process must be guided by the judicious choice of geometric and aerodynamic constraints; optimization routines should be appropriately coupled to viscous, not inviscid, transonic flow solvers; hybrid design procedures in conjunction with optimization routines represent the most efficient approach for rotor airfoil design; unsteady effects resulting in the delay of lift and moment stall should be modeled using simple empirical relations; and in-flight optimization of aerodynamic loads (e.g., use of variable-rate blowing, flaps, etc.) can satisfy any number of requirements at design and off-design conditions.
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only wider diffusion and online transmission, but also the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly texturized flat materials. The technical obsolescence of high-end scanners and the improvements achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so that the properties of the original item are preserved as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate black-and-white photographic negatives.
Reducing infection risk in implant-based breast-reconstruction surgery: challenges and solutions
Ooi, Adrian SH; Song, David H
2016-01-01
Implant-based procedures are the most commonly performed method for postmastectomy breast reconstruction. While donor-site morbidity is low, these procedures are associated with a higher risk of reconstructive loss. Many of these losses are related to infection of the implant, which can lead to prolonged antibiotic treatment, undesired additional surgical procedures, and unsatisfactory results. This review summarizes the recent literature regarding implant-related breast-reconstruction infections and combines this with a practical approach to the patient and surgery aimed at reducing this risk. Prevention of infection begins with an appropriate reconstructive choice based on an assessment and optimization of risk factors. These include patient and disease characteristics, such as smoking, obesity, large breast size, and immediate reconstructive procedures, as well as adjuvant therapy, such as radiotherapy and chemotherapy. For implant-based breast reconstruction, preoperative planning and organization is key to reducing infection. A logical and consistent intraoperative and postoperative surgical protocol, including appropriate antibiotic choice, mastectomy-pocket creation, implant handling, and considered acellular dermal matrix use, contributes toward the reduction of breast-implant infections. PMID:27621667
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... periodic optimization of detector response. Prior to introduction into service and at least annually... nitrogen. (2) One of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of Flame Ionization...
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1976-01-01
Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.
1980-05-01
Approximate or near-optimal solutions are often the only practical solutions available. This paper identifies and compares some procedures which use independent near-optimal solutions. The objective of this paper is to indicate some relatively new statistical procedures for obtaining an upper confidence limit on the global optimum G. Each of these…
Numerical modeling and optimization of the Iguassu gas centrifuge
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.
2017-07-01
The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separation power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimizing the GC is performed, providing the maximum of the separation power. The optimization is based on the BOBYQA method, exploring the results of numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved by using a direct solver for the hydrodynamical and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
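The outer optimization step can be sketched with a simple derivative-free compass search standing in for BOBYQA (the paper's actual method), and a made-up quadratic in place of the expensive hydrodynamics/diffusion solver:

```python
# Derivative-free optimization sketch: the "separative power" below is an
# invented quadratic stand-in for the coupled solver; the compass search
# evaluates only function values, as BOBYQA-style methods do.
def separative_power(params):
    x, y = params
    return 10.0 - (x - 1.5) ** 2 - 2.0 * (y - 0.5) ** 2

def compass_maximize(f, start, step=1.0, tol=1e-4, max_iter=1000):
    best = list(start)
    fb = f(best)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                trial = best.copy()
                trial[i] += d
                ft = f(trial)
                if ft > fb:
                    best, fb, improved = trial, ft, True
        if not improved:
            step /= 2.0   # shrink the stencil, as trust-region radii shrink
        it += 1
    return best, fb

best, fb = compass_maximize(separative_power, [0.0, 0.0])
print([round(v, 2) for v in best], round(fb, 2))  # -> [1.5, 0.5] 10.0
```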
Merlé, Y; Mentré, F
1995-02-01
In this paper, three criteria for designing experiments for Bayesian estimation of the parameters of models that are nonlinear in their parameters, when a prior distribution is available, are presented: the determinant of the Bayesian information matrix, the determinant of the pre-posterior covariance matrix, and the expected information provided by an experiment. A procedure to simplify the computation of these criteria is proposed in the case of continuous prior distributions and is compared with the criterion obtained from a linearization of the model about the mean of the prior distribution for the parameters. This procedure is applied to two models commonly encountered in the areas of pharmacokinetics and pharmacodynamics: the one-compartment open model with bolus intravenous single-dose injection and the Emax model. Both involve two parameters. Additive as well as multiplicative Gaussian measurement errors are considered, with normal prior distributions. Various combinations of the variances of the prior distribution and of the measurement error are studied. Our attention is restricted to designs with limited numbers of measurements (one or two measurements). This situation often occurs in practice when Bayesian estimation is performed. The resulting optimal Bayesian designs vary with the variances of the parameter distribution and with the measurement error. The two-point optimal designs sometimes differ from the D-optimal designs for the mean of the prior distribution and may consist of replicated measurements. For the studied cases, the determinant of the Bayesian information matrix and its linearized form lead to the same optimal designs. In some cases, the pre-posterior covariance matrix can be far from its lower bound, namely the inverse of the Bayesian information matrix, especially for the Emax model with a multiplicative measurement error.
The expected information provided by the experiment and the determinant of the pre-posterior covariance matrix generally lead to the same designs except for the Emax model and the multiplicative measurement error. Results show that these criteria can be easily computed and that they could be incorporated in modules for designing experiments.
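For the Emax model, the linearized (D-optimal) criterion discussed above can be sketched as a brute-force search over two-point designs; the parameter values and dose grid are illustrative, not taken from the paper:

```python
# D-optimality sketch for the Emax model E(d) = Emax*d/(ED50 + d): maximize
# det(J^T J) of the two-point design's Jacobian, linearized at the prior mean.
def sensitivities(d, emax=1.0, ed50=5.0):
    # partial derivatives of E(d) with respect to (Emax, ED50)
    return (d / (ed50 + d), -emax * d / (ed50 + d) ** 2)

def det_information(a, b):
    # for a 2x2 Jacobian J of a two-point design, det(J^T J) = det(J)^2
    (f1a, f2a), (f1b, f2b) = sensitivities(a), sensitivities(b)
    return (f1a * f2b - f1b * f2a) ** 2

doses = [float(d) for d in range(1, 21)]
pairs = [(a, b) for a in doses for b in doses if a < b]
best = max(pairs, key=lambda p: det_information(*p))
print(best)  # one support point sits at the highest available dose
```

On this grid the search returns (3.0, 20.0): the highest dose plus an intermediate dose below ED50, the classic shape of a D-optimal Emax design.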
NASA Technical Reports Server (NTRS)
Nissim, E.; Abel, I.
1978-01-01
An optimization procedure is developed based on the responses of a system to continuous gust inputs. The procedure uses control law transfer functions which have been partially determined by using the relaxed aerodynamic energy approach. The optimization procedure yields a flutter suppression system which minimizes control surface activity in a gust environment. The procedure is applied to wing flutter of a drone aircraft to demonstrate a 44 percent increase in the basic wing flutter dynamic pressure. It is shown that a trailing edge control system suppresses the flutter instability over a wide range of subsonic mach numbers and flight altitudes. Results of this study confirm the effectiveness of the relaxed energy approach.
Improved compaction of dried tannery wastewater sludge.
Della Zassa, M; Zerlottin, M; Refosco, D; Santomaso, A C; Canu, P
2015-12-01
We quantitatively studied the advantages of improving the compaction of a powder waste by several techniques, including its pelletization. The goal is to increase the mass storage capacity of a given storage volume and to reduce the permeability to air and moisture, which may trigger exothermic spontaneous reactions in organic waste, particularly as powders. The study is based on dried sludges from a wastewater treatment, mainly from tanneries, but the indications are valid and useful for any waste in the form of a powder suitable for pelletization. Measurements of bulk density have been carried out at the industrial and laboratory scales, using different packing procedures amenable to industrial processes. Waste as powder, as pellets and as their mixtures has been considered. The bulk density of the waste as powder increases from 0.64 t/m³ (simply poured) to 0.74 t/m³ (tapped) and finally to 0.82 t/m³ with a suitable, yet simple, packing procedure that we called dispersion filling, a net gain of 28% in compaction obtained simply by modifying the collection procedure. Pelletization increases compaction by definition, but the packing of pellets is relatively coarse. Some increase in the bulk density of pellets can be achieved by tapping; vibration and dispersion filling are not efficient with pellets. Mixing powder and pellets is the optimal packing policy. The best compaction was achieved by controlled vibration of a 30/70 wt% mixture of powders and pellets, leading to a final bulk density of 1 t/m³, i.e. an improvement in compaction of more than 54% with respect to simply poured powders and of more than 35% with respect to pellets alone. This corresponds to increasing the mass storage capacity by a factor of 1.56. Interestingly, vibration can be the most or the least effective procedure for improving the compaction of mixtures, depending on the characteristics of the vibration.
The optimal packing (30/70 wt% powders/pellets) proved to effectively mitigate the onset of smouldering leading to self-heating, according to standard tests, whereas pure pelletization removes the self-heating hazard entirely. Copyright © 2015 Elsevier Ltd. All rights reserved.
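The quoted compaction gains follow directly from the bulk densities reported in the abstract; a quick arithmetic check:

```python
# Arithmetic check of the compaction figures (bulk densities in t/m^3):
poured, tapped, dispersion, best_mix = 0.64, 0.74, 0.82, 1.0

def gain(new, ref):
    """Percentage gain in bulk density relative to a reference."""
    return round(100 * (new / ref - 1))

print(gain(dispersion, poured))     # -> 28  (dispersion filling vs poured)
print(gain(best_mix, poured))       # -> 56  (consistent with "more than 54%")
print(round(best_mix / poured, 2))  # -> 1.56 (mass-storage capacity factor)
```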
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
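A toy version of the ant-colony idea applied to a specification search (the binary "model" and the fitness function below are invented stand-ins, not the article's structural-equation setup):

```python
# Ant-colony sketch: each "ant" samples a subset of candidate paths guided
# by pheromone weights, and pheromone is reinforced on the paths of the
# best-fitting model found so far.
import random

def aco_search(n_items, fitness, n_ants=20, n_iters=30, evap=0.1, seed=1):
    rng = random.Random(seed)
    pher = [0.5] * n_items                 # inclusion probability per path
    best, best_fit = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            model = tuple(rng.random() < p for p in pher)
            f = fitness(model)
            if f > best_fit:
                best, best_fit = model, f
        # evaporate, then deposit pheromone on the best model so far
        pher = [(1 - evap) * p + evap * (1.0 if used else 0.0)
                for p, used in zip(pher, best)]
    return best

# True model uses exactly paths 0 and 2; fitness penalizes wrong choices.
target = (True, False, True, False)
fitness = lambda m: -sum(a != b for a, b in zip(m, target))
print(aco_search(4, fitness))  # recovers the target specification
```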
Deng, Yongbo; Korvink, Jan G
2016-05-01
This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal density variable into an element-wise physical density variable.
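The filter-plus-projection regularization chain can be sketched in one dimension; here a moving average stands in for the Helmholtz PDE filter, followed by the standard tanh threshold projection (the β and η values are illustrative):

```python
# Regularization sketch: smooth the raw design densities (moving average as
# a stand-in for the Helmholtz filter), then apply the tanh threshold
# projection that pushes intermediate densities toward 0/1.
from math import tanh

def smooth(rho, radius=1):
    n = len(rho)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(rho[lo:hi]) / (hi - lo))
    return out

def project(rho_tilde, beta=8.0, eta=0.5):
    # smoothed Heaviside: -> 0 below eta, -> 1 above eta as beta grows
    den = tanh(beta * eta) + tanh(beta * (1 - eta))
    return [(tanh(beta * eta) + tanh(beta * (r - eta))) / den for r in rho_tilde]

rho = [0.0, 0.1, 0.2, 0.9, 1.0, 1.0]
rho_bar = project(smooth(rho))
print([round(r, 2) for r in rho_bar])  # intermediate values pushed to 0/1
```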
Rossi, Stefano; Gazzola, Enrico; Capaldo, Pietro; Borile, Giulia; Romanato, Filippo
2018-05-18
Surface Plasmon Resonance (SPR)-based sensors have the advantage of being label-free, enzyme-free and real-time. However, their adoption in multidisciplinary research is still mostly limited to prism-coupled devices. Plasmonic gratings, combined with simple and cost-effective instrumentation, have been poorly developed compared to prism-coupled systems, mainly because of their lower sensitivity. Here we describe the optimization and signal enhancement of a sensing platform based on a phase-interrogation method, which entails the exploitation of a nanostructured sensor. This technique is particularly suitable for integration of the plasmonic sensor in a lab-on-a-chip platform and can be used in a microfluidic chamber to ease the sensing procedures and limit the injected volume. Careful optimization of the most suitable experimental parameters by numerical simulations leads to a 30–50% enhancement of the SPR response, opening new possibilities for applications in the biomedical research field while maintaining the ease and versatility of the configuration.
General strategy for the protection of organs at risk in IMRT therapy of a moving body
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abolfath, Ramin M.; Papiez, Lech
2009-07-15
We investigated protection strategies of organs at risk (OARs) in intensity modulated radiation therapy (IMRT). These strategies apply to delivery of IMRT to moving body anatomies that show relative displacement of an OAR in close proximity to a tumor target. We formulated an efficient genetic algorithm which makes it possible to search for global minima in a complex landscape of multiple irradiation strategies delivering a given, predetermined intensity map to a target. The optimal strategy was investigated with respect to minimizing the dose delivered to the OAR. The optimization procedure developed relies on variability of all parameters available for control of radiation delivery in modern linear accelerators, including adaptation of leaf trajectories and simultaneous modification of beam dose rate during irradiation. We showed that the optimization algorithms lead to a significant reduction in the dose delivered to the OAR in cases where organs at risk move relative to a treatment target.
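The record above gives no implementation details; as a purely illustrative sketch under stated assumptions (the objective `oar_dose` is an invented surrogate, not the paper's dose model, and all GA parameters are hypothetical), a minimal real-coded genetic algorithm with tournament selection, uniform crossover, Gaussian mutation and elitism might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def oar_dose(x):
    # Toy surrogate: "dose to organ at risk" as a multimodal function of
    # delivery parameters x (e.g. leaf timings, dose-rate scalings).
    return float(np.sum(x**2) + 2.0 * np.sin(5.0 * x).sum())

def genetic_minimize(f, dim=4, pop_size=40, generations=200,
                     mutation_scale=0.3, bounds=(-2.0, 2.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        # Tournament selection: keep the better of two random individuals.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
        # Uniform crossover between pairs of selected parents.
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the feasible box.
        children += rng.normal(0.0, mutation_scale, size=children.shape)
        children = np.clip(children, lo, hi)
        # Elitism: carry the best individual into the next generation.
        best = pop[np.argmin(fitness)]
        pop = children
        pop[0] = best
    fitness = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()

x_best, f_best = genetic_minimize(oar_dose)
```

On this toy multimodal landscape, the elitist population search escapes local minima that a single gradient descent would be trapped in, which is the property the abstract relies on when searching over many irradiation strategies.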
Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco
2015-02-05
One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer on the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
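The deterministic 3D ray-launching engine in the abstract is out of scope for a snippet, but the simplest building block of any such link budget, free-space loss via the Friis transmission equation, can be sketched. All link parameters below are hypothetical examples, not values from the paper:

```python
import numpy as np

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Free-space received power via the Friis transmission equation,
    the baseline against which deterministic radioplanning is compared."""
    c = 299_792_458.0
    wavelength = c / freq_hz
    # Free-space path loss in dB at distance dist_m.
    fspl_db = 20.0 * np.log10(4.0 * np.pi * dist_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

# Hypothetical 2.4 GHz ZigBee-class link: 0 dBm transmit power,
# unity-gain antennas, 10 m separation.
p_rx = friis_received_power_dbm(0.0, 0.0, 0.0, 2.4e9, 10.0)
```

Indoor multipath, the effect the ray launcher actually models, adds large deviations around this free-space baseline, which is why deterministic techniques are needed for reliable WSN deployment.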
Degenerative lumbosacral stenosis in working dogs: current concepts and review.
Worth, A J; Thompson, D J; Hartman, A C
2009-12-01
Degenerative lumbosacral stenosis (DLSS) is characterised by intervertebral disc degeneration, with secondary bony and soft-tissue changes leading to compression of the cauda equina. Large-breed, active and working dogs are the most commonly affected by DLSS. Specific manipulative tests allow the clinician to form a high suspicion of DLSS, and initiate investigation. Changes seen using conventional radiography are unreliable, and although contrast radiography represents an improvement, advanced imaging is accepted as the diagnostic method of choice. Treatment involves decompression and/or stabilisation procedures in working dogs, although conservative management may be acceptable in pet dogs with mild signs. Prognosis for return to work is only fair, and there is a high rate of recurrence following conventional surgery. Stabilisation procedures are associated with the potential for failure of the implant, and their use has not gained universal acceptance. A new surgical procedure, dorsolateral foramenotomy, offers a potential advance in the management of DLSS. Several aspects of the pathogenesis, heritability and optimal treatment approach remain uncertain.
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Initial and periodic optimization of detector response. Prior to initial use and at least annually... nitrogen. (2) Use of one of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of a Flame...
Reliable Transition State Searches Integrated with the Growing String Method.
Zimmerman, Paul
2013-07-09
The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
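As a hedged illustration of the eigenvector-following idea the abstract builds on (demonstrated here on a toy 2D surface with an analytic Hessian, not on GSM itself), the following sketch walks uphill along the lowest Hessian eigenmode and downhill along all others, converging to a first-order saddle point:

```python
import numpy as np

# Toy 2D surface with a first-order saddle at the origin:
# V(x, y) = (x^2 - 1)^2 + y^2 has minima at (+/-1, 0) and a saddle at (0, 0).
def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def hess(p):
    x, _ = p
    return np.array([[12.0 * x**2 - 4.0, 0.0], [0.0, 2.0]])

def eigenvector_following(p0, tol=1e-10, max_iter=100):
    """Maximize along the lowest Hessian eigenmode, minimize along the
    rest; converges to an index-1 saddle (a transition state analogue)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        lam, vecs = np.linalg.eigh(hess(p))   # ascending eigenvalues
        gq = vecs.T @ g                       # gradient in the eigenbasis
        step = np.empty_like(gq)
        step[0] = +gq[0] / abs(lam[0])        # uphill along lowest mode
        step[1:] = -gq[1:] / np.abs(lam[1:])  # downhill along the others
        p = p + vecs @ step
    return p

saddle = eigenvector_following([0.3, 0.5])
```

The constraint described in the abstract (overlap of the maximized eigenmode with the string tangent) is what keeps this local walk from wandering to the wrong saddle; the sketch omits that coupling.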
Optimization of flexible wing structures subject to strength and induced drag constraints
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1977-01-01
An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
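A minimal sketch of the penalty-function strategy described above, on a hypothetical two-variable problem rather than the wing model: the constrained problem is converted into a sequence of unconstrained ones with an increasing penalty parameter, and each is solved with Newton's method:

```python
import numpy as np

# Constrained toy problem: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0.  Exact optimum: x = (0.5, 0.5).
def penalized_grad_hess(x, r):
    g = 1.0 - x[0] - x[1]
    grad = 2.0 * x
    hess = 2.0 * np.eye(2)
    if g > 0.0:  # quadratic penalty active only when constraint is violated
        grad += 2.0 * r * g * np.array([-1.0, -1.0])
        hess += 2.0 * r * np.ones((2, 2))
    return grad, hess

def penalty_newton(x0, r_schedule=(1.0, 10.0, 100.0, 1e4, 1e6)):
    x = np.asarray(x0, dtype=float)
    for r in r_schedule:          # gradually stiffen the penalty
        for _ in range(20):       # Newton iterations on the penalized problem
            grad, hess = penalized_grad_hess(x, r)
            step = np.linalg.solve(hess, -grad)
            x = x + step
            if np.linalg.norm(step) < 1e-12:
                break
    return x

x_opt = penalty_newton([0.0, 0.0])
```

As the penalty parameter grows, the unconstrained minimizers approach the constrained optimum from the infeasible side; the "extended" formulation in the paper refines this basic scheme to avoid the ill-conditioning that large penalties cause.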
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
An integrated optimum design approach for high speed prop rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Mccarthy, Thomas R.
1995-01-01
The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose: To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods: The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-science methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray-tracing using Snell's law. Optical components can be calculated using numerical optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results: The more complex the calculated IOL is, the lower the residual wavefront error is. Spherical IOLs are only able to correct for the defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated in some device. Conclusions: The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications like IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as shown exemplarily by calculating customized aspheric IOLs.
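The real ray-tracing mentioned in the Methods reduces, at each refracting surface, to Snell's law in vector form. A sketch follows; the indices and geometry are illustrative stand-ins, not the paper's eye model:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    against the incoming ray), using Snell's law in vector form."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# 30-degree incidence from air (n = 1.0) into a corneal-like medium
# (n ~ 1.376, a commonly quoted refractive index, used here illustratively).
d_in = np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0))])
t = refract(d_in, np.array([0.0, 1.0]), 1.0, 1.376)
```

Tracing bundles of such rays through every surface of the pseudophakic eye, instead of the paraxial small-angle approximation, is what lets the model capture wavefront aberration rather than defocus alone.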
Visualizing deep neural network by alternately image blurring and deblurring.
Wang, Feng; Liu, Haijun; Cheng, Jian
2018-01-01
Visualization from trained deep neural networks has drawn massive public attention in recent years. One of the visualization approaches is to train images that maximize the activation of specific neurons. However, directly maximizing the activation would lead to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two totally inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous methods in the visualizations. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand the neural networks utilizing the knowledge obtained by the visualization.
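A toy, numpy-only sketch of the alternating blur/deblur regularization idea: the 1D "neuron" and all constants are invented stand-ins for a real network, and deblurring is approximated by unsharp masking rather than the paper's exact inverse transformation:

```python
import numpy as np

rng = np.random.default_rng(1)

def blur(img):
    # Simple 3-tap Gaussian-like blur with edge padding.
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, 1, mode="edge")
    return k[0] * p[:-2] + k[1] * p[1:-1] + k[2] * p[2:]

def deblur(img, strength=0.5):
    # Unsharp masking as a cheap stand-in for a true deblurring operator.
    return img + strength * (img - blur(img))

# Toy linear "neuron": responds to a smooth bump template.
n = 64
template = np.exp(-0.5 * ((np.arange(n) - n / 2) / 6.0) ** 2)

def activation(x):
    return float(template @ x)

x = rng.normal(0.0, 0.1, n)          # start from noise
for step in range(200):
    x += 0.01 * template             # gradient ascent on the linear activation
    x = deblur(blur(x))              # alternate blur / deblur regularization
    x = np.clip(x, -1.0, 1.0)
```

With the smoothing in the loop, the optimized input converges to a recognizable version of the template rather than the high-frequency noise that unconstrained activation maximization produces.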
Design of Quiet Rotorcraft Approach Trajectories
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Burley, Casey L.; Boyd, D. Douglas, Jr.; Marcolini, Michael A.
2009-01-01
An optimization procedure for identifying quiet rotorcraft approach trajectories is proposed and demonstrated. The procedure employs a multi-objective genetic algorithm in order to reduce noise and create approach paths that will be acceptable to pilots and passengers. The concept is demonstrated by application to two different helicopters. The optimized paths are compared with one another and with a standard 6-deg approach path. The two demonstration cases validate the optimization procedure but highlight the need for improved noise prediction techniques and for additional rotorcraft acoustic data sets.
Utilization of group theory in studies of molecular clusters
NASA Astrophysics Data System (ADS)
Ocak, Mahir E.
The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure, already known for direct product groups, was extended to the case of semidirect product groups. Using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, is developed. In the MBR method, calculations start with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. An optimized basis for each identical monomer is then generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained by using this basis. Since an optimized basis is used for each monomer, which is much smaller than the primitive basis from which it is generated, the MBR method leads to an exponential reduction in the size of the basis required for the calculations. Application of the MBR method is illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, the MBR method can be used for studying many-body terms and for deriving accurate potential surfaces.
NASA Technical Reports Server (NTRS)
Korte, J. J.; Auslender, A. H.
1993-01-01
A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.
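The coupling of a flow solver to the optimizer is beyond a snippet, but the optimizer class the abstract names can be sketched in generic form. Here Gauss-Newton, a standard nonlinear least-squares method, fits a hypothetical two-parameter exponential model to synthetic noise-free data:

```python
import numpy as np

# Gauss-Newton for nonlinear least squares.  Toy model: y = a * exp(b * x);
# in the paper's setting the residuals would instead come from the
# flow-solver evaluation of the spline-defined wall contour.
def residuals(p, x, y):
    a, b = p
    return a * np.exp(b * x) - y

def jacobian(p, x):
    a, b = p
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])

def gauss_newton(p0, x, y, iters=50, tol=1e-12):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residuals(p, x, y)
        J = jacobian(p, x)
        # Solve the linearized least-squares subproblem J step = -r.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)           # synthetic data with a = 2, b = -1.5
p_fit = gauss_newton([1.0, -1.0], x, y)
```

Each Gauss-Newton iteration only needs residuals and their Jacobian, which is why this optimizer family pairs naturally with an expensive black-box flow solver.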
Optimal Parameters for Intervertebral Disk Resection Using Aqua-Plasma Beams.
Yoon, Sung-Young; Kim, Gon-Ho; Kim, Yushin; Kim, Nack Hwan; Lee, Sangheon; Kawai, Christina; Hong, Youngki
2018-06-14
A minimally invasive procedure for intervertebral disk resection using plasma beams has been developed. Conventional parameters for the plasma procedure, such as voltage and tip speed, mainly rely on the surgeon's personal experience, without adequate evidence from experiments. Our objective was to determine the optimal parameters for plasma disk resection. The rate of ablation was measured at different procedural tip speeds and voltages using porcine nuclei pulposi. The amount of heat formation during experimental conditions was also measured to evaluate the thermal safety of the plasma procedure. The ablation rate increased at slower procedural speeds and higher voltages. However, for thermal safety, the optimal parameters for plasma procedures with minimal tissue damage were an electrical output of 280 volts root-mean-square (Vrms) and a procedural tip speed of 2.5 mm/s. Our findings provide useful information for an effective and safe plasma procedure for disk resection in a clinical setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonkopi, E; Lightfoot, C; LeBlanc, E
Purpose: The rising complexity of interventional fluoroscopic procedures has resulted in an increase of occupational radiation exposures in the interventional radiology (IR) department. This study assessed the impact of ancillary shielding on optimizing radiation protection for the IR staff. Methods: Scattered radiation measurements were performed in two IR suites equipped with Axiom Artis systems (Siemens Healthcare, Erlangen, Germany) installed in 2006 and 2010. Both rooms had suspended ceiling-mounted lead-acrylic shields of 75×60 cm (Mavig, Munich, Germany) with lead equivalency of 0.5 mm, and under-table drapes of 70×116 cm and 65×70 cm in the newer and the older room, respectively. The larger skirt can be wrapped around the table's corner, and in addition the newer suite had two upper shields of 25×55 cm and 25×35 cm. The patient was simulated by 30 cm of acrylic; air kerma rate (AKR) was measured with a 180 cc ionization chamber (AccuPro, Radcal Corporation, Monrovia, CA, USA) at different positions. The ancillary shields, x-ray tube, image detector, and table height were adjusted by the IR radiologist to simulate various clinical setups. The same exposure parameters were used for all acquisitions. AKR measurements were made at different positions relative to the operator. Results: The AKR measurements demonstrated 91–99% x-ray attenuation by the drapes in both suites. The smaller size of the under-table skirt and the absence of side-drapes in the older room resulted in a 20–50 fold increase of scattered radiation to the operator. The mobile suspended lead-acrylic shield reduced AKR by 90–94% measured at 150–170 cm height. Recommendations were made to replace the smaller under-table skirt and to use the ceiling-mounted shields for all IR procedures. Conclusion: The ancillary shielding may significantly affect radiation exposure to the IR staff.
The use of suspended ceiling-mounted shields is especially important for reduction of interventional radiologists' cranial radiation.
Aerodynamic Design Using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.
2003-01-01
The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.
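A minimal illustration of the response-surface idea: a 1D polynomial surrogate is fitted to a hypothetical expensive simulation and minimized in closed form. The real procedure uses neural networks alongside polynomials and operates in high-dimensional, partitioned design spaces; everything here is an invented stand-in:

```python
import numpy as np

def expensive_simulation(x):
    # Stand-in for a costly CFD evaluation of the design objective:
    # a quadratic trend with a small high-frequency perturbation.
    return (x - 0.7) ** 2 + 0.02 * np.sin(8.0 * x)

# Sample the design space sparsely and fit a quadratic response surface.
xs = np.linspace(0.0, 1.0, 9)
ys = expensive_simulation(xs)
c = np.polyfit(xs, ys, 2)            # surrogate: c[0]*x^2 + c[1]*x + c[2]

# Minimize the cheap surrogate in closed form (vertex of the parabola);
# the true procedure would iterate: refit locally, re-sample, repeat.
x_star = -c[1] / (2.0 * c[0])
```

The surrogate minimum lands near the true optimum at a fraction of the cost of optimizing the simulation directly, which is the economy the abstract's variable-fidelity infrastructure exploits.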
Aerodynamic shape optimization using preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1993-01-01
In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA 0012) airfoil in inviscid transonic flow and at zero angle of attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
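The preconditioned conjugate gradient machinery the abstract credits can be sketched in its generic form. Below, a Jacobi-preconditioned CG solves a deliberately ill-scaled symmetric positive-definite test system; this is the linear-algebra kernel, not the aerodynamic sensitivity equations themselves:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for SPD systems A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r               # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate search direction update
        rz = rz_new
    return x

# SPD test system: a 1D Laplacian made badly scaled by a varying diagonal.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = np.diag(np.linspace(1.0, 100.0, n))
A = D @ A @ D
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

The preconditioner rescales the worst-conditioned directions, which is what produces the large iteration-count (and hence wall-time) savings reported in the abstract.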
Radio Frequency Ablation Registration, Segmentation, and Fusion Tool
McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.
2008-01-01
The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
[Why is brachytherapy still essential in 2017?]
Haie-Méder, C; Maroun, P; Fumagalli, I; Lazarescu, I; Dumas, I; Martinetti, F; Chargari, C
2018-05-16
In recent years, brachytherapy has benefited from advances in imaging modalities. More systematic use of computed tomography, ultrasonography and MRI images during brachytherapy procedures has improved the assessment of the target and organs at risk, as well as of their relationship with the applicators. New concepts integrating tumor regression during treatment have been defined and clinically validated. New applicators have been developed and are commercially available. Optimization processes integrating hypofractionation modalities have been developed, leading to improved tumor control. All these opportunities have led to further development of brachytherapy, with indisputable ballistic advantages, especially compared with external irradiation.
Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Mantay, Wayne R.
1989-01-01
Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.
The importance of hydration thermodynamics in fragment-to-lead optimization.
Ichihara, Osamu; Shimada, Yuzo; Yoshidome, Daisuke
2014-12-01
Using a computational approach to assess changes in solvation thermodynamics upon ligand binding, we investigated the effects of water molecules on the binding energetics of over 20 fragment hits and their corresponding optimized lead compounds. Binding activity and X-ray crystallographic data of published fragment-to-lead optimization studies from various therapeutically relevant targets were studied. The analysis reveals a distinct difference between the thermodynamic profile of water molecules displaced by fragment hits and those displaced by the corresponding optimized lead compounds. Specifically, fragment hits tend to displace water molecules with notably unfavorable excess entropies (configurationally constrained water molecules) relative to those displaced by the newly added moieties of the lead compound during the course of fragment-to-lead optimization. Herein we describe the details of this analysis with the goal of providing practical guidelines for exploiting thermodynamic signatures of binding site water molecules in the context of fragment-to-lead optimization.
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to an optimized CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
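A rough, self-contained caricature of the spectral-gap idea: toy 2D Langevin dynamics whose only slow motion is barrier crossing along x, with candidate 1D CVs scored by the eigenvalue gap of a discretized Markov model. The binning, lag and gap definition below are simplifications of the paper's maximum-path-entropy estimate, and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdamped Langevin trajectory on V(x, y) = (x^2 - 1)^2 + y^2 / 2:
# slow barrier crossing along x, fast harmonic relaxation along y.
dt, kT, n_steps = 0.01, 0.4, 100_000
noise = rng.normal(size=(n_steps, 2)) * np.sqrt(2.0 * kT * dt)
traj = np.empty((n_steps, 2))
p = np.array([1.0, 0.0])
for t in range(n_steps):
    force = np.array([-4.0 * p[0] * (p[0] ** 2 - 1.0), -p[1]])
    p = p + dt * force + noise[t]
    traj[t] = p

def spectral_gap(s, n_bins=15, lag=10):
    """Eigenvalue gap of a Markov model built on a candidate 1D CV s."""
    # Equal-occupancy bins avoid empty rows in the transition matrix.
    edges = np.quantile(s, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, s) - 1, 0, n_bins - 1)
    T = np.zeros((n_bins, n_bins))
    np.add.at(T, (idx[:-lag], idx[lag:]), 1.0)
    T /= T.sum(axis=1, keepdims=True)
    lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return lam[1] - lam[2]   # gap below the slowest nontrivial mode

# Score candidate CVs s(theta) = x cos(theta) + y sin(theta).
gaps = {th: spectral_gap(traj @ np.array([np.cos(th), np.sin(th)]))
        for th in (0.0, np.pi / 4, np.pi / 2)}
```

A CV aligned with the slow coordinate (theta = 0) isolates one eigenvalue near 1 and shows a large gap to the fast modes, while a CV along the fast y coordinate does not, so maximizing the gap selects the better reaction coordinate.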
Multi-objective/loading optimization for rotating composite flexbeams
NASA Technical Reports Server (NTRS)
Hamilton, Brian K.; Peters, James R.
1989-01-01
With the evolution of advanced composites, the feasibility of designing bearingless rotor systems for high speed, demanding maneuver envelopes, and high aircraft gross weights has become a reality. These systems eliminate the need for hinges and heavily loaded bearings by incorporating a composite flexbeam structure which accommodates flapping, lead-lag, and feathering motions by bending and twisting while reacting full blade centrifugal force. The flight characteristics of a bearingless rotor system are largely dependent on hub design, and the principal element in this type of system is the composite flexbeam. As in any hub design, trade-off studies must be performed in order to optimize performance, dynamics (stability), handling qualities, and stresses. However, since the flexbeam structure is the primary component which will determine the balance of these characteristics, its design and fabrication are not straightforward. It was concluded that: pitchcase and snubber damper representations are required in the flexbeam model for proper sizing driven by dynamic requirements; optimization is necessary for flexbeam design, since it reduces the design iteration time and results in an improved design; and inclusion of multiple flight conditions and their corresponding fatigue allowables is necessary for the optimization procedure.
78 FR 53237 - Establishment of Area Navigation (RNAV) Routes; Washington, DC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... ``Optimization of Airspace and Procedures in a Metroplex (OAPM)'' effort in that this rule did not include T.... The new routes support the Washington, DC Optimization of Airspace and Procedures in a Metroplex (OAPM...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guillermin, M.; Colombier, J. P.; Audouard, E.
2010-07-15
With an interest in pulsed laser deposition and remote spectroscopy techniques, we explore here the potential of laser pulses temporally tailored on ultrafast time scales to control the expansion and the excitation degree of various ablation products including atomic species and nanoparticulates. Taking advantage of automated pulse-shaping techniques, an adaptive procedure based on spectroscopic feedback is applied to regulate the irradiance and enhance the optical emission of monocharged aluminum ions with respect to the neutral signal. This leads to optimized pulses usually consisting of a series of femtosecond peaks distributed on a longer picosecond sequence. The ablation features induced by the optimized pulse are compared with those determined by picosecond pulses generated by imposed second-order dispersion or by double pulse sequences with adjustable picosecond separation. This allows analysis of the influence of fast- and slow-varying envelope features on the material heating and the resulting plasma excitation degree. Using various optimal pulse forms including designed asymmetric shapes, we analyze the establishment of surface pre-excitation that enables conditions of enhanced radiation coupling. Thin films elaborated by unshaped femtosecond laser pulses and by optimized, stretched, or double pulse sequences are compared, indicating that the nanoparticle generation efficiency is strongly influenced by the temporal shaping of the laser irradiation. A thermodynamic scenario involving supercritical heating is proposed to explain enhanced ionization rates and lower particulates density for optimal pulses. Numerical one-dimensional hydrodynamic simulations for the excited matter support the interpretation of the experimental results in terms of relative efficiency of various relaxation paths for excited matter above or below the thermodynamic stability limits.
The calculation results underline the role of the temperature and density gradients along the ablated plasma plume, which lead to spatially distinct locations of the excited species. Moreover, the nanoparticle sizes are computed based on liquid layer ejection followed by a Rayleigh-Taylor instability decomposition, in good agreement with the experimental findings.
Multiparameter optimization of mammography: an update
NASA Astrophysics Data System (ADS)
Jafroudi, Hamid; Muntz, E. P.; Jennings, Robert J.
1994-05-01
Previously in this forum we have reported the application of multiparameter optimization techniques to the design of a minimum dose mammography system. The approach used a reference system to define the physical imaging performance required and the dose against which the dose of the optimized system would be compared. During the course of implementing the resulting design in hardware suitable for laboratory testing, the state of the art in mammographic imaging changed, so that the original reference system, which did not have a grid, was no longer appropriate. A reference system with a grid was selected in response to this change, and at the same time the optimization procedure was modified to make it more general and to facilitate study of the optimized design under a variety of conditions. We report the changes in the procedure and the results obtained using the revised procedure and the up-to-date reference system. Our results, which are supported by laboratory measurements, indicate that the optimized design can image small objects as well as the reference system using only about 30% of the dose required by the reference system. Hardware meeting the specification produced by the optimization procedure and suitable for clinical use is currently under evaluation in the Diagnostic Radiology Department at the Clinical Center, NIH.
Distributed Method to Optimal Profile Descent
NASA Astrophysics Data System (ADS)
Kim, Geun I.
Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile change to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile change while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to a pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates its solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
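Dual decomposition of this kind can be illustrated on a toy problem; the quadratic per-aircraft costs and the single coupling constraint below are illustrative assumptions, not the paper's cost functions. Each "aircraft" minimizes its own cost given a shared price, and the price is updated by dual ascent until the coupling constraint (here, a fixed total) is met.

```python
import numpy as np

def dual_decomposition(a, C, steps=200, lr=0.5):
    """Each agent i minimizes (x_i - a_i)^2 + lam * x_i locally; a shared
    price lam is updated by dual ascent to enforce sum(x) == C."""
    lam = 0.0
    for _ in range(steps):
        x = a - lam / 2.0          # local minimizers, computed independently
        lam += lr * (x.sum() - C)  # subgradient step on the coupling constraint
    return x

a = np.array([3.0, 5.0, 10.0])   # each agent's unconstrained preference
x = dual_decomposition(a, C=12.0)
```

For this separable quadratic case the analytic optimum shifts every preference by the same amount, `(C - sum(a)) / n`, which the price iteration recovers.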
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
Adaptive feature selection using v-shaped binary particle swarm optimization.
Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong
2017-01-01
Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
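The V-shaped update at the heart of this method can be sketched compactly. In V-shaped binary PSO, velocities follow the usual PSO rule, but instead of thresholding position directly, each bit flips with probability given by a V-shaped transfer function of its velocity. The snippet below is a minimal sketch of one update step (the `tanh`-based transfer function and all coefficients are common illustrative choices, not necessarily the paper's exact settings).

```python
import numpy as np

def v_transfer(v):
    """V-shaped transfer function mapping a velocity to a flip probability."""
    return np.abs(np.tanh(v))

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of V-shaped binary PSO: velocities follow
    the standard PSO rule, then each bit flips with probability v_transfer(v)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    flip = rng.random(x.shape) < v_transfer(v)
    x = np.where(flip, 1.0 - x, x)  # flip selected bits, keep the rest
    return x, v

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 8).astype(float)  # a candidate feature subset (bitmask)
v = np.zeros(8)
x, v = bpso_step(x, v, pbest=np.ones(8), gbest=np.ones(8), rng=rng)
```

Because zero velocity gives zero flip probability, a particle that agrees with both its personal and global bests tends to stay put, which is the behavioral difference from S-shaped variants.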
Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Gu, L.
2016-12-01
Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for the CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in the CLM4.5 for the US-MOz site.
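The surrogate-based workflow can be sketched in one dimension. The snippet below substitutes a plain least-squares polynomial fit for true sparse-grid interpolation, and a cheap analytic function stands in for a CLM run; both are illustrative assumptions. The structure is the point: evaluate the expensive model only at a small fixed set of nodes, then optimize the cheap surrogate densely.

```python
import numpy as np

def expensive_model(p):
    """Stand-in for a costly CLM simulation: returns a scalar misfit."""
    return (p - 0.3) ** 2 + 0.05 * np.sin(8 * p)

# 1. evaluate the expensive model at a small, fixed set of nodes
nodes = np.linspace(0.0, 1.0, 9)
values = expensive_model(nodes)

# 2. fit a cheap polynomial surrogate to those evaluations
surrogate = np.poly1d(np.polyfit(nodes, values, deg=6))

# 3. optimize the surrogate densely -- each evaluation is now trivial
grid = np.linspace(0.0, 1.0, 10001)
p_opt = grid[np.argmin(surrogate(grid))]
```

Nine "simulations" buy ten thousand surrogate evaluations, which is the economy the abstract describes; in the real study the surrogate is multivariate and built on sparse-grid nodes rather than a uniform line.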
An effective parameter optimization with radiation balance constraints in the CAM5
NASA Astrophysics Data System (ADS)
Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.
2017-12-01
Uncertain parameters in physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so simulations run with the resulting optimal parameters may violate conditions that the model must satisfy. In this study, the radiation balance constraint is taken as an example and is incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this optimization problem with constraints. In our experiment, we use the CAM5 atmosphere model under a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We consider a synthesized metric using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously consider the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are set to be approximately equal to 240 Wm-2 in CAM5. Experiment results show that the synthesized metric is 13.6% better than the control run. At the same time, both FLUT and FSNTOA are close to their constraint targets. The FLUT condition is well satisfied, clearly better than the average annual FLUT obtained with the default parameters. The FSNTOA has a slight deviation from the observed value, but the relative error is less than 7.7‰.
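The Lagrangian multiplier approach to constrained tuning can be sketched on a toy problem; the quadratic loss and linear constraint below are illustrative assumptions standing in for the model-skill metric and the radiation-balance condition. The sketch uses an augmented-Lagrangian variant (gradient steps on the primal variables interleaved with multiplier updates), a standard way to make the plain multiplier method numerically stable.

```python
import numpy as np

def constrained_minimize(loss_grad, g, g_grad, p0, steps=2000, lr=0.01, rho=10.0):
    """Augmented-Lagrangian sketch: gradient steps on
    L(p) = loss(p) + lam * g(p) + (rho / 2) * g(p)^2,
    interleaved with multiplier (dual) updates lam += rho * g(p)."""
    p, lam = np.array(p0, dtype=float), 0.0
    for _ in range(steps):
        grad = loss_grad(p) + (lam + rho * g(p)) * g_grad(p)
        p -= lr * grad
        lam += rho * g(p)
    return p

# toy stand-ins: minimize (p1 - 1)^2 + (p2 - 2)^2 subject to p1 + p2 = 2
loss_grad = lambda p: 2.0 * (p - np.array([1.0, 2.0]))
g = lambda p: p.sum() - 2.0        # constraint g(p) = 0
g_grad = lambda p: np.ones(2)
p = constrained_minimize(loss_grad, g, g_grad, p0=[0.0, 0.0])
```

The KKT solution here is p = (0.5, 1.5) with multiplier 1: the constraint pulls both parameters equally away from their unconstrained optima, just as the radiation-balance constraint trades off against the skill metric in the abstract.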
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.
1998-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
Helicopter Flight Procedures for Community Noise Reduction
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2017-01-01
A computationally efficient, semiempirical noise model suitable for maneuvering flight noise prediction is used to evaluate the community noise impact of practical variations on several helicopter flight procedures typical of normal operations. Turns, "quick-stops," approaches, climbs, and combinations of these maneuvers are assessed. Relatively small variations in flight procedures are shown to cause significant changes to Sound Exposure Levels over a wide area. Guidelines are developed for helicopter pilots intended to provide effective strategies for reducing the negative effects of helicopter noise on the community. Finally, direct optimization of flight trajectories is conducted to identify low noise optimal flight procedures and quantify the magnitude of community noise reductions that can be obtained through tailored helicopter flight procedures. Physically realizable optimal turns and approaches are identified that achieve global noise reductions of as much as 10 dBA Sound Exposure Level.
Patient-specific simulation in carotid artery stenting.
Willaert, Willem; Aggarwal, Rajesh; Bicknell, Colin; Hamady, Mo; Darzi, Ara; Vermassen, Frank; Cheshire, Nicholas
2010-12-01
Patient-specific virtual reality (VR) simulation is a technologic advancement that allows planning and practice of the carotid artery stenting (CAS) procedure before it is performed on the patient. The initial findings are reported, using this novel VR technique as a tool to optimize technical and nontechnical aspects of this complex endovascular procedure. In the angiography suite, the same interventional team performed the VR rehearsal and the actual CAS on the patient. All proceedings were recorded to allow for video analysis of team, technical, and nontechnical skills. Analysis of both procedures showed identical use of endovascular tools, similar access strategy, and a high degree of similarity between the angiography images. The total procedure time (24.04 vs 60.44 minutes), fluoroscopy time (11.19 vs 21.04 minutes), and cannulation of the common carotid artery (1.35 vs 9.34) took considerably longer in reality. An extensive questionnaire revealed that all team members found that the rehearsal increased the subjective sense of teamwork (4/5), communication (4/5), and patient safety (4/5). A VR procedure rehearsal is a practical and feasible preparatory tool for CAS and shows a high correlation with the real procedure. It has the potential to enhance the technical, nontechnical, and team performance. Further research is needed to evaluate if this technology can lead to improved outcomes for patients. Copyright © 2010 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Reul, Ross M.; Ramchandani, Mahesh K.; Reardon, Michael J.
2017-01-01
Surgical aortic valve replacement is the gold standard procedure to treat patients with severe, symptomatic aortic valve stenosis or insufficiency. Bioprosthetic valves are used for surgical aortic valve replacement with a much greater prevalence than mechanical valves. However, bioprosthetic valves may fail over time because of structural valve deterioration; this often requires intervention due to severe bioprosthetic valve stenosis or regurgitation or a combination of both. In select patients, transcatheter aortic valve replacement is an alternative to surgical aortic valve replacement. Transcatheter valve-in-valve (ViV) replacement is performed by implanting a transcatheter heart valve within a failing bioprosthetic valve. The transcatheter ViV operation is a less invasive procedure compared with reoperative surgical aortic valve replacement, but it has been associated with specific complications and requires extensive preoperative work-up and planning by the heart team. Data from experimental studies and analyses of results from clinical procedures have led to strategies to improve outcomes of these procedures. The type, size, and implant position of the transcatheter valve can be optimized for individual patients with knowledge of detailed dimensions of the surgical valve and radiographic and echocardiographic measurements of the patient's anatomy. Understanding the complexities of the ViV procedure can lead surgeons to make choices during the original surgical valve implantation that can make a future ViV operation more technically feasible years before it is required. PMID:29743998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernal, Andrés; Patiny, Luc; Castillo, Andrés M.
2015-02-21
Nuclear magnetic resonance (NMR) assignment of small molecules is presented as a typical example of a combinatorial optimization problem in chemical physics. Three strategies that help improve the efficiency of solution search by the branch and bound method are presented: (1) reduction of the size of the solution space by resorting to a condensed structure formula, wherein symmetric nuclei are grouped together; (2) partitioning of the solution space based on symmetry, which becomes the basis for an efficient branching procedure; and (3) a criterion for selecting input restrictions that leads to increased gaps between branches and thus faster pruning of non-viable solutions. Although the examples chosen to illustrate this work focus on small-molecule NMR assignment, the results are generic and might help solving other combinatorial optimization problems.
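The branch and bound pattern the abstract relies on can be shown generically. Below, an assignment problem (e.g., peaks to nuclei) stands in for the NMR task; the 3x3 cost matrix is an invented example. Partial assignments are extended row by row, and a branch is pruned as soon as an optimistic lower bound (the cheapest still-available column in each remaining row) already matches or exceeds the best complete solution found.

```python
def branch_and_bound(cost):
    """Minimal-cost one-to-one assignment by branch and bound."""
    n = len(cost)
    best = [float("inf"), None]  # [best cost, best assignment]

    def bound(row, used, acc):
        # optimistic completion: cheapest unused column in each remaining row
        return acc + sum(min(cost[r][c] for c in range(n) if c not in used)
                         for r in range(row, n))

    def rec(row, used, acc, assign):
        if acc >= best[0] or bound(row, used, acc) >= best[0]:
            return  # prune: this branch cannot beat the incumbent
        if row == n:
            best[:] = [acc, assign[:]]
            return
        for c in range(n):
            if c not in used:
                rec(row + 1, used | {c}, acc + cost[row][c], assign + [c])

    rec(0, frozenset(), 0, [])
    return best

cost = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]  # invented example costs
best_cost, assignment = branch_and_bound(cost)
```

Tighter bounds and symmetry-aware branching, the paper's strategies (2) and (3), widen the gap between `bound(...)` and the incumbent, so pruning fires earlier.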
34 CFR 303.400 - General responsibility of lead agency for procedural safeguards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... intervention services under this part; and (c) Make available to parents an initial copy of the child's early... 34 Education 2 2012-07-01 2012-07-01 false General responsibility of lead agency for procedural... responsibility of lead agency for procedural safeguards. Subject to paragraph (c) of this section, each lead...
34 CFR 303.400 - General responsibility of lead agency for procedural safeguards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... intervention services under this part; and (c) Make available to parents an initial copy of the child's early... 34 Education 2 2014-07-01 2013-07-01 true General responsibility of lead agency for procedural... responsibility of lead agency for procedural safeguards. Subject to paragraph (c) of this section, each lead...
34 CFR 303.400 - General responsibility of lead agency for procedural safeguards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... intervention services under this part; and (c) Make available to parents an initial copy of the child's early... 34 Education 2 2013-07-01 2013-07-01 false General responsibility of lead agency for procedural... responsibility of lead agency for procedural safeguards. Subject to paragraph (c) of this section, each lead...
Design of Three-Dimensional Hypersonic Inlets with Rectangular to Elliptical Shape Transition
NASA Technical Reports Server (NTRS)
Smart, M. K.
1998-01-01
A methodology has been devised for the design of three-dimensional hypersonic inlets which include a rectangular to elliptical shape transition. This methodology makes extensive use of inviscid streamtracing techniques to generate a smooth shape transition from a rectangular-like capture to an elliptical throat. Highly swept leading edges and a significantly notched cowl enable use of these inlets in fixed geometry configurations. The design procedure includes a three dimensional displacement thickness calculation and uses established correlations to check for boundary layer separation due to shock wave interactions. Complete details of the design procedure are presented and the characteristics of a modular inlet with rectangular to elliptical shape transition and a design point of Mach 7.1 are examined. Comparison with a classical two-dimensional inlet optimized for maximum total pressure recovery indicates that this three-dimensional inlet demonstrates good performance even well below its design point.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on the punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
NASA Technical Reports Server (NTRS)
Korte, John J.
1992-01-01
A new procedure, unifying the best of current classical design practices with CFD and optimization techniques, is demonstrated for designing the aerodynamic lines of hypersonic wind tunnel nozzles. This procedure can be employed to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been demonstrated to break down. This procedure allows full utilization of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, may be used to design new nozzles or to improve sections of existing ones, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure.
Optimization of a Tube Hydroforming Process
NASA Astrophysics Data System (ADS)
Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny
2004-06-01
An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.
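The GA search described above can be sketched with a minimal real-coded genetic algorithm. Everything below is an illustrative stand-in, not the HEEDS/LS-DYNA setup: the two design variables play the role of internal pressure and feed, and a simple penalized product stands in for the expansion objective and the forming limit.

```python
import random

def genetic_search(fitness, bounds, pop_size=40, gens=60, mut=0.2, seed=0):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, with the two best individuals carried over (elitism).
    `bounds` is a list of (lo, hi) pairs, one per design variable."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]  # elitism: keep the two best designs
        while len(nxt) < pop_size:
            p1, p2 = (max(rng.sample(scored, 3), key=fitness) for _ in range(2))
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            if rng.random() < mut:
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy stand-in objective: "expansion" grows with pressure and feed but is
# heavily penalized past a hypothetical forming limit pressure + 2*feed <= 3
def fitness(x):
    pressure, feed = x
    expansion = pressure * feed
    return expansion - 100.0 * max(0.0, pressure + 2.0 * feed - 3.0)

best = genetic_search(fitness, bounds=[(0.0, 3.0), (0.0, 1.5)])
```

The heavy penalty term plays the role of the forming limit diagram check: infeasible designs survive selection only if they barely violate the limit, so the search converges toward the constrained optimum.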
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
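The iterative improvement idea, repeatedly replacing each internal node's sequence by the median of its three neighbors, can be sketched under a strong simplification: equal-length sequences and Hamming distance (no indels), where the three-sequence median reduces to a per-position majority vote. The 4-leaf tree and sequences below are invented examples; real iterative pass optimization works on aligned-length-free sequences with edit costs.

```python
def hamming(a, b):
    """Hamming distance between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def median3(a, b, c):
    """Median of three equal-length sequences under Hamming distance:
    per-position majority (ties broken toward the first sequence)."""
    out = []
    for x, y, z in zip(a, b, c):
        out.append(x if (x == y or x == z) else (y if y == z else x))
    return "".join(out)

def iterative_pass(leaves, internal, iters=10):
    """Sketch of iterative improvement on an unrooted 4-leaf tree with two
    internal nodes u, v (u joins leaves 0,1 and v; v joins leaves 2,3 and u):
    replace each internal sequence by the median of its three neighbors."""
    u, v = internal
    for _ in range(iters):
        u = median3(leaves[0], leaves[1], v)
        v = median3(leaves[2], leaves[3], u)
    cost = (hamming(leaves[0], u) + hamming(leaves[1], u) + hamming(u, v) +
            hamming(leaves[2], v) + hamming(leaves[3], v))
    return u, v, cost

leaves = ["ACGTAC", "ACGTTC", "AGGTAC", "AGGATC"]  # invented leaf sequences
u, v, cost = iterative_pass(leaves, ("AAAAAA", "AAAAAA"))
```

Starting from arbitrary internal sequences, the passes settle after one or two sweeps on this toy tree; on real data the same loop is run over every internal node of the cladogram until the total cost stops improving.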
Energy optimization in mobile sensor networks
NASA Astrophysics Data System (ADS)
Yu, Shengwei
Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially when mobility (i.e., locomotion control), routing (i.e., communications) and sensing are unique characteristics of mobile robots for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from stations to stations in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. 
Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility. For the second problem, the problem is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, on the basis of which we propose a modified sequential convex approximation procedure with fast convergence. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of the robotic nodes and the optimal network links, questions not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications. We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates
Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung
2015-01-01
Natural products have made significant contributions to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve the ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization, and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, strategies to optimize natural leads into anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649
Meng, Jiang; Dong, Xiao-ping; Zhou, Yi-sheng; Jiang, Zhi-hong; Leung, Kelvin Sze-Yin; Zhao, Zhong-zhen
2007-02-01
The aim was to optimize the extraction of essential oil from H. cordata using supercritical CO2 fluid extraction (SFE-CO2) and to analyze the chemical composition of the oil. The extraction procedure for essential oil from fresh H. cordata was optimized with an orthogonal experiment, and the essential oil was analysed by GC-MS. The optimized preparative procedure was as follows: essential oil of H. cordata was extracted at a temperature of 35 degrees C and a pressure of 15,000 kPa for 20 min. Thirty-eight chemical components were identified and their relative contents quantified. The optimum preparative procedure is reliable and can guarantee the quality of the essential oil.
A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles
NASA Technical Reports Server (NTRS)
Korte, John J.
1993-01-01
A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15, a Mach 12, and a Mach 18 helium nozzles. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.
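The least-squares coupling at the heart of the procedure can be sketched in miniature. In place of the PNS solver, a linear surrogate maps spline design variables to a centerline Mach distribution, and the design variables are fit to a target profile reaching Mach 15 at the exit; the basis, coefficients, and target below are illustrative assumptions, not Korte's actual codes, which re-run the CFD solver at each design iteration.

```python
import numpy as np

# Toy stand-in for the PNS solver: a linearized map from nozzle-contour
# spline design variables to the centerline Mach distribution (hypothetical).
xi = np.linspace(0.0, 1.0, 50)                    # axial stations, throat->exit
basis = np.vstack([xi**p for p in range(4)]).T    # cubic-spline-like basis

def solve_flow(slopes):
    """'Computed' centerline Mach profile for a given set of design slopes."""
    return basis @ slopes

# Objective data: a target centerline Mach distribution (Mach 15 at the exit).
target = solve_flow(np.array([1.0, 30.0, -25.0, 9.0]))

# Least-squares fit of the design variables to the target profile.
slopes_opt, *_ = np.linalg.lstsq(basis, target, rcond=None)
residual = float(np.linalg.norm(solve_flow(slopes_opt) - target))
```

In the real procedure the relation is nonlinear, so the LS problem is re-linearized and solved repeatedly, with each "solve_flow" a full PNS computation.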
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet them. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
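The allocation trade-off the authors describe can be made concrete with a deliberately simple numerical model; every parameter and functional form below is an assumption for illustration, not the authors' model. Time at a point raises detection probability (shrinking undercount bias) but accrues an overcount bias (e.g., birds moving into the plot) and leaves time for fewer points, raising sampling variance; minimizing mean squared error then gives an interior optimal time-at-point.

```python
import numpy as np

RATE = 0.5      # per-minute detection rate (assumed)
BUDGET = 300.0  # total minutes available for counting (assumed)
TRAVEL = 5.0    # minutes of travel between points (assumed)
TRUE_N = 10.0   # true number of birds per plot (assumed)
INFLOW = 0.2    # overcount accrued per minute from bird movement (assumed)

def mse_of_estimate(t):
    """Mean squared error of the mean count for time-at-point t (minutes)."""
    p = 1.0 - np.exp(-RATE * t)             # detection probability
    k = BUDGET / (t + TRAVEL)               # number of points visited
    bias = INFLOW * t - TRUE_N * (1.0 - p)  # overcount minus undercount
    var = TRUE_N * p * (1.0 - p) / k        # within-count variance of the mean
    return bias ** 2 + var

ts = np.linspace(0.5, 15.0, 2000)
t_opt = float(ts[np.argmin([mse_of_estimate(t) for t in ts])])
```

Under these assumptions the two bias terms cancel near five minutes per point, so the scan returns an interior optimum rather than "count as long as possible".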
Schaffner, B; Kanai, T; Futami, Y; Shimbo, M; Urakabe, E
2000-04-01
The broad-beam three-dimensional irradiation system under development at National Institute of Radiological Sciences (NIRS) requires a small ridge filter to spread the initially monoenergetic heavy-ion beam to a small spread-out Bragg peak (SOBP). A large SOBP covering the target volume is then achieved by a superposition of differently weighted and displaced small SOBPs. Two approaches were studied for the definition of a suitable ridge filter and experimental verifications were performed. Both approaches show a good agreement between the calculated and measured dose and lead to a good homogeneity of the biological dose in the target. However, the ridge filter design that produces a Gaussian-shaped spectrum of the particle ranges was found to be more robust to small errors and uncertainties in the beam application. Furthermore, an optimization procedure for two fields was applied to compensate for the missing dose from the fragmentation tail for the case of a simple-geometry target. The optimized biological dose distributions show that a very good homogeneity is achievable in the target.
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
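The Löwdin-style projection can be sketched with a small overlap matrix: eigenvectors of S whose eigenvalues fall below a threshold span the near-null space responsible for near-linear dependences, and discarding them before forming S^(-1/2) yields a well-conditioned orthonormal combination of basis functions. The matrix and thresholds here are illustrative toy numbers, not the paper's LEDO auxiliary sets.

```python
import numpy as np

def lowdin_orthogonalize(S, thresh=1e-6):
    """Symmetric (Loewdin) orthogonalization with projection of the near-null
    space: eigenvectors of the overlap matrix S with eigenvalues below
    `thresh` are discarded, removing near-linear dependences from the basis."""
    w, V = np.linalg.eigh(S)
    keep = w > thresh
    X = V[:, keep] / np.sqrt(w[keep])   # S^(-1/2) on the retained subspace
    return X                            # columns: orthonormal combinations

# Overlap of three strongly overlapping functions (toy numbers)
S = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
X = lowdin_orthogonalize(S)             # keeps all three combinations
I = X.T @ S @ X                         # orthonormality check: identity
X_proj = lowdin_orthogonalize(S, thresh=0.5)   # drops the two soft modes
```

Raising the threshold trades a slightly smaller expansion space for a bounded condition number, which is what stabilizes the SCF iterations.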
Minimizing Postsampling Degradation of Peptides by a Thermal Benchtop Tissue Stabilization Method
Segerström, Lova; Gustavsson, Jenny
2016-01-01
Enzymatic degradation is a major concern in peptide analysis. Postmortem metabolism in biological samples entails considerable risk for measurements misrepresentative of true in vivo concentrations. It is therefore vital to find reliable, reproducible, and easy-to-use procedures to inhibit enzymatic activity in fresh tissues before subjecting them to qualitative and quantitative analyses. The aim of this study was to test a benchtop thermal stabilization method to optimize measurement of endogenous opioids in brain tissue. Endogenous opioid peptides are generated from precursor proteins through multiple enzymatic steps that include conversion of one bioactive peptide to another, often with a different function. Ex vivo metabolism may, therefore, lead to erroneous functional interpretations. The efficacy of heat stabilization was systematically evaluated in a number of postmortem handling procedures. Dynorphin B (DYNB), Leu-enkephalin-Arg6 (LARG), and Met-enkephalin-Arg6-Phe7 (MEAP) were measured by radioimmunoassay in rat hypothalamus, striatum (STR), and cingulate cortex (CCX). Simplified extraction protocols for stabilized tissue were also tested. Stabilization affected all peptide levels to varying degrees compared to those prepared by standard dissection and tissue handling procedures. Stabilization increased DYNB in hypothalamus, but not STR or CCX, whereas LARG generally decreased. MEAP increased in hypothalamus after all stabilization procedures, whereas for STR and CCX the effect depended on the time point of stabilization. The efficacy of stabilization allowed samples to be left for 2 hours at room temperature (20°C) without changes in peptide levels. This study shows that conductive heat transfer is an easy-to-use and efficient procedure for the preservation of the molecular composition of biological samples.
Region- and peptide-specific critical steps were identified and stabilization enabled the optimization of tissue handling and opioid peptide analysis. The result is improved diagnostic and research value of the samples with great benefits for basic research and clinical work. PMID:27007059
Morel, O; Monceau, E; Tran, N; Malartic, C; Morel, F; Barranger, E; Côté, J F; Gayat, E; Chavatte-Palmer, P; Cabrol, D; Tsatsaris, V
2009-06-01
To evaluate the efficiency and safety of radiofrequency (RF) for the ablation of retained placenta in humans, using a pregnant sheep model. Experimental study. Laboratory of Surgery School, Nancy, France. Three pregnant ewes/ten human placentas. Various RF procedures were tested in pregnant ewes on 50 placentomes (individual placental units). Reproducibility of the best procedure was then evaluated in a further 20 placentomes and on ten human term placentas in vitro after delivery. Outcome measures: placental tissue destruction, lesion size, and myometrial lesions. Low power (100 W) and low target temperatures (60 degrees C) led to homogeneous tissue destruction without myometrial lesions. No significant difference in lesion size or procedure duration was observed between the placentomes of pregnant ewes in vivo and the human placentas in vitro. The diameter of the ablation could be correlated with the tine deployment. Placental tissue is very permissive to RF energy, which suggests that RF could be used for the ablation of retained placenta, provided tissue destruction is optimally controlled. These results call for further experimental evaluations.
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation (SCAD) penalty and the adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite-sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190
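The SCAD penalty central to this procedure has a closed-form thresholding rule in the orthonormal-design case (Fan and Li's formula; the λ and a values used below are arbitrary illustrations): small coefficients are soft-thresholded to zero, mid-sized ones are shrunk less, and large ones are left untouched, which is what yields sparsity without over-penalizing strong signals.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding rule (orthonormal design):
       |z| <= 2*lam        -> soft threshold
       2*lam < |z| <= a*lam -> linearly interpolated shrinkage
       |z| > a*lam          -> no shrinkage (unbiased for large effects)."""
    z = np.asarray(z, dtype=float)
    return np.where(
        np.abs(z) <= 2 * lam,
        np.sign(z) * np.maximum(np.abs(z) - lam, 0.0),
        np.where(np.abs(z) <= a * lam,
                 ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),
                 z))
```

For lam = 1: a coefficient of 0.5 is zeroed, 1.5 is shrunk to 0.5, 3.0 falls in the interpolated region, and 5.0 passes through unchanged.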
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors written in FORTRAN generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine within a sequential linear programming procedure.
Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.
Wong, Christopher Yee; Mills, James K
2017-03-01
Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. Objective: develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos, with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD technique. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.
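The second optimization stage can be caricatured with a minimal real-coded genetic algorithm. The thermal model here is a made-up surrogate (heat reaching a blastomere at angle 0 decays with the pulse's angular distance along the ZP); the GA machinery — tournament selection, blend crossover, Gaussian mutation, elitism — is standard, but none of the constants correspond to the paper's thermal analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_temp(theta):
    """Hypothetical thermal surrogate: the blastomere sits at angle 0 on the
    ZP, and the heat it receives falls off with the pulse's angular distance."""
    d = np.minimum(np.abs(theta), 2 * np.pi - np.abs(theta))
    return 1.0 / (0.1 + d)          # minimized by ablating the opposite side

def ga_minimize(f, lo=0.0, hi=2 * np.pi, pop=60, gens=80):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitist survival of the best individual."""
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = f(x)
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where(fit[i] < fit[j], x[i], x[j])   # tournaments
        partners = rng.permutation(parents)
        w = rng.uniform(0.0, 1.0, pop)
        kids = w * parents + (1 - w) * partners           # blend crossover
        kids += rng.normal(0.0, 0.1, pop)                 # Gaussian mutation
        kids = np.clip(kids, lo, hi)
        kids[0] = x[np.argmin(fit)]                       # elitism
        x = kids
    return float(x[np.argmin(f(x))])
```

Under this surrogate the GA should place the pulse near the point diametrically opposite the blastomere (theta close to pi); in the paper the fitness is the thermal-analysis peak temperature and the genome also carries pulse durations.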
Shape optimization of road tunnel cross-section by simulated annealing
NASA Astrophysics Data System (ADS)
Sobótka, Maciej; Pachnicz, Michał
2016-06-01
The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca FLAC software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios; this factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
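A stripped-down version of the annealing loop shows how a fixed clearance gauge can be honored by rejecting infeasible moves. The contour is reduced to eight radial offsets and the energetic cost to a simple excavated-area surrogate — both stand-ins for the FLAC-computed cost in the paper, with all constants invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
GAUGE = 1.0                      # fixed clearance gauge: contour may not cut it

def cost(r):
    """Stand-in for the energetic optimality cost (the real cost comes from
    the FLAC stress solution); here: an excavated-area surrogate."""
    return float(np.sum(r ** 2))

def anneal(n=8, steps=4000, T0=1.0):
    """Simulated annealing over n radial contour offsets with a geometric
    cooling schedule; moves violating the clearance gauge are rejected."""
    r = np.full(n, 2.0)                          # start well outside the gauge
    c = cost(r)
    for k in range(steps):
        T = T0 * 0.998 ** k                      # geometric cooling
        cand = r.copy()
        i = rng.integers(n)
        cand[i] += rng.normal(0.0, 0.1)          # perturb one radial offset
        if cand[i] < GAUGE:                      # clearance gauge preserved:
            continue                             # infeasible move, reject
        dc = cost(cand) - c
        if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis criterion
            r, c = cand, cost(cand)
    return r, c
```

As the temperature falls the contour settles onto the gauge from outside, mirroring how the optimal shapes in the paper circumscribe the clearance gauge.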
Analysis of neighborhood behavior in lead optimization and array design.
Papadatos, George; Cooper, Anthony W J; Kadirkamanathan, Visakan; Macdonald, Simon J F; McLay, Iain M; Pickett, Stephen D; Pritchard, John M; Willett, Peter; Gillet, Valerie J
2009-02-01
Neighborhood behavior describes the extent to which small structural changes defined by a molecular descriptor are likely to lead to small property changes. This study evaluates two methods for the quantification of neighborhood behavior: the optimal diagonal method of Patterson et al. and the optimality criterion method of Horvath and Jeandenans. The methods are evaluated using twelve different types of fingerprint (both 2D and 3D) with screening data derived from several lead optimization projects at GlaxoSmithKline. The principal focus of the work is the design of chemical arrays during lead optimization, and the study hence considers not only biological activity but also important drug properties such as metabolic stability, permeability, and lipophilicity. Evidence is provided to suggest that the optimality criterion method may provide a better quantitative description of neighborhood behavior than the optimal diagonal method.
Zhang, Litao; Cvijic, Mary Ellen; Lippy, Jonathan; Myslik, James; Brenner, Stephen L; Binnie, Alastair; Houston, John G
2012-07-01
In this paper, we review the key solutions that enabled the evolution of the lead optimization screening support process at Bristol-Myers Squibb (BMS) between 2004 and 2009. During this time, technology infrastructure investment and the integration of scientific expertise laid the foundations to build and tailor lead optimization screening support models across all therapeutic groups at BMS. Together, harnessing advanced screening technology platforms and expanding the panel screening strategy led to a paradigm shift at BMS in lead optimization screening capability. Parallel SAR and structure-liability relationship (SLR) screening approaches were introduced broadly for the first time to empower more rapid and better informed decisions about chemical synthesis strategy and to broaden options for identifying high-quality drug candidates during lead optimization. Copyright © 2012 Elsevier Ltd. All rights reserved.
Design optimization studies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.
1993-01-01
The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B
2004-05-01
To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pace-site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pace-site selection.
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1993-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures which consequently necessitates the need for thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented utilizing a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine critical experimental parameters to optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.
34 CFR 303.172 - Lead agency procedures for resolving complaints.
Code of Federal Regulations, 2010 CFR
2010-07-01
34 Education 2 (2010-07-01). Lead agency procedures for resolving complaints. 303.172 Section 303.172 Education Regulations of the Offices of the Department of Education (Continued... System-Application Requirements § 303.172 Lead agency procedures for resolving complaints. Each...
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. 
The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
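The two-stage LP idea can be sketched with scipy's linprog on a toy dose-influence matrix (random numbers standing in for a real beam model; MU_MIN is a hypothetical deliverable-MU floor, and this sketch is not the authors' DSIO formulation). Stage 1 minimizes the worst-voxel dose deviation with unconstrained spot weights; stage 2 zeroes out sub-MU spots and forces the survivors up to the deliverable minimum, so every weight in the final plan is deliverable by construction and no post-processing is needed.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
D = rng.uniform(0.0, 1.0, (12, 10))    # toy dose-influence matrix (voxel x spot)
p = np.full(12, 1.0)                   # uniform prescription dose
MU_MIN = 0.05                          # hypothetical deliverable-MU lower bound

def solve_stage(bounds):
    """min t  s.t.  -t <= D w - p <= t,  w within the per-spot bounds."""
    n = D.shape[1]
    c = np.r_[np.zeros(n), 1.0]                        # objective: minimize t
    A = np.vstack([np.c_[D, -np.ones(12)],             #  D w - t <= p
                   np.c_[-D, -np.ones(12)]])           # -D w - t <= -p
    b = np.r_[p, -p]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds + [(0, None)])
    return res.x[:n], float(res.x[n])

# Stage 1: unconstrained nonnegative spot weights
w1, t1 = solve_stage([(0, None)] * D.shape[1])
# Stage 2: drop sub-MU spots, force the rest up to deliverable intensities
bounds2 = [(0.0, 0.0) if wi < MU_MIN else (MU_MIN, None) for wi in w1]
w2, t2 = solve_stage(bounds2)
```

Since the stage-2 feasible set is a subset of stage 1's, t2 can only be as good or slightly worse than t1 — the price of deliverability is paid inside the optimization instead of by post-processing.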
Geometrical Optimization Approach to Isomerization: Models and Limitations.
Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R
2017-11-02
We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations in the optimization procedure, such that the method rediscovers the pump-dump mechanism.
Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens.
Bueno, Juan M; Skorsetz, Martin; Bonora, Stefano; Artal, Pablo
2018-05-28
A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.
Yang, Felix; Kulbak, Guy
2015-07-01
The axillary vein is frequently used to implant pacemaker and defibrillator leads. We describe a technique utilizing the caudal fluoroscopic view to facilitate axillary venous access without contrast. Outcomes of device implants or upgrades utilizing this technique were examined during a 1-year period at our institution. Of 229 consecutive implants, only 9 patients required an alternate technique for lead implantation. There were zero cases of pneumothorax. The caudal view allows for optimal appreciation of the anterior border of the lung and the first rib. This simple technique increases the implanter's appreciation of and control over the access needle depth relative to the lung and first rib, thereby reducing pneumothorax risk. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Fluorescence lifetime assays: current advances and applications in drug discovery.
Pritz, Stephan; Doering, Klaus; Woelcke, Julian; Hassiepen, Ulrich
2011-06-01
Fluorescence lifetime assays complement the portfolio of established assay formats available in drug discovery, particularly with the recent advances in microplate readers and the commercial availability of novel fluorescent labels. Fluorescence lifetime assists in lowering complexity of compound screening assays, affording a modular, toolbox-like approach to assay development and yielding robust homogeneous assays. To date, materials and procedures have been reported for biochemical assays on proteases, as well as on protein kinases and phosphatases. This article gives an overview of two assay families, distinguished by the origin of the fluorescence signal modulation. The pharmaceutical industry demands techniques with a robust, integrated compound profiling process and short turnaround times. Fluorescence lifetime assays have already helped the drug discovery field, in this sense, by enhancing productivity during the hit-to-lead and lead optimization phases. Future work will focus on covering other biochemical molecular modifications by investigating the detailed photo-physical mechanisms underlying the fluorescence signal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, L.; Rao, N.D.
1983-04-01
This paper presents a new method for optimal dispatch of real and reactive power generation, based on a cartesian-coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, handled by the reduced gradient technique and the penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to a smaller storage requirement. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem that is faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage than the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model and modest computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.
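The reduced-gradient treatment of the equality constraint can be sketched for a loss-free three-unit dispatch with quadratic costs (all coefficients invented for illustration; the paper's full model includes the network equations and reactive power). The slack generator's output is eliminated through the power balance, and the remaining outputs are driven by the reduced gradient — each unit's incremental cost minus the slack unit's — until incremental costs equalize.

```python
import numpy as np

a = np.array([0.010, 0.012, 0.008])   # quadratic cost coefficients (assumed)
b = np.array([8.0, 9.0, 7.0])         # linear cost coefficients (assumed)
P_D = 500.0                           # total demand in MW; losses neglected

def reduced_gradient_dispatch(steps=5000, lr=2.0):
    """Reduced-gradient economic dispatch: P[0] is the dependent (state)
    variable eliminated via the power balance, P[1:] are the controls."""
    P = np.full(3, P_D / 3)
    for _ in range(steps):
        lam = 2 * a[0] * P[0] + b[0]              # incremental cost, slack unit
        g = (2 * a[1:] * P[1:] + b[1:]) - lam     # reduced gradient
        P[1:] -= lr * g                           # descend on the controls
        P[0] = P_D - P[1:].sum()                  # enforce power balance
    return P

P_opt = reduced_gradient_dispatch()
```

At the fixed point the reduced gradient vanishes, i.e., all units run at equal incremental cost — the classical equal-lambda optimality condition.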
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization, and applied to the design of a controlled diffusion stator blade row is presented. A general purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and selection of design objective and constraints are described. The procedure for automating the design of a two dimensional blade section is discussed, and design results are presented.
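A derivative-free conjugate-directions loop of the kind used by such general-purpose optimizers can be sketched as follows. The "blade objective" is a stand-in quadratic (the real procedure evaluates the geometry, inviscid-flow, and boundary-layer codes at each design point), and the crude grid line search replaces a proper bracketing search purely for illustration.

```python
import numpy as np

def line_min(f, x, d):
    """Crude grid line search along direction d (illustrative only)."""
    ts = np.linspace(-2.0, 2.0, 401)
    vals = [f(x + t * d) for t in ts]
    return x + ts[int(np.argmin(vals))] * d

def powell(f, x0, iters=20):
    """Conjugate-directions minimization without derivatives (Powell-style):
    cycle through a direction set, then replace the oldest direction with the
    overall displacement of the cycle and search along it."""
    x = np.asarray(x0, float)
    dirs = list(np.eye(len(x)))
    for _ in range(iters):
        x_start = x.copy()
        for d in dirs:
            x = line_min(f, x, d)
        new_dir = x - x_start
        if np.linalg.norm(new_dir) > 1e-12:
            dirs.pop(0)
            dirs.append(new_dir / np.linalg.norm(new_dir))
            x = line_min(f, x, dirs[-1])
    return x

# Toy stand-in for the blade-design objective: a skewed quadratic bowl,
# minimized at roughly (1.14, -0.57).
f = lambda v: (v[0] - 1.0) ** 2 + 4 * (v[1] + 0.5) ** 2 + 0.5 * v[0] * v[1]
x_opt = powell(f, np.zeros(2))
```

In the actual design procedure each function evaluation is a full analysis-package run, which is why keeping the number of line-search evaluations small matters.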
34 CFR 303.172 - Lead agency procedures for resolving complaints.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 2 2011-07-01 Lead agency procedures for resolving complaints. 303.172 Section 303.172 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF...-Application Requirements § 303.172 Lead agency procedures for resolving complaints. Each application must...
NASA Astrophysics Data System (ADS)
Bacopoulos, Peter
2018-05-01
A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves an over 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency at no cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with a root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80% and 90% ranges. Tidal currents are simulated with an RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew 2016 is simulated with an RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing to a lesser degree. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models. Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
Optimum Design of High-Speed Prop-Rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; McCarthy, Thomas Robert
1993-01-01
An integrated multidisciplinary optimization procedure is developed for application to rotary wing aircraft design. The necessary disciplines such as dynamics, aerodynamics, aeroelasticity, and structures are coupled within a closed-loop optimization process. The procedure developed is applied to address two different problems. The first problem considers the optimization of a helicopter rotor blade and the second problem addresses the optimum design of a high-speed tilting proprotor. In the helicopter blade problem, the objective is to reduce the critical vibratory shear forces and moments at the blade root, without degrading rotor aerodynamic performance and aeroelastic stability. In the case of the high-speed proprotor, the goal is to maximize the propulsive efficiency in high-speed cruise without deteriorating the aeroelastic stability in cruise and the aerodynamic performance in hover. The problems studied involve multiple design objectives; therefore, the optimization problems are formulated using multiobjective design procedures. A comprehensive helicopter analysis code is used for the rotary wing aerodynamic, dynamic and aeroelastic stability analyses and an algorithm developed specifically for these purposes is used for the structural analysis. A nonlinear programming technique coupled with an approximate analysis procedure is used to perform the optimization. The optimum blade designs obtained in each case are compared to corresponding reference designs.
Arantes, Tatiane M; Sardinha, André; Baldan, Mauricio R; Cristovan, Fernando H; Ferreira, Neidenei G
2014-10-01
Monitoring heavy metal ion levels in water is essential for human health and safety. Electroanalytical techniques have presented important features for detecting toxic trace heavy metals in the environment due to their high sensitivity and easy operational procedures. Square-wave voltammetry is a powerful electrochemical technique that may be applied to both electrokinetic and analytical measurements, and the analysis of the characteristic parameters of this technique also enables evaluation of the mechanism and kinetics of the electrochemical process under study. In this work, we present a complete optimized study on heavy metal detection using diamond electrodes. The influence of the morphological characteristics as well as the doping level of micro/nanocrystalline boron-doped diamond films was analyzed by means of the square-wave anodic stripping voltammetry (SWASV) technique. The SWASV parameters were optimized for all films, considering that their kinetic response depends on the morphology and/or doping level. The films presented reversible results for the lead [Pb(II)] system studied. The Pb(II) analysis was performed in ammonium acetate buffer at pH 4.5, varying the lead concentration in the range from 1 to 10 μg L(-1). Analytical responses were obtained for the four electrodes; however, the lowest detection limit and best reproducibility were found for boron-doped nanocrystalline diamond electrodes (BDND) doped at a B/C ratio of 2000 mg L(-1). Copyright © 2014 Elsevier B.V. All rights reserved.
Sanjeevi, V; Shahabudeen, P
2016-01-01
Worldwide, about US$410 billion is spent every year to manage four billion tonnes of municipal solid waste (MSW). Transport cost alone constitutes more than 50% of the total expenditure on solid waste management (SWM) in major cities of the developed world, and collection and transport cost is about 85% in the developing world. There is a need to improve the ability of city administrators to manage municipal solid waste at the least cost. Since 2000, new technologies such as geographical information systems (GIS) and related optimization software have been used to optimize haul route distances. The city limits of Chennai were extended from 175 to 426 km(2) in 2011, leading to sub-optimal transport of the 4840 tonnes of solid waste generated per day. After developing a spatial database for the whole of Chennai with 200 wards, route optimization procedures were run for the transport of solid waste from 13 wards (generating nodes) to one transfer station (an intermediary before the landfill), using ArcGIS. The optimization process reduced the distances travelled by 9.93%. The annual total cost incurred for this segment alone is Indian Rupees (INR) 226.1 million. Savings in terms of time taken for both the current and shortest paths have also been computed, considering traffic conditions. The overall savings are thus very meaningful and call for optimization of the haul routes for the entire Chennai. © The Author(s) 2015.
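At its core, the haul-route improvement reported above is a shortest-path computation over a road network. A minimal sketch of that idea in Python using Dijkstra's algorithm (the network, node names and distances are hypothetical, not data from the Chennai study):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over a dict-of-dicts road network.
    Returns the node sequence and total distance from src to dst."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the destination.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Toy network: a ward to a transfer station, distances in km (hypothetical).
roads = {
    "ward": {"junction1": 4.0, "junction2": 6.0},
    "junction1": {"transfer": 7.0},
    "junction2": {"transfer": 3.0},
    "transfer": {},
}
print(shortest_path(roads, "ward", "transfer"))
```

A GIS package such as ArcGIS performs the same computation over real street geometry, with travel times or traffic-weighted costs in place of raw distances.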
2015-01-01
With an ever-growing aging population and demand for denture treatments, pressure-induced mucosa lesions and residual ridge resorption remain the main sources of clinical complications. Conventional denture design and fabrication are challenged by their labor and experience intensity, urgently necessitating an automatic procedure. This study aims to develop a fully automatic procedure enabling shape optimization and additive manufacturing of removable partial dentures (RPD), to maximize the uniformity of the contact pressure distribution on the mucosa, thereby reducing associated clinical complications. A 3D heterogeneous finite element (FE) model was constructed from a CT scan, and the critical tissue of the mucosa was modeled as a hyperelastic material from in vivo clinical data. A contact shape optimization algorithm was developed based on the bi-directional evolutionary structural optimization (BESO) technique. Both initial and optimized dentures were prototyped by 3D printing technology and evaluated with in vitro tests. Through the optimization, the peak contact pressure was reduced by 70%, and the uniformity was improved by 63%. In vitro tests verified the effectiveness of this procedure, and the hydrostatic pressure induced in the mucosa is well below clinical pressure-pain thresholds (PPT), potentially lessening the risk of residual ridge resorption. This proposed computational optimization and additive fabrication procedure provides a novel method for fast denture design and adjustment at low cost, with quantitative guidelines and computer-aided design and manufacturing (CAD/CAM) for a specific patient. The integration of digitalized modeling, computational optimization, and free-form fabrication enables more efficient clinical adaptation. The customized optimal denture design is expected to minimize pain/discomfort and potentially reduce long-term residual ridge resorption. PMID:26161878
Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère
2017-01-01
Genetic abnormalities, including chromosomal translocations, are described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and consequently major achievements in targeted therapy. We describe the optimization of the Proximity Ligation Assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative detection method that combines the spatial resolution of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure: we overcome the issue of maintaining non-adherent hematological cells through traditional cytocentrifugation and optimized buffers, by changing incubation times, and by modifying washing steps. Further, we provide convincing negative and positive controls, and demonstrate that the optimized PLA procedure is sensitive to total protein level. The optimized PLA procedure allows the detection of fusion proteins and their subcellular expression, and of protein interactions, in non-adherent cells, and can be readily applied to various non-adherent hematological cells, from cell lines to patients' cells. It therefore provides a new tool that can be adopted in a wide range of applications in the biological field.
[Preparation procedures of anti-complementary polysaccharides from Houttuynia cordata].
Zhang, Juanjuan; Lu, Yan; Chen, Daofeng
2012-07-01
To establish and optimize the preparation procedures for the anti-complementary polysaccharides from Houttuynia cordata. Based on the yield and anti-complementary activity in vitro, the conditions of the extraction and alcohol precipitation processes were optimized by orthogonal tests. The optimal condition for deproteinization was determined according to the amounts of protein removed and polysaccharide retained. The best decoloring method was also optimized by orthogonal experimental design. The optimized preparation procedures are as follows: extract the coarse powder 3 times with 50 volumes of water at 90 degrees C for 2 hours each time, then combine the extracts and concentrate appropriately, to the equivalent of 0.12 g of H. cordata per milliliter. Add 4 volumes of 90% ethanol to the extract, allow to stand for 24 hours for complete precipitation, then filter; the precipitate is successively washed with anhydrous alcohol, acetone and anhydrous ether. Redissolve the residue in water and add trichloroacetic acid (TCA) to a concentration of 20% to remove protein. Decoloration is performed with activated carbon at a concentration of 3%, pH 3.0, and 50 degrees C for 50 min. The above procedures were tested 3 times, resulting in an average yield of polysaccharides of 4.03% (RSD 0.96%), average concentrations of polysaccharides and protein of 80.97% (RSD 1.5%) and 2.02% (RSD 2.3%), and an average CH50 of 0.079 g·L(-1) (RSD 3.6%). The established and optimized procedures are repeatable and reliable for preparing anti-complementary polysaccharides of high quality and activity from H. cordata.
Rossum, Huub H van; Kemperman, Hans
2017-07-26
General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize MAs for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure, with optimization graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated; they were caused by ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (a significant difference from the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, a manageable number of MA alarms was generated, and those requiring follow-up proved valuable. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
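The MA alarm mechanism described above can be illustrated with a minimal sketch: a moving average of consecutive patient results is tracked, and an alarm is raised when it drifts beyond a control limit around the assay target. All parameter values (window, target, limit) and the simulated data below are illustrative assumptions, not the paper's optimized settings:

```python
from collections import deque

def moving_average_qc(results, window=20, target=5.0, limit=0.3):
    """Flag an MA alarm when the moving average of patient results
    drifts more than `limit` away from the assay target.
    Parameter values are illustrative, not from the study."""
    buf = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(results):
        buf.append(x)
        if len(buf) == window:
            ma = sum(buf) / window
            if abs(ma - target) > limit:
                alarms.append((i, ma))
    return alarms

# Stable results fluctuating around 5.0, then a simulated +0.5 bias
# (e.g. a calibration failure) beginning at result 100.
stable = [5.0 + 0.1 * ((i * 7919) % 5 - 2) for i in range(100)]
biased = [x + 0.5 for x in stable]
print(len(moving_average_qc(stable)), len(moving_average_qc(stable + biased)))
```

The bias detection simulation in the paper works at a higher level, choosing the window and limits that detect a clinically relevant bias quickly while keeping false alarms manageable.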
Guetarni, F; Rigoard, P
2015-03-01
Conventional spinal cord stimulation (SCS) generates paraesthesia, as the efficacy of this technique is based on the relationship between the paraesthesia provided by SCS on the painful zone and an analgesic effect on the stimulated zone. Although this basic postulate is based on clinical evidence, this relationship has never been formally demonstrated by scientific studies. There is a need for objective evaluation tools ("transducers") to transpose electrical signals into clinical effects and to guide therapeutic choices. We have developed software at Poitiers University Hospital allowing real-time objective mapping, on a touch screen interface, of the paraesthesia generated by SCS lead placement and programming during the implantation procedure itself. The purpose of this article is to describe this intraoperative mapping software, in terms of its concept and technical aspects. The Neuro-Mapping Locator (NML) software is dedicated to patients with failed back surgery syndrome, candidates for SCS lead implantation, allowing them to actively participate in the implantation procedure. Real-time geographical localization of the paraesthesia generated by a percutaneous or multicolumn surgical SCS lead implanted under awake anaesthesia allows intraoperative lead programming, and possibly lead positioning, to be modified with the patient's cooperation. Software updates should enable us to refine objectives related to the use of this tool and minimize observational biases. The ultimate goals of the NML software should not be limited to optimizing one specific device implantation in a patient but should also allow various stimulation strategies to be compared instantaneously, by characterizing new technical parameters such as "coverage efficacy" and "device specificity" on selected subgroups of patients.
Another longer-term objective would be to organize these predictive factors into computer science ontologies, which could constitute robust and helpful data for device selection and programming of tomorrow's neurostimulators. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Chatterjee, Arnab K; Yeung, Bryan KS
2012-01-01
Antimalarial drug discovery has historically benefited from the whole-cell (phenotypic) screening approach to identify lead molecules in the search for new drugs. However, over the past two decades there has been a shift in the pharmaceutical industry away from whole-cell screening toward target-based approaches. As part of a Wellcome Trust and Medicines for Malaria Venture (MMV) funded consortium to discover new blood-stage antimalarials, we used both approaches to identify new antimalarial chemotypes, two of which have progressed beyond the lead optimization phase and display excellent in vivo efficacy in mice. These two advanced series were identified through a cell-based optimization devoid of target information, and in this review we summarize the advantages of this approach versus a target-based optimization. Although each lead optimization required slightly different medicinal chemistry strategies, we observed some common issues across the different scaffolds, lessons that could be applied to other cell-based lead optimization programs. PMID:22242845
Springback effects during single point incremental forming: Optimization of the tool path
NASA Astrophysics Data System (ADS)
Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick
2018-05-01
Incremental sheet forming is an emerging process for manufacturing sheet metal parts. This process is more flexible than conventional processes and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Regardless of the machine considered, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due to both the lack of stiffness of the machine and residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing shape prediction of the formed part with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
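The iterative tool-path correction described above can be sketched as a simple fixed-point loop: simulate the formed shape, mirror the springback deviation back onto the tool path, and repeat. The `simulate` function below is a hypothetical stand-in for the paper's finite element model, and the mirroring rule is a generic illustration rather than the authors' exact update:

```python
def optimize_tool_path(target, simulate, iters=5):
    """Iteratively correct a tool path so the simulated formed shape,
    after springback, approaches the target shape.
    `simulate` maps a tool path to the predicted part shape."""
    path = list(target)  # start from the nominal (target) geometry
    for _ in range(iters):
        formed = simulate(path)
        # Mirror each point's deviation back onto the tool path.
        path = [p + (t - f) for p, t, f in zip(path, target, formed)]
    return path

# Toy springback model: the part relaxes to 90% of the imposed depth.
simulate = lambda path: [0.9 * z for z in path]
target = [1.0, 2.0, 3.0]          # desired depths at three path points
corrected = optimize_tool_path(target, simulate)
final = simulate(corrected)        # shape after springback
print(max(abs(f - t) for f, t in zip(final, target)))
```

With the toy 10% relaxation, each iteration shrinks the residual deviation by a factor of ten, so a handful of FE evaluations suffices; the real process is nonlinear, but the same over-forming logic applies.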
Park, Woo Young; Shin, Yang-Sik; Lee, Sang Kil; Kim, So Yeon; Lee, Tai Kyung
2014-01-01
Purpose Endoscopic submucosal dissection (ESD) is a technically difficult and lengthy procedure requiring optimal depth of sedation. The bispectral index (BIS) monitor is a non-invasive tool that objectively evaluates the depth of sedation. The purpose of this prospective randomized controlled trial was to evaluate whether BIS guided sedation with propofol and remifentanil could reduce the number of patients requiring rescue propofol, and thus reduce the incidence of sedation- and/or procedure-related complications. Materials and Methods A total of 180 patients who underwent the ESD procedure for gastric adenoma or early gastric cancer were randomized to two groups. The control group (n=90) was monitored by the Modified Observer's Assessment of Alertness and Sedation scale and the BIS group (n=90) was monitored using BIS. The total doses of propofol and remifentanil, the need for rescue propofol, and the rates of complications were recorded. Results The number of patients who needed rescue propofol during the procedure was significantly higher in the control group than the BIS group (47.8% vs. 30.0%, p=0.014). There were no significant differences in the incidence of sedation- and/or procedure-related complications. Conclusion BIS-guided propofol infusion combined with remifentanil reduced the number of patients requiring rescue propofol in ESD procedures. However, this finding did not lead to clinical benefits and thus BIS monitoring is of limited use during anesthesiologist-directed sedation. PMID:25048506
Kumar, A; Kothari, M; Grigoriadis, A; Trulsson, M; Svensson, P
2018-04-01
Tooth loss and decreased mass and strength of the masticatory muscles, leading to difficulty in chewing, have been suggested as important determinants of eating and nutrition in the elderly. To compensate for the loss of teeth, a majority of the elderly rely on dental prostheses for chewing. Chewing function is indeed an important aspect of oral health, and therefore oral rehabilitation procedures should aim to restore or maintain adequate function. However, even though the possibilities to anatomically restore lost teeth and occlusion have never been better, conventional rehabilitation procedures may still fail to optimally restore oral functions. Perhaps this is due to the lack of focus on the importance of the brain in the rehabilitation procedures. Therefore, the aim of this narrative review was to discuss the importance of maintaining or restoring optimum chewing function in the superageing population and to summarise the emerging studies on oral motor task performance and measures of cortical neuroplasticity induced by systematic training paradigms in healthy participants. Further, brain imaging studies in patients undergoing or having undergone oral rehabilitation procedures will be discussed. Overall, this information is believed to enhance the understanding and development of better rehabilitative strategies to exploit training-induced cortical neuroplasticity in individuals affected by impaired oral motor coordination and function. Training or relearning of oral motor tasks could be important to optimise masticatory performance in dental prosthesis users and may represent a much-needed paradigm shift in the approach to oral rehabilitation procedures. © 2018 John Wiley & Sons Ltd.
Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform
NASA Astrophysics Data System (ADS)
Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.
2017-03-01
The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need to develop existing ones persist. One example of such a system is the portable coordinate measuring machine (PCMM), whose use in industry has increased considerably in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on capturing data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument's measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.
3D-printed flow system for determination of lead in natural waters.
Mattio, Elodie; Robert-Peillard, Fabien; Branger, Catherine; Puzio, Kinga; Margaillan, André; Brach-Papa, Christophe; Knoery, Joël; Boudenne, Jean-Luc; Coulomb, Bruno
2017-06-01
The development of 3D printing in recent years opens up a vast array of possibilities in the field of flow analysis. In the present study, a new 3D-printed flow system has been developed for the selective spectrophotometric determination of lead in natural waters. This system is composed of three 3D-printed units (sample treatment, mixing coil and detection) that can be assembled without any tubing to form a complete flow system. Lead is determined in a two-step procedure. A preconcentration of lead is first carried out on TrisKem Pb Resin located in a 3D-printed column reservoir closed by a tapped screw. This resin showed a high extraction selectivity for lead over many tested potentially interfering metals. In a second step, lead is eluted by ammonium oxalate in the presence of 4-(2-pyridylazo)-resorcinol (PAR) and spectrophotometrically detected at 520 nm. The optimized flow system exhibited a linear response from 3 to 120 µg L(-1). The detection limit, coefficient of variation and sampling rate were evaluated at 2.7 µg L(-1), 5.4% (n=6) and 4 samples h(-1), respectively. This flow system stands out for its fully 3D design, portability and simplicity for low-cost analysis of lead in natural waters. Copyright © 2017 Elsevier B.V. All rights reserved.
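The spectrophotometric step above implies a linear calibration over the reported 3-120 µg L(-1) range: absorbance is fitted against standards of known concentration, and the fitted line is inverted to quantify unknowns. A minimal least-squares calibration sketch (the absorbance values are invented for illustration and are not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical calibration standards spanning the 3-120 µg/L range,
# with absorbance assumed proportional to concentration (slope 0.002).
conc = [3, 30, 60, 90, 120]
absorb = [0.006, 0.060, 0.120, 0.180, 0.240]
a, b = fit_line(conc, absorb)

# Invert the curve to quantify an unknown sample's absorbance reading.
unknown = (0.150 - b) / a
print(round(unknown, 1))
```

In practice the calibration would also carry uncertainty estimates, from which figures of merit such as the 2.7 µg L(-1) detection limit are derived.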
Recent advances in integrated multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.
1992-01-01
A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.
Fiber optic tracheal detection device
NASA Astrophysics Data System (ADS)
Souhan, Brian E.; Nawn, Corinne D.; Shmel, Richard; Watts, Krista L.; Ingold, Kirk A.
2017-02-01
Poorly performed airway management procedures can lead to a wide variety of adverse events, such as laryngeal trauma, stenosis, cardiac arrest, hypoxemia, or death in the case of failed airway management or intubation of the esophagus. Current methods for confirming tracheal placement, such as auscultation, direct visualization or capnography, may be subjective, compromised by the clinical presentation, or require additional specialized equipment that is not always readily available during the procedure. Consequently, there exists a need for a non-visual detection mechanism for confirming successful airway placement that can give the provider rapid feedback during the procedure. Based upon our previously presented work characterizing the reflectance spectra of tracheal and esophageal tissue, we developed a fiber-optic prototype to detect the unique spectral characteristics of tracheal tissue. Device performance was tested by its ability to differentiate ex vivo samples of tracheal and esophageal tissue. Pig tissue samples were tested with the larynx, trachea and esophagus intact as well as excised and mounted on cork. The device positively detected tracheal tissue in 18 of 19 trials, with 1 false positive in 19 esophageal trials. Our proof-of-concept device shows great promise as a potential mechanism for rapid user feedback during airway management procedures to confirm tracheal placement. Ongoing studies will investigate optimizations of the probe for more refined sensing and in vivo testing.
Surgical Management of Chronic Pancreatitis.
Parekh, Dilip; Natarajan, Sathima
2015-10-01
Advances over the past decade have indicated that a complex interplay between environmental factors, genetic predisposition, alcohol abuse, and smoking leads to the development of chronic pancreatitis. Chronic pancreatitis is a complex disorder that causes significant and chronic incapacity in patients and a substantial burden on society. Major advances have been made in understanding the etiology and pathogenesis of this disease, and the role of genetic predisposition is increasingly coming to the fore. Advances in noninvasive diagnostic modalities now allow for better diagnosis of chronic pancreatitis at an early stage of the disease. The impact of these advances on surgical treatment is beginning to emerge; for example, patients with certain genetic predispositions may be better treated with total pancreatectomy versus lesser procedures. Considerable controversy remains with respect to the surgical management of chronic pancreatitis. Modern understanding of the neurobiology of pain in chronic pancreatitis suggests that a window of opportunity exists for effective treatment of the intractable pain, after which central sensitization can lead to an irreversible pain syndrome in patients with chronic pancreatitis. Effective surgical procedures exist for chronic pancreatitis; however, the timing of surgery is unclear. For optimal treatment of patients with chronic pancreatitis, close collaboration between a multidisciplinary team including gastroenterologists, surgeons, and pain management physicians is needed.
An artificial system for selecting the optimal surgical team.
Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco
2015-01-01
We introduce an intelligent system to optimize a team composition based on the team's historical outcomes and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use the theory of probability and the inclusion-exclusion principle to model the probability of a team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model can determine the probability of every possible team composition, even when no procedure has been recorded for some compositions. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved with procedures with unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
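The inclusion-exclusion computation described in this abstract can be sketched as follows. This is an illustrative reading only: it additionally assumes the per-member unfavorable-outcome events are independent (an assumption the abstract does not state), and the names and probabilities are hypothetical.

```python
from itertools import combinations

def unfavorable_prob(probs):
    """Probability that at least one member-linked unfavorable event occurs,
    expanded via the inclusion-exclusion principle (independence assumed)."""
    total = 0.0
    for k in range(1, len(probs) + 1):
        for subset in combinations(probs, k):
            term = 1.0
            for p in subset:
                term *= p
            total += (-1) ** (k + 1) * term
    return total

def best_team(candidates, size):
    """Optimal composition = the team with the lowest unfavorable-outcome
    probability, searched exhaustively over all teams of the given size."""
    return min(combinations(candidates.items(), size),
               key=lambda team: unfavorable_prob([p for _, p in team]))
```

For a toy database {'a': 0.1, 'b': 0.2, 'c': 0.05}, best_team(..., 2) selects 'a' and 'c', the pair whose combined unfavorable probability (0.145 under independence) is lowest.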
An efficient constraint to account for mistuning effects in the optimal design of engine rotors
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe; Ottarsson, Gisli
1992-01-01
Blade-to-blade differences in structural properties, unavoidable in practice due to manufacturing tolerances, can have a significant influence on the vibratory response of engine rotor blades. Accounting for these differences, also known as mistuning, in design and optimization procedures is generally not practical. This note presents an easily calculated constraint that can be used in design and optimization procedures to control the sensitivity of final designs to mistuning.
Steinhaus, David; Reynolds, Dwight W; Gadler, Fredrik; Kay, G Neal; Hess, Mike F; Bennett, Tom
2005-08-01
Management of congestive heart failure is a serious public health problem. The use of implantable hemodynamic monitors (IHMs) may assist in this management by providing continuous ambulatory filling pressure status for optimal volume management. The Chronicle system includes an implanted monitor, a pressure sensor lead with passive fixation, an external pressure reference (EPR), and data retrieval and viewing components. The tip of the lead is placed near the right ventricular outflow tract to minimize the risk of sensor tissue encapsulation. The implant technique and lead placement are similar to those of a permanent pacemaker. After the system had been successfully implanted in 148 patients, the type and frequency of implant-related adverse events were similar to those of a single-chamber pacemaker implant. R-wave amplitude was 15.2 +/- 6.7 mV, and the pressure waveform signal was acceptable in all but two patients, in whom the presence of artifacts required lead repositioning. Implant procedure time was not influenced by experience, remaining constant throughout the study. Based on this evaluation, permanent placement of an IHM in symptomatic heart failure patients is technically feasible. Further investigation is warranted to evaluate the use of continuous hemodynamic data in the management of heart failure patients.
Optimized postweld heat treatment procedures for 17-4 PH stainless steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, A.K.; Sujith, S.; Srinivasan, G.
1995-05-01
The postweld heat treatment (PWHT) procedures for 17-4 PH stainless steel weldments of matching chemistry were optimized with respect to the microstructure prior to welding, based on microstructural studies and room-temperature mechanical properties. The 17-4 PH stainless steel was welded in two different prior microstructural conditions (condition A and condition H1150) and then postweld heat treated to condition H900 or condition H1150, using different heat treatment procedures. Microstructures were investigated and room-temperature tensile properties determined to study the combined effects of the prior microstructure and the PWHT procedure.
Amini, Ata; Shrimpton, Paul J; Muggleton, Stephen H; Sternberg, Michael J E
2007-12-01
Despite the increased recent use of protein-ligand and protein-protein docking in the drug discovery process, made possible by increases in computational power, the problem of accurately ranking the binding affinities of a series of ligands or a series of proteins docked to a protein receptor remains largely unsolved. This problem is of major concern in lead optimization procedures and has led to the development of scoring functions tailored to rank the binding affinities of a series of ligands for a specific system. However, such methods can take a long time to develop and their transferability to other systems remains open to question. Here we demonstrate that, given a suitable amount of background information, a new approach using support vector inductive logic programming (SVILP) can be used to produce system-specific scoring functions. Inductive logic programming (ILP) learns logic-based rules for a given dataset that can be used to describe properties of each member of the set in a qualitative manner. By combining ILP with support vector machine regression, a quantitative set of rules can be obtained. SVILP has previously been used in a biological context to examine datasets containing a series of individual molecular structures and properties. Here we describe the use of SVILP to produce binding affinity predictions for a series of ligands to a particular protein. We also for the first time examine the applicability of SVILP techniques to datasets consisting of protein-ligand complexes. Our results show that SVILP performs comparably with other state-of-the-art methods on five protein-ligand systems, as judged by similar cross-validated squared correlation coefficients. A McNemar test comparing SVILP to CoMFA and CoMSIA across the five systems indicates our method to be significantly better on one occasion.
The ability to graphically display and understand the SVILP-produced rules is demonstrated, and this feature of ILP can be used to derive hypotheses for future ligand design in lead optimization procedures. The approach can readily be extended to evaluate the binding affinities of a series of protein-protein complexes. (c) 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Arroyo, Orlando; Gutiérrez, Sergio
2017-07-01
Several seismic optimization methods have been proposed to improve the performance of reinforced concrete framed (RCF) buildings; however, they have not been widely adopted among practising engineers because they require complex nonlinear models and are computationally expensive. This article presents a procedure to improve the seismic performance of RCF buildings based on eigenfrequency optimization, which is effective, simple to implement and efficient. The method is used to optimize a 10-storey regular building, and its effectiveness is demonstrated by nonlinear time history analyses, which show important reductions in storey drifts and lateral displacements compared to a non-optimized building. A second example for an irregular six-storey building shows that the method benefits a wide range of RCF structures, supporting its broader applicability.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variability in, for example, fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty into an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique.
The resulting compact representations of the response quantities allow for efficient and accurate calculation of the sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed methods are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
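One common way to cast robust topology optimization of the kind described above is to penalize both the mean and the spread of the compliance. The formulation below is a standard illustration of such a robust objective (the dissertation's exact objective and constraints may differ):

```latex
\min_{\boldsymbol{\rho}} \;
  \mathbb{E}\!\left[c(\boldsymbol{\rho},\boldsymbol{\xi})\right]
  + \kappa \sqrt{\operatorname{Var}\!\left[c(\boldsymbol{\rho},\boldsymbol{\xi})\right]}
\qquad \text{subject to} \qquad V(\boldsymbol{\rho}) \le V_{\max},
```

where \(\boldsymbol{\rho}\) collects the design (density) variables, \(\boldsymbol{\xi}\) the random variables modeling the uncertain stiffness, \(c\) is the compliance, \(\kappa \ge 0\) a robustness weight, and \(V\) the material volume. The perturbation and PCE approaches differ in how \(\mathbb{E}[c]\), \(\operatorname{Var}[c]\), and their design sensitivities are computed.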
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
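A sequential allocation of the general kind described above can be sketched as follows. This is a simplified stand-in, not the paper's asymptotically optimal allocation rule: after an initial round of sampling, each increment of budget goes to the design whose mean is least separated (in noise units) from the boundary between the current top-m and the rest.

```python
import statistics

def select_top_m(simulate, k, m, n0=10, delta=5, budget=500):
    """Heuristic sequential budget allocation for selecting the top-m of k
    designs. `simulate(i)` returns one noisy sample of design i's performance.
    Illustrative only; the exact OCBA-style allocation ratios are not used."""
    samples = [[simulate(i) for _ in range(n0)] for i in range(k)]
    spent = n0 * k
    while spent + delta <= budget:
        means = [statistics.fmean(s) for s in samples]
        stdevs = [statistics.stdev(s) for s in samples]
        order = sorted(range(k), key=lambda i: means[i], reverse=True)
        # boundary between the m-th best and (m+1)-th best sample means
        boundary = (means[order[m - 1]] + means[order[m]]) / 2.0
        # spend the next delta samples on the most ambiguous design
        target = min(range(k),
                     key=lambda i: abs(means[i] - boundary) / (stdevs[i] + 1e-12))
        samples[target].extend(simulate(target) for _ in range(delta))
        spent += delta
    means = [statistics.fmean(s) for s in samples]
    return sorted(sorted(range(k), key=lambda i: means[i], reverse=True)[:m])
```

Compared with equal allocation, this concentrates samples where the top-m/rest boundary is in doubt, which is the intuition behind the paper's formulation.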
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco- and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient at predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or in the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched with the static response to residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
Algorithms for selecting informative marker panels for population assignment.
Rosenberg, Noah A
2005-11-01
Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
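The multivariate greedy algorithm sketched in this abstract can be illustrated as follows; the scorer interface is an assumption for illustration (the paper evaluates panels by assignment accuracy on simulated individuals).

```python
def greedy_panel(markers, accuracy, panel_size):
    """Greedy marker-panel construction: at each step, add the marker that
    most increases the panel's assignment accuracy. `accuracy(panel)` is any
    scorer for a candidate panel, e.g. estimated probability of correctly
    assigning individuals to their source population."""
    panel = []
    remaining = list(markers)
    while remaining and len(panel) < panel_size:
        best = max(remaining, key=lambda marker: accuracy(panel + [marker]))
        panel.append(best)
        remaining.remove(best)
    return panel
```

This differs from the univariate accumulation procedure, which ranks markers by individual informativeness and takes the top n: the greedy scorer evaluates each candidate jointly with the markers already chosen, so redundant markers are passed over.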
Li, Yi; Zhu, Hong; Zhang, Huajun; Chen, Zhangran; Tian, Yun; Xu, Hong; Zheng, Tianling; Zheng, Wei
2014-08-15
The toxicity of algicidal extracts from Mangrovimonas yunxiaonensis strain LY01 towards Alexandrium tamarense was measured by studying the algicidal process, nuclear damage and the transcription of related genes. Medium components were optimized to improve algicidal activity, and the characteristics of the algicidal extracts were determined. Transmission electron microscopy revealed that the cell structure was broken. Destruction of cell membrane integrity and degradation of nuclear structure were monitored using confocal laser scanning microscopy, and the expression of the rbcS, hsp and proliferating cell nuclear antigen (PCNA) genes was studied. Results showed that 1.0% tryptone, 0.4% glucose and 0.8% MgCl2 were the optimal nutrient sources. The algicidal extracts were heat and pH stable, non-proteinaceous and smaller than 1 kDa. Cell membrane and nuclear structure integrity were lost, transcription of the rbcS and PCNA genes was significantly inhibited, and hsp gene expression was up-regulated during exposure. The algicidal extracts destroyed cell membrane and nuclear structure integrity and inhibited related gene expression, ultimately leading to the inhibition of algal growth. These results provide a first description of the cell death process and nuclear damage induced in A. tamarense by algicidal extracts, which could potentially be used for bacterial control of harmful algal blooms (HABs) in the future. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroepil, Patric; Lanzman, Rotem S., E-mail: rotemshlomo@yahoo.de; Miese, Falk R.
2011-04-15
We report on percutaneous catheter procedures in the operating room (OR) to assist complicated manual extraction or insertion of pacemaker (PM) and implantable cardioverter defibrillator leads. We retrospectively reviewed complicated PM revisions and implantations performed between 2004 and 2009 that required percutaneous catheter procedures in the OR. The type of interventional procedure, catheter and retrieval system used, venous access, success rates, and procedural complications were analyzed. In 41 (12 female and 29 male; mean age 62 ± 17 years) of 3021 (1.4%) patients, standard manual retrieval of old leads or insertion of new leads was not achievable and thus required percutaneous catheter intervention for retrieval of misplaced leads and/or recanalization of occluded central veins. Thirteen of 18 (72.2%) catheter-guided retrieval procedures for lead fragments misplaced in the right atrium or ventricle (n = 3), superior vena cava (n = 2), brachiocephalic vein (n = 5), and subclavian vein (n = 3) in 16 patients were successful. Percutaneous catheter retrieval failed in five patients because the lead fragments were firmly fixed or adherent. Percutaneous transluminal angioplasty (PTA) of central veins for occlusion or high-grade stenosis was performed in 25 patients. In 22 of 25 patients (88%), recanalization of central veins was successful, enabling subsequent lead replacement. Major periprocedural complications were not observed. In cases of complicated manual PM lead implantation or revision, percutaneous catheter-guided extraction of misplaced lead fragments or recanalization of central veins can be performed safely in the OR, enabling subsequent implantation or revision of PM systems in the majority of patients.
Proposed evaluation framework for assessing operator performance with multisensor displays
NASA Technical Reports Server (NTRS)
Foyle, David C.
1992-01-01
Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed framework is a normative approach: the operator's performance with the sensor fusion display is compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows determination of when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal level compared to the model predictions; 3) optimal performance (matching the model predictions); or, 4) super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.
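The four-way classification described in this abstract can be sketched as a simple decision rule; the numeric performance scale, argument names, and tolerance below are illustrative assumptions, not details from the paper.

```python
def classify_fusion(fused, singles, predicted, tol=1e-9):
    """Classify operator performance with a fused display into the four
    outcomes named in the framework. `singles` holds performance scores with
    each original sensor display alone; `predicted` is the integration
    model's prediction derived from those single-sensor scores."""
    if fused < max(singles) - tol:
        return "poorer than a single sensor"  # outcome 1: distortion/interference
    if fused < predicted - tol:
        return "sub-optimal"                  # outcome 2: better, but below the model
    if abs(fused - predicted) <= tol:
        return "optimal"                      # outcome 3: matches the model prediction
    return "super-optimal"                    # outcome 4: emergent features exploited
```

The value of the normative framing is visible in the rule itself: the fused display is judged against both the best single-sensor baseline and the model's integration prediction, not against an absolute threshold.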
Multipurpose silicon photonics signal processor core.
Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José
2017-09-21
Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.
An improved robust buffer allocation method for the project scheduling problem
NASA Astrophysics Data System (ADS)
Ghoddousi, Parviz; Ansari, Ramin; Makui, Ahmad
2017-04-01
Unpredictable uncertainties cause delays and additional costs for projects. With traditional approaches, the optimization of the baseline project plan often fails and leads to delays. In this study, a two-stage multi-objective buffer allocation approach is applied for robust project scheduling. In the first stage, decisions are made on buffer sizes and their allocation to the project activities. A set of Pareto-optimal robust schedules is designed using the meta-heuristic non-dominated sorting genetic algorithm (NSGA-II), based on the decisions made in the buffer allocation step. In the second stage, the Pareto solutions are evaluated in terms of deviation from the initial start times and due dates. The proposed approach was implemented on a real dam construction project. The outcomes indicated that the buffered schedule reduces the cost of disruptions by 17.7% compared with the baseline plan, at the cost of an increase of about 0.3% in the project completion time.
Remmelink, M; Sokolow, Y; Leduc, D
2015-04-01
Histopathology is key to the diagnosis and staging of lung cancer. This analysis requires tissue sampling from primary and/or metastatic lesions. The choice of sampling technique is intended to optimize diagnostic yield while avoiding unnecessarily invasive procedures. Recent developments in targeted therapy require increasingly precise histological and molecular characterization of the tumor. Therefore, pathologists must be economical with tissue samples to ensure that they have the opportunity to perform all the analyses required. More than ever, good communication between clinician, endoscopist or surgeon, and pathologist is essential. All participants in the process of lung cancer diagnosis must collaborate so that the appropriate number and type of biopsies are performed, with appropriate handling of the tissue samples. This will allow performance of all the necessary analyses, leading to a more precise characterization of the tumor and thus the optimal treatment for patients with lung cancer. Copyright © 2015 SPLF. Published by Elsevier Masson SAS. All rights reserved.
Wing optimization for space shuttle orbiter vehicles
NASA Technical Reports Server (NTRS)
Surber, T. E.; Bornemann, W. E.; Miller, W. D.
1972-01-01
Results are presented of a parametric study performed to determine the optimum wing geometry for a proposed space shuttle orbiter. The results of the study establish the minimum weight wing for a series of wing-fuselage combinations subject to constraints on aerodynamic heating, wing trailing edge sweep, and wing overhang. The study consists of a generalized design evaluation which has the flexibility of arbitrarily varying those wing parameters which influence the vehicle system design and its performance. The study is structured to allow inputs of aerodynamic, weight, aerothermal, structural and material data in a general form so that the influence of these parameters on the design optimization process can be isolated and identified. This procedure displays the sensitivity of the system design to variations in wing geometry. The parameters of interest are varied in a prescribed fashion on a selected fuselage and the effect on the total vehicle weight is determined. The primary variables investigated are: wing loading, aspect ratio, leading edge sweep, thickness ratio, and taper ratio.
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA (sequential injection analysis) method was developed for sulphate determination in automotive fuel ethanol. The method is based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were first combusted to avoid matrix effects in the sulphate determinations. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. Optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits determination of up to 10.0 mg SO4(2-) l(-1) with R.S.D. < 2.5% and a limit of detection of 0.27 mg l(-1). The method was successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions the SIA system processed 27 samples per hour.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mbamalu, G.A.N.; El-Hawary, M.E.
The authors propose suboptimal least squares or iteratively reweighted least squares (IRWLS) procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load at lead times of up to 168 hours. The results obtained are documented and compared with results based on the Box and Jenkins method.
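The least-squares AR fitting at the core of steps two and four can be sketched as follows. This is a minimal illustration of ordinary least squares for a nonseasonal AR(p) model only; the paper's interactive environment, the IRWLS alternative, and the seasonal back-forecasting step are omitted.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ar(series, p):
    """Estimate AR(p) coefficients by ordinary least squares: regress x_t on
    its p lags and solve the normal equations (X^T X) a = X^T y."""
    rows = [[series[t - j] for j in range(1, p + 1)] for t in range(p, len(series))]
    y = series[p:]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(rows[k][i] * y[k] for k in range(len(rows))) for i in range(p)]
    return solve(A, b)
```

On a noise-free AR series the regression is exact, so the fitted coefficients recover the generating parameters; on real load data they are the least-squares estimates that the IRWLS variant would then reweight for robustness.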
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. The approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed in two steps. First, starting design variables are computed using a least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and a set of basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim shape error distribution is improved: the maximum trim shape error of 0.9844 inches for the starting configuration is reduced to 0.00367 inch by the end of the third optimization run.
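The first step, seeding the design variables by least-squares fitting of basis-function weights, can be sketched as follows. For this illustration the basis shapes are assumed mutually orthogonal, so each weight reduces to a projection; the paper's general least-squares surface fit does not require this assumption, and the shapes here are hypothetical vectors rather than the actual mode shapes.

```python
def seed_weights(target, baseline, basis):
    """Compute starting weights w_k so that baseline + sum_k w_k * phi_k
    best matches the target shape in the least-squares sense, assuming the
    basis shapes phi_k are mutually orthogonal."""
    residual = [t - b for t, b in zip(target, baseline)]
    weights = []
    for phi in basis:
        num = sum(r * p for r, p in zip(residual, phi))
        den = sum(p * p for p in phi)
        weights.append(num / den)  # projection of the residual onto phi_k
    return weights
```

These weights are then the starting point that the numerical optimizer refines to drive down the trim shape error.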
Lead optimization in the nondrug-like space.
Zhao, Hongyu
2011-02-01
Drug-like space might be more densely populated with orally available compounds than the remaining chemical space, but lead optimization can still occur outside this space. Oral drug space is more dynamic than the relatively static drug-like space. As new targets emerge and optimization tools advance, the oral drug space might expand. Lead optimization protocols are becoming more complex, with greater optimization needs to be satisfied, which consequently could change the role of drug-likeness in the process. Whereas drug-like space should usually be explored preferentially, it can be easier to find oral drugs for certain targets in the nondrug-like space. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Obracaj, Piotr; Fabianowski, Dariusz
2017-10-01
Adaptations of historic facilities for public utility purposes require solving many complex, often conflicting expectations of future users. This mainly concerns the function, which includes construction, technological and aesthetic issues. The list of issues is completed by the proper protection of historic values, different in each case. The procedure leading to the expected solution is a multicriteria one, usually difficult to define precisely and requiring considerable design experience. An innovative approach has been used for the analysis, namely the modified EA FAHP (Extent Analysis Fuzzy Analytic Hierarchy Process) method of Chang, a multicriteria analysis for the assessment of complex functional and spatial issues. The selection of the optimal spatial form of an adapted historic building intended as a multi-functional public utility facility was analysed. The assumed functional flexibility covered education, conferences, and chamber performances such as drama and concerts, in different stage-audience layouts.
Caporale, A; Doti, N; Monti, A; Sandomenico, A; Ruvo, M
2018-04-01
Solid-Phase Peptide Synthesis (SPPS) is a rapid and efficient methodology for the chemical synthesis of peptides and small proteins. However, the assembly of peptide sequences classified as "difficult" poses severe synthetic problems in SPPS owing to extensive aggregation of the growing peptide chains, which often leads to synthesis failure. In this framework, we have investigated the impact of different synthetic procedures on the yield and final purity of three well-known "difficult peptides" prepared using oxyma as an additive for the coupling steps. In particular, we have comparatively investigated the use of piperidine and morpholine/DBU as deprotection reagents and the addition of DIPEA, collidine and N-methylmorpholine as bases for the coupling reagent. Moreover, the effect of different agitation modalities during the acylation reactions has been investigated. The data obtained represent a step forward in optimizing strategies for the synthesis of "difficult peptides". Copyright © 2018 Elsevier Inc. All rights reserved.
Morisse Pradier, H; Sénéchal, A; Philit, F; Tronc, F; Maury, J-M; Grima, R; Flamens, C; Paulus, S; Neidecker, J; Mornex, J-F
2016-02-01
Lung transplantation (LT) is now considered as an excellent treatment option for selected patients with end-stage pulmonary diseases, such as COPD, cystic fibrosis, idiopathic pulmonary fibrosis, and pulmonary arterial hypertension. The 2 goals of LT are to provide a survival benefit and to improve quality of life. The 3-step decision process leading to LT is discussed in this review. The first step is the selection of candidates, which requires a careful examination in order to check absolute and relative contraindications. The second step is the timing of listing for LT; it requires the knowledge of disease-specific prognostic factors available in international guidelines, and discussed in this paper. The third step is the choice of procedure: indications of heart-lung, single-lung, and bilateral-lung transplantation are described. In conclusion, this document provides guidelines to help pulmonologists in the referral and selection processes of candidates for transplantation in order to optimize the outcome of LT. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Mata-Cantero, Lydia; Lafuente, Maria J; Sanz, Laura; Rodriguez, Manuel S
2014-03-21
The establishment of methods for the in vitro continuous culture of Plasmodium falciparum is essential for gaining knowledge of its biology and for the development of new treatments. Several techniques have previously been used to synchronize, enrich and concentrate P. falciparum, although obtaining cultures with high parasitaemia remains challenging. Current methods produce highly parasitaemic, synchronized P. falciparum cultures through frequent changes of culture medium or by reducing the haematocrit. However, these methods are time consuming and sometimes lead to a loss of synchrony. A procedure is described that combines Percoll and sorbitol treatments, the use of magnetic columns, and the optimization of in vitro culture conditions to reach high parasitaemia levels in synchronized Plasmodium falciparum cultures. The new procedure, established using P. falciparum 3D7 and combining previously reported methodologies, yields in vitro parasite cultures that reach parasitaemia of up to 40% at any intra-erythrocytic stage. High parasitaemia levels are obtained only one day after magnetic-column purification without compromising parasite viability or synchrony. The described procedure yields large-scale synchronized parasite cultures at high parasitaemia with fewer manipulations than previously described methods.
High-Fidelity Aerodynamic Analysis and Design of Engine Integration on a BWB Aircraft
NASA Astrophysics Data System (ADS)
Mirzaei Amirabad, Mojtaba
BWB (Blended Wing Body) is an innovative aircraft concept based on the flying wing, in which the wing and the fuselage are blended together smoothly. The BWB offers economic and environmental advantages by reducing fuel consumption through improved aerodynamic performance. In this project, the goal is to improve aerodynamic performance by optimizing the main body of a BWB obtained from conceptual design. The high-fidelity methods applied in this project have been addressed less frequently in the literature. This research develops an automatic optimization procedure to reduce the drag force on the main body. The optimization is carried out in two main stages: before and after engine installation. Our objective is to minimize drag while taking several constraints into account in the high-fidelity optimization. The commercial software Isight is chosen as the optimizer, and MATLAB is called to start the optimization process. Geometry is generated using ANSYS-DesignModeler, an unstructured mesh is created by ANSYS-Mesh, and CFD calculations are performed with ANSYS-Fluent. All of these tools are coupled together in the ANSYS-Workbench environment, which is called from MATLAB. High-fidelity methods based on solving the Navier-Stokes equations are used during the optimization. To verify the results, a finer structured mesh is created with the ICEM software and used at each stage of the optimization. The first stage is a 3D optimization of the surface of the main body, before the engine is added. The optimized case is then used as the input for the second stage, in which the nacelle is added. This study achieves an appropriate reduction in the drag coefficient for the BWB without a nacelle. In the second stage (with the nacelle added), drag is also reduced by performing a local optimization, and the flow separation created in the main body-nacelle zone is diminished.
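The automated loop described above (an optimizer proposes geometry parameters, a CFD chain evaluates drag, improvements are kept) can be sketched in a few lines. This is only an illustration under stated assumptions: `evaluate_drag` is a stand-in for the ANSYS-Workbench CFD evaluation, and simple coordinate descent stands in for the Isight optimizer.

```python
def minimize_drag(evaluate_drag, params, step=0.05, iters=30):
    """Skeleton of an automatic drag-minimization loop: perturb each
    geometry parameter in turn, re-evaluate drag, and keep changes that
    reduce it; halve the step once no single move helps."""
    best = evaluate_drag(params)
    for _ in range(iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                d = evaluate_drag(trial)
                if d < best:
                    params, best, improved = trial, d, True
        if not improved:
            step *= 0.5
    return params, best

# Toy stand-in for the CFD chain: a smooth drag bowl (purely illustrative)
cd = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2 + 0.01
shape, drag = minimize_drag(cd, [0.0, 0.0])
```

In the real procedure each `evaluate_drag` call would regenerate the geometry, remesh, and run the flow solver, so the driver is designed to need as few evaluations as possible.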
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
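The trade-off the sizing procedure resolves (maximize angular momentum while minimizing mass, limited by material failure) can be illustrated with a thin-ring idealization. The model and all material numbers below are illustrative assumptions, not values from the paper.

```python
import math

def max_specific_momentum(sigma_allow, rho, radius):
    """Thin-ring idealization: hoop stress sigma = rho * (omega*r)**2 caps
    the rim speed at v_max = sqrt(sigma/rho), so the angular momentum per
    unit mass, H/m = r * v_max, scales with the square root of the
    material's specific strength."""
    return radius * math.sqrt(sigma_allow / rho)

# Illustrative material properties (allowable strength Pa, density kg/m^3);
# assumed values for demonstration only:
materials = {"titanium": (900e6, 4430.0), "graphite/epoxy": (1500e6, 1600.0)}
radius = 0.1  # m, assumed ring radius
ranking = {m: max_specific_momentum(s, d, radius)
           for m, (s, d) in materials.items()}
```

Under these assumed numbers the composite ring stores roughly twice the momentum per unit mass, consistent with the paper's conclusion that the graphite/epoxy design is superior unless volume efficiency is heavily weighted.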
Fuel Injector Design Optimization for an Annular Scramjet Geometry
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
2003-01-01
A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
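A minimal sketch of the response-surface idea for a single factor at three levels (a one-dimensional slice of the four-parameter central composite design); the sampled efficiencies and the injector-angle parameter below are hypothetical, not values from the CFD study.

```python
def quadratic_through(p0, p1, p2):
    """Interpolating quadratic y = a + b*x + c*x**2 through three design
    points -- the three levels of one factor in a central composite design."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    f01 = (y1 - y0) / (x1 - x0)      # first divided difference
    f12 = (y2 - y1) / (x2 - x1)
    c = (f12 - f01) / (x2 - x0)      # second divided difference
    b = f01 - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

def stationary_point(a, b, c):
    """Location of the response-surface optimum (a maximum when c < 0)."""
    return -b / (2.0 * c)

# Hypothetical mixing-efficiency samples at three levels of one injector
# parameter (illustrative numbers only):
a, b, c = quadratic_through((10.0, 0.88), (20.0, 0.93), (30.0, 0.90))
x_opt = stationary_point(a, b, c)   # predicted best setting, here 21.25
```

The full study fits a quadratic surface in all four parameters from the CCD runs and optimizes over that surface in the same spirit.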
The relationship of acquisition systems to automated stereo correlation.
Colvocoresses, A.P.
1983-01-01
Today a concerted effort is being made to expedite the mapping process through automated correlation of stereo data. Stereo correlation involves the comparison of radiance (brightness) signals or patterns recorded by sensors. Conventionally, two-dimensional area correlation is used, but this is a rather slow and cumbersome procedure. Digital correlation can be performed in only one dimension where suitable signal patterns exist, and the one-dimensional mode is much faster. Electro-optical (EO) systems suitable for space use also have much greater flexibility than film systems. Thus, an EO space system can be designed that optimizes one-dimensional stereo correlation and leads toward the automation of topographic mapping.
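A minimal sketch, under simplifying assumptions, of the one-dimensional digital correlation described above: the disparity along a scanline is taken as the shift that maximizes the normalized cross-correlation of the two radiance signals.

```python
def disparity_1d(left, right, max_shift):
    """One-dimensional digital correlation: slide one scanline over the
    other and return the shift with the highest normalized correlation
    of the radiance (brightness) signals."""
    def ncc(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da > 0 and db > 0 else 0.0
    return max(range(max_shift + 1),
               key=lambda s: ncc(left[s:], right[:len(right) - s]))

# Synthetic scanlines: the feature in `right` appears 3 pixels later in `left`
right = [0, 1, 2, 3, 10, 3, 2, 1, 0, 0, 0, 0]
left  = [0, 0, 0, 0, 1, 2, 3, 10, 3, 2, 1, 0]
```

Because the search runs over shifts of a single signal rather than over a 2-D window, far fewer multiplications are needed per match point, which is the speed advantage the abstract refers to.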
2000-12-15
Paul Ducheyne, a principal investigator in the microgravity materials science program and head of the University of Pennsylvania's Center for Bioactive Materials and Tissue Engineering, is leading the trio as they use simulated microgravity to determine the optimal characteristics of tiny glass particles for growing bone tissue. The result could make possible a much broader range of synthetic bone-grafting applications. Bioactive glass particles (left) with a microporous surface (right) are widely accepted as a synthetic material for periodontal procedures. Using the particles to grow three-dimensional tissue cultures may one day lead to an improved, more rugged bone tissue that could be used to correct skeletal disorders and bone defects. The work is sponsored by NASA's Office of Biological and Physical Research.
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
2016-01-01
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices, such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating these functionals for sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics. PMID:26806986
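The thresholded plug-in idea can be sketched directly: zero the small entries of the estimated correlation matrix, then compute the functional on what remains. This is a simplified illustration; the choice of threshold and the theoretical guarantees are the subject of the paper.

```python
def thresholded_frobenius(corr, tau):
    """Plug-in estimate of the off-diagonal Frobenius norm of a sparse
    correlation matrix: entries below tau in absolute value are zeroed
    before the norm is computed. In practice tau is a tuning parameter,
    typically of order sqrt(log(p)/n)."""
    p = len(corr)
    total = sum(corr[i][j] ** 2
                for i in range(p) for j in range(p)
                if i != j and abs(corr[i][j]) >= tau)
    return total ** 0.5

# Small example: tiny spurious sample correlations (0.01) are thresholded
# away, leaving only the genuine 0.5 entry pair
corr_hat = [[1.0, 0.5, 0.01],
            [0.5, 1.0, 0.0],
            [0.01, 0.0, 1.0]]
norm_est = thresholded_frobenius(corr_hat, 0.1)
```

The resulting estimate feeds into the critical-value computation for the test statistics discussed in the abstract.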
Mozharovskiy, V V; Tsyganov, A A; Mozharovskiy, K V; Tarasov, A A
To assess the effectiveness of surgical treatment of gastroesophageal reflux disease (GERD) combined with hiatal hernia (HH). The trial included 96 patients with GERD and HH who were divided into two groups. The principal difference between the groups was the use of surgery in the main group and medical therapy in the comparison group. The effectiveness of surgical treatment of GERD is superior to that of medical therapy by more than 2.5 times. HH combined with GERD is an indication for surgical treatment. The fundoplication cuff should not lead to angular or rotational esophageal deformation. The Nissen procedure in the Donahue modification (short floppy Nissen) optimally reproduces the geometry of the esophago-gastric junction and the angle of His.
Self-similarity of waiting times in fracture systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niccolini, G.; Bosia, F.; Carpinteri, A.
2009-08-15
Experimental and numerical results are presented for a fracture experiment carried out on a fiber-reinforced element under flexural loading, and a statistical analysis is performed for acoustic emission waiting-time distributions. By an optimization procedure, a recently proposed scaling law describing these distributions for different event magnitude scales is confirmed by both experimental and numerical data, thus reinforcing the idea that fracture of heterogeneous materials has scaling properties similar to those found for earthquakes. Analysis of the different scaling parameters obtained for experimental and numerical data leads us to formulate the hypothesis that the type of scaling function obtained depends on the level of correlation among fracture events in the system.
Storage of cell samples for ToF-SIMS experiments-How to maintain sample integrity.
Schaepe, Kaija; Kokesch-Himmelreich, Julia; Rohnke, Marcus; Wagner, Alena-Svenja; Schaaf, Thimo; Henss, Anja; Wenisch, Sabine; Janek, Jürgen
2016-06-25
In order to obtain comparable and reproducible results from time-of-flight secondary ion mass spectrometry (ToF-SIMS) analysis of biological cells, the influence of sample preparation and storage has to be carefully considered. It has previously been shown that the impact of the chosen preparation routine is crucial. In continuation of this work, the impact of storage needs to be addressed: besides the fact that degradation will unavoidably take place, the effects of different storage procedures in combination with specific sample preparations remain largely unknown. Therefore, this work examines different wet (buffer, water, and alcohol) and dry (air-dried, freeze-dried, and critical-point-dried) storage procedures on human mesenchymal stem cell cultures. All cell samples were analyzed by ToF-SIMS immediately after preparation and after a storage period of 4 weeks. The obtained spectra were compared by principal component analysis using lipid- and amino acid-related signals known from the literature. In all dry storage procedures, notable degradation effects were observed, especially for lipid, but also for amino acid, signal intensities. This leads to the conclusion that, although dried samples are to some extent easier to handle, dry storage is not the optimal solution: degradation proceeds faster, possibly caused by oxidation reactions and cleaving enzymes that might still be active. Likewise, samples stored wet in alcohol showed decreased lipid and amino acid signal intensities after storage. In contrast, samples stored wet in a buffered or pure aqueous environment revealed no degradation effects after 4 weeks. However, this storage bears a higher risk of fungal/bacterial contamination, as sterile conditions are typically not maintained; thus, regular solution change is recommended for optimized storage conditions. Because wet storage does not directly expose the samples to air, it seems to minimize oxidation effects, and hence buffer or water storage with regular renewal of the solution is recommended for short storage periods.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. Those simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one dimensional and can be used for optimization processes in order to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
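The kind of simplified one-dimensional sizing that the procedure begins with can be illustrated as follows. The steady-conduction model, the grid search, and all numbers are illustrative assumptions for a two-layer stack, not the paper's FEM-based method.

```python
def optimize_two_layer(q, T_s, T_lim, mat_insul, mat_struct, n=200):
    """One-dimensional sizing sketch: find the lightest two-layer stack
    whose steady conduction resistance L1/k1 + L2/k2 keeps the back wall
    below T_lim for surface temperature T_s and heat flux q. Each material
    is (conductivity W/m/K, density kg/m^3). A grid search splits the
    required resistance between the layers and keeps the lightest split."""
    k1, rho1 = mat_insul
    k2, rho2 = mat_struct
    R_req = (T_s - T_lim) / q        # total thermal resistance needed, m^2 K/W
    best = None
    for i in range(n + 1):
        R1 = R_req * i / n           # share of resistance carried by layer 1
        L1, L2 = R1 * k1, (R_req - R1) * k2
        mass = rho1 * L1 + rho2 * L2     # areal mass, kg/m^2
        if best is None or mass < best[0]:
            best = (mass, L1, L2)
    return best

# Illustrative numbers (assumed, not from the paper): a light insulator and
# a dense conductive layer under 50 kW/m^2
mass, L1, L2 = optimize_two_layer(50e3, 1500.0, 450.0, (0.5, 300.0), (20.0, 2000.0))
```

With these assumed properties the search puts all the required resistance into the lighter, less conductive material; the paper's generalized-plane-strain FEM models play the analogous role with stresses and transient loads included.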
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures used in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an important step, as it allows each particle in PSO to represent a schedule in JSP. Three procedures, namely Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is to demonstrate to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
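As an illustration of what such a mapping does, random-key style decoding turns a continuous particle position into a discrete operation sequence by ranking its components. This is a common reading of the random-keys idea; the exact variants compared in the paper may differ in detail.

```python
def random_keys_to_sequence(position):
    """Random-keys decoding: a particle's continuous position vector is
    mapped to a discrete operation sequence by ranking its components,
    smallest key scheduled first."""
    return sorted(range(len(position)), key=lambda i: position[i])

# A hypothetical 6-operation particle position:
seq = random_keys_to_sequence([0.8, 0.1, 0.55, 0.9, 0.3, 0.2])
# operations ranked by key: 1 (0.1), 5 (0.2), 4 (0.3), 2 (0.55), 0 (0.8), 3 (0.9)
```

Because the decoding is a pure ranking, PSO can keep updating positions in continuous space while every particle still corresponds to a valid schedule.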
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) procedures as a continuous analytical quality control instrument, and the true bias detection properties of applied MA procedures are unknown. We describe the use of bias detection curves for MA optimization and of MA validation charts for the validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. MA validation charts show the minimum, median, and maximum numbers of assay results required for MA bias detection for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures; the optimal MA procedure was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA procedures and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for the optimization and validation of MA procedures.
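The simulation at the core of the method (introduce a bias, count how many results the MA needs before it alarms) can be sketched as follows. The window size, control limits, and sodium-like values are illustrative assumptions, not settings from the study.

```python
def results_until_detection(values, bias, block=10, limits=(138.0, 142.0)):
    """Sketch of MA bias-detection simulation: a simple mean over the last
    `block` results (after adding the introduced `bias`) is compared against
    control limits; returns how many results were needed before the MA
    flagged the bias, or None if it never alarmed. The truncation limits,
    calculation algorithm and control limits are exactly the settings the
    bias detection curves are used to optimize."""
    window = []
    for n, v in enumerate(values, start=1):
        window.append(v + bias)
        if len(window) > block:
            window.pop(0)
        if len(window) == block:
            ma = sum(window) / block
            if not (limits[0] <= ma <= limits[1]):
                return n
    return None

stable = [140.0] * 50                         # sodium-like results, mmol/L
n_needed = results_until_detection(stable, bias=3.0)
```

Repeating this over many biases and taking the median count per bias yields one point per bias on a bias detection curve; the min/median/max over repeated runs gives the validation chart.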
Predicting the difficulty of a lead extraction procedure: the LED index.
Bontempi, Luca; Vassanelli, Francesca; Cerini, Manuel; D'Aloia, Antonio; Vizzardi, Enrico; Gargaro, Alessio; Chiusso, Francesco; Mamedouv, Rashad; Lipari, Alessandro; Curnis, Antonio
2014-08-01
According to recent surveys, many sites performing permanent lead extractions do not meet the minimum prerequisites concerning personnel training, procedure volume, or facility requirements. The current Heart Rhythm Society consensus on lead extractions suggests that patients should be referred to more experienced sites when a better outcome could be achieved. The purpose of this study was to develop a score predicting the difficulty of a lead extraction procedure through the analysis of a high-volume center database. This score could help to identify patients who should be sent to a referral site. A total of 889 permanent leads were extracted from 469 patients. All procedures were performed from January 2009 to May 2012 by two expert electrophysiologists at the University Hospital of Brescia. Factors influencing the difficulty of a procedure were assessed using univariate and multivariate logistic regression models. The fluoroscopy time of the procedure was taken as the index of difficulty. A Lead Extraction Difficulty (LED) score was defined, considering the strongest predictors. Overall, 873 of 889 (98.2%) leads were completely removed. Major complications were reported in one patient (0.2%), who manifested cardiac tamponade. Minor complications occurred in six (1.3%) patients. No deaths occurred. Median fluoroscopic time was 8.7 min (3.3-17.3). A procedure was classified as difficult when fluoroscopy time was more than 31.2 min [90th percentile (PCTL)]. At univariate analysis, the number of extracted leads and the years from implant were significantly associated with an increased risk of fluoroscopy time above the 90th PCTL [odds ratio (OR) 1.51, 95% confidence interval (CI) 1.08-2.11, P = 0.01; and OR 1.19, 95% CI 1.12-1.25, P < 0.001, respectively].
After adjusting for patient age and sex, and combining with other covariates potentially influencing the extraction procedure, a multivariate analysis confirmed a 71% increased risk of fluoroscopy time above the 90th PCTL for each additional lead extracted (OR 1.71, 95% CI 1.06-2.77, P = 0.028) and a 23% increased risk for each year of lead age (OR 1.23, 95% CI 1.15-1.31, P < 0.001). Further non-independent factors increasing the risk were the presence of active-fixation leads and dual-coil implantable cardiac defibrillator leads. Conversely, the presence of vegetations significantly favored lead extraction. The LED score was defined as: number of extracted leads within a procedure + lead age (years from implant) + 1 if dual-coil - 1 if vegetation. The LED score independently predicted a complex procedure (fluoroscopic time above the 90th PCTL) at both univariate and multivariate analysis. A receiver-operating characteristic analysis showed an area under the curve of 0.81. A LED score greater than 10 could predict fluoroscopy time above the 90th PCTL with a sensitivity of 78.3% and a specificity of 76.7%. The LED score is easy to compute and predicts fluoroscopy time above the 90th PCTL with relatively high accuracy.
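The score formula given above is simple enough to transcribe directly; the function names here are ours, but the arithmetic follows the paper's definition.

```python
def led_score(n_leads, lead_age_years, dual_coil=False, vegetation=False):
    """Lead Extraction Difficulty (LED) index as defined in the study:
    number of extracted leads + lead age (years from implant),
    +1 for a dual-coil ICD lead, -1 when vegetations are present."""
    return (n_leads + lead_age_years
            + (1 if dual_coil else 0) - (1 if vegetation else 0))

def predicts_difficult(score, cutoff=10):
    """A score > 10 predicted fluoroscopy time above the 90th percentile
    (sensitivity 78.3%, specificity 76.7%)."""
    return score > cutoff
```

For example, extracting two leads implanted 8 years ago, one of them a dual-coil ICD lead, gives a score of 11 and would flag the case for a referral site.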
Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.
1999-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
Design of controlled elastic and inelastic structures
NASA Astrophysics Data System (ADS)
Reinhorn, A. M.; Lavan, O.; Cimellaro, G. P.
2009-12-01
One of the founders of structural control theory and its application in civil engineering, Professor Emeritus Tsu T. Soong, envisioned the development of the integral design of structures protected by active control devices. Most of his disciples and colleagues have continuously attempted to develop procedures to achieve such integral control. In his recent papers, published jointly with some of the authors of this paper, Professor Soong developed design procedures for the entire structure using a design-redesign procedure applied to elastic systems. That procedure was developed as an extension of other work by his disciples. This paper summarizes some recent techniques that use traditional active control algorithms to derive the most suitable (optimal, stable) control force, which can then be implemented with a combination of active, passive and semi-active devices through a simple match or through more sophisticated optimal procedures. An alternative design approach addresses the behavior of structures using Liapunov stability criteria. This paper presents a unified procedure that can be applied to both elastic and inelastic structures. Although the implementation does not always preserve the optimality criteria, it is shown that the solutions are effective and practical for the design of supplemental damping, stiffness enhancement or softening, and strengthening or weakening.
Endoscopic hyperspectral imaging: light guide optimization for spectral light source
NASA Astrophysics Data System (ADS)
Browning, Craig M.; Mayes, Samuel; Rich, Thomas C.; Leavesley, Silas J.
2018-02-01
Hyperspectral imaging (HSI) is a technology used in remote sensing, food processing and document recovery. Recently, this approach has been applied in the medical field to spectrally interrogate regions of interest within respective substrates. In spectral imaging, a two-dimensional (spatial) image is collected at many different (spectral) wavelengths to sample spectral signatures from different regions and/or components within a sample. Here, we report on the use of hyperspectral imaging for endoscopic applications. Colorectal cancer is the third leading cancer in both incidence and deaths in the US. One factor in its severity is the miss rate of precancerous/flat lesions (~65% accuracy). Integrating HSI into colonoscopy procedures could minimize misdiagnosis and unnecessary resections. We have previously reported a working prototype light source with 16 high-powered light emitting diodes (LEDs) capable of high-speed cycling and imaging. In recent testing, we found that our current prototype is limited by transmission loss (~99%) through the multi-furcated solid light guide (lightpipe), so the desired frame rate (20-30 fps) could not be achieved. Here, we report on a series of experimental and modeling studies to better optimize the lightpipe and the spectral endoscopy system as a whole. The lightpipe was experimentally evaluated using an integrating sphere and spectrometer (Ocean Optics). The lightpipe was modeled using Monte Carlo optical ray tracing in TracePro (Lambda Research Corp.). Results of these optimization studies will aid in manufacturing a revised prototype with the newly designed light guide and increased sensitivity. Once the desired optical output (5-10 mW) is achieved, the HSI endoscope system can be implemented without adding to the procedure time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulmer, W
Purpose: During the past decade, the quantization of coupled/forced electromagnetic circuits with or without Ohmic resistance has been the subject of fundamental studies, since even problems of quantum electrodynamics, such as the creation of quantized electromagnetic fields, can be solved in an elegant manner. In this communication, we use these principles to describe optimization procedures in the design of klystrons, synchrotron radiation and high-energy bremsstrahlung. Methods: The starting point is the Hamiltonian of an electromagnetic circuit and its extension to coupled circuits, which allows the study of symmetries and perturbed symmetries in a very apparent way (SU2, SU3, SU4). The introduction of resistance and of forced oscillators for emission and absorption in such coupled systems provides characteristic resonance conditions, and atomic orbitals can be described in this way. The extension to virtual orbitals leads to the creation of bremsstrahlung if the incident electron (velocity v nearly c) is described by a current, which is associated with its inductance, and the virtual orbital by a charge distribution (capacitance). Coupled systems with forced oscillators can be used to drastically amplify the resonance frequencies, describing klystrons and synchrotron radiation. Results: The cross-section formula for bremsstrahlung given by the propagator method of Feynman can readily be derived. The design of klystrons and synchrotrons, including the radiation output, can be described and optimized by determining the mutual magnetic couplings between the oscillators induced by the currents. Conclusions: The presented methods of quantization of circuits including resistance provide a rather straightforward way to understand complex technical processes such as the creation of bremsstrahlung or the creation of radiation by klystrons and synchrotrons. They can be used for optimization procedures and, last but not least, for pedagogical purposes with regard to a qualified understanding of radiation physics for students.
Optimization of wearable microwave antenna with simplified electromagnetic model of the human body
NASA Astrophysics Data System (ADS)
Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir
2017-12-01
In this paper, the optimization design of a microwave wearable antenna is investigated. Reference is made to a specific antenna design, a wideband Vee antenna whose geometry is characterized by 6 parameters. These parameters were automatically adjusted with EStra, an algorithm based on an evolution strategy, to obtain impedance matching for the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band, which covers the frequency range of 2.4 GHz up to 2.5 GHz. The optimization procedure used a full-wave simulator based on the finite-difference time-domain method with a simplified human body model. The procedure accounted for the small movements of the antenna towards or away from the human body that are likely to happen during real use; the stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization yielded good impedance matching over the given range of antenna distances from the human body.
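The evolutionary search loop can be sketched with a minimal (1+1) evolution strategy. This is a generic stand-in, not the EStra algorithm itself, and the toy objective below replaces the full-wave FDTD evaluation of impedance mismatch.

```python
import random

def one_plus_one_es(objective, x0, sigma=0.1, iters=200, seed=1):
    """Minimal (1+1) evolution strategy: perturb all parameters with
    Gaussian noise and keep the offspring whenever it is no worse.
    `objective` stands in for the full-wave FDTD mismatch evaluation."""
    rng = random.Random(seed)
    x = list(x0)
    fx = objective(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = objective(cand)
        if fc <= fx:            # elitist acceptance
            x, fx = cand, fc
    return x, fx

# Toy stand-in objective: squared distance of 6 hypothetical geometry
# parameters from an (unknown to the optimizer) well-matched target
target = [0.3, -0.2, 0.1, 0.0, 0.5, -0.4]
obj = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
best, val = one_plus_one_es(obj, [0.0] * 6)
```

A practical version would evaluate the candidate geometry at several antenna-body distances and optimize the worst-case mismatch, which is how movement robustness enters the objective.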
Improving Societal Resilience Through Enhanced Reconnection Speed of Damaged Networks
NASA Astrophysics Data System (ADS)
Vodák, Rostislav; Bíl, Michal
2017-04-01
Road networks rank among the foundations of civilization. They enable people, services and goods to be transported to arbitrary places at any time. Their functioning can be impacted by various events, not only natural hazards and their combinations. This can lead to the concurrent interruption of a number of roads and even cut parts of the network off from vital services. The impact of these events can be reduced by various measures, but cannot be fully eliminated. We are aware that extreme events which break up the road network will occur regardless of the ongoing process of hazard reduction through, for example, the improvement of the structural robustness of roads. A further problem is that many of these events are unpredictable, and the costs of such improvements can easily spiral out of control. We therefore focus on the speed of the recovery process, which can be optimized so that the time needed to reconnect the damaged network is as short as possible. The result of the optimization procedure is a sequence of road links representing the routes of the repair units. The optimization problem is, however, highly nontrivial, because the number of possible routes for the repair units is too large for an exact optimal solution to be found. We consequently introduce an approach based on the Ant Colony Optimization algorithm, which is able to suggest a near-optimal solution under various constraints that can be set by the administrator of the network. We also demonstrate its results and variability with several case examples.
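A toy Ant Colony Optimization sketch for sequencing repairs conveys the mechanism: ants build candidate repair orders guided by pheromone trails, and trails along the best order found so far are reinforced. Everything here is illustrative; a real instance would replace the stand-in cost with the network-reconnection time under the administrator's constraints.

```python
import random

def aco_repair_order(weights, ants=20, iters=50, rho=0.1, seed=0):
    """Toy ACO for ordering road-link repairs. Each link has a weight
    (e.g. population reconnected by repairing it); the cost of an order is
    the weighted sum of completion positions, so high-weight links should
    be repaired early."""
    rng = random.Random(seed)
    n = len(weights)
    tau = [[1.0] * n for _ in range(n)]   # pheromone: link j at position p

    def cost(seq):
        return sum(weights[link] * (p + 1) for p, link in enumerate(seq))

    best_seq, best_cost = None, float("inf")
    for _ in range(iters):
        for _ant in range(ants):
            remaining, seq = set(range(n)), []
            for p in range(n):
                cand = list(remaining)
                probs = [tau[p][j] * weights[j] for j in cand]  # heuristic bias
                r, acc, pick = rng.random() * sum(probs), 0.0, cand[-1]
                for j, w in zip(cand, probs):
                    acc += w
                    if r <= acc:
                        pick = j
                        break
                seq.append(pick)
                remaining.discard(pick)
            c = cost(seq)
            if c < best_cost:
                best_seq, best_cost = seq, c
        # evaporate everywhere, then reinforce the best-so-far sequence
        for p in range(n):
            for j in range(n):
                tau[p][j] *= (1.0 - rho)
        for p, j in enumerate(best_seq):
            tau[p][j] += 1.0 / best_cost
    return best_seq, best_cost
```

On three links with weights 5, 1 and 3, the optimal order repairs the heaviest link first; the colony converges to that order because its trail receives the strongest reinforcement.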
Urban Forest Ecosystem Service Optimization, Tradeoffs, and Disparities
NASA Astrophysics Data System (ADS)
Bodnaruk, E.; Kroll, C. N.; Endreny, T. A.; Hirabayashi, S.; Yang, Y.
2014-12-01
Urban land area and the proportion of humanity living in cities are growing, leading to increased urban air pollution, temperature, and stormwater runoff. These changes can exacerbate respiratory and heat-related illnesses and affect ecosystem functioning. Urban trees can help mitigate these threats by removing air pollutants, mitigating urban heat island effects, and infiltrating and filtering stormwater. The urban environment is highly heterogeneous, and there is no tool to determine optimal locations to plant or protect trees. Using spatially explicit land cover, weather, and demographic data within biophysical ecosystem service models, this research expands upon the iTree urban forest tools to produce a new decision support tool (iTree-DST) that will explore the development and impacts of optimal tree planting. It will also heighten awareness of environmental justice by incorporating the Atkinson Index to quantify disparities in health risks and ecosystem services across vulnerable and susceptible populations. The study area is Baltimore City, a location whose urban forest and environmental justice concerns have been studied extensively. The iTree-DST is run at the US Census block group level and utilizes a local gradient approach to calculate the change in ecosystem services with changing tree cover across the study area. Empirical fits provide ecosystem service gradients for possible tree cover scenarios, greatly increasing the speed and efficiency of the optimization procedure. Initial results include an evaluation of the performance of the gradient method, optimal planting schemes for individual ecosystem services, and an analysis of tradeoffs and synergies between competing objectives.
Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.
Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R
2014-01-01
After a potential biological incident, the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0% to 60% and were significantly higher for nonporous surfaces than for porous surfaces. In conclusion, this study presents a swabbing procedure evaluation and a simple BSA ELISA based on commercial components, both of which are easy to perform in a laboratory with basic facilities. The data indicate that different swabbing procedures were optimal for each of the tested swab types, and that swab preference depends on the surface material to be swabbed.
Network placement optimization for large-scale distributed system
NASA Astrophysics Data System (ADS)
Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng
2018-01-01
The network geometry strongly influences the performance of a distributed system, including its coverage capability, measurement accuracy and overall cost. Network placement optimization is therefore an urgent issue in distributed measurement, especially in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are first quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy and overall cost, and a novel grid-based encoding approach for a genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search; an obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this procedure can simulate the measurement results of a given network and design the optimal placement efficiently.
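A grid-based genetic-algorithm encoding of this kind can be sketched as follows. This is a simplified, hypothetical illustration under assumed conventions (a chromosome is a tuple of grid-cell indices, one per station, and fitness is simply the number of target points covered); the paper's actual objective combines coverage, uncertainty and cost.

```python
import random

def ga_grid_placement(targets, n_stations, grid, radius,
                      pop_size=30, n_gens=60, seed=2):
    """Grid-based GA placement sketch: evolve tuples of grid-cell
    indices; fitness is the number of target points within `radius`
    of at least one station."""
    rng = random.Random(seed)

    def fitness(chrom):
        covered = 0
        for tx, ty in targets:
            if any((tx - grid[c][0]) ** 2 + (ty - grid[c][1]) ** 2
                   <= radius ** 2 for c in chrom):
                covered += 1
        return covered

    pop = [tuple(rng.randrange(len(grid)) for _ in range(n_stations))
           for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_stations) if n_stations > 1 else 0
            child = list(a[:cut] + b[cut:])  # one-point crossover
            if rng.random() < 0.3:           # mutate one station's cell
                child[rng.randrange(n_stations)] = rng.randrange(len(grid))
            children.append(tuple(child))
        pop = elite + children
    best = max(pop, key=fitness)
    return best, fitness(best)
```

Because the chromosome indexes a coarse grid, the GA performs the "global rough search"; the abstract's local detailed search would then refine the best cells at a finer resolution.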
The use of carbon dioxide angiography for renal sympathetic denervation: a technical report.
Renton, Mary; Hameed, Mohammad A; Dasgupta, Indranil; Hoey, Edward T D; Freedman, Jonathan; Ganeshan, Arul
2016-12-01
Hypertension is the leading attributable cause of cardiovascular mortality worldwide. Patients with hypertension have multiple comorbidities, including high rates of concomitant renal disease. Current pharmacological approaches are inadequate in the treatment of resistant hypertension. Renal sympathetic denervation (RDN) has been shown to effectively treat resistant hypertension. The traditional use of iodinated contrast in RDN is contraindicated in patients with significant renal insufficiency. In patients with renal impairment, carbon dioxide (CO2) can be used as an alternative contrast material for RDN. This article describes the technical aspects of RDN using CO2 angiography. Our centre is experienced in the innovative RDN procedure using CO2 angiography. We describe the protocol for CO2 angiography for RDN using a home-made CO2 delivery system and the Symplicity™ catheter device (Medtronic, Minneapolis, MN 55432, USA). CO2 angiography is an excellent alternative to iodinated contrast for RDN procedures: it is safe and effective, and RDN using CO2 angiography is an easy and feasible procedure that can be used in patients with renal insufficiency or iodinated contrast allergies. Advances in knowledge: There is a paucity of descriptive reports of CO2 angiography for RDN, and we provide details of the optimal protocol for the procedure. In particular, we describe the use of a Symplicity Spyral™ catheter (Medtronic), which has not been reported to date for use in this procedure.
NASA Astrophysics Data System (ADS)
Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.
2014-03-01
An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (the EASY software), a fast solver (block-coupled CFD) and a flexible geometry generation tool. EASY is a PCA-driven metamodel-assisted evolutionary algorithm (MAEA(PCA)) that can be used in both single-objective (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low-cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem-specific evaluation, here the solution of the Navier-Stokes equations. For an additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously, which makes it robust and fast and yields a large gain in computational cost. To optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is also necessary; the tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail, and optimization results for hydraulic machine components are shown to demonstrate the performance of the presented procedure.
A linearized theory method of constrained optimization for supersonic cruise wing design
NASA Technical Reports Server (NTRS)
Miller, D. S.; Carlson, H. W.; Middleton, W. D.
1976-01-01
A linearized theory wing design and optimization procedure which allows physical realism and practical considerations to be imposed as constraints on the optimum (least drag due to lift) solution is discussed and examples of application are presented. In addition to the usual constraints on lift and pitching moment, constraints are imposed on wing surface ordinates and wing upper surface pressure levels and gradients. The design procedure also provides the capability of including directly in the optimization process the effects of other aircraft components such as a fuselage, canards, and nacelles.
Optimum Design of High Speed Prop-Rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
1992-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop optimization process. The procedures involve the consideration of blade aeroelastic, aerodynamic performance, structural and dynamic design requirements. Further, since the design involves consideration of several different objectives, multiobjective function formulation techniques are developed.
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im
2017-02-01
The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
O'Conner-Von, Susan; Turner, Helen N
2013-12-01
The ASPMN strongly recommends that infants who are being circumcised receive optimal pain management. "If a decision for circumcision is made, procedural analgesia should be provided" (AAP, 1999, p. 691). Therefore, it is the position of the ASPMN that optimal pain management must be provided throughout the circumcision process. Furthermore, parents must be prepared for the procedure and educated about infant pain assessment. They must also be informed of the pharmacologic and integrative pain management therapies that are appropriate before, during, and after the procedure.
Final Report: Pilot Region-Based Optimization Program for Fund-Lead Sites, EPA Region III
This report describes a pilot study for a Region-based optimization program, implemented by a Regional Optimization Evaluation Team (ROET) that was conducted in U.S. EPA Region III at Fund-lead sites with pump-and-treat (P&T) systems.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
Augusto, Elisabeth F P; Moraes, Angela M; Piccoli, Rosane A M; Barral, Manuel F; Suazo, Cláudio A T; Tonso, Aldo; Pereira, Carlos A
2010-01-01
Studies of bioprocess optimization and monitoring for protein synthesis in animal cells face the challenge of expressing system performance in quantitative terms. A panel of calculated variables can fit the intended goal more or less appropriately, and each mathematical expression captures different quantitative aspects. These variables fall into two basic categories: those used to evaluate cell physiology in terms of product synthesis, for bioprocess improvement or optimization, and those used for production-unit sizing and bioprocess operation. With these perspectives, and based on our own kinetic data on S2 cell growth and metabolism, as well as on their synthesis of the transmembrane recombinant rabies virus glycoprotein (here indicated as P), we show and discuss the main characteristics of the calculated variables and their recommended use, whether mainly applied to bioprocess improvement/optimization or mainly used to define operation and design the production unit. We expect these definitions and recommendations to improve the quality of data produced in this field and lead to more standardized procedures, in turn allowing specialized readers a better and easier comprehension of scientific and technological communications. Copyright 2009 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.
Szajek, Krzysztof; Wierszycki, Marcin
2016-01-01
Dental implant design is a complex process that must consider many limitations, both biological and mechanical in nature. In earlier studies, a complete procedure for improving a two-component dental implant was proposed. However, the optimization tasks carried out required an assumption about a representative load case, which raised doubts about optimality under other load cases. This paper deals with verification of the optimal design in the context of fatigue life, and its main goal is to answer the question of whether the assumed load scenario (solely horizontal occlusal load) leads to a design that is also "safe" for oblique occlusal loads regardless of the angle from the implant axis. The verification is carried out with a series of finite element analyses for a wide spectrum of physiologically justified loads. The design-of-experiment methodology with a full factorial technique is utilized, and all computations are done in the Abaqus suite. The maximal Mises stress and normalized effective stress amplitude for various load cases are discussed and compared with the assumed "safe" limit (equivalent to a fatigue life of 5e6 cycles). The obtained results prove that the coronal-apical load component should be taken into consideration when the fatigue life of the two-component dental implant is optimized. However, its influence in the analyzed case is small and does not change the fact that a fatigue-life improvement is observed for all components within the whole range of analyzed loads.
Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras
2016-07-01
OBJECT Currently, bilateral procedures involve 2 sequential implants in each of the hemispheres. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. 
CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional stereotactic procedures.
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated by using the new flexible approximation models for the violated local constraints.
NASA Technical Reports Server (NTRS)
Hahne, David E.; Glaab, Louis J.
1999-01-01
An investigation was performed to evaluate leading- and trailing-edge flap deflections for optimal aerodynamic performance of a High-Speed Civil Transport concept during takeoff and approach-to-landing conditions. The configuration used for this study was designed by the Douglas Aircraft Company during the 1970's. A 0.1-scale model of this configuration was tested in the Langley 30- by 60-Foot Tunnel with both the original leading-edge flap system and a new leading-edge flap system, which was designed with modern computational flow analysis and optimization tools. Leading- and trailing-edge flap deflections were generated for the original and modified leading-edge flap systems with the computational flow analysis and optimization tools. Although wind tunnel data indicated improvements in aerodynamic performance for the analytically derived flap deflections for both leading-edge flap systems, perturbations of the analytically derived leading-edge flap deflections yielded significant additional improvements in aerodynamic performance. In addition to the aerodynamic performance optimization testing, stability and control data were also obtained. An evaluation of the crosswind landing capability of the aircraft configuration revealed that insufficient lateral control existed as a result of high levels of lateral stability. Deflection of the leading- and trailing-edge flaps improved the crosswind landing capability of the vehicle considerably; however, additional improvements are required.
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Recently, much research has been devoted to obtaining an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed, together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method with respect to its step-size procedure is discussed. To test the performance of each step size, we run a steepest descent procedure in a C++ program, apply it to an unconstrained optimization test problem with two variables, and compare the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each case of the problem.
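One common step-size rule of the kind the paper compares is Armijo backtracking. The sketch below (in Python for brevity, rather than the paper's C++) is a generic illustration of steepest descent with that rule, not the authors' specific code:

```python
def steepest_descent(f, grad, x0, tol=1e-8, max_iter=10_000):
    """Steepest descent with an Armijo backtracking line search:
    halve the trial step until a sufficient-decrease condition
    holds, then take the step along the negative gradient."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break                       # gradient small enough: stop
        fx = f(x)
        slope = -sum(gi * gi for gi in g)   # directional derivative
        t = 1.0                             # start with a unit step
        while True:
            trial = [xi - t * gi for xi, gi in zip(x, g)]
            # Armijo sufficient-decrease condition
            if f(trial) <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        x = trial
    return x
```

On the two-variable quadratic f(x, y) = (x - 1)^2 + 10(y + 2)^2 this converges to (1, -2) from any starting point, with the backtracking loop absorbing the poor scaling of the y direction.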
Large Scale Bacterial Colony Screening of Diversified FRET Biosensors
Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver
2015-01-01
Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878
Procedure for minimizing the cost per watt of photovoltaic systems
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces the same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
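The fractional-loss result can be checked with a toy accounting model. This sketch is an assumption-laden simplification (a fixed total cost and multiplicative, independent power losses), not the paper's derivation:

```python
def cost_per_watt(total_cost, rated_power, fractional_losses):
    """Cost per watt of a PV system whose delivered power is reduced
    by a set of independent fractional losses.  With a fixed total
    cost, a loss f raises $/W by the factor 1/(1 - f)."""
    power = rated_power
    for loss in fractional_losses:
        power *= (1.0 - loss)
    return total_cost / power

base = cost_per_watt(10_000, 5_000, [])           # no losses
with_loss = cost_per_watt(10_000, 5_000, [0.02])  # 2% loss at one step
# relative increase in $/W: 1/(1 - 0.02) - 1 ≈ 2.04%, i.e. roughly the
# same 2% for a small loss, consistent with the stated result
```

For small losses the increase 1/(1 - f) - 1 = f + f^2 + ... is approximately f itself, which is the "same fractional increase" statement to first order.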
Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A
2016-12-01
Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing the classical cloning and mutagenesis procedures and allows the generation of nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large-scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. This technology incorporates an accurate, automated and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large-scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
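The simplest form of the codon-usage optimization mentioned above is reverse-translating each amino acid to its most frequent codon for the expression host. The sketch below uses an illustrative toy frequency table (the values are assumptions, not real E. coli statistics) and is not the platform's actual design algorithm:

```python
# Toy codon-usage table: amino acid -> {codon: illustrative frequency}
CODON_USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "E": {"GAA": 0.68, "GAG": 0.32},
    "L": {"CTG": 0.47, "TTA": 0.14, "CTC": 0.10},
}

def optimize_codons(protein_seq):
    """Reverse-translate a protein sequence, choosing the most
    frequent codon for each amino acid."""
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                   for aa in protein_seq)
```

Real gene-design tools balance additional constraints (GC content, secondary structure, restriction sites), but this one-codon-per-residue rule is the core idea.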
Agustini, Deonir; Mangrich, Antonio Salvio; Bergamini, Márcio F; Marcolino-Junior, Luiz Humberto
2015-09-01
A simple and sensitive electroanalytical method was developed for the determination of nanomolar levels of Pb(II), based on the voltammetric stripping response at a carbon paste electrode modified with biochar (a special charcoal) and bismuth nanostructures (nBi-BchCPE). The proposed methodology relies on spontaneous interactions between the highly functionalized biochar surface and Pb(II) ions, followed by reduction of these ions into bismuth nanodots, which improve the anodic stripping current. The experimental procedure can be summarized in three steps: an open-circuit pre-concentration, reduction of the accumulated lead ions at the electrode surface, and a stripping step under differential pulse voltammetric conditions (DPAdSV). SEM images revealed bismuth nanodots with dimensions ranging from 20 nm to 70 nm. The effects of the main parameters related to the biochar, the bismuth and the operational conditions were examined in detail. Under the optimal conditions, the proposed sensor exhibited a linear range from 5.0 to 1000 nmol L-1 and a detection limit of 1.41 nmol L-1 for Pb(II). The optimized method was successfully applied to the determination of Pb(II) released from overglaze-decorated ceramic dishes. The results were compared with those given by inductively coupled plasma optical emission spectroscopy (ICP-OES) and agree at the 99% confidence level. Copyright © 2015. Published by Elsevier B.V.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
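The key observation that a design can be scored without taking any measurements follows from the fact that the kriging variance depends only on sample locations and the covariance model, not on the data values. The sketch below illustrates this with ordinary kriging (a simplification of the universal kriging used in the paper) and an assumed exponential covariance model:

```python
import math

def kriging_variance(samples, point, sill=1.0, rng_a=1.0):
    """Ordinary-kriging estimation variance at `point`, computed from
    sample locations only, with covariance C(h) = sill*exp(-h/rng_a).
    No measured values are needed, so candidate sampling designs can
    be compared before any field work."""
    def cov(p, q):
        return sill * math.exp(-math.dist(p, q) / rng_a)

    n = len(samples)
    # augmented ordinary-kriging system with a Lagrange multiplier
    A = [[cov(samples[i], samples[j]) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(s, point) for s in samples] + [1.0]

    # solve A x = b by Gaussian elimination with partial pivoting
    size = n + 1
    m = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, size):
            fac = m[r][col] / m[col][col]
            for c in range(col, size + 1):
                m[r][c] -= fac * m[col][c]
    x = [0.0] * size
    for r in range(size - 1, -1, -1):
        x[r] = (m[r][size] - sum(m[r][c] * x[c]
                                 for c in range(r + 1, size))) / m[r][r]
    weights, mu = x[:n], x[n]
    return sill - sum(w * bi for w, bi in zip(weights, b[:n])) - mu
```

Averaging or maximizing this variance over a domain gives exactly the kind of global efficiency index the abstract describes, and a design optimizer can then adjust the sample pattern to shrink it.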
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
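The sequential metamodel idea of refining the surrogate only near the region of interest can be sketched in one dimension. This is a hypothetical, greatly simplified illustration (a quadratic least-squares surrogate standing in for the paper's metamodels, with FE simulations replaced by a cheap test function):

```python
def seq_metamodel_minimize(f, lo, hi, n_init=4, n_infill=6):
    """1-D sequential metamodel sketch: evaluate a few designs, fit a
    quadratic surrogate by least squares, add an infill point at the
    surrogate minimizer (the region of interest), and refit."""
    def solve3(A, b):
        # Gaussian elimination with partial pivoting, 3x3 system
        m = [row[:] + [bv] for row, bv in zip(A, b)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, 3):
                fac = m[r][col] / m[col][col]
                for c in range(col, 4):
                    m[r][c] -= fac * m[col][c]
        x = [0.0] * 3
        for r in range(2, -1, -1):
            x[r] = (m[r][3] - sum(m[r][c] * x[c]
                                  for c in range(r + 1, 3))) / m[r][r]
        return x

    xs = [lo + i * (hi - lo) / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    for _ in range(n_infill):
        # normal equations for the fit y ≈ a*x^2 + b*x + c
        s = lambda k: sum(x ** k for x in xs)
        sy = lambda k: sum(y * x ** k for x, y in zip(xs, ys))
        A = [[s(4), s(3), s(2)],
             [s(3), s(2), s(1)],
             [s(2), s(1), float(len(xs))]]
        a, b, c = solve3(A, [sy(2), sy(1), sy(0)])
        # infill at the surrogate minimizer, clipped to the design space
        x_new = min(max(-b / (2 * a), lo), hi) if a > 0 else 0.5 * (lo + hi)
        xs.append(x_new)
        ys.append(f(x_new))
    i_best = min(range(len(xs)), key=lambda j: ys[j])
    return xs[i_best], ys[i_best]
```

In the robust-optimization setting of the paper, f would additionally average over noise-variable samples so that the surrogate predicts a robustness measure rather than a nominal response.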
Development of electrodes for the NASA iron/chromium
NASA Technical Reports Server (NTRS)
Swette, L.; Jalan, V.
1984-01-01
This program was directed primarily to the development of the negative (Cr3+/Cr2+) electrode for the NASA chromous/ferric Redox battery. The investigation of the effects of substrate processing and gold/lead catalyzation parameters on electrochemical performance was continued. In addition, the effects of reactant cross-mixing, acidity level, and temperature were examined for both Redox couples. Finally, the performance of optimized electrodes was tested in system hardware (1/3 square foot single cell). The major findings are discussed: (1) The recommended processing temperature for the carbon felt, as a substrate for the negative electrode, is 1650 to 1750 C, (2) The recommended gold catalyzation procedure is essentially the published NASA procedure (NASA TM-82724, Nov. 1981) based on deposition from aqueous methanol solution, with the imposition of a few controls such as temperature (25 C) and precatalyzation pH of the felt (7), (3) Experimental observations of the gold catalyzation process and subsequent electron microscopy indicate that the gold is deposited from the colloidal state, induced by contact of the solution with the carbon felt, (4) Electrodeposited lead appears to be present as a thin uniform layer over the entire surface of the carbon fibers, rather than as discrete particles, and (5) Cross-mixing of reactants (Fe-2+ in the negative electrode solution or Cr-3+ in the positive electrode solution) did not appear to produce significant interference at either electrode.
Genovese, David W; Estrada, Amara H; Maisenbacher, Herbert W; Heatwole, Bonnie A; Powell, Melanie A
2013-01-15
To compare procedure times and major and minor complication rates associated with single-chamber versus dual-chamber pacemaker implantation and with 1-lead, 2-lead, and 3-lead pacemaker implantation in dogs with clinical signs of bradyarrhythmia. Retrospective case series. 54 dogs that underwent pacemaker implantation because of clinical signs of bradyarrhythmia. Medical records of dogs that received pacemakers between July 2004 and December 2009 were reviewed for information regarding signalment, diagnosis, pacemaker implantation, pacemaker type, complications, and survival time. Analyses were performed to determine significant differences in anesthesia time, procedure time, and outcome for dogs on the basis of pacing mode and number of pacing leads. 28 of 54 (51.9%) dogs received single-chamber pacemakers and 26 (48.1%) received dual-chamber pacemakers. Mean ± SD procedural time was significantly longer for patients with dual-chamber pacemakers (133.5 ± 51.3 minutes) than for patients with single-chamber pacemakers (94.9 ± 37.0 minutes), and procedure time increased significantly as the number of leads increased (1 lead, 102.3 ± 51.1 minutes; 2 leads, 114.9 ± 24.8 minutes; 3 leads, 158.2 ± 8.5 minutes). Rates of major and minor complications were not significantly different between dogs that received single-chamber pacemakers and those that received dual-chamber pacemakers or among dogs grouped on the basis of the number of pacing leads placed. Although dual-chamber pacemaker implantation did result in increased procedural and anesthesia times, compared with single-chamber pacemaker implantation, this did not result in a higher complication rate.
Koshari, Stijn H S; Ross, Jean L; Nayak, Purnendu K; Zarraga, Isidro E; Rajagopal, Karthikan; Wagner, Norman J; Lenhoff, Abraham M
2017-02-06
Protein-stabilizer microheterogeneity is believed to influence long-term protein stability in solid-state biopharmaceutical formulations and its characterization is therefore essential for the rational design of stable formulations. However, the spatial distribution of the protein and the stabilizer in a solid-state formulation is, in general, difficult to characterize because of the lack of a functional, simple, and reliable characterization technique. We demonstrate the use of confocal fluorescence microscopy with fluorescently labeled monoclonal antibodies (mAbs) and antibody fragments (Fabs) to directly visualize three-dimensional particle morphologies and protein distributions in dried biopharmaceutical formulations, without restrictions on processing conditions or the need for extensive data analysis. While industrially relevant lyophilization procedures of a model IgG1 mAb generally lead to uniform protein-excipient distribution, the method shows that specific spray-drying conditions lead to distinct protein-excipient segregation. Therefore, this method can enable more definitive optimization of formulation conditions than has previously been possible.
Development of candidate reference materials for the measurement of lead in bone
Hetter, Katherine M.; Bellis, David J.; Geraghty, Ciaran; Todd, Andrew C.; Parsons, Patrick J.
2010-01-01
The production of modest quantities of candidate bone lead (Pb) reference materials is described, and an optimized production procedure is presented. The reference materials were developed to enable assessment of interlaboratory agreement among laboratories measuring Pb in bone, method validation, and calibration of solid sampling techniques such as laser ablation ICP-MS. Long bones obtained from Pb-dosed and undosed animals were selected to produce four different pools of a candidate powdered bone reference material. The Pb concentrations of these pools reflect both environmental and occupational exposure levels in humans. The animal bones were harvested post mortem, cleaned, defatted, and broken into pieces using the brittle fracture technique at liquid nitrogen temperature. The bone pieces were then ground in a knife mill to produce fragments of 2-mm size. These were further ground in an ultra-centrifugal mill, resulting in finely powdered bone material that was homogenized and then scooped into vials. Testing for contamination and homogeneity was performed via instrumental methods of analysis. PMID:18421443
Quantitative local analysis of nonlinear systems
NASA Astrophysics Data System (ADS)
Topcu, Ufuk
This thesis investigates quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We propose measures to quantify systems' robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations are used to translate Lyapunov-type characterizations to sum-of-squares optimization problems. These problems are typically bilinear/nonconvex (due to local analysis rather than global) and their size grows rapidly with state/uncertainty space dimension. Our approach is based on exploiting system theoretic interpretations of these optimization problems to reduce their complexity. We propose a methodology incorporating simulation data in formal proof construction, enabling more reliable and efficient search for robustness and performance certificates compared to the direct use of general purpose solvers. This technique is adapted both to region-of-attraction and reachability analysis. We extend the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent rather than parameter-dependent certificates. The conservatism is then reduced by a branch-and-bound type refinement procedure. The main thrust of these methods is their suitability for parallel computing, achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrate the proposed methods on several small/medium size examples in each chapter and apply each method to a benchmark example with an uncertain short period pitch axis model of an aircraft. Additional practical issues leading to a more rigorous basis for the proposed methodology as well as promising further research topics are also addressed.
We show that stability of linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalize an upper bound refinement procedure in local reachability/gain analysis which effectively generates non-polynomial certificates from polynomial ones. Finally, broader applicability of optimization-based tools stringently depends on the availability of scalable/hierarchial algorithms. As an initial step toward this direction, we propose a local small-gain theorem and apply to stability region analysis in the presence of unmodeled dynamics.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Theivendran, Shevanuja; Dass, Amala
2017-08-01
Ultrasmall nanomolecules (<2 nm) such as Au25(SCH2CH2Ph)18, Au38(SCH2CH2Ph)24, and Au144(SCH2CH2Ph)60 are well studied and can be prepared using established synthetic procedures. No such synthetic protocols that result in high-yield products from commercially available starting materials exist for Au36(SPh-X)24. Here, we report a synthetic procedure for the large-scale synthesis of highly stable Au36(SPh-X)24 with a yield of ∼42%. Au36(SPh-X)24 was conveniently synthesized by using tert-butylbenzenethiol (HSPh-tBu, TBBT) as the ligand, giving a more stable product with better shelf life and higher yield than previously reported for making Au36(SPh)24 from thiophenol (PhSH). The choice of thiol, solvent, and reaction conditions was modified for the optimization of the synthetic procedure. The purposes of this work are to (1) optimize the existing procedure to obtain stable product with better yield, (2) develop a scalable synthetic procedure, (3) demonstrate the superior stability of Au36(SPh-tBu)24 when compared to Au36(SPh)24, and (4) demonstrate the reproducibility and robustness of the optimized synthetic procedure.
Osztheimer, István; Szilágyi, Szabolcs; Pongor, Zsuzsanna; Zima, Endre; Molnár, Levente; Tahin, Tamás; Özcan, Emin Evren; Széplaki, Gábor; Merkely, Béla; Gellér, László
2017-06-01
Lead dislocations of pacemaker systems are reported in all centers, even high-volume ones. Repeated procedures necessitated by lead dislocations are associated with an increased risk of complications. We investigated a minimally invasive method for right atrial and ventricular lead repositioning. The minimally invasive method was applied only when passive fixation leads were implanted. During the minimally invasive procedure, a steerable catheter was advanced through the femoral vein to move the distal end of the lead to the appropriate position. Retrospective data collection was conducted in all patients treated with the minimally invasive or the conventional method at a single center between September 2006 and December 2012. Forty-five minimally invasive lead repositionings were performed, of which eight were acutely unsuccessful and nine electrodes re-dislocated after the procedure. One hundred two leads were repositioned with opening of the pocket during the same time, including the ones with unsuccessful minimally invasive repositionings. One procedure was acutely unsuccessful in this group and four re-dislocations happened. A significant difference in success rates was noted (66.6% vs. 95.1%, p = 0.001). One complication was observed during the minimally invasive lead repositionings (left ventricular lead microdislodgement). Open-pocket procedures showed different types of complications (pneumothorax, subclavian artery puncture, pericardial effusion, hematoma, fever, device-associated infection that necessitated explantation, atrial lead dislodgement while repositioning the ventricular one, deterioration of renal function). The minimally invasive method as a first alternative is safe and feasible. In those cases when it cannot be carried out successfully, the conventional method is applicable.
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study developed inhouse at the University of Maryland is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and constrained optimization code CONMIN.
The future of simulation technologies for complex cardiovascular procedures.
Cates, Christopher U; Gallagher, Anthony G
2012-09-01
Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should characterize the important performance characteristics of procedural skill that have metrics derived and defined from, and then benchmarked to experienced operators (i.e. level of proficiency). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, proximate to their performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than learners who were traditionally trained. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine and these have been used for the roll-out of interventions such as CAS in the USA and globally with cardiovascular society and industry partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology. 
If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, then it must take a proactive position in the development of metric-based simulation curricula, adopt proficiency benchmarking definitions, and commit the resources required to continue to lead this revolution in physician training.
Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo
2011-06-17
This study proposes a new approach to the optimization of the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on the elution temperature to provide a better understanding of the influence of the extraction parameters on the extraction efficiency, considering compounds with different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering as the response both peak area and the number of identified peaks. The newly proposed optimization approach provided an excellent alternative procedure for extracting analytes with quite different volatilities in the same procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
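The two-factor central composite design mentioned in the abstract can be generated mechanically: factorial corner points, axial ("star") points, and a center point. A small sketch, assuming a face-centered design (axial points on the faces) with the ranges quoted above; the function name and output layout are illustrative:

```python
from itertools import product

def central_composite(levels):
    """Build a face-centered central composite design.

    `levels` maps factor name -> (low, high); the design is assembled
    in coded units (-1, 0, +1) and returned as natural-value settings.
    """
    names = list(levels)
    factorial = list(product((-1, 1), repeat=len(names)))  # corner points
    axial = []
    for i in range(len(names)):                            # star points
        for sign in (-1, 1):
            pt = [0] * len(names)
            pt[i] = sign
            axial.append(tuple(pt))
    center = [(0,) * len(names)]

    def natural(coded):
        out = {}
        for name, c in zip(names, coded):
            low, high = levels[name]
            out[name] = (low + high) / 2 + c * (high - low) / 2
        return out

    return [natural(pt) for pt in factorial + axial + center]

# Ranges from the abstract: extraction time 10-60 min, temperature 5-60 C.
plan = central_composite({"time_min": (10, 60), "temp_C": (5, 60)})
```

For two factors this yields 4 corner, 4 axial, and 1 center run (nine settings); replicated center points would normally be added to estimate pure error.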
Stop! border ahead: Automatic detection of subthalamic exit during deep brain stimulation surgery.
Valsky, Dan; Marmor-Levin, Odeya; Deffains, Marc; Eitan, Renana; Blackwell, Kim T; Bergman, Hagai; Israel, Zvi
2017-01-01
Microelectrode recordings along preplanned trajectories are often used for accurate definition of the subthalamic nucleus (STN) borders during deep brain stimulation (DBS) surgery for Parkinson's disease. Usually, the demarcation of the STN borders is performed manually by a neurophysiologist. Exact detection of the borders is difficult, especially the transition between the STN and the substantia nigra pars reticulata. Consequently, demarcation may be inaccurate, leading to suboptimal location of the DBS lead and inadequate clinical outcomes. We present machine-learning classification procedures that use microelectrode recording power spectra and allow for real-time, high-accuracy discrimination between the STN and substantia nigra pars reticulata. A support vector machine procedure, tested on microelectrode recordings from 58 trajectories that included both the STN and the substantia nigra pars reticulata, achieved 97.6% agreement with expert human classification (evaluated by 10-fold cross-validation). We used the same data set as a training set to find the optimal parameters for a hidden Markov model using both microelectrode recording features and trajectory history to enable real-time classification of the ventral STN border (STN exit). Seventy-three additional trajectories were used to test the reliability of the learned statistical model in identifying the exit from the STN. The hidden Markov model procedure identified the STN exit with an error of 0.04 ± 0.18 mm and detection reliability (error < 1 mm) of 94%. The results indicate that robust, accurate, and automatic real-time electrophysiological detection of the ventral STN border is feasible. © 2016 International Parkinson and Movement Disorder Society.
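The hidden-Markov step can be illustrated with standard Viterbi decoding over a toy two-state model. All probabilities and the "high"/"low" power observations below are invented for illustration; the paper's actual model is trained on microelectrode-recording spectral features and trajectory history.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard log-domain Viterbi decoding for a discrete-observation HMM."""
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s] * emit_p[s][o]), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Made-up model: the STN tends to emit "high" normalized spectral power,
# the substantia nigra pars reticulata (SNr) "low".
states = ("STN", "SNr")
obs = ["high", "high", "high", "low", "low", "low"]
decoded = viterbi(
    obs, states,
    start_p={"STN": 0.9, "SNr": 0.1},
    trans_p={"STN": {"STN": 0.8, "SNr": 0.2}, "SNr": {"STN": 0.05, "SNr": 0.95}},
    emit_p={"STN": {"high": 0.9, "low": 0.1}, "SNr": {"high": 0.1, "low": 0.9}},
)
exit_index = decoded.index("SNr")  # first recording site past the ventral border
```

The low SNr-to-STN transition probability encodes the anatomical prior that, once the trajectory leaves the STN, it does not re-enter it.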
A mixed optimization method for automated design of fuselage structures.
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1972-01-01
A procedure for automating the design of transport aircraft fuselage structures has been developed and implemented in the form of an operational program. The structure is designed in two stages. First, an overall distribution of structural material is obtained by means of optimality criteria to meet strength and displacement constraints. Subsequently, the detailed design of selected rings and panels consisting of skin and stringers is performed by mathematical optimization accounting for a set of realistic design constraints. The practicality and computer efficiency of the procedure is demonstrated on cylindrical and area-ruled large transport fuselages.
NASA Astrophysics Data System (ADS)
Calderone, Luigi; Pinola, Licia; Varoli, Vincenzo
1992-04-01
The paper describes an analytical procedure to optimize the feed-forward compensation for any PWM dc/dc converter. The aim of achieving zero dc audiosusceptibility was found to be attainable for the buck, buck-boost, Cuk, and SEPIC cells; for the boost converter, however, only nonoptimal compensation is feasible. Rules for the design of PWM controllers and procedures for the evaluation of the hardware-introduced errors are discussed. A PWM controller implementing the optimal feed-forward compensation for buck-boost, Cuk, and SEPIC cells is described and fully characterized experimentally.
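For the buck cell, the zero-audiosusceptibility feed-forward law follows directly from the ideal averaged conversion ratio Vout = D·Vin: making the duty cycle inversely proportional to the input voltage cancels input ripple. A minimal sketch under an ideal lossless model; the duty-cycle limits are illustrative:

```python
def buck_feedforward_duty(v_ref, v_in, d_min=0.05, d_max=0.95):
    """Feed-forward law for an ideal buck cell: since Vout = D * Vin,
    choosing D = Vref / Vin cancels input-voltage variations exactly
    (zero dc audiosusceptibility), up to the duty-cycle limits."""
    return min(max(v_ref / v_in, d_min), d_max)

def buck_output(v_in, duty):
    # Ideal averaged model; switching losses and parasitics neglected.
    return duty * v_in

# Input varies +/-20% around 12 V; the regulated output stays at 5 V.
outputs = [buck_output(v, buck_feedforward_duty(5.0, v)) for v in (9.6, 12.0, 14.4)]
```

For the boost cell the ratio Vout = Vin / (1 - D) makes the analogous exact cancellation impossible within the duty-cycle constraint, which is consistent with the abstract's finding that only nonoptimal compensation is feasible there.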
Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.
Thermal/Structural Tailoring of Engine Blades (T/STAEBL): User's manual
NASA Astrophysics Data System (ADS)
Brown, K. W.
1994-03-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.
Neural networks for structural design - An integrated system implementation
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han
1992-01-01
The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.
Structural Tailoring of Advanced Turboprops (STAT)
NASA Technical Reports Server (NTRS)
Brown, Kenneth W.
1988-01-01
This interim report describes the progress achieved in the structural Tailoring of Advanced Turboprops (STAT) program which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. This report provides a detailed description of the input, optimization procedures, approximate analyses and refined analyses, as well as validation test cases for the STAT program. In addition, conclusions and recommendations are summarized.
Turbomachinery Airfoil Design Optimization Using Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint-handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
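A minimal DE/rand/1/bin loop conveys the algorithm's structure: mutation with a scaled difference vector, binomial crossover, and greedy one-to-one selection. The test objective below is a standard multimodal function standing in for the Navier-Stokes evaluations; the population size and control parameters are illustrative, not those of the study.

```python
import math
import random

def differential_evolution(f, bounds, pop_size=20, weight=0.8, cr=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, keep the trial point only if it improves."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + weight * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            trial_cost = f(trial)
            if trial_cost <= cost[i]:          # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Multimodal stand-in for the CFD objective (many local optima, global
# minimum 0 at the origin):
def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

The greedy selection step is what makes DE robust to local optima: a population member is only ever replaced by a point at least as good, while the difference-vector mutation keeps the search scaled to the population's current spread.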
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower-dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model is presented that accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure
NASA Technical Reports Server (NTRS)
Hussaini, M. Moin; Korte, John J.
1996-01-01
An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.
Three-dimensional aerodynamic shape optimization of supersonic delta wings
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1994-01-01
A recently developed three-dimensional aerodynamic shape optimization procedure, AeSOP3D, is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically, discrete sensitivity analysis, a fully implicit 3D Computational Fluid Dynamics (CFD) methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1.5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.
NASA Astrophysics Data System (ADS)
Ozbasaran, Hakan
Trusses have an important place amongst engineering structures due to advantages such as high structural efficiency, fast assembly, and easy maintenance. Iterative truss design procedures that require analysis of a large number of candidate structural systems, such as size, shape, and topology optimization with stochastic methods, mostly lead the engineer to establish a link between the development platform and external structural analysis software. As the number of structural analyses grows, this (probably slow-response) link may climb to the top of the list of performance issues. This paper introduces a software package for static, global member buckling, and frequency analysis of 2D and 3D trusses to overcome this problem for Mathematica users.
NASA Astrophysics Data System (ADS)
Bourgeois, E.; Bokanowski, O.; Zidani, H.; Désilles, A.
2018-06-01
The resolution of the launcher ascent trajectory problem by the so-called Hamilton-Jacobi-Bellman (HJB) approach, relying on the Dynamic Programming Principle, has been investigated. The method gives a global optimum and does not need any initialization procedure. Despite these advantages, this approach is seldom used because of the difficulties of computing the solution of the HJB equation for high-dimension problems. The present study shows that an efficient resolution can be obtained. An illustration of the method is proposed on a heavy-class launcher, for a typical GEO (Geostationary Earth Orbit) mission. This study has been performed in the frame of the Centre National d'Etudes Spatiales (CNES) Launchers Research & Technology Program.
Diode-Pumped Laser for Lung-Sparing Surgical Treatment of Malignant Pleural Mesothelioma.
Bölükbas, Servet; Biancosino, Christian; Redwan, Bassam; Eberlein, Michael
2017-06-01
Surgical resection represents one of the essential cornerstones in multimodal treatment of malignant pleural mesothelioma. In cases of tumor infiltration of the lung, lung-sacrificing procedures such as lobectomies or pneumonectomies might be necessary to achieve macroscopic complete resection. However, this increases patient morbidity and can delay the planned chemotherapy or radiotherapy. Innovative surgical techniques are therefore required to enable salvage of the lung parenchyma and optimization of surgical treatment. Here we report our first experience with a diode-pumped neodymium-doped yttrium aluminium garnet laser for parenchyma-sparing lung resection during surgery for malignant pleural mesothelioma. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Transtracheal oxygen and positive airway pressure: A salvage technique in overlap syndrome.
Biscardi, Frank Hugo; Rubio, Edmundo Raul
2014-01-01
The coexistence of sleep apnea-hypopnea syndrome (SAHS) with chronic obstructive pulmonary disease (COPD) occurs commonly. This so called overlap syndrome leads to more profound hypoxemia, hypercapnic respiratory failure, and pulmonary hypertension than each of these conditions independently. Not infrequently, these patients show profound hypoxemia, despite optimal continuous positive airway pressure (CPAP) therapy for their SAHS. We report a case where CPAP therapy with additional in-line oxygen supplementation failed to accomplish adequate oxygenation. Adding transtracheal oxygen therapy (TTOT) to CPAP therapy provided better results. We review the literature on transtracheal oxygen therapy and how this technique may play a significant role in these complicated patients with overlap syndrome, obviating the need for more invasive procedures, such as tracheostomy.
NASA Technical Reports Server (NTRS)
2000-01-01
Paul Ducheyne, a principal investigator in the microgravity materials science program and head of the University of Pennsylvania's Center for Bioactive Materials and Tissue Engineering, is leading the trio as they use simulated microgravity to determine the optimal characteristics of tiny glass particles for growing bone tissue. The result could make possible a much broader range of synthetic bone-grafting applications. Bioactive glass particles (left) with a microporous surface (right) are widely accepted as a synthetic material for periodontal procedures. Using the particles to grow three-dimensional tissue cultures may one day result in developing an improved, more rugged bone tissue that may be used to correct skeletal disorders and bone defects. The work is sponsored by NASA's Office of Biological and Physical Research.
Integer-ambiguity resolution in astronomy and geodesy
NASA Astrophysics Data System (ADS)
Lannes, A.; Prieur, J.-L.
2014-02-01
Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. Those problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper we analyse the theoretical aspects of the matter and propose new methods for solving those NLP problems. The related optimization aspects concern both the preconditioning stage, and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described in an explicit manner, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests.
Determination of Lead in Blood by Atomic Absorption Spectrophotometry
Selander, Stig; Cramér, Kim
1968-01-01
Lead in blood was determined by atomic absorption spectrophotometry, using a wet ashing procedure and a procedure in which the proteins were precipitated with trichloroacetic acid. In both methods the lead was extracted into isobutylmethylketone before measurement, using ammonium pyrrolidine dithiocarbamate as chelator. The simpler precipitation procedure was shown to give results identical with those obtained with the ashing technique. In addition, blood specimens were examined by the precipitation method and by spectral analysis, which method includes wet ashing of the samples, with good agreement. All analyses were done on blood samples from `normal' persons or from lead-exposed workers, and no additions of inorganic lead were made. The relatively simple protein precipitation technique gave accurate results and is suitable for the large-scale control of lead-exposed workers. PMID:5663425
Traveling-Wave Tube Cold-Test Circuit Optimization Using CST MICROWAVE STUDIO
NASA Technical Reports Server (NTRS)
Chevalier, Christine T.; Kory, Carol L.; Wilson, Jeffrey D.; Wintucky, Edwin G.; Dayton, James A., Jr.
2003-01-01
The internal optimizer of CST MICROWAVE STUDIO (MWS) was used along with an application-specific Visual Basic for Applications (VBA) script to develop a method to optimize traveling-wave tube (TWT) cold-test circuit performance. The optimization procedure allows simultaneous optimization of circuit specifications including on-axis interaction impedance, bandwidth or geometric limitations. The application of Microwave Studio to TWT cold-test circuit optimization is described.
Performance, optimization, and latest development of the SRI family of rotary cryocoolers
NASA Astrophysics Data System (ADS)
Dovrtel, Klemen; Megušar, Franc
2017-05-01
In this paper the SRI family of Le-tehnika rotary cryocoolers (SRI401, SRI423/SRI421 and SRI474) is presented. The Stirling coolers' cooling power ranges from 0.25 W to 0.75 W at 77 K, with an available temperature range from 60 K to 150 K, and the units are fitted to typical dewar detector sizes and power supply voltages. The DDCA performance optimization procedure is presented. The procedure includes cooler steady-state performance mapping and optimization, and cooldown optimization. The current cryogenic performance status and the reliability evaluation method and figures are presented for the existing and new units. The latest improved SRI401 demonstrated an MTTF close to 25,000 hours, and the test is still ongoing.
Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P
2018-02-20
The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine the estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city under real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush-hour or rush-hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03) and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
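The core of the scheduling logic is choosing the site that minimizes travel time rather than distance. A minimal sketch follows; the site names and travel estimates are hypothetical stand-ins for what a routing service (such as the Google Maps Directions API the study used) would return for one patient address, and no API call is reproduced here.

```python
# Choosing an IR site by estimated travel time (ETT) rather than distance (ETD).
# All numbers below are invented for illustration.
sites = {
    "Main campus": {"ett_min": 48.0, "etd_miles": 12.5},
    "Suburban":    {"ett_min": 31.0, "etd_miles": 18.2},
    "Downtown":    {"ett_min": 55.0, "etd_miles": 10.1},
}

def best_site(sites, key):
    """Return the site name minimizing the given metric."""
    return min(sites, key=lambda s: sites[s][key])

by_time = best_site(sites, "ett_min")
by_distance = best_site(sites, "etd_miles")
print(by_time, by_distance)   # the two criteria can disagree, as in the study
if by_time != by_distance:
    saved = sites[by_distance]["ett_min"] - sites[by_time]["ett_min"]
    print(f"Scheduling by ETT saves {saved:.0f} min over the nearest site")
```

With these made-up numbers the nearest site ("Downtown") is not the fastest ("Suburban"), which is exactly the discordance the study reports for 30% of cities.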
NASA Technical Reports Server (NTRS)
Holms, A. G.
1982-01-01
A previous report described a backward deletion procedure of model selection that was optimized for minimum prediction error and which used a multiparameter combination of the F-distribution and an order-statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.
2014-10-01
nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs) ... It is well known that nonlinear and non-stationary signal analysis is important and difficult.
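The sifting procedure mentioned above can be sketched briefly. This is a bare-bones illustration of one building block of empirical mode decomposition: subtract the mean of the upper and lower envelopes until the fast component remains. There is no boundary treatment or formal stopping criterion, and the test signal is synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting step: subtract the mean of the upper/lower envelopes,
    each a cubic spline through the local extrema. The splines simply
    extrapolate near the ends (no boundary handling)."""
    imax = [i for i in range(1, len(x) - 1) if x[i] >= x[i-1] and x[i] > x[i+1]]
    imin = [i for i in range(1, len(x) - 1) if x[i] <= x[i-1] and x[i] < x[i+1]]
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - 0.5 * (upper + lower)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2*np.pi*40*t) + 0.5*np.sin(2*np.pi*3*t)   # fast + slow component

h = x
for _ in range(3):            # a few sifting passes toward the first IMF
    h = sift_once(t, h)
# away from the ends, h now tracks the fast 40 Hz oscillation
```

After a few passes the slow component has been absorbed into the envelope mean, so the residual h approximates the first IMF of the synthetic signal.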
Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation
NASA Astrophysics Data System (ADS)
Durlofsky, L. J.; He, J.; Jin, L. Z.
2014-12-01
A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
NASA Astrophysics Data System (ADS)
Bai, Yanru; Chen, Keren; Mishra, Arti; Beuerman, Roger; Liu, Quan
2017-02-01
Ocular infection is a serious eye disease that could lead to blindness without prompt and proper treatment. In pathology, ocular infection is caused by microorganisms such as bacteria, fungi or viruses. The essential prerequisite for the optimal treatment of ocular infection is to identify the microorganism causing the infection early, as each type of microorganism requires a different therapeutic approach. The clinical procedure for identifying the microorganism species causing ocular infection includes Gram staining (for bacteria)/microscopy (for fungi) and the culture of corneal surface scrapings, or aqueous and vitreous smear samples taken from the surface of infected eyes. The culture procedure is labor intensive and expensive. Moreover, culturing is time consuming, usually taking a few days or even weeks. Such a long delay in diagnosis could result in the exacerbation of patients' symptoms, the missing of the optimal time frame for initiating treatment and, subsequently, a rising cost of disease management. Raman spectroscopy has been shown to be highly effective for the non-invasive qualitative identification of both fungi and bacteria. In this study, we investigate the feasibility of identifying the microorganisms of ocular infection and quantifying their concentrations using Raman spectroscopy, measuring not only Gram-negative and Gram-positive bacteria but also infected corneas. By applying a modified orthogonal projection approach, the relative concentration of each bacterial species could be quantified. Our results indicate the great potential of Raman spectroscopy as an alternative tool for the non-invasive diagnosis of ocular infection, which could play a significant role in future ophthalmology.
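The concentration-quantification step can be sketched as spectral unmixing by non-negative least squares. This is a simplified stand-in for the paper's modified orthogonal projection approach; the reference "spectra" below are synthetic Gaussian peaks, not measured Raman data.

```python
import numpy as np
from scipy.optimize import nnls

# Unmix a mixture spectrum into relative concentrations of two reference species.
wn = np.linspace(400, 1800, 700)                 # wavenumber axis (cm^-1)
peak = lambda c, w: np.exp(-((wn - c) / w) ** 2)
ref_a = peak(1002, 15) + 0.4 * peak(1450, 25)    # synthetic "species A" reference
ref_b = peak(748, 12) + 0.6 * peak(1585, 20)     # synthetic "species B" reference
R = np.column_stack([ref_a, ref_b])

true_w = np.array([0.7, 0.3])                    # ground-truth mixture weights
mix = R @ true_w + 0.01 * np.random.default_rng(1).standard_normal(wn.size)

w, _ = nnls(R, mix)                              # non-negative least squares fit
rel = w / w.sum()                                # relative concentrations
print(rel)                                       # close to [0.7, 0.3]
```

Non-negativity matters here because physical concentrations cannot be negative; an unconstrained least-squares fit can otherwise return spurious negative weights for noisy, overlapping spectra.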
Andellini, Martina; Fernandez Riesgo, Sandra; Morolli, Federica; Ritrovato, Matteo; Cosoli, Piero; Petruzzellis, Silverio; Rosso, Nicola
2017-11-03
To test the application of Business Process Management technology to manage clinical pathways, using pediatric kidney transplantation as a case study, and to identify the benefits obtained from using this technology. Using a Business Process Management platform, we implemented a specific application to manage the clinical pathway of pediatric patients, and monitored the activities of the coordinator in charge of the case management during a 6-month period (from June 2015 to November 2015) using two methodologies: the traditional procedure and the one under study. The application helped physicians and nurses to optimize the amount of time and resources devoted to management purposes. In particular, the time reduction was close to 60%. In addition, the reduction of data duplication, the integrated event management and the efficient data collection improved the quality of the service. The use of Business Process Management technology, usually applied to well-defined processes with high management costs, is an established practice in multiple environments; its use in healthcare, however, is innovative. The use of already accepted clinical pathways is known to improve outcomes. The combination of these two techniques, well established in their respective areas of application, could represent a revolution in clinical pathway management. The study has demonstrated that the use of this technology in a clinical environment, using a proper architecture and identifying a well-defined process, leads to real benefits in terms of resource optimization and quality improvement.
Munson, Mark; Lieberman, Harvey; Tserlin, Elina; Rocnik, Jennifer; Ge, Jie; Fitzgerald, Maria; Patel, Vinod; Garcia-Echeverria, Carlos
2015-08-01
Herein, we report a novel and general method, lead optimization attrition analysis (LOAA), to benchmark two distinct small-molecule lead series using a relatively unbiased, simple technique and commercially available software. We illustrate this approach with data collected during lead optimization of two independent oncology programs as a case study. Easily generated graphics and attrition curves enabled us to calibrate progress and support go/no go decisions on each program. We believe that this data-driven technique could be used broadly by medicinal chemists and management to guide strategic decisions during drug discovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of the search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins.
This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol; the right panel shows the docking accuracy using an optimized box size.
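The reported rule of thumb (box dimensions 2.9 times the ligand's radius of gyration) is straightforward to compute. A minimal sketch follows, with invented coordinates and an unweighted Rg; the paper's exact weighting convention is not assumed here.

```python
import numpy as np

def radius_of_gyration(xyz):
    """Rg of a point set (unweighted; mass-weighting omitted for simplicity)."""
    centered = xyz - xyz.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

def docking_box_size(xyz, scale=2.9):
    """Cube edge length per the reported 2.9 x Rg rule, in the xyz units."""
    return scale * radius_of_gyration(xyz)

# Fake "ligand": four atoms at the corners of a 2x2 square (illustration only).
lig = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                [0.0, 2.0, 0.0], [2.0, 2.0, 0.0]])
print(docking_box_size(lig))
```

In AutoDock Vina the search space is specified per axis (size_x, size_y, size_z in the config file), so the computed edge length would be applied to each dimension.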
Transonic rotor tip design using numerical optimization
NASA Technical Reports Server (NTRS)
Tauber, Michael E.; Langhi, Ronald G.
1985-01-01
The aerodynamic design procedure for a new blade tip suitable for operation at transonic speeds is illustrated. For the first time, three-dimensional numerical optimization was applied to rotor tip design, using a recent derivative of the ROT22 code, program R22OPT. Program R22OPT utilizes an efficient quasi-Newton optimization algorithm. Multiple design objectives were specified. The delocalization of the shock wave was to be eliminated in forward flight for an advance ratio of 0.41 and a tip Mach number of 0.92 at psi = 90 deg. Simultaneously, it was sought to reduce torque requirements while maintaining effective restoring pitching moments. Only the outer 10 percent of the blade span was modified, and the blade area was not to be reduced by more than 3 percent. The goal was to combine the advantages of both sweptback and sweptforward blade tips. A planform that featured inboard sweepback was combined with a sweptforward tip and a taper ratio of 0.5. Initially, the ROT22 code was used to find, by trial and error, a planform geometry which met the design goals. This configuration had an inboard section with a leading-edge sweep of 20 deg and a tip section swept forward at 25 deg; in addition, the airfoils were modified.
Multi-disciplinary optimization of railway wheels
NASA Astrophysics Data System (ADS)
Nielsen, J. C. O.; Fredö, C. R.
2006-06-01
A numerical procedure for multi-disciplinary optimization of railway wheels, based on Design of Experiments (DOE) methodology and automated design, is presented. The target is a wheel design that meets the requirements for fatigue strength, while minimizing the unsprung mass and rolling noise. A 3-level full factorial (3LFF) DOE is used to collect the data points required to set up Response Surface Models (RSM) relating design and response variables in the design space. Computationally efficient simulations are thereafter performed using the RSM to identify the solution that best fits the design target. A demonstration example, including four geometric design variables in a parametric finite element (FE) model, is presented. The design variables are wheel radius, web thickness, lateral offset between rim and hub, and radii at the transitions rim/web and hub/web, but more variables (including material properties) can be added if needed. To further improve the performance of the wheel design, a constrained layer damping (CLD) treatment is applied on the web. For a given load case, compared to a reference wheel design without CLD, a combination of wheel shape and damping optimization leads to the conclusion that a reduction in the wheel component of A-weighted rolling noise of 11 dB can be achieved if a simultaneous increase in wheel mass of 14 kg is accepted.
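The DOE-plus-RSM workflow described above can be sketched generically: sample a 3-level full factorial, fit a quadratic response surface, then search the cheap surrogate instead of rerunning the expensive model. The "response" below is a synthetic function of two coded variables, not wheel FE results.

```python
import numpy as np
from itertools import product

levels = (-1.0, 0.0, 1.0)
X = np.array(list(product(levels, levels)))       # 3^2 = 9 full-factorial points

def response(x1, x2):
    """Stand-in for an expensive FE evaluation (synthetic quadratic bowl)."""
    return (x1 - 0.3) ** 2 + 2 * (x2 + 0.4) ** 2 + 1.0

y = np.array([response(x1, x2) for x1, x2 in X])

def quad_features(x1, x2):
    """Full quadratic RSM basis in two variables."""
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

A = np.array([quad_features(x1, x2) for x1, x2 in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)      # fitted RSM coefficients

# Cheap surrogate evaluations on a fine grid replace further expensive runs.
g = np.linspace(-1.0, 1.0, 201)
grid = np.array([(a, b) for a in g for b in g])
pred = np.array([quad_features(a, b) for a, b in grid]) @ beta
best = grid[np.argmin(pred)]
print(best)        # minimum of the surrogate, matching the underlying response
```

Because the synthetic response happens to be exactly quadratic, the RSM reproduces it and the surrogate minimum coincides with the true one; with real FE data the surrogate is only an approximation, which is why the paper validates the selected design afterwards.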
NASA Technical Reports Server (NTRS)
Coe, P. L., Jr.; Huffman, J. K.
1979-01-01
An investigation was conducted in the Langley 7- by 10-foot tunnel to determine the influence of an optimized leading-edge deflection on the low-speed aerodynamic performance of a configuration with a low-aspect-ratio, highly swept wing. The sensitivity of the lateral stability derivative to geometric anhedral was also studied. The optimized leading-edge deflection was developed by aligning the leading edge with the incoming flow along the entire span. Owing to the spanwise variation of upwash, the resulting optimized leading edge was a smooth, continuously warped surface for which the deflection varied from 16 deg at the side of the body to 50 deg at the wing tip. For the particular configuration studied, levels of leading-edge suction on the order of 90 percent were achieved. The results of tests conducted to determine the sensitivity of the lateral stability derivative to geometric anhedral indicate values which are in reasonable agreement with estimates provided by simple vortex-lattice theories.
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. 
Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
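The allocation search described above can be sketched numerically. The sketch uses a two-stage nested model (subjects, occasions) with linear costs; the study's more general power-function costs would only change the cost expression. All variance components and unit costs are illustrative.

```python
# Find the cost-optimal allocation (n subjects, k occasions per subject)
# minimizing the variance of the exposure mean under a fixed budget.
sb2, sw2 = 4.0, 1.0          # between-subject / within-subject variance components
c_subj, c_occ = 10.0, 2.0    # cost per recruited subject / per measurement occasion
budget = 400.0

def var_of_mean(n, k):
    """Variance of the grand mean in the nested model."""
    return sb2 / n + sw2 / (n * k)

best = None
for n in range(1, 200):
    for k in range(1, 50):
        if n * c_subj + n * k * c_occ <= budget:   # linear total-cost model
            v = var_of_mean(n, k)
            if best is None or v < best[0]:
                best = (v, n, k)
print(best)
```

With this (large between-subject variance) parameterization the search reproduces the qualitative finding reported above: measure each subject on a single occasion and spend the rest of the budget recruiting as many subjects as possible.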
Southwell, Derek G; Narvid, Jared A; Martin, Alastair J; Qasim, Salman E; Starr, Philip A; Larson, Paul S
2016-01-01
Interventional magnetic resonance imaging (iMRI) allows deep brain stimulator lead placement under general anesthesia. While the accuracy of lead targeting has been described for iMRI systems utilizing 1.5-tesla magnets, a similar assessment of 3-tesla iMRI procedures has not been performed. To compare targeting accuracy, the number of lead targeting attempts, and surgical duration between procedures performed on 1.5- and 3-tesla iMRI systems. Radial targeting error, the number of targeting attempts, and procedure duration were compared between surgeries performed on 1.5- and 3-tesla iMRI systems (SmartFrame and ClearPoint systems). During the first year of operation of each system, 26 consecutive leads were implanted using the 1.5-tesla system, and 23 consecutive leads were implanted using the 3-tesla system. There was no significant difference in radial error (Mann-Whitney test, p = 0.26), number of lead placements that required multiple targeting attempts (Fisher's exact test, p = 0.59), or bilateral procedure durations between surgeries performed with the two systems (p = 0.15). Accurate DBS lead targeting can be achieved with iMRI systems utilizing either 1.5- or 3-tesla magnets. The use of a 3-tesla magnet, however, offers improved visualization of the target structures and allows comparable accuracy and efficiency of placement at the selected targets. © 2016 S. Karger AG, Basel.
Multidisciplinary design optimization - An emerging new engineering discipline
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1993-01-01
A definition of multidisciplinary design optimization (MDO) is introduced, and the functionality and relationships of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.
Time optimized path-choice in the termite hunting ant Megaponera analis.
Frank, Erik T; Hönle, Philipp O; Linsenmair, K Eduard
2018-05-10
Trail network systems among ants have received a lot of scientific attention due to their various applications in problem solving of networks. Recent studies have shown that ants select the fastest available path when facing different velocities on different substrates, rather than the shortest distance. The progress of decision-making by these ants is determined by pheromone-based maintenance of paths, which is a collective decision. However, path optimization through individual decision-making remains mostly unexplored. Here we present the first study of time-optimized path selection via individual decision-making by scout ants. Megaponera analis scouts search for termite foraging sites and lead highly organized raid columns to them. The path of the scout determines the path of the column. Through installation of artificial roads around M. analis nests we were able to influence the pathway choice of the raids. After road installation 59% of all recorded raids took place completely or partly on the road, instead of the direct, i.e. distance-optimized, path through grass from the nest to the termites. The raid velocity on the road was more than double the grass velocity, the detour thus saved 34.77±23.01% of the travel time compared to a hypothetical direct path. The pathway choice of the ants was similar to a mathematical model of least time allowing us to hypothesize the underlying mechanisms regulating the behavior. Our results highlight the importance of individual decision-making in the foraging behavior of ants and show a new procedure of pathway optimization. © 2018. Published by The Company of Biologists Ltd.
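The least-time model invoked above is structurally the same as Fermat's principle of refraction: pick the road entry point that minimizes total travel time given two substrate speeds. A small sketch with invented geometry follows; the roughly doubled road speed mirrors the observation in the abstract, but nothing else is taken from the study's data.

```python
import numpy as np

v_grass, v_road = 1.0, 2.0            # road is about twice as fast, as observed
nest = np.array([0.0, 3.0])           # nest sits 3 units off the road (the x-axis)
termites_x = 10.0                     # termite site further along the road

def travel_time(x):
    """Total time: grass leg from the nest to road entry (x, 0), then road leg."""
    grass_leg = np.hypot(x - nest[0], nest[1]) / v_grass
    road_leg = abs(termites_x - x) / v_road
    return grass_leg + road_leg

xs = np.linspace(0.0, termites_x, 100001)
times = np.array([travel_time(x) for x in xs])
x_best = xs[np.argmin(times)]
direct = np.hypot(termites_x, nest[1]) / v_grass    # straight path through grass
print(x_best, times.min(), direct)    # the detour via the road beats the direct path
```

The optimum satisfies the refraction condition sin(theta) = v_grass/v_road at the road entry (here x = sqrt(3)), and the least-time detour is clearly shorter in time, though longer in distance, than the direct path, which is the behavior the raids exhibited.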
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wunschel, David S.; Colburn, Heather A.; Fox, Alvin
2008-08-01
Detection of small quantities of agar associated with spores of Bacillus anthracis could provide key information regarding its source or growth characteristics. Agar, widely used in growth of bacteria on solid surfaces, consists primarily of repeating polysaccharide units of 3,6-anhydro-L-galactose (AGal) and galactose (Gal), with sulfated and O-methylated galactoses present as minor constituents. Two variants of the alditol acetate procedure were evaluated for detection of potential agar markers associated with spores. The first method employed a reductive hydrolysis step to stabilize labile anhydrogalactose by converting it to anhydrogalactitol. The second eliminated the reductive hydrolysis step, simplifying the procedure. Anhydrogalactitol, derived from agar, was detected using both derivatization methods followed by gas chromatography-mass spectrometry (GC-MS) analysis. However, challenges with artefactual background (reductive hydrolysis) or marker destruction (hydrolysis) led to the search for alternative sugar markers. A minor agar component, 6-O-methyl galactose (6-O-M gal), was readily detected in agar-grown but not broth-grown bacteria. Detection was optimized by the use of gas chromatography-tandem mass spectrometry (GC-MS-MS). With an appropriate choice of sugar marker and analytical procedure, detection of sugar markers for agar has considerable potential in microbial forensics.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
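The least-squares estimation step described in this abstract can be illustrated with a minimal sketch. The saturating response model, the heat flux, and all numbers below are hypothetical stand-ins rather than the authors' conduction model for the IM7-Bismaleimide composite, and a simple grid search replaces their minimization routine:

```python
import numpy as np

def temperature_model(t, k, rho_c, q=1000.0, L=0.01):
    # Hypothetical lumped 1-D response of a slab heated by a constant
    # flux q (W/m^2); NOT the authors' full conduction model.
    tau = rho_c * L**2 / k               # characteristic diffusion time
    return (q * L / k) * (t / (t + tau))  # simple saturating temperature rise

def fit_properties(t, T_meas, k_grid, rho_c_grid):
    """Least-squares fit of (k, rho_c) by exhaustive grid search."""
    best = None
    for k in k_grid:
        for rc in rho_c_grid:
            sse = np.sum((T_meas - temperature_model(t, k, rc))**2)
            if best is None or sse < best[0]:
                best = (sse, k, rc)
    return best[1], best[2]

# synthetic "measurements" generated from known properties
t = np.linspace(1.0, 600.0, 200)
T_true = temperature_model(t, k=0.5, rho_c=1.5e6)
k_hat, rc_hat = fit_properties(t, T_true,
                               np.linspace(0.1, 1.0, 19),
                               np.linspace(0.5e6, 2.5e6, 21))
```

With noise-free synthetic data the grid search recovers the generating parameters exactly; in practice a gradient-based minimizer over a noisy residual would replace the grid.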
Optimization of tocotrienols as antiproliferative and antimigratory leads.
Behery, Fathy A; Akl, Mohamed R; Ananthula, Suryatheja; Parajuli, Parash; Sylvester, Paul W; El Sayed, Khalid A
2013-01-01
The vitamin E family members γ- and δ-tocotrienol (2 and 3, respectively) are known natural products with documented anticancer activities. Redox-silent structural modifications of 2 and 3, such as esterification, etherification and carbamoylation, significantly enhanced their anticancer activities. However, hit-to-lead optimization of tocotrienols and their analogs had yet to be reported at the outset of the project described herein. Subjecting the chroman ring of 2 and 3 to electrophilic substitution reactions, namely the Mannich and Lederer-Manasse procedures, afforded 42 new products. These included the 3,4-dihydro-1,3-oxazines 3-29 and 35-44, the Mannich bases 30-31, and the hydroxymethyl analogs 32-34. Of these, the δ-tocotrienol analogs 8, 11, 18, 24, 25, 27, and 40 inhibited the proliferation of the highly metastatic +SA mammary epithelial cancer cell line, with IC(50) values in the nanomolar (nM) range. In the NCI's 60 human tumor cell line panel, 8, 17, 38, and 40 showed antiproliferative activity, with nM GI(50) values. The δ-tocotrienol analogs 10 and 38 inhibited the migration of the highly metastatic human breast cancer cell line MDA-MB-231, with IC(50) values of 1.3 and 1.5 μM, respectively, in the wound-healing assay. A dose of 0.5 mg/day for 14 days of one of the active analogs, 30, significantly slowed the growth of +SA mammary tumors in the syngeneic BALB/c mouse model, compared to the vehicle- and parent γ-tocotrienol-treated control groups. Electrophilic substitution reactions promoted tocotrienols to the lead level and can enable their future use to control metastatic breast malignancies. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Optimal design of a hybrid MR brake for haptic wrist application
NASA Astrophysics Data System (ADS)
Nguyen, Quoc Hung; Nguyen, Phuong Bac; Choi, Seung-Bok
2011-03-01
In this work, a new configuration of a magnetorheological (MR) brake is proposed and an optimal design of the proposed MR brake for haptic wrist application is performed considering the required braking torque, the zero-field friction torque, and the size and mass of the brake. The proposed MR brake configuration is a combination of disc-type and drum-type, which is referred to as a hybrid configuration in this study. After the MR brake with the hybrid configuration is proposed, the braking torque of the brake is analyzed based on the Bingham rheological model of the MR fluid. The zero-field friction torque of the MR brake is also obtained. An optimization procedure based on finite element analysis integrated with an optimization tool is developed for the MR brake. The purpose of the optimal design is to find the optimal geometric dimensions of the MR brake structure that can produce the required braking torque and minimize the uncontrollable torque (passive torque) of the haptic wrist. Based on the developed optimization procedure, an optimal solution for the proposed MR brake is achieved. The proposed optimized hybrid brake is then compared with conventional types of MR brake and its working performance is discussed.
Rebecchi, Stefano; Pinelli, Davide; Zanaroli, Giulio; Fava, Fabio; Frascari, Dario
2018-01-01
2,3-Butanediol (BD) is a widely used, fossil-based platform chemical. The yield and productivity of bio-based BD fermentative production must be increased and cheaper substrates need to be identified to make bio-based BD production more competitive. As BD bioproduction occurs under microaerobic conditions, fine tuning and control of the oxygen transfer rate (OTR) is crucial to maximize BD yield and productivity. Very few studies on BD bioproduction have focused on the use of non-pathogenic microorganisms and of byproducts as substrate. The goal of this work was to optimize BD bioproduction by the non-pathogenic strain Bacillus licheniformis ATCC9789 by (i) identifying the ranges of volumetric and biomass-specific OTR that maximize BD yield and productivity using standard sugar and protein sources, and (ii) performing a preliminary evaluation of the variation in process performance and cost resulting from the replacement of glucose with molasses, and of beef extract/peptone with chicken meat and bone meal, a byproduct of the meat production industry. OTR optimization with an expensive, standard medium containing glucose, beef extract and peptone revealed that OTRs in the 7-15 mmol/L/h range led to an optimal BD yield (0.43 ± 0.03 g/g) and productivity (0.91 ± 0.05 g/L/h). The corresponding optimal range of biomass-specific OTR was 1.4-7.9 [Formula: see text], whereas the respiratory quotient ranged from 1.8 to 2.5. The switch to an agro-industrial byproduct-based medium containing chicken meat and bone meal and molasses led to a 50% decrease in both BD yield and productivity. A preliminary economic analysis indicated that the use of the byproduct-based medium can nevertheless reduce the BD production cost by about 45%. A procedure for OTR optimization was developed and implemented, leading to the identification of a range of biomass-specific OTR and respiratory quotient to be used for the scale-up and control of BD bioproduction by Bacillus licheniformis.
The switch to a byproduct-based medium led to a relevant decrease in BD production cost. Further research is needed to optimize the process of BD bioproduction from the tested byproduct-based medium.
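The volumetric and biomass-specific OTR quantities tuned in this study follow standard two-film definitions; the sketch below applies them with illustrative numbers (the kLa, oxygen concentrations, and biomass value are hypothetical, not measurements from the study):

```python
def volumetric_otr(kla_per_h, c_star_mmol_l, c_mmol_l):
    # OTR = kLa * (C* - C): transfer coefficient times the oxygen
    # saturation deficit, in mmol O2 / L / h
    return kla_per_h * (c_star_mmol_l - c_mmol_l)

def specific_otr(otr_mmol_l_h, biomass_g_l):
    # biomass-specific OTR: mmol O2 per gram of cells per hour
    return otr_mmol_l_h / biomass_g_l

# illustrative values: kLa = 100 1/h, C* = 0.21 mmol/L, C = 0.07 mmol/L
otr = volumetric_otr(100.0, 0.21, 0.07)   # lands in the optimal 7-15 mmol/L/h window
q_o2 = specific_otr(otr, 4.0)             # for an assumed 4 g/L of biomass
```

In a fermenter, kLa would be adjusted through stirring speed and aeration rate to hold the OTR inside the optimal range identified above.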
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
Baytak, Sitki; Türker, A Rehber
2006-02-28
Lead and nickel were preconcentrated as their ethylenediaminetetraacetic acid (EDTA) complexes from aqueous sample solutions using a column containing Ambersorb-572 and determined by flame atomic absorption spectrometry (FAAS). The pH, amount of solid phase, elution solution and flow rate of the sample solution were optimized in order to obtain quantitative recovery of the analytes. The effect of interfering ions on the recovery of the analytes has also been investigated. The recoveries of Pb and Ni under the optimum conditions were 99 +/- 2 and 97 +/- 3%, respectively, at the 95% confidence level. Seventy-five-fold (using 750 mL of sample solution and 10 mL of eluent) and 50-fold (using 500 mL of sample solution and 10 mL of eluent) preconcentration was obtained for Pb and Ni, respectively. The time of analysis is about 4.5 h (for an enrichment factor of 75). By applying these enrichment factors, the analytical detection limits of Pb and Ni were found to be 3.65 and 1.42 ng mL(-1), respectively. The capacity of the sorbent was found to be 0.17 and 0.21 mmol g(-1) for Pb and Ni, respectively. The interferences of some cations usually present in water samples, such as Mn2+, Co2+, Fe3+, Al3+, Zn2+, Cd2+, Ca2+, Mg2+, K+ and Na+, were also studied. This procedure was applied to the determination of lead and nickel in parsley, green onion, sea water and waste water samples. The accuracy of the procedure was checked by determining Pb and Ni in a standard reference tea leaf sample (GBW-07605). The results demonstrated good agreement with the certified values.
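The 75-fold and 50-fold enrichment factors quoted in this abstract are simply sample-to-eluent volume ratios, scaled by the fractional recovery (assumed quantitative here); a one-line sketch:

```python
def enrichment_factor(sample_ml, eluent_ml, recovery=1.0):
    # Preconcentration factor: ratio of sample volume to eluent volume,
    # scaled by fractional recovery (1.0 = fully quantitative).
    return recovery * sample_ml / eluent_ml

ef_pb = enrichment_factor(750, 10)   # lead: 75-fold
ef_ni = enrichment_factor(500, 10)   # nickel: 50-fold
```

Dividing an instrument's raw detection limit by the enrichment factor gives the improved analytical detection limit reported for the preconcentrated samples.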
ERIC Educational Resources Information Center
Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.
2017-01-01
A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…
Schumann, Marcel; Armen, Roger S
2013-05-30
Molecular docking of small-molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
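The paper's graph-based rotamer optimization is not reproduced here; as a hedged illustration of the same kind of assignment problem, the sketch below finds the exact minimal-energy rotamer assignment on a chain-structured interaction graph by dynamic programming (the self- and pair-energies are toy numbers, and a real receptor's interaction graph is generally not a chain):

```python
def best_rotamers(self_E, pair_E):
    """Minimal-energy rotamer assignment on a chain of residues via
    dynamic programming. self_E[i][r]: self-energy of rotamer r at
    residue i; pair_E[i][r][s]: interaction of residue i (rotamer r)
    with residue i+1 (rotamer s)."""
    n = len(self_E)
    cost = [list(self_E[0])]  # cost[i][s]: best energy of residues 0..i ending in s
    back = []                 # back-pointers for the traceback
    for i in range(1, n):
        row, ptr = [], []
        for s in range(len(self_E[i])):
            cands = [cost[i - 1][r] + pair_E[i - 1][r][s]
                     for r in range(len(self_E[i - 1]))]
            r_best = min(range(len(cands)), key=cands.__getitem__)
            row.append(cands[r_best] + self_E[i][s])
            ptr.append(r_best)
        cost.append(row)
        back.append(ptr)
    s = min(range(len(cost[-1])), key=cost[-1].__getitem__)
    energy = cost[-1][s]
    assignment = [s]
    for ptr in reversed(back):  # walk the back-pointers to recover the path
        s = ptr[s]
        assignment.append(s)
    return assignment[::-1], energy

# toy example: two residues, two rotamers each; the cross pairing is cheap
self_E = [[0.0, 2.0], [2.0, 0.0]]
pair_E = [[[5.0, 0.0], [0.0, 5.0]]]
assignment, energy = best_rotamers(self_E, pair_E)
```

On general (loopy) graphs this exact recursion no longer applies, which is why fast approximate graph algorithms such as the one in the paper are needed.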
Optimization of reinforced concrete slabs
NASA Technical Reports Server (NTRS)
Ferritto, J. M.
1979-01-01
Reinforced concrete cells composed of concrete slabs and used to limit the effects of accidental explosions during hazardous explosives operations are analyzed. An automated design procedure which considers the dynamic nonlinear behavior of the reinforced concrete of arbitrary geometrical and structural configuration subjected to dynamic pressure loading is discussed. The optimum design of the slab is examined using an interior penalty function. The optimization procedure is presented and the results are discussed and compared with finite element analysis.
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes on the CDC 6600 computer.
An integrated optimum design approach for high speed prop-rotors including acoustic constraints
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Wells, Valana; Mccarthy, Thomas; Han, Arris
1993-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed-loop, multilevel-decomposition optimization process. The procedures involve the consideration of blade aeroelastic and aerodynamic performance, structural-dynamic design requirements, and acoustics. Further, since the design involves consideration of several different objective functions, multiobjective function formulation techniques are developed.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
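The Pareto-optimality test at the heart of such a multi-objective GA can be sketched as a non-dominated filter (minimization in every objective is assumed; the binning selection and gene-space transformation steps are not reproduced here):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    minimizing every objective: a point survives unless some other point
    is at least as good everywhere and strictly better somewhere."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# toy two-objective population: (3,4) and (5,5) are dominated by (2,2)
pts = [(1, 5), (2, 2), (3, 4), (4, 1), (5, 5)]
front = pareto_front(pts)
```

Within a GA, this filter is applied each generation to rank individuals before selection; the quadratic cost is acceptable for typical population sizes.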
Cryogenic Tank Structure Sizing With Structural Optimization Method
NASA Technical Reports Server (NTRS)
Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.
2001-01-01
Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.
Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian
2016-10-24
Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy, and yet with lower overall computational time, as compared to using time meshes with equidistant spacing.
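The refinement loop described in this abstract (estimate a local error on each subinterval, bisect where it exceeds the threshold, repeat) can be sketched as follows; the midpoint-interpolation error estimate below is an illustrative stand-in for the paper's local-error measure on the optimal control solution:

```python
import math

def refine_mesh(f, a, b, tol, n0=5, max_rounds=30):
    """Adaptive time-mesh refinement sketch: on each subinterval, estimate
    a local error (here, deviation of f from its linear interpolant at the
    midpoint) and bisect every subinterval whose error exceeds tol."""
    mesh = [a + i * (b - a) / n0 for i in range(n0 + 1)]
    for _ in range(max_rounds):
        new_mesh, refined = [mesh[0]], False
        for lo, hi in zip(mesh, mesh[1:]):
            mid = 0.5 * (lo + hi)
            err = abs(f(mid) - 0.5 * (f(lo) + f(hi)))  # local error estimate
            if err > tol:
                new_mesh.append(mid)   # bisect this subinterval
                refined = True
            new_mesh.append(hi)
        mesh = new_mesh
        if not refined:                # every subinterval meets the tolerance
            break
    return mesh

mesh = refine_mesh(math.sin, 0.0, 2 * math.pi, tol=1e-3)
```

The resulting mesh is non-uniform: subintervals where the function is nearly linear stay coarse, while high-curvature regions are bisected repeatedly, which is exactly the economy the paper exploits.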
Optimization of the parameters for intrastromal refractive surgery with ultrashort laser pulses
NASA Astrophysics Data System (ADS)
Heisterkamp, Alexander; Ripken, Tammo; Lubatschowski, Holger; Welling, Herbert; Dommer, Wolfgang; Luetkefels, Elke; Mamom, Thanongsak; Ertmer, Wolfgang
2001-06-01
Focussing femtosecond laser pulses into a transparent medium, such as corneal tissue, leads to optical breakdown, generation of a micro-plasma and, thus, a cutting effect inside the tissue. To prove the potential of fs lasers in refractive surgery, three-dimensional cutting within the corneal stroma was evaluated. With the use of ultrashort laser pulses within the LASIK procedure (laser in situ keratomileusis), possible complications in handling of a mechanical knife, the microkeratome, can be reduced by using the treatment laser as the keratome itself. To study wound-healing effects, animal studies were carried out on rabbit specimens. The surgical outcome was analyzed by means of histological sections, as well as light and scanning electron microscopy. Dependencies on the dispersion caused by the focussing optics were evaluated and optimized. Thus, pulse energies well below 1 μJ were sufficient to perform the intrastromal cuts. The laser pulses, with a duration of 180 fs and energies of 0.5-100 μJ, were provided by a mode-locked, frequency-doubled erbium fiber laser with subsequent chirped pulse amplification in a titanium:sapphire amplifier at up to 3 kHz.
Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David
2017-11-15
Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked by using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The obtained results prove the suitability of factor analysis for optimizing the composition with respect to previously manufactured batches, and thus the use of meta-analysis under industrial conditions is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
Ciraj-Bjelac, Olivera; Carinou, Eleftheria; Ferrari, Paolo; Gingaume, Merce; Merce, Marta Sans; O'Connor, Una
2016-11-01
Occupational exposure from interventional x-ray procedures is one of the areas in which increased eye lens exposure may occur. Accurate dosimetry is an important element to investigate the correlation of observed radiation effects with radiation dose, to verify the compliance with regulatory dose limits, and to optimize radiation protection practice. The objective of this work is to review eye lens dose levels in clinical practice that may occur from the use of ionizing radiation. The use of a dedicated eye lens dosimeter is the recommended methodology; however, in practice it cannot always be easily implemented. Alternatively, the eye lens dose could be assessed from measurements of other dosimetric quantities or other indirect parameters, such as patient dose. The practical implementation of monitoring eye lens doses and the use of adequate protective equipment still remains a challenge. The use of lead glasses with a good fit to the face, appropriate lateral coverage, and/or ceiling-suspended screens is recommended in workplaces with potential high eye lens doses. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
A Framework for Reproducible Latent Fingerprint Enhancements.
Carasso, Alfred S
2014-01-01
Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
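The FFT-based fractional diffusion step can be sketched in one dimension (the paper applies a 2-D version to images, and the α and t values below are illustrative): the signal's Fourier transform is damped by the Lévy stable factor exp(-t|ω|^α) and inverted.

```python
import numpy as np

def levy_smooth(signal, t, alpha=1.0):
    """One step of Levy-stable fractional diffusion via the FFT: damp each
    frequency by exp(-t * |w|**alpha). alpha=2 is ordinary Gaussian
    diffusion; 0 < alpha < 2 gives the heavier-tailed Levy kernels.
    1-D sketch of the 2-D image procedure described in the abstract."""
    w = 2 * np.pi * np.fft.fftfreq(signal.size)
    damped = np.fft.fft(signal) * np.exp(-t * np.abs(w) ** alpha)
    return np.real(np.fft.ifft(damped))

# noisy test signal; increasing t gives the progressively smoother suite
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
slightly = levy_smooth(x, t=0.5)
heavily  = levy_smooth(x, t=5.0)
```

Sweeping t from 0 upward produces exactly the kind of progressively smoother image suite the abstract proposes storing as an audit trail.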
Scansen, Brian A; Kent, Agnieszka M; Cheatham, Sharon L; Cheatham, John P; Cheatham, John D
2014-09-01
Two dogs with severe dysplastic pulmonary valve stenosis and right-to-left shunting defects (patent foramen ovale, perimembranous ventricular septal defect) underwent palliative stenting of the right ventricular outflow tract and pulmonary valve annulus using balloon-expandable stents. One dog received 2 overlapping bare metal stents placed 7 months apart; the other received a single covered stent. Both procedures were considered technically successful, with a reduction in the transpulmonary valve pressure gradient from 202 to 90 mmHg in 1 dog and from 168 to 95 mmHg in the other. Clinical signs of exercise intolerance and syncope were temporarily resolved in both dogs. However, progressive right ventricular concentric hypertrophy, recurrent stenosis, and erythrocytosis were observed over the subsequent 6 months, leading to poor long-term outcomes. Stenting of the right ventricular outflow tract is feasible in dogs with severe dysplastic pulmonary valve stenosis, though further study and optimization of the procedure are required. Copyright © 2014 Elsevier B.V. All rights reserved.
Inverse Modelling to Obtain Head Movement Controller Signal
NASA Technical Reports Server (NTRS)
Kim, W. S.; Lee, S. H.; Hannaford, B.; Stark, L.
1984-01-01
Experimentally obtained dynamics of time-optimal, horizontal head rotations have previously been simulated by a sixth-order, nonlinear model driven by rectangular control signals. Electromyography (EMG) recordings have aspects which differ in detail from the theoretical rectangular pulsed control signal. Control signals for time-optimal as well as sub-optimal horizontal head rotations were obtained by means of an inverse modelling procedure. With experimentally measured dynamical data serving as the input, this procedure inverts the model to produce the neurological control signals driving muscles and plant. The relationships between these controller signals and EMG records should contribute to the understanding of the neurological control of movements.
Noninvasive, automatic optimization strategy in cardiac resynchronization therapy.
Reumann, Matthias; Osswald, Brigitte; Doessel, Olaf
2007-07-01
Optimization of cardiac resynchronization therapy (CRT) is still unsolved. It has been shown that optimal electrode position, atrioventricular (AV) and interventricular (VV) delays improve the success of CRT and reduce the number of non-responders. However, no automatic, noninvasive optimization strategy exists to date. Cardiac resynchronization therapy was simulated on the Visible Man and a patient data-set including fiber orientation and ventricular heterogeneity. A cellular automaton was used for fast computation of ventricular excitation. An AV block and a left bundle branch block were simulated with 100%, 80% and 60% interventricular conduction velocity. A right apical and 12 left ventricular lead positions were set. Sequential optimization and optimization with the downhill simplex algorithm (DSA) were carried out. The minimal error between isochrones of the physiologic excitation and the therapy was computed automatically and leads to an optimal lead position and timing. Up to 1512 simulations were carried out per pathology per patient. One simulation took 4 minutes on an Apple Macintosh 2 GHz PowerPC G5. For each electrode pair an optimal pacemaker delay was found. The DSA reduced the number of simulations by an order of magnitude, and the AV-delay and VV-delay were determined with a much higher resolution. The findings are well comparable with clinical studies. The presented computer model of CRT automatically evaluates an optimal lead position, AV-delay and VV-delay, which can be used to noninvasively plan an optimal therapy for an individual patient. The application of the DSA reduces the simulation time so that the strategy is suitable for pre-operative planning in clinical routine. Future work will focus on clinical evaluation of the computer models and integration of patient data for individualized therapy planning and optimization.
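The downhill simplex (Nelder-Mead) search over pacing delays can be sketched with `scipy.optimize.minimize`. The error surface below is a hypothetical smooth stand-in for the isochrone mismatch, not the paper's cellular-automaton model; the point is that the simplex needs only error values, no gradients, and far fewer evaluations than the 1512-run sequential sweep:

```python
from scipy.optimize import minimize

# Hypothetical smooth stand-in for the isochrone-mismatch error as a
# function of the (AV, VV) pacing delays in milliseconds.
def isochrone_error(delays):
    av, vv = delays
    return (av - 120.0) ** 2 / 400.0 + (vv - 20.0) ** 2 / 100.0 + 5.0

# Downhill simplex: derivative-free, pairs well with a black-box simulator.
res = minimize(isochrone_error, x0=[160.0, 0.0], method="Nelder-Mead",
               options={"xatol": 0.1, "fatol": 1e-3})
```

`res.nfev` reports the number of simulator calls, which is typically orders of magnitude below an exhaustive delay sweep.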
Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers
NASA Technical Reports Server (NTRS)
Branner, G. R.; Chan, S.-P.
1975-01-01
This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
Heifetz, Alexander; Southey, Michelle; Morao, Inaki; Townsend-Nicholson, Andrea; Bodkin, Mike J
2018-01-01
GPCR modeling approaches are widely used in the hit-to-lead (H2L) and lead optimization (LO) stages of drug discovery. The aims of these modeling approaches are to predict the 3D structures of the receptor-ligand complexes, to explore the key interactions between the receptor and the ligand and to utilize these insights in the design of new molecules with improved binding, selectivity or other pharmacological properties. In this book chapter, we present a brief survey of key computational approaches integrated with hierarchical GPCR modeling protocol (HGMP) used in hit-to-lead (H2L) and in lead optimization (LO) stages of structure-based drug discovery (SBDD). We outline the differences in modeling strategies used in H2L and LO of SBDD and illustrate how these tools have been applied in three drug discovery projects.
MARS Gravity-Assist to Improve Missions towards Main-Belt Asteroids
NASA Astrophysics Data System (ADS)
Casalino, Lorenzo; Colasurdo, Guido
Main-belt asteroids are one of the keys to the investigation of the processes that led to the formation of the solar system. Solar electric propulsion (SEP) with ion thrusters is a mature technology for the exploration of the solar system. NASA is currently planning the DAWN mission towards two asteroids of the main belt, with rendezvous with Vesta in 2010 and Ceres in 2014. A mission to an asteroid of the main belt requires a large velocity increment (ΔV), and the use of high-specific-impulse thrusters, such as ion thrusters, provides a large improvement of the payload and, consequently, of the scientific return of the mission. The optimization of this kind of trajectory is a non-trivial task, since many local optima exist and performance can be improved by increasing the trip-time and the number of revolutions around the Sun, in order to use the propellant only in the most favorable positions (namely, perihelia, aphelia and nodes) along the trajectory. Mars is midway between the Earth and the main belt; even though its gravity is quite small, a gravity assist from Mars can remarkably improve the trajectory performance and is considered in this paper. The authors use an indirect optimization procedure based on the theory of optimal control. The motion inside the Earth's and Mars' spheres of influence is neglected; the equations of motion are therefore integrated only in the heliocentric reference frame, whereas the flyby is treated as a discontinuity of the spacecraft's velocity. The paper analyzes trajectories which exploit chemical propulsion to escape from the Earth; a variable-power, constant-specific-impulse propulsion system is assumed. The optimization procedure provides departure, flyby and arrival dates, the hyperbolic excess velocity on leaving the Earth's sphere of influence, which must be provided by the chemical propulsion system, and the mass at rendezvous, when the trip time is assigned.
As far as the thrust magnitude is concerned, either full-thrust arcs or coast arcs are required, and the procedure provides the times to switch the engine on and off. Mars' gravity is low, and the spacecraft would pass too close to (or even inside) the planet's surface to obtain a larger velocity change. A comparison of direct flight and Mars-gravity-assist trajectories is carried out in the paper. The characteristics and theoretical performance of the optimal trajectories are determined as a function of the trip time. Numerical results show that in many cases the energy and inclination increment, which is provided by Mars' gravity, can significantly reduce the propellant requirements and increase the spacecraft mass at rendezvous.
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
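A minimal sketch of the stress-ratio form of an optimality-criteria resizing algorithm, assuming a statically determinate truss so that member forces do not change with the areas (the frames in the paper are more general, with displacement and stiffness constraints as well):

```python
import numpy as np

def stress_ratio_resize(areas, forces, allowable, iters=10):
    """Stress-ratio optimality-criteria update: scale each member area
    by the ratio of its working stress to the allowable stress, which
    drives the design towards the fully stressed optimum."""
    areas = np.asarray(areas, dtype=float)
    for _ in range(iters):
        stress = forces / areas            # sigma = N / A per member
        areas = areas * stress / allowable
    return areas

forces = np.array([1000.0, 500.0, 250.0])  # member axial forces (N), invented
areas = stress_ratio_resize(np.ones(3), forces, allowable=100.0)  # mm^2
```

For a determinate structure the update converges in one pass to the fully stressed areas N/σ_allow; indeterminate structures require re-analysis of the forces at every iteration.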
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on in-house object-oriented optimization tool.
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies: noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure were studied using various test conditions, combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
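An adaptive-step multidirectional (compass) pattern search can be sketched as follows; the "listening discomfort" surface is a hypothetical analytic stand-in for the paired-comparison judgments, and the shrink rule is the simplest possible adaptive step-size policy:

```python
import numpy as np

def adaptive_pattern_search(f, x0, step=2.0, shrink=0.5, min_step=1e-3):
    """Multidirectional (compass) pattern search with an adaptive step:
    poll +/- along each coordinate, accept improvements greedily, and
    halve the step whenever a full poll round fails."""
    x = np.asarray(x0, dtype=float)
    fx, evals = f(x), 1
    while step > min_step:
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink                 # adaptive step-size reduction
    return x, fx, evals

# Hypothetical "listening discomfort" over two processing gains
discomfort = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2
x, fx, evals = adaptive_pattern_search(discomfort, [0.0, 0.0])
```

The trade-off the abstract reports is visible in the two knobs: a larger `min_step` stops earlier (worse convergence), while a larger initial `step` produces clearly distinguishable comparisons early on.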
Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
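The identifiability notion used here — a key variable is computable from the measured ones through the linear mass balances — can be checked with a null-space test, sketched below for small balance matrices. `A_u` collects the columns of the unmeasured variables; the example matrices are hypothetical, not taken from the WWTP case study:

```python
import numpy as np

def identifiable(A_u, j, tol=1e-10):
    """Unmeasured variable j is identifiable from the linear balances
    A_m x_m + A_u x_u = b iff its coordinate vanishes on the whole
    null space of A_u, so every consistent solution agrees on x_u[j]."""
    _, s, vt = np.linalg.svd(A_u)
    rank = int(np.sum(s > tol))
    null = vt[rank:]                      # orthonormal basis of null(A_u)
    return null.shape[0] == 0 or bool(np.all(np.abs(null[:, j]) < tol))

# Chain of two balances over three unmeasured flows: only differences
# are constrained, so no single flow is identifiable.
A1 = np.array([[1.0, -1.0, 0.0],
               [0.0, 1.0, -1.0]])
# Adding a third independent balance makes A_u full column rank,
# which is the effect of an additional measurement campaign.
A2 = np.array([[1.0, -1.0, 0.0],
               [0.0, 1.0, -1.0],
               [1.0, 0.0, 1.0]])
```

This is the mechanism behind the paper's observation that an added measurement can only help identify key variables appearing in the same group of overlapping balances.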
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency,[Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e. ε →0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
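A minimal sketch of the Lancaster combined p-value procedure for independent tests (the correlated version used in the paper requires an additional adjustment not shown here). With all weights equal to 2 it reduces exactly to Fisher's method:

```python
from scipy.stats import chi2

def lancaster(pvals, weights):
    """Lancaster's combined p-value: send each p through the inverse
    upper-tail chi-square CDF with its own weight (degrees of freedom),
    sum, and refer the total to a chi-square with df equal to the sum
    of the weights. Assumes independent tests."""
    t = sum(chi2.isf(p, df=w) for p, w in zip(pvals, weights))
    return chi2.sf(t, df=sum(weights))

p_comb = lancaster([0.01, 0.04, 0.30], weights=[2, 2, 2])   # = Fisher here
```

Unequal weights let informative genes (e.g. by size or prior evidence) contribute more to the pathway-level statistic, which is where the Bahadur-efficiency argument of the paper applies.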
ERIC Educational Resources Information Center
Yamashita, Shuichi; Kashiwaguma, Yasuyuki; Hayashi, Hideko; Pietzner, Verena
2017-01-01
In science classes, students usually learn about the law of definite proportions by the oxidation of copper. However, common procedures usually do not lead to proper results. This leads to confusion among the students because their experimental results do not fit to the theoretical values. Therefore, we invented a new procedure for this experiment…
Structural tailoring of engine blades (STAEBL)
NASA Technical Reports Server (NTRS)
Platt, C. E.; Pratt, T. K.; Brown, K. W.
1982-01-01
A mathematical optimization procedure was developed for the structural tailoring of engine blades and was used to structurally tailor two engine fan blades constructed of composite materials without midspan shrouds. The first was a solid blade made from superhybrid composites, and the second was a hollow blade with metal matrix composite inlays. Three major computerized functions were needed to complete the procedure: approximate analysis with the established input variables, optimization of an objective function, and refined analysis for design verification.
2010-09-01
matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column... problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the... iterative procedure to find an appropriate subset of parameters that produce an optimal solution, commonly known as forward selection. Then, the
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Papadopoulou, Maria P; Nikolos, Ioannis K; Karatzas, George P
2010-01-01
Artificial Neural Networks (ANNs) comprise a powerful tool to approximate the complicated behavior and response of physical systems allowing considerable reduction in computation time during time-consuming optimization runs. In this work, a Radial Basis Function Artificial Neural Network (RBFN) is combined with a Differential Evolution (DE) algorithm to solve a water resources management problem, using an optimization procedure. The objective of the optimization scheme is to cover the daily water demand on the coastal aquifer east of the city of Heraklion, Crete, without reducing the subsurface water quality due to seawater intrusion. The RBFN is utilized as an on-line surrogate model to approximate the behavior of the aquifer and to replace some of the costly evaluations of an accurate numerical simulation model which solves the subsurface water flow differential equations. The RBFN is used as a local approximation model in such a way as to maintain the robustness of the DE algorithm. The results of this procedure are compared to the corresponding results obtained by using the Simplex method and by using the DE procedure without the surrogate model. As it is demonstrated, the use of the surrogate model accelerates the convergence of the DE optimization procedure and additionally provides a better solution at the same number of exact evaluations, compared to the original DE algorithm.
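The surrogate idea can be sketched with a small Gaussian RBF interpolant standing in for the RBFN; the "aquifer model" here is a cheap analytic function, not the subsurface flow simulator, and the shape parameter `eps` is an assumption:

```python
import numpy as np

def rbf_fit(X, y, eps=2.0):
    """Fit a Gaussian radial-basis-function interpolant by solving
    K w = y with K_ij = exp(-eps * ||x_i - x_j||^2)."""
    K = np.exp(-eps * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.linalg.solve(K, y)

def rbf_eval(X, w, x, eps=2.0):
    """Surrogate prediction at a new point x."""
    return np.exp(-eps * ((X - x) ** 2).sum(-1)) @ w

# Cheap analytic stand-in for the expensive aquifer simulation:
f = lambda x: np.sin(3.0 * x[0]) + x[1] ** 2
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(40, 2))   # "exact" model evaluations
y = np.array([f(x) for x in X])
w = rbf_fit(X, y)
pred = rbf_eval(X, w, np.array([0.2, 0.3]))
```

Inside a DE loop, candidate designs would be scored with `rbf_eval` and only promising ones re-evaluated with the exact simulator, which is the acceleration mechanism the abstract describes.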
Rad, Masih Mafi; Blaauw, Yuri; Dinh, Trang; Pison, Laurent; Crijns, Harry J; Prinzen, Frits W; Vernooy, Kevin
2015-01-01
Left ventricular (LV) lead placement in the latest activated region is an important determinant of response to cardiac resynchronization therapy (CRT). We investigated the feasibility of coronary venous electroanatomic mapping (EAM) to guide LV lead placement to the latest activated region. Twenty-five consecutive CRT candidates with left bundle-branch block underwent intra-procedural coronary venous EAM using EnSite NavX. A guidewire was used to map the coronary veins during intrinsic activation, and to test for phrenic nerve stimulation (PNS). The latest activated region, defined as the region with an electrical delay >75% of total QRS duration, was located anterolaterally in 18 (basal, n = 10; mid, n = 8) and inferolaterally in 6 (basal, n = 3; mid, n = 3). In one patient, identification of the latest activated region was impeded by limited coronary venous anatomy. In patients with >1 target vein (n = 12), the anatomically targeted inferolateral vein was rarely the vein with maximal electrical delay (n = 3). A concordant LV lead position was achieved in 18 of 25 patients. In six patients, this was hampered by PNS (n = 4), lead instability (n = 1), and coronary vein stenosis (n = 1). Coronary venous EAM can be used intraprocedurally to guide LV lead placement to the latest activated region free of PNS. This approach especially contributes to optimization of LV lead electrical delay in patients with multiple target veins. Conventional anatomical LV lead placement strategy does not target the vein with maximal electrical delay in many of these patients. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.
2015-12-01
Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. 
However, the resulting optimal Z-R and Z-k relations are considerably different from those obtained from disdrometer observations. As such, the best microphysical parameter set results in a minimization of the overall bias, which besides accounting for DSD variations also corrects for the impact of additional error sources.
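The power-law Z-R conversion that the paper parameterizes through the DSD can be sketched as follows, using the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as default values in place of the DSD-derived ones:

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Convert reflectivity in dBZ to rain rate R (mm/h) through the
    power law Z = a * R**b; the defaults are the classic Marshall-Palmer
    coefficients, which the paper replaces with DSD-derived values."""
    z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)   # dBZ -> linear Z
    return (z / a) ** (1.0 / b)

r = rain_rate([20.0, 30.0, 40.0])   # increasing reflectivity -> heavier rain
```

Linking a and b to the normalized gamma DSD parameters, as proposed, changes these coefficients per event rather than fixing them climatologically.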
78 FR 54509 - Tenth Meeting: RTCA Next Gen Advisory Committee (NAC)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
... Capabilities Work Group. Recommendation for Future Metroplex Optimization Activity. [cir] Recommendation for Future Use of Optimization of Airspace and Procedures in the Metroplex (OAPM) developed by the...
Tools of the Future: How Decision Tree Analysis Will Impact Mission Planning
NASA Technical Reports Server (NTRS)
Otterstatter, Matthew R.
2005-01-01
The universe is infinitely complex; however, the human mind has a finite capacity. The multitude of possible variables, metrics, and procedures in mission planning are far too many to address exhaustively. This is unfortunate because, in general, considering more possibilities leads to more accurate and more powerful results. To compensate, we can get more insightful results by employing our greatest tool, the computer. The power of the computer will be utilized through a technology that considers every possibility, decision tree analysis. Although decision trees have been used in many other fields, this is innovative for space mission planning. Because this is a new strategy, no existing software is able to completely accommodate all of the requirements. This was determined through extensive research and testing of current technologies. It was necessary to create original software, for which a short-term model was finished this summer. The model was built into Microsoft Excel to take advantage of the familiar graphical interface for user input, computation, and viewing output. Macros were written to automate the process of tree construction, optimization, and presentation. The results are useful and promising. If this tool is successfully implemented in mission planning, our reliance on old-fashioned heuristics, an error-prone shortcut for handling complexity, will be reduced. The computer algorithms involved in decision trees will revolutionize mission planning. The planning will be faster and smarter, leading to optimized missions with the potential for more valuable data.
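Expected-value rollback, the core computation of decision tree analysis, can be sketched in a few lines of standard-library Python; the node structure and utilities below are invented for illustration, not taken from the Excel model:

```python
# Expected-value rollback of a small mission-planning decision tree.
# Decision nodes take the best child; chance nodes average children by
# probability; leaves carry a utility (e.g. science value of the data).
def rollback(node):
    if "value" in node:                                  # leaf
        return node["value"]
    if node["type"] == "decision":
        return max(rollback(c) for c in node["children"])
    return sum(p * rollback(c) for p, c in node["children"])  # chance

tree = {"type": "decision", "children": [
    {"type": "chance", "children": [                     # risky option
        (0.6, {"value": 100.0}), (0.4, {"value": 0.0})]},
    {"value": 55.0},                                     # safe option
]}
best = rollback(tree)   # 0.6 * 100 = 60 beats the safe 55
```

The same recursion, applied to trees with thousands of branches, is what lets the computer consider every possibility where a heuristic would prune prematurely.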
Reversed-phase liquid chromatography column testing: robustness study of the test.
Le Mapihan, K; Vial, J; Jardy, A
2004-12-24
Choosing the right RPLC column for an actual separation among the more than 600 commercially available ones still represents a real challenge for the analyst, particularly when basic solutes are involved. Many tests dedicated to the characterization and the classification of stationary phases have been proposed in the literature, and some of them highlighted the need for a better understanding of retention properties to lead to a rational choice of columns. However, unlike classical chromatographic methods, the problem of their robustness evaluation has often been left unaddressed. In the present study, we present a robustness study that was applied to the chromatographic testing procedure we had developed and optimized previously. A design of experiments (DoE) approach was implemented. Four factors, previously identified as potentially influential, were selected and subjected to small controlled variations: solvent fraction, temperature, pH and buffer concentration. As our model comprised quadratic terms instead of a simple linear model, we chose a D-optimal design in order to minimize the number of experiments. As a previous batch-to-batch study [K. Le Mapihan, Caractérisation et classification des phases stationnaires utilisées pour l'analyse CPL de produits pharmaceutiques, Ph.D. Thesis, Pierre and Marie Curie University, 2004] had shown a low variability on the selected stationary phase, it was then possible to split the design into two parts, according to the solvent nature, each using one column. As our testing procedure involves assays both with methanol and with acetonitrile as the organic modifier, this approach made it possible to avoid a possible bias due to column ageing, considering the number of experiments required (16 + 6 center points). Experimental results were computed using a Partial Least Squares regression procedure, better adapted than classical regression to handle factors and responses that are not completely independent.
The results showed the behavior of the solutes in relation to their physico-chemical properties and the relevance of the second-degree terms of our model. Finally, the robust domain of the test has been clearly identified, so that any potential user knows precisely to what extent each experimental parameter must be controlled when our testing procedure is to be implemented.
NASA Astrophysics Data System (ADS)
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Nowadays, micro-devices are commonly applied systems, especially in the automotive industry, taking advantage of utilizing both the mechanical structure and electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function, simultaneously satisfying the constraint on material strength. The restriction of the maximum equivalent stresses was introduced with the conditionally formulated objective function with a penalty component. The yielded results were successfully verified with a global uniform search through the input design domain.
Optimizing Chromatographic Separation: An Experiment Using an HPLC Simulator
ERIC Educational Resources Information Center
Shalliker, R. A.; Kayillo, S.; Dennis, G. R.
2008-01-01
Optimization of a chromatographic separation within the time constraints of a laboratory session is practically impossible. However, by employing a HPLC simulator, experiments can be designed that allow students to develop an appreciation of the complexities involved in optimization procedures. In the present exercise, a HPLC simulator from "JCE…
Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori
2013-01-01
Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VADs), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters, leading to sub-optimal solutions. In this paper we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a design closer to the optimum. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared to a traditional manual design method.
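A minimal real-coded genetic algorithm of the kind the abstract describes; the link-loss function and its three parameters are hypothetical placeholders for the actual TETS design variables, and the operator choices (elitism, uniform crossover, Gaussian mutation) are one common configuration among many:

```python
import random

def genetic_optimize(fitness, bounds, pop=30, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: keep the better half as an
    elite, create children by uniform crossover of two elite parents,
    and apply a bounded Gaussian mutation to one gene per child."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(len(child))
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

# Hypothetical link loss vs. (coil radius, turns, frequency scale)
link_loss = lambda g: ((g[0] - 3.0) ** 2 + (g[1] - 12.0) ** 2 / 100.0
                       + (g[2] - 0.5) ** 2)
best = genetic_optimize(link_loss, bounds=[(1.0, 6.0), (5.0, 30.0), (0.1, 1.0)])
```

Because each gene stays a separate search dimension, nothing forces parameters to be combined, which is the advantage claimed over the simplified manual formulations.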
Parkinson's disease patient preference and experience with various methods of DBS lead placement.
LaHue, Sara C; Ostrem, Jill L; Galifianakis, Nicholas B; San Luciano, Marta; Ziman, Nathan; Wang, Sarah; Racine, Caroline A; Starr, Philip A; Larson, Paul S; Katz, Maya
2017-08-01
Physiology-guided deep brain stimulation (DBS) surgery requires patients to be awake during a portion of the procedure, which may be poorly tolerated. Interventional MRI-guided (iMRI) DBS surgery was developed to use real-time image guidance, obviating the need for patients to be awake during lead placement. All English-speaking adults with PD who underwent iMRI DBS between 2010 and 2014 at our Center were invited to participate. Subjects completed a structured interview that explored perioperative preferences and experiences. We compared these responses to patients who underwent the physiology-guided method, matched for age and gender. Eighty-nine people with PD completed the study. Of those, 40 underwent iMRI, 44 underwent physiology-guided implantation, and five underwent both methods. There were no significant differences in baseline characteristics between groups. The primary reason for choosing iMRI DBS was a preference to be asleep during implantation due to: 1) a history of claustrophobia; 2) concerns about the potential for discomfort during the awake physiology-guided procedure in those with an underlying pain syndrome or severe off-medication symptoms; or 3) non-specific fear about being awake during neurosurgery. Participants were satisfied with both DBS surgery methods. However, identification of the factors associated with a preference for iMRI DBS may allow for optimization of patient experience and satisfaction when choices of surgical methods for DBS implantation are available. Published by Elsevier Ltd.
Bláha, M; Hoch, J; Ferko, A; Ryška, A; Hovorková, E
Improvement in any human activity is preconditioned by inspection of results and by feedback used to modify the processes applied. Comparison of experts' experience in the given field is another indispensable step leading to optimisation and improvement of processes, and optimally to implementation of standards. For the purpose of objective comparison and assessment of the processes, it is always necessary to describe the processes in a parametric way, to obtain representative data, to assess the achieved results, and to provide unquestionable, data-driven feedback based on such analysis. This may lead to a consensus on the definition of standards in the given area of health care. Total mesorectal excision (TME) is a standard procedure in the surgical treatment of rectal cancer (C20). However, the quality of performed procedures varies between health care facilities, which is driven, among other factors, by internal processes and surgeons' experience. Assessment of surgical treatment results is therefore of key importance. A pathologist who assesses the resected tissue can provide valuable feedback in this respect. An information system for the parametric assessment of TME performance is described in our article, including technical background in the form of a multicentre clinical registry and the structure of observed parameters. We consider the proposed system of TME parametric assessment significant for improvement of TME performance, aimed at reducing local recurrences and at improving the overall prognosis of patients. Keywords: rectal cancer; total mesorectal excision; parametric data; clinical registries; TME registry.
Economic optimization of natural hazard protection - conceptual study of existing approaches
NASA Astrophysics Data System (ADS)
Spackova, Olga; Straub, Daniel
2013-04-01
Risk-based planning of protection measures against natural hazards has become a common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst different countries and different hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the applications of the indicators in practice. This is likely to lead to a suboptimal selection of the protection strategies. This study provides a general formulation for optimization of the natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made on the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. 
The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
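The budget-constrained selection logic described above can be sketched in a few lines. The regions, protection options, and the BCR threshold below are invented for illustration; the study's actual formulation is a discrete hierarchical optimization.

```python
# Hypothetical sketch of the hierarchical optimization described above:
# with an unlimited budget each region independently minimizes
# (protection cost + residual risk); under a limited budget, the state
# imposes a minimum benefit-cost ratio (BCR) each measure must achieve.
# All numbers are illustrative, not from the study.

REGIONS = {
    # region: list of (protection cost, residual risk) options, both in M EUR
    "A": [(0.0, 10.0), (2.0, 5.0), (6.0, 0.5)],
    "B": [(0.0, 3.0), (1.0, 2.5), (4.0, 0.5)],
}

def best_option(options, min_bcr=0.0):
    """Pick the option minimizing cost + risk, requiring that the risk
    reduction per unit cost (BCR) relative to no protection meets min_bcr."""
    base_risk = options[0][1]          # risk with no protection
    feasible = [(c, r) for c, r in options
                if c == 0.0 or (base_risk - r) / c >= min_bcr]
    return min(feasible, key=lambda cr: cr[0] + cr[1])

# Unlimited budget: optimize each region independently.
unconstrained = {name: best_option(opts) for name, opts in REGIONS.items()}

# Limited budget: requiring BCR >= 2 everywhere steers resources toward
# the measures with the highest risk reduction per euro spent.
constrained = {name: best_option(opts, min_bcr=2.0) for name, opts in REGIONS.items()}
```

Note how the BCR constraint changes region A's choice from the expensive near-full protection to the cheaper, more cost-efficient measure, which is exactly the allocation effect discussed above.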
NASA Astrophysics Data System (ADS)
Peña Crecente, Rosa M.; Lovera, Carlha Gutiérrez; García, Julia Barciela; Méndez, Jennifer Álvarez; Martín, Sagrario García; Latorre, Carlos Herrero
2014-11-01
The determination of lead in urine is a way of monitoring chemical exposure to this metal. In the present paper, a new method for Pb determination at low levels in urine by electrothermal atomic absorption spectrometry (ETAAS) has been developed. Lead was separated from the undesirable urine matrix by means of a solid phase extraction (SPE) procedure, with oxidized multiwalled carbon nanotubes used as the sorbent material. Lead from urine was retained at pH 4.0, quantitatively eluted using a 0.7 M nitric acid solution, and subsequently measured by ETAAS. The effects of the parameters that influence the adsorption-elution process (such as pH, eluent volume and concentration, and sampling and elution flow rates) and the atomic spectrometry conditions were studied by means of different factorial design strategies. Under the optimized conditions, the detection and quantification limits obtained were 0.08 and 0.26 μg Pb L⁻¹, respectively. The results demonstrate the absence of a urine matrix effect, a consequence of the SPE process carried out. The developed method is therefore useful for the analysis of Pb at low levels in real samples without interference from other urine components. The proposed method was applied to the determination of lead in urine samples of unexposed healthy people, and satisfactory results were obtained (in the range 3.64-22.9 μg Pb L⁻¹).
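Detection and quantification limits like those quoted above are conventionally obtained from the scatter of blank measurements and the calibration slope (the IUPAC 3σ/10σ rule). A minimal sketch with made-up numbers:

```python
# LOD = 3*s_blank/m and LOQ = 10*s_blank/m, where s_blank is the standard
# deviation of replicate blank signals and m the calibration slope.
# The blank signals and slope below are illustrative only.

def detection_limits(blank_signals, slope):
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    s = (sum((x - mean) ** 2 for x in blank_signals) / (n - 1)) ** 0.5
    return 3 * s / slope, 10 * s / slope  # (LOD, LOQ) in concentration units

lod, loq = detection_limits([0.010, 0.012, 0.011, 0.009, 0.013], slope=0.045)
```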
Andrade-Eiroa, Auréa; Diévart, Pascal; Dagaut, Philippe
2010-04-15
A new procedure for optimizing the separation of PAHs in very complex mixtures by reversed-phase high-performance liquid chromatography (RPLC) is proposed. It is based on gradually changing the experimental conditions throughout the chromatographic run as a function of the physical properties of the compounds being eluted. The temperature and flow-rate gradients made it possible to obtain optimum resolution in long chromatographic runs in which PAHs of very different polarizability must be separated. Whereas optimization of RPLC methodologies had previously been carried out regardless of the physico-chemical properties of the target analytes, we found that resolution depends strongly on those properties. Based on the resolution criterion, the optimization process for a mixture of the 16 EPA PAHs was performed on three sets of difficult-to-separate PAH pairs: acenaphthene-fluorene (for the optimization of the first part of the chromatogram, where the light PAHs elute), and benzo[g,h,i]perylene-dibenzo[a,h]anthracene and benzo[g,h,i]perylene-indeno[1,2,3-cd]pyrene (for the optimization of the second part of the chromatogram, where the heavier PAHs elute). Two-level full factorial designs were applied to detect interactions among the variables to be optimized: flow rate, column oven temperature, and mobile-phase gradient in the two parts of the chromatogram. Experimental data were fitted by multivariate nonlinear regression models, and optimum values of flow rate and temperature were obtained through mathematical analysis of the constructed models. An HPLC system equipped with a reversed-phase 5 μm C18, 250 mm × 4.6 mm column (with an acetonitrile/water mobile phase), a column oven, a binary pump, a photodiode array detector (PDA), and a fluorimetric detector was used in this work.
Optimum resolution was achieved operating at 1.0 mL/min in the first part of the chromatogram (until 45 min) and 0.5 mL/min in the second (from 45 min to the end), and by applying a programmed temperature gradient (15 °C until 30 min, then progressively increasing the temperature to reach 40 °C at 45 min). © 2009 Elsevier B.V. All rights reserved.
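The two-level full factorial designs mentioned above enumerate every low/high combination of the screened factors. A minimal sketch (factor names and levels are illustrative, not the study's actual settings):

```python
# A 2-level full factorial design for three factors: every combination of
# each factor's low/high level, 2^3 = 8 runs in total. Each run is then
# measured and a regression model with main effects and interaction terms
# is fitted to locate the optimum.
from itertools import product

factors = {
    "flow_rate_mL_min": (0.5, 1.0),
    "oven_temp_C": (15, 40),
    "ACN_fraction": (0.5, 0.9),
}

design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```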
Uthoff, Heiko; Peña, Constantino; West, James; Contreras, Francisco; Benenati, James F; Katzen, Barry T
2013-04-01
Radiation exposure to interventionalists is increasing. The currently available standard radiation protection devices are heavy and do not protect the head of the operator. The aim of this study was to evaluate the effectiveness and comfort of caps and thyroid collars made of a disposable, light-weight, lead-free material (XPF) for occupational radiation protection in a clinical setting. Up to two interventional operators were randomized to wear XPF or standard 0.5-mm lead-equivalent thyroid collars in 60 consecutive endovascular procedures requiring fluoroscopy. Simultaneously, an XPF cap was worn by all operators. Radiation doses were measured using dosimeters placed outside and underneath the caps and thyroid collars. Wearing comfort was assessed at the end of each procedure on a visual analog scale (0-100 [100 = optimal]). Patient and procedure data did not differ between the XPF and standard protection groups. The cumulative radiation dose measured outside the caps was 15,700 μSv and outside the thyroid collars 21,240 μSv. The measured radiation attenuation provided by the XPF caps (n = 70), XPF thyroid collars (n = 40), and standard thyroid collars (n = 38) was 85.4% ± 25.6%, 79.7% ± 25.8%, and 71.9% ± 34.2%, respectively (mean difference, XPF vs standard thyroid collars, 7.8% [95% CI, -5.9% to 21.6%]; p = 0.258). The median XPF cap weight was 144 g (interquartile range, 128-170 g), and the XPF thyroid collars were 27% lighter than the standard thyroid collars (p < 0.0001). Operators rated the comfort of all devices as high (mean scores for XPF caps and XPF thyroid collars, 83.4 ± 12.7 (SD) and 88.5 ± 14.6, respectively; mean score for standard thyroid collars, 89.6 ± 9.9) (p = 0.648). Light-weight disposable caps and thyroid collars made of XPF were assessed as comfortable to wear, and they provide radiation protection similar to that of standard 0.5-mm lead-equivalent thyroid collars.
Postoperative Management of Penetrating and Nonpenetrating External Filtering Procedures.
Bettin, Paolo; Di Matteo, Federico
2017-01-01
Correct postoperative management is fundamental to prevent and treat complications and to optimize the success of filtering surgery. Timely control visits and appropriate actions and prescriptions ensure the best outcomes, allow recovery from a number of untoward events, and can reestablish filtration when failure seems imminent. In contrast, a slack follow-up and wrong interventions or prescriptions can lead to the failure of any surgery, no matter how accurately it was carried out, sometimes jeopardizing vision and even the anatomy of the globe. The purpose of this review is to present a rational approach to postoperative follow-up and to synthetically describe how to prevent, recognize and address the most common complications of filtering surgery, pointing out the most common pitfalls in the management of the operated eye. © 2017 S. Karger AG, Basel.
Electroporating Fields Target Oxidatively Damaged Areas in the Cell Membrane
Vernier, P. Thomas; Levine, Zachary A.; Wu, Yu-Hsuan; Joubert, Vanessa; Ziegler, Matthew J.; Mir, Lluis M.; Tieleman, D. Peter
2009-01-01
Reversible electropermeabilization (electroporation) is widely used to facilitate the introduction of genetic material and pharmaceutical agents into living cells. Although considerable knowledge has been gained from the study of real and simulated model membranes in electric fields, efforts to optimize electroporation protocols are limited by a lack of detailed understanding of the molecular basis for the electropermeabilization of the complex biomolecular assembly that forms the plasma membrane. We show here, with results from both molecular dynamics simulations and experiments with living cells, that the oxidation of membrane components enhances the susceptibility of the membrane to electropermeabilization. Manipulation of the level of oxidative stress in cell suspensions and in tissues may lead to more efficient permeabilization procedures in the laboratory and in clinical applications such as electrochemotherapy and electrotransfection-mediated gene therapy. PMID:19956595
Metal accumulation in the earthworm Lumbricus rubellus. Model predictions compared to field data
Veltman, K.; Huijbregts, M.A.J.; Vijver, M.G.; Peijnenburg, W.J.G.M.; Hobbelen, P.H.F.; Koolhaas, J.E.; van Gestel, C.A.M.; van Vliet, P.C.J.; Hendriks, A. Jan
2007-01-01
The mechanistic bioaccumulation model OMEGA (Optimal Modeling for Ecotoxicological Applications) is used to estimate the accumulation of zinc (Zn), copper (Cu), cadmium (Cd) and lead (Pb) in the earthworm Lumbricus rubellus. Our validation against field accumulation data shows that the model accurately predicts internal cadmium concentrations. In addition, our results show that internal metal concentrations in the earthworm are less than linearly (slope < 1) related to the total concentration in soil, while risk assessment procedures often assume the biota-soil accumulation factor (BSAF) to be constant. Although the predicted internal concentrations of all metals are generally within a factor of 5 of the field data, incorporation of regulation in the model is necessary to improve predictions for essential metals such as zinc and copper. © 2006 Elsevier Ltd. All rights reserved.
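The sub-linear (slope < 1) relation reported above can be checked with an ordinary least-squares fit on a log-log scale. A sketch with synthetic data (not the study's measurements):

```python
# Fit log10(internal concentration) against log10(soil concentration):
# a slope below 1 means the biota-soil accumulation factor (worm/soil)
# decreases with increasing soil concentration, i.e. BSAF is not constant.
# Data points are synthetic, for illustration only.
import math

soil = [10, 50, 100, 500, 1000]   # mg/kg soil
worm = [8, 20, 30, 70, 100]       # mg/kg tissue (deliberately sub-linear)

xs = [math.log10(c) for c in soil]
ys = [math.log10(c) for c in worm]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
```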
Fabrication of Transition Edge Sensor Microcalorimeters for X-Ray Focal Planes
NASA Technical Reports Server (NTRS)
Chervenak, James A.; Adams, Joseph S.; Audley, Heather; Bandler, Simon R.; Betancourt-Martinez, Gabriele; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline; Lee, Sang Jun;
2015-01-01
Requirements for focal planes for x-ray astrophysics vary widely depending on the needs of the science application such as photon count rate, energy band, resolving power, and angular resolution. Transition edge sensor x-ray calorimeters can encounter limitations when optimized for these specific applications. Balancing specifications leads to choices in, for example, pixel size, thermal sinking arrangement, and absorber thickness and material. For the broadest specifications, instruments can benefit from multiple pixel types in the same array or focal plane. Here we describe a variety of focal plane architectures that anticipate science requirements of x-ray instruments for heliophysics and astrophysics. We describe the fabrication procedures that enable each array and explore limitations for the specifications of such arrays, including arrays with multiple pixel types on the same array.
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
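The token-model dictionary search described above can be sketched minimally as consecutive token matching; ProMiner's actual token model and curation are far richer, and the dictionary entries here are illustrative.

```python
# A minimal sketch of dictionary-based name recognition with a simple
# token model: each dictionary entry is a token sequence mapped to a gene
# identifier, and a match requires all tokens to appear consecutively in
# the text (case-insensitive). Entries and identifiers are illustrative.

DICTIONARY = {
    ("p53",): "TP53",
    ("tumor", "protein", "p53"): "TP53",
    ("insulin", "receptor"): "INSR",
}

def find_names(text):
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    hits = []
    for i in range(len(tokens)):
        for entry, gene_id in DICTIONARY.items():
            if tuple(tokens[i:i + len(entry)]) == entry:
                hits.append((gene_id, " ".join(entry)))
    return hits

hits = find_names("Binding of tumor protein p53 to the insulin receptor ...")
```

A real system would also handle overlapping matches (preferring the longest), token variants, and ambiguous synonyms, which is where the automatic curation step above comes in.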
NASA Astrophysics Data System (ADS)
Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG
2018-01-01
Mobile applications allow many users to access data without being limited by place and time. Over time, the data population of such an application increases, and data access time becomes a problem once tables reach tens of thousands to millions of records. The objective of this research is to maintain data-execution performance for large numbers of records. One way to maintain access-time performance is to apply query optimization; here, the heuristic query optimization method is used. The application built is a mobile-based financial application using a MySQL database with stored procedures. The application serves more than one business entity in a single database, enabling rapid data growth. Within the stored procedures, queries are optimized using the heuristic method; optimization is performed on SELECT queries that involve more than one table with multiple clauses. Evaluation is carried out by comparing the average access time of optimized and unoptimized queries, and is repeated as the data population in the database grows. The results show that data execution with heuristic query optimization is consistently faster than execution without it.
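The heuristic rule at the heart of such methods, apply selections as early as possible so the join processes fewer rows, can be illustrated with SQLite standing in for MySQL. Table and column names are hypothetical; both query forms return identical results.

```python
# Heuristic query optimization sketch: the "naive" form joins first and
# filters afterwards; the rewritten form pushes the selections into
# subqueries evaluated before the join. Schema and data are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entity(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE txn(id INTEGER PRIMARY KEY, entity_id INT, amount INT);
    INSERT INTO entity VALUES (1,'shop A'), (2,'shop B');
    INSERT INTO txn VALUES (1,1,100), (2,1,250), (3,2,80), (4,2,500);
""")

# Unoptimized: join first, filter afterwards.
naive = db.execute("""
    SELECT e.name, t.amount FROM entity e JOIN txn t ON t.entity_id = e.id
    WHERE t.amount > 90 AND e.id = 1
""").fetchall()

# Heuristically rewritten: selections pushed before the join.
optimized = db.execute("""
    SELECT e.name, t.amount
    FROM (SELECT id, name FROM entity WHERE id = 1) e
    JOIN (SELECT entity_id, amount FROM txn WHERE amount > 90) t
      ON t.entity_id = e.id
""").fetchall()
```

Modern query planners often perform this pushdown automatically, but inside hand-written stored procedures the rewritten form makes the cheaper evaluation order explicit.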
MO-G-18A-01: Radiation Dose Reducing Strategies in CT, Fluoroscopy and Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahesh, M; Gingold, E; Jones, A
2014-06-15
Advances in medical x-ray imaging have provided significant benefits to patient care. According to NCRP Report 160, more than 400 million x-ray procedures are performed annually in the United States alone, contributing nearly half of all radiation exposure to the US population. Similar growth trends in medical x-ray imaging are observed worldwide. The apparent increase in the number of medical x-ray imaging procedures and new protocols, and the associated radiation dose and risk, has drawn considerable attention. This has led to a number of technological innovations, such as tube current modulation, iterative reconstruction algorithms, dose alerts, dose displays, flat panel digital detectors, high-efficiency digital detectors, storage phosphor radiography, and variable filters, that are enabling users to acquire medical x-ray images at a much lower radiation dose. Alongside these, there are a number of radiation dose optimization strategies that users can adopt to effectively lower radiation dose in medical x-ray procedures. The main objectives of this SAM course are to provide information on, and show how to implement, the various radiation dose optimization strategies in CT, fluoroscopy and radiography. Learning Objectives: To review the impact of technological advances on dose optimization in medical imaging. To identify radiation optimization strategies in computed tomography. To describe strategies for configuring fluoroscopic equipment so that it yields optimal images at a reasonable radiation dose. To assess ways to configure digital radiography systems and recommend ways to improve image quality at optimal dose.
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency of the frequency range used, respectively.
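The core of the procedure above is a least-squares fit of a simulated curve to a measured one. As a hedged stand-in for the FEM model and the MMA optimizer, the sketch below fits a toy two-parameter model by plain gradient descent with central-difference gradients; all data are synthetic.

```python
# Minimize the quadratic difference between "measured" and modelled
# curves, as in the identification procedure above, but with a trivial
# linear model in place of the FEM response and gradient descent in place
# of MMA. Everything here is a synthetic illustration.

def model(f, a, b):                             # toy stand-in for the FEM response
    return a * f + b

freqs = [i * 0.5 for i in range(21)]            # 0 .. 10 (arbitrary units)
measured = [model(f, 2.0, 1.0) for f in freqs]  # synthetic "experiment"

def cost(p):                                    # quadratic difference objective
    return sum((model(f, *p) - m) ** 2 for f, m in zip(freqs, measured))

p = [0.0, 0.0]                                  # initial parameter guess
for _ in range(5000):
    grad = []
    for i in range(len(p)):                     # central-difference gradient
        hi, lo = list(p), list(p)
        hi[i] += 1e-6
        lo[i] -= 1e-6
        grad.append((cost(hi) - cost(lo)) / 2e-6)
    p = [pi - 0.001 * gi for pi, gi in zip(p, grad)]
```

The restart strategy the paper proposes would correspond here to re-running the loop from a new initial guess whenever the fit ends in an implausible parameter set.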
Optimization experiments with a double Gauss lens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brixner, B.; Klein, M.M.
1988-05-01
This paper describes how a lens can be generated by starting from plane surfaces. Three different experiments, using the Los Alamos National Laboratory optimization procedure, all converged on the same stable prescriptions in the optimum minimum region. The starts were made first from an already optimized lens appearing in the literature, then from a powerless plane-surfaces configuration, and finally from a crude Super Angulon configuration. In each case the result was a double Gauss lens, which suggests that this type of lens may be the best compact six-glass solution for one imaging problem: an f/2 aperture and a moderate field of view. The procedures and results are discussed in detail.
Principled negotiation and distributed optimization for advanced air traffic management
NASA Astrophysics Data System (ADS)
Wangermann, John Paul
Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents that have different, competing interests but have a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check the proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed to agents whose trajectories change as part of the proposal for approval. 
Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring aircraft safety. This thesis describes how an aircraft/airspace system using principled negotiation operates, and reports simulation results on the concept. The results show safety is maintained while aircraft have freedom to optimize their operations.
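The proposal/acceptance cycle described above can be sketched as a separation check run by the traffic manager. Trajectories here are one-dimensional position lists per time step, and all numbers are illustrative, not an air traffic model.

```python
# Toy sketch of principled negotiation in traffic management: an operator
# proposes a trajectory change that improves its own operation, and the
# traffic manager accepts it only if minimum separation to every other
# aircraft is preserved. All values are illustrative.

MIN_SEP = 5.0  # required separation in arbitrary distance units

def separated(traj_a, traj_b):
    return all(abs(a - b) >= MIN_SEP for a, b in zip(traj_a, traj_b))

def review(proposal, owner, trajectories):
    """Traffic manager: accept the proposed trajectory iff separation holds."""
    others = [t for name, t in trajectories.items() if name != owner]
    return all(separated(proposal, t) for t in others)

trajectories = {
    "AC1": [0.0, 2.0, 4.0, 6.0],
    "AC2": [20.0, 18.0, 16.0, 14.0],
}

shortcut = [0.0, 3.0, 6.0, 9.0]      # AC1 proposes a faster profile
if review(shortcut, "AC1", trajectories):
    trajectories["AC1"] = shortcut   # accepted: still >= MIN_SEP from AC2
```

In the full concept, a rejected proposal could instead be forwarded to the affected aircraft, which may accept a modified trajectory if the joint change benefits both parties.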
Topology optimization of two-dimensional elastic wave barriers
NASA Astrophysics Data System (ADS)
Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G.
2016-08-01
Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued. The minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst case approach is followed.
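The objective used above for a frequency band, maximize the minimal insertion loss, is a max-min problem. A toy grid-search sketch with an invented smooth response surface (not an elastodynamic model):

```python
# Max-min design sketch: choose a single design parameter d to maximize
# the worst-case (minimum) insertion loss over a frequency grid.
# insertion_loss is a made-up smooth function for illustration only.

def insertion_loss(f, d):
    return 10 - (f - 10 * d) ** 2 / 20 + 5 * d

freqs = [4 + 0.5 * i for i in range(9)]      # band of interest: 4 .. 8 Hz
candidates = [0.1 * i for i in range(11)]    # design parameter 0 .. 1

def worst_case(d):
    return min(insertion_loss(f, d) for f in freqs)

best_d = max(candidates, key=worst_case)
```

Optimizing the worst case over the band, rather than the band average, is what yields the robust broadband behaviour discussed above, at the cost of peak performance at any single frequency.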
Optimization of rotational arc station parameter optimized radiation therapy.
Dong, P; Ungun, B; Boyd, S; Xing, L
2016-09-01
To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency, using three clinical cases from different disease sites. The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based proximal operator graph solver. To avoid being trapped in a local minimum in beamlet-based aperture selection with the gradient descent algorithm, a stochastic gradient descent was employed. Apertures with zero or low weight were discarded. To find out whether the plan could be further improved by adding more apertures or SPs, the authors repeated the above procedure taking into account the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed using three clinical cases (prostate, head and neck, and brain), with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord.
The mean dose for the pharynx and larynx was reduced by 8% and 6%, respectively. For the brain case, the doses to the eyes, chiasm, and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the head and neck case. The dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality. With the widespread adoption of digital linacs, SPORT should lead to improved patient care in the future.
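The aperture-weight step of SPORT, fit nonnegative weights so the delivered dose matches a target and then discard apertures with negligible weight, can be sketched with projected gradient descent. The dose matrix below is tiny made-up data, not clinical, and plain projected gradient stands in for the paper's proximal solver.

```python
# Nonnegative least-squares fit of aperture weights to a target dose,
# followed by pruning of low-weight apertures, as a stand-in for the
# weight-optimization step described above. All numbers are illustrative.

apertures = [  # dose to 3 voxels delivered per unit weight of each aperture
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 1.0],
]
target = [2.0, 1.0, 1.0]

w = [0.0] * len(apertures)
for _ in range(3000):
    dose = [sum(w[a] * apertures[a][v] for a in range(3)) for v in range(3)]
    for a in range(3):
        grad = sum(2 * (dose[v] - target[v]) * apertures[a][v] for v in range(3))
        w[a] = max(0.0, w[a] - 0.01 * grad)     # project onto w >= 0
pruned = [a for a in range(3) if w[a] > 1e-3]   # keep only useful apertures
```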
Static deflection control of flexible beams by piezo-electric actuators
NASA Technical Reports Server (NTRS)
Baz, A. M.
1986-01-01
This study deals with the utilization of piezo-electric actuators in controlling the static deformation of flexible beams. An optimum design procedure is presented to enable the selection of the optimal location, thickness and excitation voltage of the piezo-electric actuators in a way that would minimize the deflection of the beam to which these actuators are bonded. Numerical examples are presented to illustrate the application of the developed optimization procedure in minimizing the structural deformation of beams of different materials when subjected to different loading and end conditions using ceramic or polymeric piezo-electric actuators. The results obtained emphasize the importance of the devised rational procedure in designing beam-actuator systems with minimal elastic distortions.
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of probability finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
Procedural techniques in sacral nerve modulation.
Williams, Elizabeth R; Siegel, Steven W
2010-12-01
Sacral neuromodulation involves a staged process, including a screening trial and delayed formal implantation for those with substantial improvement. The advent of the tined lead has revolutionized the technology, allowing for a minimally invasive outpatient procedure to be performed under intravenous sedation. With the addition of fluoroscopy to the bilateral percutaneous nerve evaluation, there has been marked improvement in the placement of these temporary leads. Thus, the screening evaluation is now a better reflection of possible permanent improvement. Both methods of screening have advantages and disadvantages. Selection of a particular procedure should be tailored to individual patient characteristics. Subsequent implantation of the internal pulse generator (IPG) or explantation of an unsuccessful staged lead is a straightforward outpatient procedure, posing minimal additional risk for the patient. Future refinements to the procedure may involve the introduction of a rechargeable battery, eliminating the need for IPG replacement at the end of the battery life.
Polewczyk, Anna; Kutarski, Andrzej; Tomaszewski, Andrzej; Brzozowski, Wojciech; Czajkowski, Marek; Polewczyk, Maciej; Janion, Marianna
2013-01-01
Lead-dependent tricuspid dysfunction (LDTD) is one of the important complications in patients with cardiac implantable electronic devices. However, this phenomenon is probably underestimated because of an improper interpretation of its clinical symptoms. The aim of this study was to identify LDTD mechanisms and management in patients referred for transvenous lead extraction (TLE) due to lead-dependent complications. Data of 940 patients undergoing TLE in a single center from 2009 to 2011 were assessed and 24 patients with LDTD were identified. The general indications for TLE, pacing system types and lead dwell time in both study groups were comparatively analyzed. The radiological and clinical efficacy of the TLE procedure was also assessed in both groups, with precise estimation of the clinical status of patients with LDTD (before and after TLE). Additionally, mechanisms, concomitant lead-dependent complications and the degree (severity) of LDTD before and after the procedure were evaluated. Telephone follow-up of LDTD patients was performed at a mean time of 1.5 years after the TLE/replacement procedure. The main indications for TLE in both groups were similar (apart from isolated LDTD in 45.83% of patients from group I). Patients with LDTD had more complex pacing systems with more leads (2.04 in the LDTD group vs. 1.69 in the control group; p = 0.04). There were more unnecessary loops of lead in LDTD patients than in the control group (41.7% vs. 5.24%; p = 0.001). There were no significant differences in average time from implantation to extraction and the number of preceding procedures. Significant tricuspid regurgitation (TR, grade III-IV) was found in 96% of LDTD patients, whereas stenosis with regurgitation was found in 4%. A 10% frequency of severe TR (not lead-dependent) was observed in the control group. 
The main mechanism of LDTD was abnormal leaflet coaptation caused by: a loop of the lead (42%), the septal leaflet being pulled toward the interventricular septum (37%), or too intensive lead impingement of the leaflets (21%). LDTD patients were treated with TLE and reimplantation of the lead to the right ventricle (87.5%) or to the cardiac vein (4.2%), or a surgical procedure with epicardial lead placement following ineffective TLE (8.3%). The radiological and clinical efficacy of the TLE procedure was very high and comparable between groups I and II (91.7% vs. 94.2%; p = 0.6 and 100% vs. 98.4%; p = 0.46, respectively). Repeated echocardiography showed reduced severity of tricuspid valve dysfunction in 62.5% of LDTD patients. The follow-up interview confirmed clinical improvement in 75% of patients (further improvement after cardiac surgery was observed in 2 patients). LDTD is a diagnostic and therapeutic challenge. The main reason for LDTD was abnormal leaflet coaptation caused by the presence of a lead loop, or by propping or impingement of the leaflets by the lead. TLE with lead reimplantation is probably a safe and effective option in LDTD management. An alternative option is TLE with reimplantation that omits the tricuspid valve. Cardiac surgery with epicardial lead placement should be reserved for patients with ineffective previous procedures.
Energy-saving management modelling and optimization for lead-acid battery formation process
NASA Astrophysics Data System (ADS)
Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.
2017-11-01
In this context, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model with the objective of minimizing the formation electricity cost in a single period is established. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation is shown using a PSO algorithm to solve this mathematical model, and the proposed optimization strategy is shown to be effective and transferable for energy-saving and efficiency optimization in battery producing industries.
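As a rough illustration of the kind of solver the abstract applies, a minimal particle swarm optimizer might look like the following. The cost function and bounds are placeholders, not the paper's formation-electricity-cost model or its constraints.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization sketch. `cost` maps a
    parameter vector to a scalar; `bounds` is a list of (low, high)
    pairs. Each particle tracks its personal best; the swarm shares
    a global best that attracts all velocities."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # respect bounds
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())
```

In the paper's setting the decision variables would be charge/discharge schedules and the cost would combine the time-of-use price with converter efficiency; here a generic `cost` callable stands in for that model.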
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. 
When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
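The adaptive sampling loop described above — evaluate a few parameter settings, fit a surrogate, then pick the next setting where improvement is most likely — can be sketched in one dimension. This toy Gaussian-process version with an expected-improvement acquisition illustrates the general technique only; it is an assumption about how such a loop works, not the authors' neuroimaging pipeline, and the objective and ranges are placeholders.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def bayes_opt(f, lo, hi, n_init=3, n_iters=15, noise=1e-6, seed=0):
    """Toy 1-D Bayesian optimization (minimization): fit a GP to the
    evaluations so far, then evaluate f where expected improvement
    over the current best is largest on a discrete grid."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iters):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(grid, X)
        alpha = np.linalg.solve(K, y)
        mu = Ks @ alpha                                  # posterior mean
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        sd = np.sqrt(np.maximum(var, 1e-12))             # posterior std
        best = y.min()
        z = (best - mu) / sd
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = grid[np.argmax(ei)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return float(X[np.argmin(y)]), float(y.min())
```

In the paper the two optimized variables were voxel size and smoothing kernel and the objective was cross-validated prediction error; the same loop extends to that 2-D case by replacing the grid and kernel inputs with parameter pairs.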
A design procedure and handling quality criteria for lateral directional flight control systems
NASA Technical Reports Server (NTRS)
Stein, G.; Henke, A. H.
1972-01-01
A practical design procedure for aircraft augmentation systems is described based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.
General approach and scope. [rotor blade design optimization
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.
Metafitting: Weight optimization for least-squares fitting of PTTI data
NASA Technical Reports Server (NTRS)
Douglas, Rob J.; Boulanger, J.-S.
1995-01-01
For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
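For independent noise, the weighted least-squares machinery underlying such metafitting choices reduces to the weighted normal equations, with the parameter covariance giving the standard uncertainty the abstract compares across strategies. A minimal sketch, assuming a hypothetical polynomial drift model and known per-point uncertainties:

```python
import numpy as np

def weighted_lsq(t, y, sigma, degree=1):
    """Weighted least-squares polynomial fit with parameter covariance.
    `sigma` holds per-point standard uncertainties; weighting by
    1/sigma^2 is the optimal choice for independent Gaussian noise.
    Returns (beta, cov) where beta are the polynomial coefficients
    (highest order first) and cov is their covariance matrix."""
    A = np.vander(t, degree + 1)        # design matrix [t^d ... t 1]
    W = np.diag(1.0 / sigma ** 2)       # optimal weights
    cov = np.linalg.inv(A.T @ W @ A)    # parameter covariance
    beta = cov @ A.T @ W @ y            # weighted normal equations
    return beta, cov
```

The square roots of `cov`'s diagonal are the standard uncertainties of the fitted coefficients; "metafitting" in the abstract's sense amounts to choosing the weighting and scheduling that minimize the propagated uncertainty of the interpolated time or frequency.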
Ground-state properties of 4He and 16O extrapolated from lattice QCD with pionless EFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contessi, L.; Lovato, A.; Pederiva, F.
Here, we extend the prediction range of Pionless Effective Field Theory with an analysis of the ground state of 16O in leading order. To renormalize the theory, we use as input both experimental data and lattice QCD predictions of nuclear observables, which probe the sensitivity of nuclei to increased quark masses. The nuclear many-body Schrödinger equation is solved with the Auxiliary Field Diffusion Monte Carlo method. For the first time in a nuclear quantum Monte Carlo calculation, a linear optimization procedure, which allows us to devise an accurate trial wave function with a large number of variational parameters, is adopted. The method yields a binding energy of 4He which is in good agreement with experiment at physical pion mass and with lattice calculations at larger pion masses. At leading order we do not find any evidence of a 16O state which is stable against breakup into four 4He, although higher-order terms could bind 16O.
Prevention of cardiorenal syndromes.
McCullough, Peter A
2010-01-01
The cardiorenal syndromes (CRS) are composed of five recently defined syndromes which represent common clinical scenarios in which both the heart and the kidney are involved in a bidirectional injury process leading to dysfunction of both organs. Common to each subtype are multiple complex pathogenic factors, a precipitous decline in function and a progressive course. Most pathways that lead to CRS involve acute injury to organs which manifest evidence of chronic disease, suggesting reduced ability to sustain damage, maintain vital functions, and facilitate recovery. Prevention of CRS is an ideal clinical goal, because once initiated, CRS cannot be readily aborted, are not completely reversible, and are associated with serious consequences including hospitalization, complicated procedures, need for renal replacement therapy, and death. Principles of prevention include identification and amelioration of precipitating factors, optimal management of both chronic heart and kidney diseases, and future use of multimodality therapies for end-organ protection at the time of systemic injury. This paper will review the core concepts of prevention of CRS with practical applications to be considered in today's practice. 2010 S. Karger AG, Basel.
Bellemain, V
2012-08-01
Coordination between Veterinary Services and other relevant authorities is a key component of good public governance, especially for effective action and optimal management of available resources. The importance of good coordination is reflected in the World Organisation for Animal Health's 'Tool for the Evaluation of Performance of Veterinary Services', which includes a critical competency on coordination. Many partners from technical, administrative and legal fields are involved. The degree of formalisation of coordination tends to depend on a country's level of organisation and development. Contingency plans against avian influenza led to breakthroughs in many countries in the mid-2000s. While interpersonal relationships remain vital, not everything should hinge on them. Organisation and management are critical to operational efficiency. The distribution of responsibilities needs to be defined clearly, avoiding duplication and areas of conflict. Lead authorities should be designated according to subject (Veterinary Services in animal health areas) and endowed with the necessary legitimacy. Lead authorities will be responsible for coordinating the drafting and updating of the relevant documents: agreements between authorities, contingency plans, standard operating procedures, etc.
Woo, Russell K; Skarsgard, Erik D
2015-06-01
Innovation in surgical techniques, technology, and care processes are essential for improving the care and outcomes of surgical patients, including children. The time and cost associated with surgical innovation can be significant, and unless it leads to improvements in outcome at equivalent or lower costs, it adds little or no value from the perspective of the patients, and decreases the overall resources available to our already financially constrained healthcare system. The emergence of a safety and quality mandate in surgery, and the development of the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) allow needs-based surgical care innovation which leads to value-based improvement in care. In addition to general and procedure-specific clinical outcomes, surgeons should consider the measurement of quality from the patients' perspective. To this end, the integration of validated Patient Reported Outcome Measures (PROMs) into actionable, benchmarked institutional outcomes reporting has the potential to facilitate quality improvement in process, treatment and technology that optimizes value for our patients and health system. Copyright © 2015 Elsevier Inc. All rights reserved.
Hospital ambulatory medicine: A leading strategy for Internal Medicine in Europe.
Corbella, Xavier; Barreto, Vasco; Bassetti, Stefano; Bivol, Monica; Castellino, Pietro; de Kruijf, Evert-Jan; Dentali, Francesco; Durusu-Tanriöver, Mine; Fierbinţeanu-Braticevici, Carmen; Hanslik, Thomas; Hojs, Radovan; Kiňová, Soňa; Lazebnik, Leonid; Livčāne, Evija; Raspe, Matthias; Campos, Luis
2018-04-13
Addressing the current collision course between growing healthcare demands, rising costs and limited resources is an extremely complex challenge for most healthcare systems worldwide. Given the consensus that this critical reality is unsustainable from staff, consumer, and financial perspectives, our aim was to describe the official position and approach of the Working Group on Professional Issues and Quality of Care of the European Federation of Internal Medicine (EFIM), for encouraging internists to lead a thorough reengineering of hospital operational procedures by the implementation of innovative hospital ambulatory care strategies. Among these, we include outpatient and ambulatory care strategies, quick diagnostic units, hospital-at-home, observation units and daycare hospitals. Moving from traditional 'bed-based' inpatient care to hospital ambulatory medicine may optimize patient flow, relieve pressure on hospital bed availability by avoiding hospital admissions and shortening unnecessary hospital stays, reduce hospital-acquired complications, increase the capacity of hospitals with minor structural investments, increase efficiency, and offer patients a broader, more appropriate and more satisfactory spectrum of delivery options. Copyright © 2018. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Hablani, H. B.
1985-01-01
Real disturbances and real sensors have finite bandwidths. The first objective of this paper is to incorporate this finiteness in the 'open-loop modal cost analysis' as applied to a flexible spacecraft. Analysis based on residue calculus shows that, among other factors, the significance of a mode depends on the power spectral density of disturbances and the response spectral density of sensors at the modal frequency. The second objective of this article is to compare the performances of an optimal and a suboptimal output feedback controller, the latter based on the 'minimum error excitation' of Kosut. Both performances are found to be nearly the same, leading us to favor the latter technique because it entails only linear computations. Our final objective is to detect instability due to truncated modes by representing them as a multiplicative and an additive perturbation in a nominal transfer function. In an example problem it is found that this procedure leads to a narrow range of permissible controller gains, and that it labels a wrong mode as a cause of instability. A free beam is used to illustrate the analysis in this work.
Proficiency Testing for Evaluating Aerospace Materials Test Anomalies
NASA Technical Reports Server (NTRS)
Hirsch, D.; Motto, S.; Peyton, S.; Beeson, H.
2006-01-01
ASTM G 86 and ASTM G 74 are commonly used to evaluate materials' susceptibility to ignition in liquid and gaseous oxygen systems. However, the methods have been known for their lack of repeatability. The inherent problems identified with the test logic would not allow precise identification of either the presence or the magnitude of problems related to running the tests, such as inconsistent system performance, lack of adherence to procedures, etc. Excessive variability leads to more instances of erroneously accepting the null hypothesis, and thus to the false deduction that problems are nonexistent when they really do exist. This paper attempts to develop and recommend an approach that could lead to increased accuracy in problem diagnostics by using the 50% reactivity point, which has been shown to be more repeatable. The initial tests conducted indicate that PTFE and Viton A (for pneumatic impact) and Buna S (for mechanical impact) would be good choices for additional testing and consideration for inter-laboratory evaluations. The approach presented could also be used to evaluate variable effects with increased confidence and tolerance optimization.
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Su, Bo-Han; Huang, Yi-Syuan; Chang, Chia-Yun; Tu, Yi-Shu; Tseng, Yufeng J
2013-10-31
There is a compelling need to discover type II inhibitors targeting the unique DFG-out inactive kinase conformation, since they are likely to possess greater potency and selectivity relative to traditional type I inhibitors. Using a known inhibitor, such as a currently available and approved drug, as a template to design new drugs via computational de novo design is helpful when working with known ligand-receptor interactions. This study proposes a new template-based de novo design protocol to discover new inhibitors that preserve and also optimize the binding interactions of the type II kinase template. First, sorafenib (Nexavar) and nilotinib (Tasigna), two type II inhibitors with different ligand-receptor interactions, were selected as the template compounds. The five-step protocol can reassemble each drug from a large fragment library. Our procedure demonstrates that the selected template compounds can be successfully reassembled while the key ligand-receptor interactions are preserved. Furthermore, to demonstrate that the algorithm is able to construct more potent compounds, we considered kinase inhibitors and another protein dataset, acetylcholinesterase (AChE) inhibitors. The de novo optimization was initiated using template compounds possessing less than optimal activity: a series of aminoisoquinoline and TAK-285 compounds inhibiting type II kinases, and E2020 derivatives inhibiting AChE, respectively. Three compounds with greater potency than the template compound were discovered that were also included in the original congeneric series. This template-based lead optimization protocol with the fragment library can help to automatically design compounds that preserve the binding interactions of known inhibitors and further optimize the compounds in the binding pockets.
Optimal secondary source position in exterior spherical acoustical holophony
NASA Astrophysics Data System (ADS)
Pasqual, A. M.; Martin, V.
2012-02-01
Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Besides, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
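The regularization the abstract describes — discarding spherical waves the secondary source cannot radiate efficiently so the driving signals stay bounded — is, in spirit, a truncated-SVD least-squares solve. A generic sketch, in which the transfer matrix and the truncation tolerance are illustrative stand-ins for the array's radiation-efficiency analysis:

```python
import numpy as np

def truncated_lsq(G, p_target, rel_tol=1e-2):
    """Least-squares driving signals q for G q ≈ p_target, with
    singular values below rel_tol * s_max discarded. The dropped
    components stand in for the strongly decaying spherical waves;
    keeping them would inflate q and overload the transducers."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > rel_tol * s[0]
    s_inv = np.where(keep, 1.0 / s, 0.0)          # pseudo-inverse spectrum
    q = Vt.conj().T @ (s_inv * (U.conj().T @ p_target))
    residual = np.linalg.norm(G @ q - p_target)   # regularization cost
    return q, float(residual), int(keep.sum())
```

The residual returned here is the price of regularization the paper's quality factor quantifies: a better secondary source position is one where fewer components must be discarded, so the residual stays small with bounded drive signals.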
Endoscopic-Assisted Burr Hole Reservoir and Ventricle Catheter Placement.
Antes, Sebastian; Tschan, Christoph A; Heckelmann, Michael; Salah, Mohamed; Senger, Sebastian; Linsler, Stefan; Oertel, Joachim
2017-05-01
Accurate positioning of a ventricle catheter is of utmost importance. Various techniques to ensure optimal positioning have been described. Commonly, after catheter placement, additional manipulation is necessary to connect a burr hole reservoir or shunt components. This manipulation can lead to accidental catheter dislocation and should be avoided. Here, we present a new technique that allows direct endoscopic insertion of a burr hole reservoir with an already mounted ventricle catheter. Before insertion, the ventricle catheter was slit at the tip, shortened to the correct length, and connected to the special burr hole reservoir. An intracatheter endoscope was then advanced through the reservoir and the connected catheter. This assemblage allowed using the endoscope as a stylet for shielded ventricular puncture. To confirm correct placement of the ventricle catheter, the endoscope was protruded a few millimeters beyond the catheter tip for inspection. The new technique was applied in 12 procedures. The modified burr hole reservoir was inserted for first-time ventriculoperitoneal shunting (n = 1), cerebrospinal fluid withdrawals and drug administration (n = 2), or different stenting procedures (n = 9). Optimal positioning of the catheter was achieved in 11 of 12 cases. No subcutaneous cerebrospinal fluid collection or fluid leakage through the wound occurred. No parenchymal damage or bleeding appeared. The use of the intracatheter endoscope combined with the modified burr hole reservoir provides a sufficient technique for accurate and safe placement. Connecting the ventricle catheter to the reservoir before the insertion reduces later manipulation and accidental dislocation of the catheter. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Airoldi, A.; Marelli, L.; Bettini, P.; Sala, G.; Apicella, A.
2017-04-01
Technologies based on optical fibers provide the possibility of installing relatively dense networks of sensors that can perform effective strain sensing functions during the operational life of structures. A contemporary trend is the increasing adoption of composite materials in aerospace constructions, which leads to structural architectures made of large monolithic elements. The paper is aimed at showing the feasibility of a detailed reconstruction of the strain field in a composite spar, based on the development of reference finite element models and the identification of load modes consisting of a parameterized set of forces. The procedure is described and assessed in ideal conditions. Thereafter, a surrogate model is used to obtain a realistic representation of the data acquired by the strain sensing system, so that the developed procedure is evaluated considering local effects due to the introduction of loads, significant modelling discrepancy in the development of the reference model, and the presence of measurement noise. Results show that the method can obtain a robust and quite detailed reconstruction of the strain fields, even at the level of local distributions, of the internal forces in the spar and of the displacements, by identifying an equivalent set of load parameters. Finally, the trade-off between the number of sensors and the accuracy, as well as the optimal sensor positions for a given maximum number of sensors, is evaluated by performing a multi-objective optimization, thus showing that even a relatively dense network of externally applied sensors can be used to achieve good quality results.
Automatic motor task selection via a bandit algorithm for a brain-controlled button
NASA Astrophysics Data System (ADS)
Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen
2013-02-01
Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
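As an illustration of the bandit principle behind UCB-classif, the following minimal sketch (plain Python with hypothetical task accuracies; not the authors' EEG pipeline or their exact algorithm) shows how an upper-confidence-bound rule concentrates sampling on the most promising motor task:

```python
import math
import random

def ucb_select(task_accuracies, n_rounds, seed=0):
    """Pick the most promising task with a UCB bandit.

    task_accuracies: true success probability of each motor task
    (unknown to the algorithm; used only to simulate feedback).
    """
    rng = random.Random(seed)
    k = len(task_accuracies)
    counts = [0] * k          # pulls per task
    sums = [0.0] * k          # accumulated rewards per task
    for t in range(1, n_rounds + 1):
        if t <= k:            # play each task once first
            arm = t - 1
        else:                 # empirical mean + exploration bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < task_accuracies[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return max(range(k), key=lambda i: counts[i])  # most-sampled task

# The sampling budget concentrates on the best task rather than
# being spread uniformly across all candidates:
best = ucb_select([0.2, 0.9, 0.3], n_rounds=600)
```

Compared with the naïve uniform strategy, which would spend a third of the session on each task, the bandit spends only a logarithmic number of trials on clearly inferior tasks.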
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of the deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam over a range of working distances for ultra-high-accuracy metrology. If an f-theta system is not well designed, its aberrations will introduce many systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
Optimization of an idealized Y-Shaped Extracardiac Fontan Baffle
NASA Astrophysics Data System (ADS)
Yang, Weiguang; Feinstein, Jeffrey; Mohan Reddy, V.; Marsden, Alison
2008-11-01
Research has shown that vascular geometries can significantly impact hemodynamic performance, particularly in pediatric cardiology, where anatomy varies from one patient to another. In this study we optimize a newly proposed design for the Fontan procedure, a surgery used to treat single ventricle heart patients. The current Fontan procedure connects the inferior vena cava (IVC) to the pulmonary arteries (PAs) via a straight Gore-Tex tube, forming a T-shaped junction. In the Y-graft design, the IVC is connected to the left and right PAs by two branches. Initial studies on the Y-graft design showed an increase in efficiency and improvement in flow distribution compared to traditional designs in a single patient-specific model. We now optimize an idealized Y-graft model to refine the design prior to patient testing. A derivative-free optimization algorithm using Kriging surrogate functions and mesh adaptive direct search is coupled to a 3-D finite element Navier-Stokes solver. We will present optimization results for rest and exercise conditions and examine the influence of energy efficiency, wall shear stress, pulsatile flow, and flow distribution on the optimal design.
On Optimizing an Archibald Rubber-Band Heat Engine.
ERIC Educational Resources Information Center
Mullen, J. G.; And Others
1978-01-01
Discusses the criteria and procedure for optimizing the performance of Archibald rubber-band heat engines by using the appropriate choice of dimensions, minimizing frictional torque, maximizing torque and balancing the rubber band system. (GA)
Chemistry challenges in lead optimization: silicon isosteres in drug discovery.
Showell, Graham A; Mills, John S
2003-06-15
During the lead optimization phase of drug discovery projects, the factors contributing to subsequent failure might include poor portfolio decision-making and a sub-optimal intellectual property (IP) position. The pharmaceutical industry has an ongoing need for new, safe medicines with a genuine biomedical benefit, a clean IP position and commercial viability. Inherent drug-like properties and chemical tractability are also essential for the smooth development of such agents. The introduction of bioisosteres, to improve the properties of a molecule and obtain new classes of compounds without prior art in the patent literature, is a key strategy used by medicinal chemists during the lead optimization process. Sila-substitution (C/Si exchange) of existing drugs is an approach to search for new drug-like candidates that have beneficial biological properties and a clear IP position. Some of the fundamental differences between carbon and silicon can lead to marked alterations in the physicochemical and biological properties of the silicon-containing analogues and the resulting benefits can be exploited in the drug design process.
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
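Lloyd's algorithm is the classic heuristic for the WCSS criterion; a minimal one-dimensional sketch (illustrative data and starting centroids, not taken from the paper) is:

```python
def wcss_partition(points, centroids, iters=50):
    """Lloyd-style heuristic for minimum within-cluster sums of squares.

    points: list of 1-D observations; centroids: initial cluster centres.
    Returns (labels, centroids, wcss)."""
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        labels = [min(range(len(centroids)),
                      key=lambda j: (p - centroids[j]) ** 2) for p in points]
        # update step: each centroid moves to its cluster mean
        for j in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    wcss = sum((p - centroids[l]) ** 2 for p, l in zip(points, labels))
    return labels, centroids, wcss

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
labels, centres, wcss = wcss_partition(data, centroids=[0.0, 5.0])
```

Like all such heuristics, the result depends on the initial centroids and is only a local optimum of the WCSS criterion, which is precisely why comparisons of heuristic procedures matter on larger data sets.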
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
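The contrast between quasi-analytical sensitivities and brute-force finite differences can be illustrated on a toy objective (the function and design point below are invented stand-ins, not the scramjet-afterbody model; the point is that finite differences cost one extra function evaluation, i.e. one extra flow solve, per design variable):

```python
def objective(x):
    # toy stand-in for the CFD objective: a smooth function of two design variables
    return x[0] ** 2 * x[1] + 3.0 * x[1] ** 2

def analytic_grad(x):
    # "quasi-analytical" sensitivities: exact partial derivatives
    return [2.0 * x[0] * x[1], x[0] ** 2 + 6.0 * x[1]]

def fd_grad(f, x, h=1e-6):
    # brute-force finite-difference sensitivities
    # (one extra evaluation of f per design variable)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - f(x)) / h)
    return g

x = [1.5, 2.0]
ga, gf = analytic_grad(x), fd_grad(objective, x)
```

When each evaluation of `objective` is a full Euler solve, avoiding the per-variable extra evaluations is exactly the efficiency gain the quasi-analytical approach delivers.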
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
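Approach (c), Monte Carlo-plus-energy-minimization, can be sketched on a toy one-dimensional energy with two minima (the energy function, step size, and trial count below are illustrative assumptions, not the authors' force field or protocol):

```python
import random

def energy(x):
    # toy "conformational energy" with two local minima (global one near x = -2)
    return x ** 4 - 8.0 * x ** 2 + x

def local_minimize(x, lr=0.01, steps=2000):
    # conventional minimization: gradient descent to the nearest local minimum
    for _ in range(steps):
        grad = 4.0 * x ** 3 - 16.0 * x + 1.0
        x -= lr * grad
    return x

def monte_carlo_minimize(n_trials=20, seed=1):
    # Monte Carlo outer loop: random restarts, each followed by local
    # minimization; keep the lowest-energy minimum found
    rng = random.Random(seed)
    best = local_minimize(rng.uniform(-3.0, 3.0))
    for _ in range(n_trials - 1):
        cand = local_minimize(rng.uniform(-3.0, 3.0))
        if energy(cand) < energy(best):
            best = cand
    return best

best_x = monte_carlo_minimize()
```

A single local minimization started on the wrong side of the barrier ends in the shallower minimum near x = +2; the Monte Carlo wrapper escapes it, which is the essence of the multiple-minima strategies listed above.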
Sathish, Ashik; Marlar, Tyler; Sims, Ronald C
2015-10-01
Methods to convert microalgal biomass to bio-based fuels and chemicals are limited by several processing and economic hurdles. Research conducted in this study modified and optimized a previously published procedure capable of extracting transesterifiable lipids from wet algal biomass. This optimization resulted in the extraction of 77% of the total transesterifiable lipids, while reducing the amount of materials and the temperature required in the procedure. In addition, characterization of the side streams generated demonstrated that: (1) the C/N ratio of the residual biomass, or lipid-extracted (LE) biomass, increased to 54.6 versus 10.1 for the original biomass; (2) the aqueous phase generated contains nitrogen, phosphorus, and carbon; and (3) the solid precipitate phase was composed of up to 11.2 wt% nitrogen (70% protein). The ability to isolate algal lipids and the possibility of utilizing the generated side streams as products and/or feedstock material for downstream processes help promote the algal biorefinery concept. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1983-01-01
An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed successful method for single-parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure, in combination with an optimization method, to a blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.
Use of multilevel modeling for determining optimal parameters of heat supply systems
NASA Astrophysics Data System (ADS)
Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.
2017-07-01
The problem of finding optimal parameters of a heat-supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems.
The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868
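The gap between a greedy heuristic and the global optimum in a budget-constrained utility maximization can be reproduced on a toy instance (hypothetical parcel costs and benefits; exhaustive search stands in for the integer programming solver, and the gap here is illustrative, not the paper's 12% figure):

```python
from itertools import combinations

def greedy(parcels, budget):
    """Benefit-per-cost greedy heuristic for the utility maximization problem."""
    spent, utility = 0.0, 0.0
    # consider parcels in decreasing benefit/cost ratio
    for i, (cost, benefit) in sorted(enumerate(parcels),
                                     key=lambda t: -t[1][1] / t[1][0]):
        if spent + cost <= budget:
            spent += cost
            utility += benefit
    return utility

def exhaustive_optimum(parcels, budget):
    """Globally optimal selection by brute force (stand-in for an ILP solver)."""
    best = 0.0
    for r in range(len(parcels) + 1):
        for subset in combinations(range(len(parcels)), r):
            cost = sum(parcels[i][0] for i in subset)
            if cost <= budget:
                best = max(best, sum(parcels[i][1] for i in subset))
    return best

# (cost, expected conservation benefit) for each hypothetical parcel
parcels = [(6.0, 9.0), (5.0, 7.0), (5.0, 7.0), (1.0, 1.4)]
g = greedy(parcels, budget=10.0)
o = exhaustive_optimum(parcels, budget=10.0)
```

The greedy rule grabs the highest-ratio parcel first and then cannot afford the pair of parcels that together maximize total benefit, which is why exact optimization can outperform it.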
Design of piezoelectric transformer for DC/DC converter with stochastic optimization method
NASA Astrophysics Data System (ADS)
Vasic, Dejan; Vido, Lionel
2016-04-01
Piezoelectric transformers have been adopted in recent years due to their many inherent advantages such as safety, no EMI problem, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistor. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, there are non-linear electronic circuits connected before and after the transformer. Consequently, the output load is variable, and due to the output capacitance of the transformer the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full-wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different sizes of the transformer and their characteristics. In other words, this method looks for the best size of the transformer for the optimal efficiency condition that is suitable for a variable load. Furthermore, the size and the efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum size of PT needed. The design procedure is illustrated with a given specification. The PT derived from the proposed design procedure can guarantee both good efficiency and a sufficient range for load variation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.
Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping of the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in regression of PSD vs RAK, seven clusters of clinical groups were optimal in regression of PSD vs KAP, and six clusters of clinical groups were optimal in regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method. Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.
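The clustered-versus-pooled comparison rests on the Bayesian information criterion applied to linear regressions; the sketch below (synthetic two-group data, not the registry's dose measurements) shows BIC favouring separate fits when two groups have genuinely different regressions:

```python
import math

def fit_rss(xs, ys):
    """Ordinary least squares y = a + b*x; returns the residual sum of squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def bic(rss, n, k):
    # Gaussian BIC up to a constant: n*ln(RSS/n) + k*ln(n),
    # where k is the number of fitted parameters
    return n * math.log(rss / n) + k * math.log(n)

# two hypothetical procedure groups with clearly different slopes
g1 = [(1.0, 1.1), (2.0, 2.0), (3.0, 3.1), (4.0, 3.9)]
g2 = [(1.0, 3.0), (2.0, 6.1), (3.0, 8.9), (4.0, 12.1)]
xs = [x for x, _ in g1 + g2]
ys = [y for _, y in g1 + g2]

pooled = bic(fit_rss(xs, ys), len(xs), k=2)                      # one shared line
split = bic(fit_rss(*zip(*g1)) + fit_rss(*zip(*g2)), len(xs), k=4)  # one line per group
```

BIC's penalty term is what prevents the search from splitting every group into its own cluster: extra parameters must buy a large enough drop in residual error.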
Shah, Peer Azmat; Hasbullah, Halabi B; Lawal, Ibrahim A; Aminu Mu'azu, Abubakar; Tang Jung, Low
2014-01-01
Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming have gained popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices running these applications across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of a shared secret token and a time-based one-time password (TOTP), along with verification of the mobile node via direct communication and maintenance of the status of the correspondent node's compatibility. TOTP-RO was implemented in the network simulator NS-2 and an analytical evaluation was also performed. The analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6 Return-Routability-based Route Optimization (RR-RO).
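The time-based one-time password component follows the standard RFC 6238/RFC 4226 construction; a minimal sketch of that generic TOTP primitive (not the TOTP-RO signalling protocol itself) is:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 8) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, with dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    # take 4 bytes at the offset, mask the sign bit, keep the last `digits` digits
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 8) -> str:
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits): time 59 -> "94287082"
token = totp(b"12345678901234567890", 59)
```

Because both endpoints can derive the same short-lived token from the shared secret and the current time, reachability can be verified without the two round trips of the return-routability home test and care-of test.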
Improvement of the insertion axis for cochlear implantation with a robot-based system.
Torres, Renato; Kazmitcheff, Guillaume; De Seta, Daniele; Ferrary, Evelyne; Sterkers, Olivier; Nguyen, Yann
2017-02-01
It has previously been reported that alignment of the insertion axis along the basal turn of the cochlea depends on the surgeon's experience. In this experimental study, we assessed technological assistance, such as navigation and a robot-based system, to improve the insertion axis during cochlear implantation. A preoperative cone beam CT and a mastoidectomy with a posterior tympanotomy were performed on four temporal bones. The optimal insertion axis was defined as the axis closest to the scala tympani centerline that avoids the facial nerve. A neuronavigation system, a robot assistance prototype, and software allowing semi-automated alignment of the robot were used to align an insertion tool with the optimal insertion axis. Four procedures were performed and repeated three times in each temporal bone: manual, manual navigation-assisted, robot-based navigation-assisted, and robot-based semi-automated. The angle between the optimal axis and the insertion tool axis was measured in the four procedures. The error was 8.3° ± 2.82° for the manual procedure (n = 24), 8.6° ± 2.83° for the manual navigation-assisted procedure (n = 24), 5.4° ± 3.91° for the robot-based navigation-assisted procedure (n = 24), and 3.4° ± 1.56° for the robot-based semi-automated procedure (n = 12). Higher accuracy was observed with the semi-automated robot-based technique than with the manual and manual navigation-assisted techniques (p < 0.01). Combining a navigation system with manual insertion does not improve alignment accuracy, due to the lack of a user-friendly interface. In contrast, a semi-automated robot-based system reduces both the error and the variability of the alignment with a defined optimal axis.
Szabó, György; Kiss, Róbert; Páyer-Lengyel, Dóra; Vukics, Krisztina; Szikra, Judit; Baki, Andrea; Molnár, László; Fischer, János; Keseru, György M
2009-07-01
Hit-to-lead optimization of a novel series of N-alkyl-N-[2-oxo-2-(4-aryl-4H-pyrrolo[1,2-a]quinoxaline-5-yl)-ethyl]-carboxylic acid amides, derived from a high throughput screening (HTS) hit, are described. Subsequent optimization led to identification of in vitro potent cannabinoid 1 receptor (CB1R) antagonists representing a new class of compounds in this area.
Evaluation of tricuspid valve regurgitation following laser lead extraction†.
Pecha, Simon; Castro, Liesa; Gosau, Nils; Linder, Matthias; Vogler, Julia; Willems, Stephan; Reichenspurner, Hermann; Hakmi, Samer
2017-06-01
The objective of this study was to examine the effect of laser lead extraction (LLE) on the development of post-procedural tricuspid regurgitation (TR). Some reports have suggested an increase in TR associated with LLE. We present a series of patients who underwent both LLE and complete echocardiographic evaluation for TR. We performed a single-centre analysis of consecutive patients referred for LLE between January 2012 and August 2015. One hundred and three patients had tricuspid valve function evaluated before the procedure with transthoracic echocardiography (TTE), during the procedure using transoesophageal echocardiography (TEE), and postoperatively using TTE. TR was graded from 0 (none) to 4 (severe). We treated 235 leads in 103 patients, including 118 ventricular leads. Seventy-seven patients were male (74.8%) and 26 female (25.2%), with a mean age of 65.6 ± 15.4 years. Mean time from initial lead implantation was 98.0 ± 67.3 months. Twenty-one patients (20.4%) had an ejection fraction below 30%. No intra-procedural worsening of tricuspid valve function was seen with TEE in any of the patients. Ten patients (9.7%) were found to have TR before LLE that returned to normal valve function after the procedure. Two patients (1.9%) experienced mild TR after the procedure (both with tricuspid valve endocarditis). Ninety-one patients (88.3%) did not experience any significant change of tricuspid valve function after LLE. Transthoracic and transoesophageal echocardiographic findings showed that laser lead extraction was not associated with a significant increase in the incidence of tricuspid valve regurgitation. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Jamaluddin, A.; Jauhari, W. A.; Saputro, D. R. S.
2017-06-01
In this study, we consider a stochastic integrated manufacturer-retailer inventory model with a service level constraint. The model analyzed in this article considers the situation in which the vendor and the buyer establish a long-term contract and strategic partnership to jointly determine the best strategy. The lead time and setup cost are assumed to be controllable through an additional crashing cost and an investment, respectively. It is assumed that shortages are allowed and partially backlogged on the buyer's side, and that the protection interval (i.e., review period plus lead time) demand distribution is unknown but has given finite first and second moments. The objective is to apply the minmax distribution-free approach to simultaneously optimize the review period, the lead time, the setup cost, the safety factor, and the number of deliveries in order to minimize the joint total expected annual cost. The service level constraint guarantees that the service level requirement can be satisfied in the worst case. By constructing a Lagrange function, the analysis regarding the solution procedure is conducted, and a solution algorithm is then developed. Moreover, a numerical example and sensitivity analysis are given to illustrate the proposed model and to provide some observations and managerial implications.
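The minmax distribution-free approach typically bounds the expected shortage using only the first two moments of protection-interval demand (the Gallego-Moon bound); a small numeric sketch with assumed demand parameters (not the paper's numerical example) is:

```python
import math

def worst_case_expected_shortage(mu, sigma, r):
    """Distribution-free upper bound on expected shortage E[(X - r)+]
    for protection-interval demand with mean mu and standard deviation sigma,
    as used in min-max (Gallego-Moon style) inventory models."""
    return 0.5 * (math.sqrt(sigma ** 2 + (r - mu) ** 2) - (r - mu))

# hypothetical protection-interval demand: mean 100 units, std 15 units;
# raising the reorder point r tightens the worst-case shortage bound
mu, sigma = 100.0, 15.0
bounds = [worst_case_expected_shortage(mu, sigma, r)
          for r in (100.0, 110.0, 120.0, 130.0)]
```

Because the bound holds for every distribution with the given moments, enforcing the service level against it guarantees the requirement is met in the worst case, exactly as the abstract's constraint demands.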
NASA Astrophysics Data System (ADS)
Morawiec, Seweryn; Sarzała, Robert P.; Nakwaski, Włodzimierz
2013-11-01
Polarization effects are studied within nitride light-emitting diodes (LEDs) manufactured on standard polar and semipolar substrates. A new theoretical approach, somewhat different from standard ones, is proposed to this end. It is well known that when regular polar GaN substrates are used, strong piezoelectric and spontaneous polarizations create built-in electric fields leading to quantum-confined Stark effects (QCSEs). These effects may be completely avoided in nonpolar crystallographic orientations, but then there are problems with manufacturing InGaN layers of the relatively high indium content necessary for green emission. Hence, a procedure leading to partly overcoming these polarization problems in semipolar LEDs emitting green radiation is proposed. The (11-22) crystallographic substrate orientation (inclination angle of 58° to the c plane) seems to be the most promising because it is characterized by low Miller-Bravais indices leading to high-quality, smooth growth planes with high indium content. Besides, it enables increased indium incorporation efficiency and is efficient in suppressing the QCSE. The In0.3Ga0.7N/GaN QW LED grown on the semipolar (11-22) substrate has been found to be currently the optimal LED structure for emitting green radiation.
Optimal charges in lead progression: a structure-based neuraminidase case study.
Armstrong, Kathryn A; Tidor, Bruce; Cheng, Alan C
2006-04-20
Collective experience in structure-based lead progression has found electrostatic interactions to be more difficult to optimize than shape-based ones. A major reason for this is that the net electrostatic contribution observed includes a significant nonintuitive desolvation component in addition to the more intuitive intermolecular interaction component. To investigate whether knowledge of the ligand optimal charge distribution can facilitate more intuitive design of electrostatic interactions, we took a series of small-molecule influenza neuraminidase inhibitors with known protein cocrystal structures and calculated the difference between the optimal and actual charge distributions. This difference from the electrostatic optimum correlates with the calculated electrostatic contribution to binding (r² = 0.94) despite small changes in binding modes caused by chemical substitutions, suggesting that the optimal charge distribution is a useful design goal. Furthermore, detailed suggestions for chemical modification generated by this approach are in many cases consistent with observed improvements in binding affinity, and the method appears to be useful despite discrete chemical constraints. Taken together, these results suggest that charge optimization is useful in facilitating generation of compound ideas in lead optimization. Our results also provide insight into design of neuraminidase inhibitors.
A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.
Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong
2015-05-05
The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially with environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of FOG-based strapdown inertial navigation systems (SINS) degrades. This research focuses on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits of the standard AFSA during the optimization process (e.g., failure to use the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost). To address these weak points, the functional behaviors and overall procedures of the AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on the NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulation and experiments, and its advantages are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.
Huo, Yong; Thompson, Peter; Buddhari, Wacin; Ge, Junbo; Harding, Scott; Ramanathan, Letchuman; Reyes, Eugenio; Santoso, Anwar; Tam, Li-Wah; Vijayaraghavan, Govindan; Yeh, Hung-I
2015-03-15
Acute coronary syndromes (ACS) remain a leading cause of mortality and morbidity in the Asia-Pacific (APAC) region. International guidelines advocate invasive procedures in all but low-risk ACS patients; however, a high proportion of ACS patients in the APAC region receive solely medical management due to a combination of unique geographical, socioeconomic, and population-specific barriers. The APAC ACS Medical Management Working Group recently convened to discuss the ACS medical management landscape in the APAC region. Local and international ACS guidelines and the global and APAC clinical evidence-base for medical management of ACS were reviewed. Challenges in the provision of optimal care for these patients were identified and broadly categorized into issues related to (1) accessibility/systems of care, (2) risk stratification, (3) education, (4) optimization of pharmacotherapy, and (5) cost/affordability. While ACS guidelines clearly represent a valuable standard of care, the group concluded that these challenges can be best met by establishing cardiac networks and individual hospital models/clinical pathways taking into account local risk factors (including socioeconomic status), affordability and availability of pharmacotherapies/invasive facilities, and the nature of local healthcare systems. Potential solutions central to the optimization of ACS medical management in the APAC region are outlined with specific recommendations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Widodo, Edy; Kariyam
2017-03-01
Response Surface Methodology (RSM) is used to determine the input variable settings that achieve the optimal compromise in the response variables. There are three primary steps in an RSM problem, namely data collection, modelling, and optimization. This study focuses on the construction of response surface models, under the assumption that the collected data are correct. Usually the response surface model parameters are estimated by OLS. However, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often distort the estimated model. The resulting estimators can be biased, which can lead to errors in determining the optimal point, so that the main purpose of RSM is not achieved. Meanwhile, in practice the collected data often contain several response variables along with a set of independent variables. Treating each response separately and applying single-response procedures can result in wrong interpretations, so a model for the multi-response case is needed. Therefore, a multivariate response surface model that is resistant to outliers is required. As an alternative, this study discusses M-estimation as a parameter estimator for multivariate response surface models containing outliers. As an illustration, a case study is presented on experimental results for the enhancement of the surface layer of an aluminium alloy by shot peening.
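The contrast between OLS and M-estimation described above can be sketched with a single-response quadratic surface contaminated by one gross outlier. This is a generic Huber M-estimator fitted by iteratively reweighted least squares on synthetic data, not the paper's multivariate procedure; the tuning constant `c=1.345` and the simulated coefficients are standard illustrative choices.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber psi(r)/r weights: 1 for small residuals, c/|r| beyond c."""
    a = np.abs(r)
    w = np.ones_like(a)
    big = a > c
    w[big] = c / a[big]
    return w

def m_estimate(X, y, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    for _ in range(max_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        w = huber_weights(r / scale, c)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Quadratic response surface y = 1 + 2x + 0.5x^2 plus noise and one outlier.
rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 30)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(0, 0.05, x.size)
y[5] += 20.0  # gross outlier
X = np.column_stack([np.ones_like(x), x, x ** 2])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_m = m_estimate(X, y)
```

The M-estimate stays near the true coefficients (1, 2, 0.5) while the OLS fit is pulled by the outlier, which is exactly the failure mode that motivates robust estimation before locating the optimum of a response surface.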
Optimization of a Centrifugal Impeller Design Through CFD Analysis
NASA Technical Reports Server (NTRS)
Chen, W. C.; Eastland, A. H.; Chan, D. C.; Garcia, Roberto
1993-01-01
This paper discusses the procedure, approach and Rocketdyne CFD results for the optimization of the NASA consortium impeller design. Two different approaches have been investigated. The first is a tandem blade arrangement, in which the main impeller blade is split into two separate rows, with the second blade row offset circumferentially with respect to the first. The second approach is to control the high losses related to secondary flows within the impeller passage. Many key parameters have been identified, and each consortium team member involved will optimize a specific parameter using 3-D CFD analysis. Rocketdyne has provided a series of CFD grids for the consortium team members. SECA will complete the tandem blade study, SRA will study the effect of the splitter blade solidity change, NASA LeRC will evaluate the effect of the circumferential position of the splitter blade, VPI will work on the hub-to-shroud blade loading distribution, NASA Ames will examine the impacts of impeller discharge leakage flow, and Rocketdyne will continue to work on the meridional contour and the leading-edge-to-trailing-edge blade work distribution. This paper also presents Rocketdyne results from the tandem blade study and from the blade loading distribution study. It is the ultimate goal of this consortium team to integrate the available CFD analyses to design an advanced-technology impeller suitable for use in the NASA Space Transportation Main Engine (STME) fuel turbopump.