Sample records for optimal time point

  1. Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.

    2009-04-01

    Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experience with LDR techniques can guide the optimal use of HDR techniques. The aim of this study was to optimize the dose distribution for Gammamed Plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on the optimized dose distributions. Two dose optimization point models were used in this study: non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length; thus 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameter and were chosen for each cylinder so that four points were distributed evenly over the curved portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was used for all optimizations, and the effects of the various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. Doses at the apex point and the three dome points were higher for the apex model than for the non-apex model. Mean doses to the optimization points for both cylinder models and all cylinder diameters were 6 Gy, matching the prescription dose of 6 Gy. The iterative optimization routine resulted in the highest dose to the apex point and dome points; its mean dose to the optimization points, 6.01 Gy, was higher than the 5.74 Gy obtained with the geometric and equal-times routines. A step size of 1 cm gave the highest dose to the apex point and was superior in terms of mean dose to the optimization points. The selection of dose optimization points for the derivation of optimized dose distributions for vaginal cylinders thus affects the resulting dose distributions.
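
    The core of such an optimization can be pictured as a small inverse problem: choose non-negative dwell times so that the computed dose at each optimization point approaches the 6 Gy prescription. The Python sketch below uses a simple inverse-square point-source kernel and invented geometry (13 dwell positions over a 6-cm active length, surface points on a 3.0-cm cylinder); clinical systems use the TG-43 formalism and their own iterative routines, so every number here is illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical geometry: 13 dwell positions spaced 5 mm apart along the
# cylinder axis (6-cm active length), one optimization point on the
# cylinder surface beside each dwell position (1.5 cm radial offset).
dwell_z = np.arange(13) * 0.5          # cm
opt_z = np.arange(13) * 0.5            # cm
radius = 1.5                           # cm, 3.0-cm diameter cylinder

# Simple inverse-square kernel; real planning uses TG-43 dosimetry.
dz = opt_z[:, None] - dwell_z[None, :]
A = 1.0 / (dz**2 + radius**2)          # dose per unit dwell time

b = np.full(len(opt_z), 6.0)           # 6 Gy prescription at each point

# Non-negative least squares, since dwell times cannot be negative.
times, residual = nnls(A, b)
print("relative dwell times:", np.round(times / times.max(), 3))
print("doses at optimization points:", np.round(A @ times, 3))
```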

  2. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1973-01-01

    A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel optimal two-impulse trajectories which originate at the earth or a point in space, and fuel optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.

  3. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
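
    One way to picture the cascading top-k idea: sort only the largest k values, look for the knee inside that sorted prefix, and enlarge k only when the knee is not yet safely interior to it. The sketch below is illustrative Python only; it uses a simple doubling rule in place of the paper's expected-time-cost optimization, heapq in place of the quicksort variation, and a maximum-distance-to-chord definition of the knee.

```python
import heapq
import numpy as np

def knee_index(sorted_desc):
    """Knee of a descending curve: the point farthest from the chord
    joining the first and last points."""
    y = np.asarray(sorted_desc, dtype=float)
    n = len(y)
    x = np.arange(n)
    num = np.abs((y[-1] - y[0]) * x - (n - 1) * (y - y[0]))
    return int(np.argmax(num / np.hypot(y[-1] - y[0], n - 1)))

def knee_by_cascading_topk(values, k0=64):
    """Sort only the top k items, doubling k until the knee lies well
    inside the sorted prefix (an illustrative stopping rule)."""
    k = k0
    while True:
        top = heapq.nlargest(min(k, len(values)), values)
        idx = knee_index(top)
        if idx < 0.5 * len(top) or len(top) == len(values):
            return idx, top[idx]
        k *= 2

vals = np.random.pareto(2.0, 100_000).tolist()
print(knee_by_cascading_topk(vals))
```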

  4. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, the proposed model selects the optimal supplier and calculates the optimal product volume to purchase from that supplier, so that the inventory level lies as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
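
    A minimal backward-recursion sketch of this kind of stochastic dynamic program is shown below, with hypothetical suppliers, cost and demand scenarios, and a quadratic penalty for deviating from the reference inventory level; the paper's actual model and data differ.

```python
import numpy as np

# Hypothetical data: two suppliers with random unit-cost scenarios,
# random demand, a reference inventory level, and a quadratic penalty
# for deviating from it. None of these numbers come from the paper.
T, ref, max_inv = 4, 10, 20
suppliers = {"A": ([3.0, 4.0], [0.5, 0.5]),      # (cost scenarios, probs)
             "B": ([2.5, 5.0], [0.6, 0.4])}
demand_vals, demand_probs = [2, 4, 6], [0.3, 0.5, 0.2]
track_w = 1.0

V = {x: 0.0 for x in range(max_inv + 1)}          # terminal value
policy = []
for t in reversed(range(T)):
    newV, newpol = {}, {}
    for x in range(max_inv + 1):                  # current inventory
        best = (np.inf, None)
        for s, (costs, cps) in suppliers.items():
            for q in range(max_inv + 1 - x):      # order quantity
                exp_cost = 0.0
                for c, cp in zip(costs, cps):
                    for d, dp in zip(demand_vals, demand_probs):
                        nx = min(max(x + q - d, 0), max_inv)
                        exp_cost += cp * dp * (c * q
                                               + track_w * (nx - ref) ** 2
                                               + V[nx])
                if exp_cost < best[0]:
                    best = (exp_cost, (s, q))
        newV[x], newpol[x] = best
    V, policy = newV, [newpol] + policy

print("t=0, inventory 5 -> (supplier, volume):", policy[0][5])
```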

  5. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet it is critical to acquire temporal data sets with sufficient information content to allow accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), as validated both in silico and in vivo. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the full set of time sampling points were used.
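
    The D-optimality criterion used here can be illustrated with a greedy selection over the rows of a sensitivity (Jacobian) matrix, maximizing det(JᵀJ) of the selected time points. In the sketch below the bi-exponential decay model, its parameter values, and the greedy search itself are all hypothetical stand-ins; the paper's model and optimization differ.

```python
import numpy as np

def greedy_d_optimal(J, n_select, ridge=1e-10):
    """Greedily pick rows (time points) of the sensitivity matrix J to
    maximize log det(J_S^T J_S), the D-optimality criterion. The tiny
    ridge keeps the determinant defined while fewer points than
    parameters have been selected."""
    n, p = J.shape
    chosen = []
    for _ in range(n_select):
        best_val, best_i = -np.inf, None
        for i in range(n):
            if i in chosen:
                continue
            S = J[chosen + [i], :]
            _, logdet = np.linalg.slogdet(S.T @ S + ridge * np.eye(p))
            if logdet > best_val:
                best_val, best_i = logdet, i
        chosen.append(best_i)
    return sorted(chosen)

# Toy bi-exponential decay: sensitivities w.r.t. two lifetimes and the
# quenched fraction (a hypothetical stand-in for the paper's model).
t = np.linspace(0.1, 9.0, 90)
tau1, tau2, f = 0.5, 2.5, 0.4
J = np.column_stack([
    f * t / tau1**2 * np.exp(-t / tau1),          # d model / d tau1
    (1 - f) * t / tau2**2 * np.exp(-t / tau2),    # d model / d tau2
    np.exp(-t / tau1) - np.exp(-t / tau2),        # d model / d f
])
print("10 selected time indices:", greedy_d_optimal(J, 10))
```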

  6. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that, in order to avoid point-optimization at the sampled design points in multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.

  7. Free-time and fixed end-point optimal control theory in dissipative media: application to entanglement generation and maintenance.

    PubMed

    Mishima, K; Yamashita, K

    2009-07-07

    We develop monotonically convergent free-time and fixed end-point optimal control theory (OCT) in the density-matrix representation to deal with quantum systems showing dissipation. Our theory is more general and flexible for tailoring optimal laser pulses to control quantum dynamics with dissipation than the conventional fixed-time and fixed end-point OCT, in that the optimal temporal duration of the laser pulses can also be optimized exactly. To show the usefulness of our theory, it is applied to the generation and maintenance of the vibrational entanglement of carbon monoxide adsorbed on the copper (100) surface, CO/Cu(100). We demonstrate the numerical results and clarify how to combat vibrational decoherence as much as possible with the tailored shapes of the optimal laser pulses. We expect our theory to be general enough to apply to a variety of dissipative quantum dynamical systems, because decoherence is one of the quantum phenomena sensitive to the temporal duration of the quantum dynamics.

  8. The effect of different control point sampling sequences on convergence of VMAT inverse planning

    NASA Astrophysics Data System (ADS)

    Pardo Montero, Juan; Fenwick, John D.

    2011-04-01

    A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of the organs at risk.
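
    The three families of sampling sequences compared in this study can be generated along the following lines. This is one reading of the definitions given above (the exact schedules used in the paper and in RapidArc differ in detail), with `final` standing for the full number of control points in the arc.

```python
def doubling_sequence(start, final):
    """Numbers of control points per multiresolution level, roughly
    doubling at each level, RapidArc-style."""
    seq, n = [], start
    while n * 2 <= final:
        seq.append(n)
        n *= 2
    seq.append(final)
    return seq

def otto_like_sequence(start, final):
    """Coarse start, then one control point added per stage."""
    return list(range(start, final + 1))

def equi_length_sequence(levels, final):
    """A fixed number of levels, each adding (roughly) the same number
    of control points; levels=20 would correspond to an E20 sequence."""
    step = final // levels
    return [step * (i + 1) for i in range(levels - 1)] + [final]

print(doubling_sequence(11, 177))       # e.g. [11, 22, 44, 88, 177]
print(equi_length_sequence(20, 177))
```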

  9. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
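
    The time-allocation trade-off described here can be made concrete with a toy model: detection probability grows with count duration, but longer counts mean fewer points per session. All rates, travel times, and budgets below are invented for illustration and are not the paper's model.

```python
import numpy as np

# Illustrative trade-off (not the paper's exact model): a count of
# duration t detects each of N birds with probability p(t) = 1 - e^(-rt),
# travel between points costs m minutes, and the session length is T.
# Longer counts are more complete but allow fewer replicate points.
r, m, T, N = 0.25, 5.0, 180.0, 10.0      # all numbers invented

t = np.linspace(0.5, 30.0, 300)          # minutes counted per point
p = 1.0 - np.exp(-r * t)
k = T / (t + m)                          # points visited per session
# Variance of the detection-corrected estimate N_hat = C / p, with
# C ~ Binomial(N, p), averaged over the k independent points.
var_Nhat = N * (1.0 - p) / (p * k)
print("optimal minutes per point: %.1f" % t[np.argmin(var_Nhat)])
```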

  10. Optimal ballistically captured Earth-Moon transfers

    NASA Astrophysics Data System (ADS)

    Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.

    2012-07-01

    The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is then modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.

  11. A study of the application of singular perturbation theory. [development of a real time algorithm for optimal three dimensional aircraft maneuvers

    NASA Technical Reports Server (NTRS)

    Mehra, R. K.; Washburn, R. B.; Sajan, S.; Carroll, J. V.

    1979-01-01

    A hierarchical real-time algorithm for optimal three-dimensional control of aircraft is described. Systematic methods are developed for real-time computation of nonlinear feedback controls by means of singular perturbation theory. The results are applied to a six-state, three-control-variable, point-mass model of an F-4 aircraft. Nonlinear feedback laws are presented for computing the optimal control of throttle, bank angle, and angle of attack. Real-time capability is assessed on a TI 9900 microcomputer. The breakdown of the singular perturbation approximation near the terminal point is examined. Continuation methods are examined as a way to obtain exact optimal trajectories starting from the singular perturbation solutions.

  12. Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points

    PubMed Central

    Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.

    2015-01-01

    Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758

  13. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    Time-fixed impulsive transfers between 3D libration point orbits in the vicinity of the interior L(1) libration point of the sun-earth-moon barycenter system are considered; these transfers are 'optimal' in that the total characteristic velocity required for implementation of the transfer exhibits a local minimum. The conditions necessary for a time-fixed, two-impulse transfer trajectory to be optimal are stated in terms of the primer vector, and the conditions necessary for satisfying the local optimality of a transfer trajectory containing additional impulses are addressed by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses.

  14. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  15. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  16. On the Motion of Agents across Terrain with Obstacles

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.

    2018-01-01

    The paper is devoted to finding the time-optimal route of an agent travelling across a region from a given source point to a given target point. At each point of this region, a maximum allowed speed is specified. This speed limit may vary in time. The continuous statement of this problem and the case when the agent travels on a grid with square cells are considered. In the latter case, the time is also discrete, and the number of admissible directions of motion at each point in time is eight. The existence of an optimal solution of this problem is proved, and accuracy estimates for the approximate solution obtained on the grid are derived. It is found that decreasing the size of cells below a certain limit does not further improve the approximation. These results can be used to estimate the quasi-optimal trajectory of the agent motion across rugged terrain produced by an algorithm based on a cellular automaton that was earlier developed by the author.
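
    A standard way to compute such a route on the discrete grid is a time-dependent Dijkstra search over the eight admissible directions. The sketch below assumes a FIFO network (waiting never lets the agent arrive earlier) and a hypothetical speed field; it is a generic shortest-path illustration, not the author's cellular-automaton algorithm.

```python
import heapq
import math

def fastest_route(speed, start, goal):
    """Time-dependent Dijkstra on an 8-connected grid. speed(cell, t)
    returns the maximum allowed speed at a cell at time t (0 means the
    cell is impassable at that time)."""
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, cell = heapq.heappop(pq)
        if cell == goal:
            return t
        if t > dist.get(cell, math.inf):
            continue                          # stale queue entry
        x, y = cell
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            v = speed(nxt, t)
            if v <= 0:
                continue
            nt = t + math.hypot(dx, dy) / v   # diagonal steps cost sqrt(2)
            if nt < dist.get(nxt, math.inf):
                dist[nxt] = nt
                heapq.heappush(pq, (nt, nxt))
    return math.inf

# Toy 20x20 field with a slow vertical band in the middle.
def speed(cell, t):
    x, y = cell
    if not (0 <= x < 20 and 0 <= y < 20):
        return 0.0
    return 0.2 if 8 <= x <= 11 else 1.0

print(fastest_route(speed, (0, 0), (19, 19)))
```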

  17. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy.

    PubMed

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2013-01-01

    Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to problems caused by respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction errors. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that overall the BSpline registration algorithm with the reference optimization approach gives the best results.

  18. Control-enhanced multiparameter quantum estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2017-10-01

    Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly the gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit, but also have higher stability against inaccuracy in the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a precisely specified time point due to the response time of the measurement apparatus.

  19. Optimal Low Energy Earth-Moon Transfers

    NASA Technical Reports Server (NTRS)

    Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.

    2010-01-01

    The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.

  20. An Exploration Of Fuel Optimal Two-impulse Transfers To Cyclers in the Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Hosseinisianaki, Saghar

    2011-12-01

    This research explores optimal two-impulse transfers between a low Earth orbit and cycler orbits in the Earth-Moon circular restricted three-body framework, emphasizing the optimization strategy. Cyclers are periodic orbits that encounter both the Earth and the Moon periodically; a spacecraft on such a trajectory is under the influence of both the Earth's and the Moon's gravitational fields. Cyclers have gained recent interest as baseline orbits for several Earth-Moon mission concepts, notably in relation to human exploration. This thesis shows that a direct optimization starting from the classic Lambert initial guess may not be adequate for these problems, and proposes a three-step optimization solver to improve the domain of convergence toward an optimal solution. The first step consists of finding feasible trajectories with a given transfer time; Lambert's problem provides the initial guess for minimizing the error in arrival position, and the suitability of Lambert's solution as an initial guess is analyzed. Once a feasible trajectory is found, the velocity impulse is a function only of the transfer time and the phases of the departure and arrival points. The second step is the optimization of the impulse over the transfer time, which yields the minimum-impulse transfer for fixed end points. Finally, the third step maps the optimal solutions as the end points are varied.

  2. A geostatistical methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S

    2013-04-01

    This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method that selects the space-time point that minimizes a function of the variance, in each step, is used. We demonstrate the methodology applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information of 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that from the existing monitoring program that consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
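
    The combination of a static Kalman filter with sequential selection can be sketched as a greedy loop: each scalar measurement update shrinks the covariance, and the next monitoring point is the one with the largest total variance reduction. The covariance model and noise level below are hypothetical placeholders; the paper builds its space-time covariance from a fitted geostatistical variogram.

```python
import numpy as np

def sequential_design(P, r, n_pick):
    """Greedy design: a scalar Kalman update at point i removes
    P[:, i] P[i, :] / (P[i, i] + r) from the covariance, so its summed
    variance reduction is sum_j P[j, i]^2 / (P[i, i] + r). Pick the
    best point, update, repeat."""
    P = P.copy()
    chosen = []
    for _ in range(n_pick):
        gains = (P**2).sum(axis=0) / (np.diag(P) + r)
        i = int(np.argmax(gains))
        chosen.append(i)
        P = P - np.outer(P[:, i], P[i, :]) / (P[i, i] + r)
    return chosen, P

# Toy space-time covariance over (x, y, t) candidate monitoring points.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(60, 3))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
P0 = np.exp(-d / 3.0)
chosen, P = sequential_design(P0, r=0.1, n_pick=10)
print("selected space-time points:", chosen)
print("variance remaining: %.1f%%" % (100 * np.trace(P) / np.trace(P0)))
```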

  3. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points at which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
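
    The two-objective center selection at the heart of SOP can be sketched as follows. This is a minimal Python illustration of the non-dominated sorting step only; full SOP adds surrogate fitting, candidate perturbation, and tabu tenure.

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimized)."""
    idx = []
    for i, f in enumerate(F):
        dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
        if not dominated:
            idx.append(i)
    return idx

def select_centers(X, y, n_centers):
    """SOP-style center selection: trade off low function value against
    distance to previously evaluated points (exploration)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    min_dist = d.min(axis=1)
    F = np.column_stack([y, -min_dist])   # minimize value, maximize spread
    front = pareto_front(F)
    front.sort(key=lambda i: y[i])        # rank the front by function value
    return front[:n_centers]

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 3))      # previously evaluated points
y = (X**2).sum(axis=1)                    # toy "expensive" function values
print("centers:", select_centers(X, y, n_centers=4))
```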

  4. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
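
    The dynamic-programming scheme outlined here can be sketched directly. The version below uses set union as one example of a measure function and the symmetric difference as the per-point difference, and it recomputes segment costs naively in quadratic time rather than with the paper's efficient algorithms.

```python
import functools

def optimal_segmentation(itemsets, k, measure=frozenset.union):
    """Segment an item-set time series into k segments minimizing the
    total segment difference (sum over time points of the symmetric
    difference with the segment's item set)."""
    n = len(itemsets)
    sets = [frozenset(s) for s in itemsets]

    @functools.lru_cache(maxsize=None)
    def seg_cost(i, j):                   # one segment over points i..j
        seg = functools.reduce(measure, sets[i:j + 1])
        return sum(len(seg ^ s) for s in sets[i:j + 1])

    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[None] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for j in range(1, n + 1):
        for m in range(1, min(k, j) + 1):
            for i in range(m - 1, j):
                c = best[i][m - 1] + seg_cost(i, j - 1)
                if c < best[j][m]:
                    best[j][m], back[j][m] = c, i
    cuts, j, m = [], n, k                 # recover segment boundaries
    while m > 0:
        i = back[j][m]
        cuts.append((i, j - 1))
        j, m = i, m - 1
    return best[n][k], cuts[::-1]

series = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}, {"c"}]
print(optimal_segmentation(series, 2))
```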

  5. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  6. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

    This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized in a Single Program Multiple Data (SPMD) fashion on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.

  7. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
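
    The "best worst" criterion amounts to a minimax problem: optimize only the worst field point's merit. Below is a tiny sketch with a hypothetical two-variable, three-field-point merit model (the quadratic functions stand in for whatever image-quality metric the design uses); Nelder-Mead handles the nonsmooth max without gradients.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical merit model: RMS spot size at three field points as a
# quadratic function of two lens variables (stand-ins for curvatures).
def spot_sizes(v):
    return np.array([
        1.0 + (v[0] - 0.3)**2 + 0.5 * v[1]**2,          # on-axis
        1.2 + 0.8 * (v[0] + 0.1)**2 + (v[1] - 0.4)**2,  # mid field
        1.5 + (v[0] - 0.1)**2 + 1.5 * (v[1] + 0.2)**2,  # edge of field
    ])

# "Best worst" criterion: minimize the merit of the worst field point.
res = minimize(lambda v: spot_sizes(v).max(), x0=[0.0, 0.0],
               method="Nelder-Mead")
print("variables:", np.round(res.x, 3))
print("per-field merit:", np.round(spot_sizes(res.x), 3))
```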

  8. Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation

    NASA Astrophysics Data System (ADS)

    Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.

    1998-01-01

    A method for the fully computerized determination and optimization of positions of target points and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial-and-error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of the target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time dose peaks inside the target volume are minimized and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.

  9. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.

    PubMed

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing huge casualties and property losses, and emergency logistics problems are consequently receiving increasing attention. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that the paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that the travel time is uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain travel time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable and optimal path. The original model is then simplified under the scenario that the vehicle only follows the optimal path from the emergency logistics center to the affected point, and solved with Lingo software. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (graphics processing unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  12. Options for Robust Airfoil Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Li, Wu

    2002-01-01

    A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.

  13. The effect of dropout on the efficiency of D-optimal designs of linear mixed models.

    PubMed

    Ortega-Azurduy, S A; Tan, F E S; Berger, M P F

    2008-06-30

    Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.

  14. Graph rigidity, cyclic belief propagation, and point pattern matching.

    PubMed

    McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S

    2008-11-01

    A recent paper [1] proposed a provably optimal polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.

  15. Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers

    NASA Astrophysics Data System (ADS)

    Abraham, Andrew J.

    Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravity forces from two bodies, and the centrifugal force acting on a third body, cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high-efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant shortcomings are: (1) an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small, which can allow the algorithm to become trapped in local minima, and (3) gradient information is not always accessible, nor always trustworthy, for a given problem. To avoid these problems, this dissertation is focused on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon L1 Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic that was originally written to model birds swarming to locate hidden food sources. This PSO method enables the exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches. In summary, this dissertation finds that the PSO method does indeed successfully optimize the low-thrust trajectory transfer problem without the need for an initial guess. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation, by at least an order of magnitude in terms of CPU time. Finally, the PSO method is also used to solve a traditional two-burn impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both delta-v and time of flight.
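
    A minimal particle swarm optimizer of the kind referred to here fits in a few lines. The sketch below is a generic PSO (not the dissertation's trajectory-specific formulation) applied to a stand-in objective; the swarm size, inertia, and acceleration constants are conventional textbook values.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: gradient-free, and it needs no
    initial guess beyond the search bounds."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Toy objective standing in for the (expensive) trajectory cost.
rosenbrock = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(pso(rosenbrock, ([-2, -2], [2, 2])))
```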

  16. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.

  17. New clinical insights for transiently evoked otoacoustic emission protocols.

    PubMed

    Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw

    2009-08-01

    The objective of the study was to optimize the region of a time-frequency analysis and then investigate stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from the recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the TF-optimized region were higher than the traditional indices in the selected time window, and the difference was statistically significant. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.

  19. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, it is a general approach to reconstruct the HDR signal from the compressed LDR signal as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd order polynomial has better mapping accuracy than a 1-piece high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, the computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to the traditional exhaustive search in 2-piecewise 2nd order polynomial inverse tone mapping with a continuity constraint.
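
    The look-up-table trick here is that every entry of the least-squares normal equations is a sum of per-codeword moments, so prefix sums over the codeword axis make each candidate pivot an O(1) evaluation. The sketch below follows that reading on synthetic data; it omits the paper's continuity constraint, and the normalization and test data are assumptions for illustration.

```python
import numpy as np

def fit_sse(S):
    """Quadratic least-squares fit and its SSE from the moment sums
    S = (n, sx, sx2, sx3, sx4, sy, sxy, sx2y, syy)."""
    n, sx, sx2, sx3, sx4, sy, sxy, sx2y, syy = S
    A = np.array([[n, sx, sx2], [sx, sx2, sx3], [sx2, sx3, sx4]])
    b = np.array([sy, sxy, sx2y])
    c = np.linalg.solve(A, b)
    return c, syy - c @ b              # SSE = sum y^2 - c.b for LS fits

def optimal_pivot(ldr, hdr, n_codewords=256):
    """Search every pivot in O(1) each: all normal-equation entries are
    sums of per-codeword terms, so prefix sums over the codeword axis
    play the role of the paper's look-up tables."""
    x = ldr / 255.0                    # normalize for conditioning
    terms = (np.ones_like(x), x, x**2, x**3, x**4,
             hdr, x * hdr, x**2 * hdr, hdr**2)
    M = np.zeros((n_codewords, 9))
    for k, tv in enumerate(terms):
        np.add.at(M[:, k], ldr, tv)
    C = np.vstack([np.zeros(9), np.cumsum(M, axis=0)])  # C[p]: codewords < p

    best = (np.inf, None, None, None)
    for p in range(3, n_codewords - 2):                 # candidate pivots
        c_lo, sse_lo = fit_sse(C[p])
        c_hi, sse_hi = fit_sse(C[-1] - C[p])
        if sse_lo + sse_hi < best[0]:
            best = (sse_lo + sse_hi, p, c_lo, c_hi)
    return best

ldr = np.random.randint(0, 256, 10_000)
hdr = np.where(ldr < 128, 0.02 * ldr.astype(float)**2,
               5.0 * ldr - 300.0) + np.random.normal(0, 5, ldr.shape)
sse, pivot, c_lo, c_hi = optimal_pivot(ldr, hdr)
print("optimal pivot codeword:", pivot)
```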

  20. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.

  1. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coastal arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coastal arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer as well as the direction and length of coastal periods implemented on a time-free transfer are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique. Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coastal arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.
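
    The primer vector conditions invoked in this abstract are Lawden's classical necessary conditions; a compact statement in standard form (not quoted from the thesis) is:

```latex
% Lawden's necessary conditions for an optimal impulsive transfer,
% in terms of the primer vector p(t) (the adjoint of velocity):
\begin{align*}
  &\|\mathbf{p}(t)\| \le 1 \quad \text{for all } t \in [t_0, t_f],\\
  &\|\mathbf{p}(t_k)\| = 1 \ \text{and} \ \Delta\mathbf{v}_k \parallel \mathbf{p}(t_k)
     \quad \text{at every impulse } t_k,\\
  &\mathbf{p} \ \text{and} \ \dot{\mathbf{p}} \ \text{continuous at interior impulses
     (equivalently, the Hamiltonian is continuous),}\\
  &\left.\frac{d}{dt}\|\mathbf{p}\|\right|_{t_0}
     = \left.\frac{d}{dt}\|\mathbf{p}\|\right|_{t_f} = 0
     \quad \text{when initial and final coasts are optimal.}
\end{align*}
```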

  2. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (i.e., a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
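    A classical projected-gradient-flow step for this problem class is sketched below; the pseudoinverse is a generic stand-in for the paper's singularity-free projection matrix, which is not reproduced here:

    ```python
    import numpy as np

    def projected_flow_step(x, grad_f, h, J, dt=1e-2, alpha=5.0):
        # One Euler step of a classical projected gradient flow for
        # min f(x) subject to h(x) = 0. The pseudoinverse tolerates linearly
        # dependent constraint gradients, but is NOT the paper's construction.
        Jx = J(x)
        Jp = np.linalg.pinv(Jx)
        P = np.eye(x.size) - Jp @ Jx        # projector onto tangent space of h = 0
        # Descend f along the constraint manifold; the -alpha*Jp@h(x) term
        # pulls drifting solutions back onto the constraint.
        dx = -P @ grad_f(x) - alpha * (Jp @ h(x))
        return x + dt * dx

    # usage: minimize (x0-2)^2 + (x1-2)^2 on the circle x0^2 + x1^2 = 1
    grad_f = lambda x: 2.0 * (x - np.array([2.0, 2.0]))
    h = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
    J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
    x = np.array([2.0, 0.5])
    for _ in range(2000):
        x = projected_flow_step(x, grad_f, h, J)
    # x approaches (1/sqrt(2), 1/sqrt(2)) while staying near the circle
    ```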

  3. Instructional versus schedule control of humans' choices in situations of diminishing returns

    PubMed Central

    Hackenberg, Timothy D.; Joker, Veronica R.

    1994-01-01

    Four adult humans chose repeatedly between a fixed-time schedule (of points later exchangeable for money) and a progressive-time schedule that began at 0 s and increased by a fixed number of seconds with each point delivered by that schedule. Each point delivered by the fixed-time schedule reset the requirements of the progressive-time schedule to its minimum value. Subjects were provided with instructions that specified a particular sequence of choices. Under the initial conditions, the instructions accurately specified the optimal choice sequence. Thus, control by instructions and optimal control by the programmed contingencies both supported the same performance. To distinguish the effects of instructions from schedule sensitivity, the correspondence between the instructed and optimal choice patterns was gradually altered across conditions by varying the step size of the progressive-time schedule while maintaining the same instructions. Step size was manipulated, typically in 1-s units, first in an ascending and then in a descending sequence of conditions. Instructions quickly established control in all 4 subjects but, by narrowing the range of choice patterns, they reduced subsequent sensitivity to schedule changes. Instructional control was maintained across the ascending sequence of progressive-time values for each subject, but eventually diminished, giving way to more schedule-appropriate patterns. The transition from instruction-appropriate to schedule-appropriate behavior was characterized by an increase in the variability of choice patterns and local increases in point density. On the descending sequence of progressive-time values, behavior appeared to be schedule sensitive, sometimes even optimally sensitive, but it did not always change systematically with the contingencies, suggesting the involvement of other factors. PMID:16812747

  4. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating the parameters of models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points so as to minimize the variance of the parameter estimates. We first formulate the task as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values, nor from becoming stuck in local optima, as conventional numerical optimization techniques often do. The simulation results indicate the soundness of the new method.
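    The criterion being optimized can be illustrated with a standard D-optimal baseline: choose the time points that maximize the determinant of the Fisher information matrix built from model sensitivities. The brute-force search and the decay model below are assumptions for illustration, not the paper's quantum-inspired evolutionary algorithm:

    ```python
    import numpy as np
    from itertools import combinations

    def d_optimal_times(candidates, sens, k):
        # Choose k sampling times maximizing det(F), where F = S^T S is the
        # Fisher information matrix under unit Gaussian noise.
        best_det, best_idx = -np.inf, None
        for subset in combinations(range(len(candidates)), k):
            S = sens[list(subset)]              # rows: dy(t_i)/dtheta
            d = np.linalg.det(S.T @ S)
            if d > best_det:
                best_det, best_idx = d, subset
        return [candidates[i] for i in best_idx], best_det

    # example: species decaying as y(t) = a*exp(-b*t); sensitivities are
    # [dy/da, dy/db] = [exp(-b*t), -a*t*exp(-b*t)]; a, b are assumed values
    a, b = 2.0, 0.5
    t = np.linspace(0.1, 10.0, 40)
    S = np.stack([np.exp(-b * t), -a * t * np.exp(-b * t)], axis=1)
    times, det_F = d_optimal_times(t, S, k=4)
    ```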

  5. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease

    NASA Astrophysics Data System (ADS)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.

    2011-02-01

    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
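    For intuition about the four target variables, they have closed forms under a simple uptake/clearance model; the model and parameter values below are illustrative assumptions, not the paper's nonlinear mixed-effects model:

    ```python
    import numpy as np

    def target_variables(c, lam_u, lam_c):
        # Closed-form target variables for an assumed uptake/clearance model
        #   A(t) = c * (exp(-lam_c*t) - exp(-lam_u*t)),  lam_u > lam_c > 0.
        t_max = np.log(lam_u / lam_c) / (lam_u - lam_c)  # time of max activity
        a_max = c * (np.exp(-lam_c * t_max) - np.exp(-lam_u * t_max))
        tiac = c * (1.0 / lam_c - 1.0 / lam_u)           # integral of A(t), 0..inf
        t_half_eff = np.log(2.0) / lam_c                 # terminal effective half-life
        return t_max, a_max, tiac, t_half_eff

    # e.g. uptake half-life ~6 h, effective clearance half-life ~6 d (days):
    print(target_variables(c=1.0, lam_u=np.log(2) / 0.25, lam_c=np.log(2) / 6.0))
    ```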

  6. Comparison of dose volume parameters evaluated using three forward-planning optimization techniques in cervical cancer brachytherapy involving two applicators

    PubMed Central

    Basu-Roy, Somapriya; Kar, Sanjay Kumar; Das, Sounik; Lahiri, Annesha

    2017-01-01

    Purpose This study compares dose-volume parameters evaluated using different forward-planning optimization techniques, involving two applicator systems, in intracavitary brachytherapy for cervical cancer. It seeks the best applicator-optimization combination to fulfill the recommended dose-volume objectives in different high-dose-rate (HDR) fractionation schedules. Material and methods We used a tandem-ring and a Fletcher-style tandem-ovoid applicator in the same patients in two fractions of brachytherapy. Six plans were generated for each patient utilizing three forward optimization techniques for each applicator used: equal dwell weights/times ('no optimization'), 'manual dwell weights/times', and 'graphical'. Plans were normalized to the left point A and a dose of 8 Gy was prescribed. Dose-volume and dose-point parameters were compared. Results Without graphical optimization, the maximum width and thickness of the volume enclosed by the 100% isodose line; the dose to 90% and 100% of the clinical target volume (CTV); and the minimum, maximum, median, and average doses to both rectum and bladder are significantly higher with the Fletcher applicator. Even with graphical optimization, the dose to both points B, the minimum dose to the CTV, and the treatment time; the dose to 2 cc (D2cc) of rectum and the rectal point; the D2cc and the minimum, maximum, median, and average doses to the sigmoid colon; and the D2cc of bladder remain significantly higher with this applicator. The dose to the bladder point is similar (p > 0.05) between the two applicators after all optimization techniques. Conclusions The Fletcher applicator generates higher doses to both the CTV and the organs at risk (2 cc volumes) with all optimization techniques. Dose restriction to the rectum is possible using graphical optimization only in selected HDR fractionation schedules. The bladder always receives a dose higher than recommended, and 2 cc of sigmoid colon always receives a permissible dose. By contrast, graphical optimization with ring applicators fulfills all dose-volume objectives in all HDR fractionations practiced. PMID:29204164

  7. Time-free transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Howell, K. C.; Hiday-Johnston, L. A.

    This work is part of a larger research effort directed toward the formulation of a strategy to design optimal time-free impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. Inferior transfers that move a spacecraft from a large halo orbit to a smaller halo orbit are considered here. Primer vector theory is applied to non-optimal impulsive trajectories in the elliptic restricted three-body problem in order to establish whether the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The addition of interior impulses is also considered. Results indicate that a substantial savings in fuel can be achieved by the allowance for coastal periods on the specified libration-point orbits. The resulting time-free inferior transfers are compared to time-free superior transfers between halo orbits of equal z-amplitude separation.

  8. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  9. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch-and-bound search and a series of lower-bound estimates on the traveling time along a given path. These paths are further refined with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  10. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. Then, an ex vivo study was conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimension of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide a reliable insight on trajectories that can improve treatment strategies. The model must now be improved in order to take in vivo conditions into account and extensively validated.
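    The thermal dose formula referenced above is commonly the Sapareto-Dewey cumulative-equivalent-minutes measure; a minimal sketch follows (the ablation threshold and mock temperature data are illustrative):

    ```python
    import numpy as np

    def cem43(temps_c, dt_s):
        # Cumulative equivalent minutes at 43 degC (Sapareto-Dewey):
        #   CEM43 = sum_i R^(43 - T_i) * dt, R = 0.5 above 43 degC, 0.25 below.
        # temps_c: temperature history, shape (n_steps, ...); dt_s: step in s.
        R = np.where(temps_c >= 43.0, 0.5, 0.25)
        return (R ** (43.0 - temps_c)).sum(axis=0) * dt_s / 60.0

    # Voxels reaching CEM43 >= 240 min are commonly counted as ablated; a
    # trajectory optimizer would compare this map against the target volume.
    temps = 37.0 + 20.0 * np.random.rand(100, 32, 32)  # mock (nt, ny, nx) history
    dose_map = cem43(temps, dt_s=0.1)
    ```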

  11. Impulsive time-free transfers between halo orbits

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    1992-08-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  12. Impulsive Time-Free Transfers Between Halo Orbits

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1996-12-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L 1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  13. Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings

    NASA Astrophysics Data System (ADS)

    Hodgkinson, P.; Holmes, K. J.; Hore, P. J.

    Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
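    The evaluation criterion can be made concrete with a small sketch: the Cramér-Rao lower bound on var(J-hat) for an assumed anti-phase signal model, compared between a uniform schedule and one that concentrates signal averaging near the first node t = 1/J. Model, parameter values, and schedules are illustrative assumptions:

    ```python
    import numpy as np

    def crlb_J(times, averages, A=1.0, J=7.0, T2=0.5, sigma=1.0):
        # CRLB on var(J-hat) for the assumed model
        #   s(t) = A * sin(pi*J*t) * exp(-t/T2)
        # with A, J, T2 unknown, Gaussian noise sigma, and averages[i] scans
        # accumulated at times[i].
        t = np.asarray(times, float)
        n = np.asarray(averages, float)
        e = np.exp(-t / T2)
        dA = np.sin(np.pi * J * t) * e
        dJ = A * np.pi * t * np.cos(np.pi * J * t) * e
        dT2 = A * np.sin(np.pi * J * t) * e * t / T2 ** 2
        G = np.stack([dA, dJ, dT2], axis=1)
        F = (G * n[:, None]).T @ G / sigma ** 2    # Fisher information matrix
        return np.linalg.inv(F)[1, 1]              # bound for the J parameter

    t = np.linspace(0.01, 1.0, 32)
    uniform = np.full(32, 4.0)
    concentrated = np.where(np.abs(t - 1.0 / 7.0) < 0.1, 16.0, 1.0)
    concentrated *= uniform.sum() / concentrated.sum()   # equal total scan time
    print(crlb_J(t, uniform), crlb_J(t, concentrated))
    ```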

  14. Design and analysis of a flight plan optimization system for aircraft [Conception et analyse d'un systeme d'optimisation de plans de vol pour les avions]

    NASA Astrophysics Data System (ADS)

    Maazoun, Wissem

    The main objective of this thesis is to develop an optimization method for the preparation of flight plans for aircraft. The flight plan minimizes all costs associated with the flight. We determine an optimal path for an airplane from a departure airport to a destination airport. The optimal path minimizes the sum of all costs, i.e. the cost of fuel added to the cost of time (wages, rental of the aircraft, arrival delays, etc.). The optimal trajectory is obtained by considering all possible trajectories on a 3D graph (longitude, latitude and altitude), where the altitude levels are separated by 2,000 feet, and by applying a shortest path algorithm. The main task was to accurately compute fuel consumption on each edge of the graph, making sure that each arc has a minimal cost and is covered in a realistic way from the point of view of control, i.e. in accordance with the rules of navigation. To compute the cost of an arc, we take into account weather conditions (temperature, pressure, wind components, etc.). The optimization of each arc is done via the evaluation of an optimum speed that takes all costs into account. Each arc of the graph typically includes several sub-phases of the flight, e.g. altitude change, speed change, and constant speed and altitude. In the initial climb and final descent phases, the costs are determined by considering altitude changes at constant CAS (Calibrated Air Speed) or constant Mach number; CAS and Mach number are adjusted to minimize cost. The aerodynamic model used is the one proposed by Eurocontrol, which uses the BADA (Base of Aircraft Data) tables. This model is based on the total energy equation that determines the instantaneous fuel consumption. Calculations on each arc are done by solving a system of differential equations that systematically takes all costs into account. To compute the cost of an arc, we must know the time needed to traverse it, which is generally unknown. To have well-posed boundary conditions, we use the horizontal displacement as the independent variable of the system of differential equations. We consider the velocity components of the wind in a 3D coordinate system to compute the instantaneous ground speed of the aircraft. To account for the cost of time, we use the cost index. The cost of an arc depends on the aircraft mass at the beginning of that arc, and this mass depends on the path. As we consider all possible paths, the cost of an arc must be computed for each trajectory to which it belongs. For a long-distance flight, the number of arcs to be considered in the graph is large, and the cost of an arc is therefore typically computed many times. Our algorithm computes the costs of one million arcs in seconds with high accuracy, so the optimal trajectory can be determined in a short time. To obtain the optimal path, the mass of the aircraft at the departure point must also be optimal; it is therefore necessary to know the optimal amount of fuel for the journey. The aircraft mass is known only at the arrival point: it is the mass of the aircraft including passengers, cargo, and reserve fuel. The optimal path is therefore determined by calculating backwards, i.e. from the arrival point to the departure point. For the determination of the optimal trajectory, we use an elliptical grid with focal points at the departure and arrival points. This grid is essential for the construction of a directed acyclic graph (DAG). We use the Bellman-Ford algorithm on the DAG to determine the shortest path; this algorithm is easy to implement and results in short computation times. Our algorithm computes an optimal trajectory with an optimal cost for each arc. Altitude changes are done optimally with respect to the mass of the aircraft and the cost of time. Our algorithm gives the mass, speed, altitude, and total cost at any point of the trajectory, as well as the optimal climb and descent profiles. A prototype has been implemented in C. We ran simulations of all types of possible arcs and of several complete trajectories to illustrate the behaviour of the algorithm.
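    The graph-search layer the thesis relies on is the classic linear-time specialization of Bellman-Ford for DAGs; a self-contained sketch follows (the mass-dependent, backward arc-cost computation described above is not reproduced here, and the toy costs are assumptions):

    ```python
    from math import inf

    def dag_shortest_path(n_nodes, adj, topo_order, src):
        # Single-source shortest paths on a directed acyclic graph by relaxing
        # edges in topological order. adj maps each node to a list of
        # (successor, arc_cost); costs may have any sign.
        dist = [inf] * n_nodes
        pred = [None] * n_nodes
        dist[src] = 0.0
        for u in topo_order:
            if dist[u] == inf:
                continue
            for v, w in adj.get(u, []):
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    pred[v] = u
        return dist, pred

    # toy grid: 0 -> {1, 2} -> 3, costs standing in for per-arc fuel+time cost
    adj = {0: [(1, 5.0), (2, 4.5)], 1: [(3, 3.0)], 2: [(3, 3.8)]}
    dist, pred = dag_shortest_path(4, adj, topo_order=[0, 1, 2, 3], src=0)
    ```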

  15. Optimizing point-of-care testing in clinical systems management.

    PubMed

    Kost, G J

    1998-01-01

    The goal of improving medical and economic outcomes calls for leadership based on fundamental principles. The manager of clinical systems works collaboratively within the acute care center to optimize point-of-care testing through systematic approaches such as integrative strategies, algorithms, and performance maps. These approaches are effective and efficacious for critically ill patients. Optimizing point-of-care testing throughout the entire health-care system is inherently more difficult. There is potential to achieve high-quality testing, integrated disease management, and equitable health-care delivery. Despite rapid change and economic uncertainty, a macro-strategic, information-integrated, feedback-systems, outcomes-oriented approach is timely, challenging, effective, and uplifting to the creative human spirit.

  16. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    PubMed

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems have been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
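    The in silico protocol can be mimicked on a synthetic example: sample a planar focal-activation LAT field at increasing density, re-interpolate, and look for the density at which the error stops improving. The field, conduction velocity, and interpolation choice below are assumptions for illustration, not the study's monolayer model:

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    # synthetic 4 x 4 cm focal activation: LAT = distance from a source point
    # divided by an assumed conduction velocity of 0.05 cm/ms
    gx, gy = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))
    lat_true = np.hypot(gx - 1.0, gy - 2.0) / 0.05          # LAT in ms

    for density in [0.25, 0.5, 1.0, 2.0, 4.0]:              # points per cm^2
        n = max(4, int(density * 16.0))                     # 16 cm^2 patch
        pts = rng.uniform(0.0, 4.0, size=(n, 2))
        vals = np.hypot(pts[:, 0] - 1.0, pts[:, 1] - 2.0) / 0.05
        lat_hat = griddata(pts, vals, (gx, gy), method="linear")
        err = np.nanmean(np.abs(lat_hat - lat_true))        # NaN outside hull
        print(f"{density:4.2f} pts/cm^2 -> mean abs LAT error {err:6.2f} ms")
    ```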

  17. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model has been established that divides the transfer trajectory into several segments and assigns the dominant roles of invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  19. Lasing in optimized two-dimensional iron-nail-shaped rod photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Soon-Yong; Moon, Seul-Ki; Yang, Jin-Kyu, E-mail: jinkyuyang@kongju.ac.kr

    2016-03-15

    We demonstrated lasing at the Γ-point band-edge (BE) modes in optimized two-dimensional iron-nail-shaped rod photonic crystals by optical pulse pumping at room temperature. As the radius of the rod increased quadratically toward the edge of the pattern, the quality factor of the Γ-point BE mode increased up to three times, and the modal volume decreased to 56% compared with the values of the original Γ-point BE mode because of the reduction of the optical loss in the horizontal direction. Single-mode lasing from an optimized iron-nail-shaped rod array with an InGaAsP multiple quantum well embedded in the nail heads was observed at a low threshold pump power of 160 μW. Real-image-based numerical simulations showed that the lasing actions originated from the optimized Γ-point BE mode and agreed well with the measurement results, including the lasing polarization, wavelength, and near-field image.

  20. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.

  1. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles have been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide-slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantee of convergence to the global optimum or of convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust-bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust-bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original non-convex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
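    A minimal sketch of the convexified problem class is shown below using cvxpy, with the relaxed thrust bound (||T|| <= G, rho1 <= G <= rho2) and a pointing cone about vertical; constant mass, fixed final time, and all numerical values are illustrative assumptions, not the flight algorithm:

    ```python
    import numpy as np
    import cvxpy as cp

    N, dt, m = 40, 1.0, 1900.0
    g = np.array([0.0, 0.0, -3.71])                 # Mars-like gravity, m/s^2
    rho1, rho2 = 4000.0, 12000.0                    # thrust bounds, N
    cos_th = np.cos(np.deg2rad(45.0))               # max tilt from vertical
    r = cp.Variable((3, N + 1)); v = cp.Variable((3, N + 1))
    T = cp.Variable((3, N)); G = cp.Variable(N)
    cons = [r[:, 0] == np.array([450.0, -330.0, 2400.0]),
            v[:, 0] == np.array([-40.0, 10.0, -10.0]),
            r[:, N] == 0.0, v[:, N] == 0.0]
    for k in range(N):
        a = T[:, k] / m + g
        cons += [v[:, k + 1] == v[:, k] + dt * a,
                 r[:, k + 1] == r[:, k] + dt * v[:, k] + 0.5 * dt ** 2 * a,
                 cp.norm(T[:, k]) <= G[k],          # relaxed thrust magnitude
                 G[k] >= rho1, G[k] <= rho2,        # nonconvex ring made convex
                 T[2, k] >= cos_th * G[k]]          # thrust-pointing cone
    prob = cp.Problem(cp.Minimize(dt * cp.sum(G)), cons)
    prob.solve()
    ```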

  2. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Leake, R. J.; Sain, M. K.

    1978-01-01

    General goals of the research were classified into two categories. The first category involves the use of modern multivariable frequency domain methods for control of engine models in the neighborhood of a quiescent point. The second category involves the use of nonlinear modelling and optimization techniques for control of engine models over a more extensive part of the flight envelope. In the frequency domain category, works were published in the areas of low-interaction design, polynomial design, and multiple setpoint studies. A number of these ideas progressed to the point at which they are starting to attract practical interest. In the nonlinear category, advances were made both in engine modelling and in the details associated with software for determination of time optimal controls. Nonlinear models for a two spool turbofan engine were expanded and refined; and a promising new approach to automatic model generation was placed under study. A two time scale scheme was developed to do two-dimensional dynamic programming, and an outward spiral sweep technique has greatly speeded convergence times in time optimal calculations.

  3. Time-optimal aircraft pursuit-evasion with a weapon envelope constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Duke, E. L.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including nonlinear point-mass vehicle models and a realistic weapon envelope, is analyzed. Using a linear combination of flight time and the square of the vehicle acceleration as the performance index, a closed-form solution is obtained in nonlinear feedback form. Due to its modest computational requirements, this guidance law can be used for onboard real-time implementation.

  4. Advanced Optimal Extraction for the Spitzer/IRS

    NASA Astrophysics Data System (ADS)

    Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.

    2010-02-01

    We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.

  5. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms are proposed for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm using convexity of attainable sets and Helly's theorem is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N^2) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  6. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    NASA Astrophysics Data System (ADS)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The solution of tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six-degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution can thus be established rapidly, in real time, by the proposed method.
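    For contrast with the paper's geometric method, a generic linear-programming baseline for the same feasibility problem is sketched below; note that minimizing total tension tends to push cables toward the lower limit, the behavior the paper's method explicitly avoids. The structure matrix and bounds are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def tension_lp(A, w, t_min, t_max):
        # Baseline tension distribution for a redundant CDPM: find cable
        # tensions t with equilibrium A @ t = -w and t_min <= t <= t_max,
        # minimizing total tension (NOT the paper's polyhedron method).
        m = A.shape[1]
        res = linprog(c=np.ones(m), A_eq=A, b_eq=-w,
                      bounds=[(t_min, t_max)] * m, method="highs")
        if not res.success:
            raise ValueError("pose is outside the feasible wrench set")
        return res.x

    # toy planar example: 2-DOF point mass driven by 3 cables (rows: x, y)
    A = np.array([[1.0, -0.5, -0.5],
                  [0.0,  0.8, -0.8]])
    print(tension_lp(A, w=np.array([0.0, -9.81]), t_min=5.0, t_max=500.0))
    ```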

  7. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field-length concept used for STOL operations.

  8. Optimization of physico-chemical properties of gelatin extracted from fish skin of rainbow trout (Oncorhynchus mykiss).

    PubMed

    Tabarestani, H Shahiri; Maghsoudlou, Y; Motamedzadegan, A; Mahoonak, A R Sadeghi

    2010-08-01

    Physico-chemical properties of gelatin extracted from rainbow trout (Oncorhynchus mykiss) skin were optimized using response surface methodology (RSM). A central rotatable composite design was applied to study the combined effects of NaOH concentration (0.01-0.21 N), acetic acid concentration (0.01-0.21 N), and pre-treatment time (1-3 h) on yield, molecular weight distribution, gel strength, viscosity, and melting point of the gelatin. Regression models were developed to predict the variables. The predicted values of the multiple responses at the optimal condition were: yield = 9.36%, alpha(1)/alpha(2) chain ratio = 1.76, beta-chain percentage = 32.81, gel strength = 459 g, viscosity = 3.2 mPa s, and melting point = 20.4 °C. The optimal condition was obtained using 0.19 N NaOH and 0.121 N acetic acid for 3 h. The results showed that the concentration of H(+) during pre-treatment had a significant effect on molecular weight distribution, melting point, and gel strength. The concentration of OH(-) had a significant effect on viscosity, and for extraction yield, pre-treatment time was the critical factor. © 2010 Elsevier Ltd. All rights reserved.

  9. Optimizing the sequence of diameter distributions and selection harvests for uneven-aged stand management

    Treesearch

    Robert G. Haight; J. Douglas Brodie; Darius M. Adams

    1985-01-01

    The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...

  10. Optimal solar sail planetocentric trajectories

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.

    1977-01-01

    The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square-sail model is included, and some consideration is given to a heliogyro sail model. Transfers from orbit to a subescape point and from orbit to orbit are considered. Trajectories about the four inner planets can be calculated, and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem that arises from the application of optimization theory is accomplished with a Newton procedure. Time-optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories that intersect a planet's surface.

  11. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
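    Given a reconstructed 3D dose map and an organ segmentation, the cumulative DVH itself is straightforward to compute; a minimal sketch follows (array names and mock data are placeholders, not the study's QSPECT pipeline):

    ```python
    import numpy as np

    def cumulative_dvh(dose, mask, dose_levels):
        # Cumulative dose-volume histogram: fraction of the segmented organ
        # volume receiving at least each dose level.
        d = dose[mask]
        return np.array([(d >= level).mean() for level in dose_levels])

    # usage sketch with placeholder arrays
    dose = np.random.gamma(2.0, 1.0, size=(64, 64, 64))
    kidney_mask = np.zeros_like(dose, dtype=bool)
    kidney_mask[20:30, 20:30, 20:30] = True
    levels = np.linspace(0.0, dose.max(), 100)
    dvh = cumulative_dvh(dose, kidney_mask, levels)   # plot levels vs. dvh
    ```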

  12. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1994-04-01

    A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.

  13. Environmental optimal control strategies based on plant canopy photosynthesis responses and greenhouse climate model

    NASA Astrophysics Data System (ADS)

    Deng, Lujuan; Xie, Songhe; Cui, Jiantao; Liu, Tao

    2006-11-01

    Enhancing grower income and saving energy are the essential goals of intelligent greenhouse environment optimal control. Greenhouse environment control systems exhibit uncertainty, imprecision, nonlinearity, strong coupling, large inertia, and multiple time scales, so optimal control of the greenhouse environment is not easy, and model-based optimal control methods are especially difficult. We therefore researched the optimal control problem of the plant environment in an intelligent greenhouse. A hierarchical greenhouse environment control system was constructed. In the first level, data are measured and the actuators are controlled. Optimal setpoints of the controlled climate variables in the greenhouse are calculated and chosen in the second level. Market analysis and planning are completed in the third level. This paper addresses the problem of choosing the optimal setpoints. First, a model of plant canopy photosynthesis responses and a greenhouse climate model were constructed. Then, drawing on the experience of planting experts, the daytime optimization goal was set according to the principle of maximal photosynthesis rate, while at night, subject to conditions for good plant growth, the goal was set by an energy-saving principle. The optimal environmental control setpoints were then computed by a genetic algorithm (GA). Comparing the optimized results with data recorded in a real system shows that the method is reasonable and can achieve energy saving and the maximal photosynthesis rate in an intelligent greenhouse.

  14. Two-craft Coulomb formation study about circular orbits and libration points

    NASA Astrophysics Data System (ADS)

    Inampudi, Ravi Kishore

    This dissertation investigates the dynamics and control of a two-craft Coulomb formation in circular orbits and at libration points; it addresses relative equilibria, stability and optimal reconfigurations of such formations. The relative equilibria of a two-craft tether formation connected by line-of-sight elastic forces moving in circular orbits and at libration points are investigated. In circular Earth orbits and Earth-Moon libration points, the radial, along-track, and orbit normal great circle equilibria conditions are found. An example of modeling the tether force using Coulomb force is discussed. Furthermore, the non-great-circle equilibria conditions for a two-spacecraft tether structure in circular Earth orbit and at collinear libration points are developed. Then the linearized dynamics and stability analysis of a 2-craft Coulomb formation at Earth-Moon libration points are studied. For orbit-radial equilibrium, Coulomb forces control the relative distance between the two satellites. The gravity gradient torques on the formation due to the two planets help stabilize the formation. Similar analysis is performed for along-track and orbit-normal relative equilibrium configurations. Where necessary, the craft use a hybrid thrusting-electrostatic actuation system. The two-craft dynamics at the libration points provide a general framework with circular Earth orbit dynamics forming a special case. In the presence of differential solar drag perturbations, a Lyapunov feedback controller is designed to stabilize a radial equilibrium, two-craft Coulomb formation at collinear libration points. The second part of the thesis investigates optimal reconfigurations of two-craft Coulomb formations in circular Earth orbits by applying nonlinear optimal control techniques. The objective of these reconfigurations is to maneuver the two-craft formation between two charged equilibria configurations. The reconfiguration of spacecraft is posed as an optimization problem using the calculus of variations approach. The optimality criteria are minimum time, minimum acceleration of the separation distance, minimum Coulomb and electric propulsion fuel usage, and minimum electrical power consumption. The continuous time problem is discretized using a pseudospectral method, and the resulting finite dimensional problem is solved using a sequential quadratic programming algorithm. The software package, DIDO, implements this approach. This second part illustrates how pseudospectral methods significantly simplify the solution-finding process.

  15. User's guide to four-body and three-body trajectory optimization programs

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.

  16. Optimal adaptive control for quantum metrology with time-dependent Hamiltonians.

    PubMed

    Pang, Shengshi; Jordan, Andrew N

    2017-03-09

    Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of the quantum Fisher information can be broken with time-dependent Hamiltonians, reaching T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case.

  17. Optimal adaptive control for quantum metrology with time-dependent Hamiltonians

    PubMed Central

    Pang, Shengshi; Jordan, Andrew N.

    2017-01-01

    Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of the quantum Fisher information can be broken with time-dependent Hamiltonians, reaching T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case. PMID:28276428

  18. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
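    To make the flavor of the construction concrete, the sketch below replaces a distribution by two charges that exactly reproduce its monopole and dipole moments; this naive placement is an assumption for illustration and is not the paper's OPCA/PPCA, which optimizes placement against the mid-field potential error:

    ```python
    import numpy as np

    def two_charges_matching_dipole(charges, positions, sep=1.0):
        # Two point charges reproducing the total charge and dipole moment of
        # a given distribution (a naive construction, not the paper's method).
        q_tot = charges.sum()
        center = positions.mean(axis=0)
        p = ((positions - center) * charges[:, None]).sum(axis=0)  # dipole
        u = p / np.linalg.norm(p)
        q0 = np.linalg.norm(p) / sep
        q_pair = np.array([q_tot / 2.0 + q0, q_tot / 2.0 - q0])
        r_pair = np.array([center + 0.5 * sep * u, center - 0.5 * sep * u])
        return q_pair, r_pair   # sum(q_pair) = q_tot; dipole about center = p

    # e.g. an idealized water-like distribution (charges in e, positions in A)
    q = np.array([-0.8, 0.4, 0.4])
    r = np.array([[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]])
    print(two_charges_matching_dipole(q, r, sep=0.5))
    ```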

  19. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and it can help to obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. First, Fast Point Feature Histograms are used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.
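    A minimal sketch of the annealing step only, under assumptions: candidate FPFH correspondences are scored by pairwise rigidity consistency (|d_src - d_dst| small), a stand-in for the paper's graph-matching objective, and simulated annealing flips the selection of one correspondence at a time.

      # Generic simulated annealing over a binary correspondence selection.
      import numpy as np

      def consistency(sel, src, dst, tol=0.05):
          # Count rigidity-consistent pairs among selected correspondences.
          idx = np.flatnonzero(sel)
          if len(idx) < 2:
              return 0.0
          ds = np.linalg.norm(src[idx][:, None] - src[idx][None, :], axis=2)
          dd = np.linalg.norm(dst[idx][:, None] - dst[idx][None, :], axis=2)
          return np.sum(np.abs(ds - dd) < tol)

      def anneal(src, dst, steps=5000, t0=1.0, seed=0):
          rng = np.random.default_rng(seed)
          sel = rng.integers(0, 2, len(src)).astype(bool)
          best, score = sel.copy(), consistency(sel, src, dst)
          for k in range(steps):
              cand = sel.copy()
              cand[rng.integers(len(src))] ^= True   # flip one correspondence
              s = consistency(cand, src, dst)
              T = t0 * (1 - k / steps) + 1e-9        # cooling schedule
              if s > score or rng.random() < np.exp((s - score) / T):
                  sel, score = cand, s
                  if score > consistency(best, src, dst):
                      best = sel.copy()
          return best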

  20. Self-Organizing Hierarchical Particle Swarm Optimization with Time-Varying Acceleration Coefficients for Economic Dispatch with Valve Point Effects and Multifuel Options

    NASA Astrophysics Data System (ADS)

    Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc

    2011-06-01

    This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth functions including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimizer that performs well on ED problems. It handles premature convergence by re-initializing the velocity whenever particles stagnate in the search space. The TVAC scheme is included to properly control both local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC finds higher quality solutions for non-smooth ED problems than many other methods.
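    A sketch of the two SPSO-TVAC ingredients described above, with commonly used coefficient end-points assumed rather than taken from the paper: linearly time-varying acceleration coefficients, no inertia term, and velocity re-initialization for stagnant particles.

      # Velocity update for an SPSO-TVAC-style swarm; end-point values assumed.
      import numpy as np

      def tvac(t, T, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
          # Cognitive weight decays and social weight grows over the run.
          c1 = (c1f - c1i) * t / T + c1i
          c2 = (c2f - c2i) * t / T + c2i
          return c1, c2

      def update(x, v, pbest, gbest, t, T, vmax, rng):
          c1, c2 = tvac(t, T)
          v = (c1 * rng.random(x.shape) * (pbest - x)
               + c2 * rng.random(x.shape) * (gbest - x))   # no inertia term
          stagnant = np.all(np.abs(v) < 1e-12, axis=1)
          # Re-initialize velocities of stagnant particles to keep them moving.
          v[stagnant] = rng.uniform(-vmax, vmax, (stagnant.sum(), x.shape[1]))
          return x + v, v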

  1. Verification of an optimized stimulation point on the abdominal wall for transcutaneous neuromuscular electrical stimulation for activation of deep lumbar stabilizing muscles.

    PubMed

    Baek, Seung Ok; Cho, Hee Kyung; Jung, Gil Su; Son, Su Min; Cho, Yun Woo; Ahn, Sang Ho

    2014-09-01

    Transcutaneous neuromuscular electrical stimulation (NMES) can stimulate contractions in deep lumbar stabilizing muscles. An optimal protocol has not been devised for the activation of these muscles by NMES, and information is lacking regarding an optimal stimulation point on the abdominal wall. The goal was to determine a single optimized stimulation point on the abdominal wall for transcutaneous NMES for the activation of deep lumbar stabilizing muscles. Ultrasound images of the spinal stabilizing muscles were captured during NMES at three sites on the lateral abdominal wall. After an optimal location for the placement of the electrodes was determined, changes in the thickness of the lumbar multifidus (LM) were measured during NMES. Three stimulation points were investigated using 20 healthy physically active male volunteers. A reference point R, 1 cm superior to the iliac crest along the midaxillary line, was used. Three study points were used: stimulation point S1 was located 2 cm superior and 2 cm medial to the anterior superior iliac spine, stimulation point S3 was 2 cm below the lowest rib along the same sagittal plane as S1, and stimulation point S2 was midway between S1 and S3. Sessions were conducted stimulating at S1, S2, or S3 using R for reference. Real-time ultrasound imaging (RUSI) of the abdominal muscles was captured during each stimulation session. In addition, RUSI images were captured of the LM during stimulation at S1. Thickness, as measured by RUSI, of the transverse abdominis (TrA), obliquus internus, and obliquus externus was greater during NMES than at rest for all three study points (p<.05). Transverse abdominis was significantly stimulated more by NMES at S1 than at the other points (p<.05). The LM thickness was also significantly greater during NMES at S1 than at rest (p<.05). Neuromuscular electrical stimulation at S1 optimally activated deep spinal stabilizing muscles, TrA and LM, as evidenced by RUSI. The authors recommend this optimal stimulation point be used for NMES in the course of lumbar spine stabilization training in patients having difficulty initiating contraction of these muscles. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Helping the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection through the organization of a pilot health care provider research system.

    PubMed

    Tang, Liyang

    2013-04-04

    The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system, and it could efficiently collect, from various experts, the data needed for determining the optimal solution to this problem. The purpose of this study was therefore to apply the optimal implementation methodology to help the decision maker effectively promote various experts' views into various optimal solutions to this problem with the support of this pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914) through the organization of a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government's regulation-oriented health care provider approach was the optimal solution from the pharmacists' point of view, the hospital administrators' point of view, and the point of view of health officials in health administration departments; the public private partnership (PPP) approach was the optimal solution from the nurses' point of view, the point of view of officials in medical insurance agencies, and the health care researchers' point of view. The data collected through a pilot health care provider research system in the 2009 to 2010 national expert survey could thus help the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection.

  3. Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation

    NASA Technical Reports Server (NTRS)

    Clifton, C.; Homaifax, A.; Bikdash, M.

    1997-01-01

    This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.

  4. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Summary Dynamic treatment regimes (DTRs) are sequences of decision rules, each tailored at a point where a clinical decision is made based on the patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating an optimal DTR from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
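    A minimal backward-induction (Q-learning) sketch for a two-stage DTR, one member of the family of machine-learning methods the paper discusses: fit a stage-2 outcome model, impute the optimal stage-2 value, then fit stage 1. The linear working models and toy data are assumptions.

      # Two-stage Q-learning with linear working models; data are simulated.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 500
      h1 = rng.normal(size=(n, 1))                 # stage-1 history
      a1 = rng.integers(0, 2, n)                   # randomized stage-1 treatment
      h2 = h1[:, 0] + rng.normal(size=n)           # stage-2 history
      a2 = rng.integers(0, 2, n)                   # randomized stage-2 treatment
      y = h2 * (2 * a2 - 1) + 0.5 * h1[:, 0] * (2 * a1 - 1) + rng.normal(size=n)

      def fit_stage(X, a, y):
          # Q(X, a) with a treatment-by-covariate interaction.
          design = lambda X, a: np.c_[X, a, X * a[:, None]]
          return LinearRegression().fit(design(X, a), y), design

      q2, d2 = fit_stage(h2[:, None], a2, y)
      # Impute the optimal stage-2 value: max over a2 of the fitted Q-function.
      v2 = np.maximum(q2.predict(d2(h2[:, None], np.zeros(n))),
                      q2.predict(d2(h2[:, None], np.ones(n))))
      q1, d1 = fit_stage(h1, a1, v2)
      rule1 = q1.predict(d1(h1, np.ones(n))) > q1.predict(d1(h1, np.zeros(n)))
      print("fraction recommended a1=1:", rule1.mean())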

  5. Multi-Criteria Optimization of Regulation in Metabolic Networks

    PubMed Central

    Higuera, Clara; Villaverde, Alejandro F.; Banga, Julio R.; Ross, John; Morán, Federico

    2012-01-01

    Determining the regulation of metabolic networks at genome scale is a hard task. It has been hypothesized that biochemical pathways and metabolic networks might have undergone an evolutionary process of optimization with respect to several criteria over time. In this contribution, a multi-criteria approach has been used to optimize parameters for the allosteric regulation of enzymes in a model of a metabolic substrate-cycle. This has been carried out by calculating the Pareto set of optimal solutions according to two objectives: the proper direction of flux in a metabolic cycle and the energetic cost of applying the set of parameters. Different Pareto fronts have been calculated for eight different "environments" (specific time courses of end product concentrations). For each resulting front the so-called knee point is identified, which can be considered a preferred trade-off solution. Interestingly, the optimal control parameters corresponding to each of these points also lead to optimal behaviour in all the other environments. By calculating the average of the different parameter sets for the knee solutions more frequently found, a final and optimal consensus set of parameters can be obtained, which is an indication of the existence of a universal regulation mechanism for this system. The implications of such a universal regulatory switch are discussed in the framework of large metabolic networks. PMID:22848435
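    One common way to identify the knee point of a computed Pareto front, assumed here for illustration (the paper may use a different variant): take the point farthest from the straight line joining the two extreme solutions.

      # Knee-point selection on a 2-objective Pareto front.
      import numpy as np

      def knee_point(front):
          # front: (n, 2) array of non-dominated (f1, f2) pairs, sorted by f1.
          a, b = front[0], front[-1]
          ab = (b - a) / np.linalg.norm(b - a)
          proj = (front - a) @ ab                  # projection onto extreme line
          dist = np.linalg.norm(front - a - proj[:, None] * ab, axis=1)
          return int(np.argmax(dist))              # index of the knee solution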

  6. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one view of an object at a time, multi-scene registration is the key issue of 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses the 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (SVD-based approach). The 3D model of an object can therefore be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
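    A sketch of the orthogonal Procrustes (SVD) step named above: given matched 3D point pairs retained after SURF matching, the rigid rotation and translation follow in closed form. Variable names are assumptions.

      # Closed-form rigid alignment (Kabsch / orthogonal Procrustes).
      import numpy as np

      def procrustes_rigid(P, Q):
          # P, Q: (n, 3) corresponding points; returns R, t with Q ~ P @ R.T + t.
          cp, cq = P.mean(0), Q.mean(0)
          U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                       # reflection-safe rotation
          return R, cq - R @ cp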

  7. Optimizing operating parameters of a honeycomb zeolite rotor concentrator for processing TFT-LCD volatile organic compounds with competitive adsorption characteristics.

    PubMed

    Lin, Yu-Chih; Chang, Feng-Tang

    2009-05-30

    In this study, we attempted to enhance the removal efficiency of a honeycomb zeolite rotor concentrator (HZRC), operated at optimal parameters, for processing TFT-LCD volatile organic compounds (VOCs) with competitive adsorption characteristics. The results indicated that when the HZRC processed a VOC stream of mixed compounds, compounds with a high boiling point took precedence in the adsorption process. In addition, compounds with a low boiling point already adsorbed onto the HZRC were displaced by the high-boiling-point compounds. To achieve optimal operating parameters for high VOC removal efficiency, the results suggested controlling the inlet velocity to <1.5 m/s, reducing the concentration ratio to 8 times, increasing the desorption temperature to 200-225 degrees C, and setting the rotation speed to 6.5 rpm.

  8. OTIS 3.2 Software Released

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.

    2004-01-01

    Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Control by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.

  9. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed by using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled to stay as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e. the optimal product volume that should be purchased, and that the inventory level followed the given set point.
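    A simplified, certainty-equivalent sketch of the predictive-control idea, with the paper's robust treatment of the random demand replaced by its mean and all names assumed: at each period, order the volume that steers the expected inventory level toward the set point, then re-plan.

      # Receding-horizon inventory control under x[k+1] = x[k] + u[k] - d[k].
      import numpy as np

      def plan_orders(x0, set_point, mean_demand, horizon=5):
          x, orders = x0, []
          for _ in range(horizon):
              u = max(0.0, set_point - x + mean_demand)  # steer toward set point
              orders.append(u)
              x = x + u - mean_demand                    # expected next level
          return orders

      rng = np.random.default_rng(1)
      demand = rng.poisson(20, size=12)                  # simulated random demand
      x = 35.0
      for d in demand:
          u = plan_orders(x, set_point=50.0, mean_demand=20.0)[0]  # apply first move
          x = x + u - d
      print("final inventory level:", x)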

  10. Concrete thawing studied by single-point ramped imaging.

    PubMed

    Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W

    1997-12-01

    A series of two-dimensional images of the proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple point acquisition method is presented and the signal sensitivity improvement is discussed.

  11. Magnetic recording performance of keepered media

    NASA Astrophysics Data System (ADS)

    Coughlin, T. M.; Tang, Y. S.; Velu, E. M. T.; Lairson, B.

    1997-04-01

    Using low flying and proximity inductive heads, keepered media show improved on- and off-track performance, leading us to conclude that a greater than 20% areal density improvement is possible with a keeper layer over the magnetic storage layer. For Sendust keeper layers there is an optimal range of thickness and an optimal bias point for best performance. There are both amplitude and timing asymmetries that are functions of the read-back bias. For a peak detect channel the best performance corresponds to the minimum timing asymmetry, although this is not the point where the pulses are narrowest. Keepered media may have an advantage in total jitter and partial erasure. NLTS is almost identical for keepered versus unkeepered media.

  12. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.
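    As background, the classical primer vector conditions that this work extends to path-constrained arcs (stated from standard optimal control references, with notation assumed): along an optimal impulsive trajectory the primer vector p(t) obeys

        \ddot{p} = G(r)\, p,

    where G(r) is the gravity-gradient matrix, with |p(t)| <= 1 throughout, |p(t_k)| = 1 and \Delta v_k parallel to p(t_k) at each impulse, and d|p|/dt = 0 at interior impulses. Constraint boundary arcs and boundary points modify these conditions, which is the complication the paper analyzes.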

  13. Research on Intelligent Control System of DC SQUID Magnetometer Parameters for Multi-channel System

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Yang, Kang; Lu, Li; Kong, Xiangyan; Wang, Hai; Wu, Jun; Wang, Yongliang

    2018-07-01

    In a multi-channel SQUID measurement system, adjusting the device parameters to the optimal condition for all channels is time-consuming. In this paper, an intelligent control system is presented to determine the optimal working point of the devices; it is automatic and more efficient than manual adjustment. An optimal working point searching algorithm is introduced as the core component of the control system. In this algorithm, the bias voltage V_bias is step-scanned to find the bias that maximizes the peak-to-peak current I_pp of the SQUID magnetometer modulation curve; this point is chosen as the optimal one. Using the above control system, more than 30 weakly damped SQUID magnetometers with areas of 5 × 5 mm^2 or 10 × 10 mm^2 were adjusted, and a 36-channel magnetocardiography system worked well in a magnetically shielded room. The average white flux noise is 15 μΦ_0/Hz^{1/2}.
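    A sketch of the working-point search described above: step-scan the bias voltage and keep the value that maximizes the peak-to-peak modulation current. The instrument-access callables set_bias and read_modulation are hypothetical placeholders, as is the scan range.

      # Bias scan for one channel; hardware access functions are placeholders.
      import numpy as np

      def find_working_point(set_bias, read_modulation,
                             v_range=(0.0, 100e-6), steps=100):
          best_v, best_ipp = None, -np.inf
          for v in np.linspace(*v_range, steps):
              set_bias(v)                        # apply bias voltage V_bias
              curve = read_modulation()          # sample the modulation curve
              ipp = curve.max() - curve.min()    # peak-to-peak current I_pp
              if ipp > best_ipp:
                  best_v, best_ipp = v, ipp
          return best_v, best_ipp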

  14. Research on Intelligent Control System of DC SQUID Magnetometer Parameters for Multi-channel System

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Yang, Kang; Lu, Li; Kong, Xiangyan; Wang, Hai; Wu, Jun; Wang, Yongliang

    2018-03-01

    In a multi-channel SQUID measurement system, adjusting the device parameters to the optimal condition for all channels is time-consuming. In this paper, an intelligent control system is presented to determine the optimal working point of the devices; it is automatic and more efficient than manual adjustment. An optimal working point searching algorithm is introduced as the core component of the control system. In this algorithm, the bias voltage V_bias is step-scanned to find the bias that maximizes the peak-to-peak current I_pp of the SQUID magnetometer modulation curve; this point is chosen as the optimal one. Using the above control system, more than 30 weakly damped SQUID magnetometers with areas of 5 × 5 mm^2 or 10 × 10 mm^2 were adjusted, and a 36-channel magnetocardiography system worked well in a magnetically shielded room. The average white flux noise is 15 μΦ_0/Hz^{1/2}.

  15. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    NASA Astrophysics Data System (ADS)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDPs). A DDP can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs to calculate the optimal value of each objective function in advance, and it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to DDPs, which have many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and broaden the scope of application. Results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to a DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
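    A minimal sketch of the selection step suggested by the abstract, with the aggregation rule assumed (the paper's exact operator may differ): rank candidate solutions by normalized distance to the provisional ideal point, the best value of each objective seen so far, plus a penalty for constraint violation.

      # Provisional-ideal-point ranking with a constraint-violation penalty.
      import numpy as np

      def rank(objs, violations, w_penalty=10.0):
          # objs: (n, m) objective values (minimized); violations: (n,) >= 0.
          ideal = objs.min(axis=0)                 # provisional ideal point
          span = np.ptp(objs, axis=0) + 1e-12     # per-objective normalization
          dist = np.linalg.norm((objs - ideal) / span, axis=1)
          return np.argsort(dist + w_penalty * violations)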

  16. Model for Bi-objective emergency rescue vehicle routing optimization

    NASA Astrophysics Data System (ADS)

    Yang, Yuhang

    2017-03-01

    The vehicle routing problem is an important research topic in management science. In this paper, one vehicle can serve multiple disaster points, and the two optimization objectives are rescue time and rescue effect. Rescue effect is expressed as the ratio of unloaded material to arrival time each time a rescue vehicle participates in a rescue. The corresponding emergency rescue model is established, and the effectiveness of the model is verified with a simulated annealing algorithm. It can provide a basis for practical decision-making.

  17. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    PubMed Central

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-01-01

    In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177
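    A minimal sketch of the MEWMA statistic that the genetic algorithm tunes, assuming standardized observations; the smoothing weight lam and alarm threshold h are exactly the kind of parameters the GA searches over.

      # Multivariate EWMA change detection with a T^2 alarm rule.
      import numpy as np

      def mewma_alarms(X, lam=0.2, h=12.0):
          # X: (n, p) observations, assumed standardized (zero mean, unit var).
          n, p = X.shape
          sigma_z = np.eye(p) * lam / (2 - lam)   # asymptotic covariance of Z
          inv = np.linalg.inv(sigma_z)
          z = np.zeros(p)
          alarms = []
          for i in range(n):
              z = lam * X[i] + (1 - lam) * z      # exponentially weighted mean
              if z @ inv @ z > h:                 # T^2 statistic over threshold
                  alarms.append(i)                # candidate change point
          return alarms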

  18. Modeling low-thrust transfers between periodic orbits about five libration points: Manifolds and hierarchical design

    NASA Astrophysics Data System (ADS)

    Zeng, Hao; Zhang, Jingrui

    2018-04-01

    The low-thrust version of fuel-optimal transfers between periodic orbits with different energies in the vicinity of the five libration points is explored in depth in the Circular Restricted Three-Body Problem. An indirect optimization technique incorporating constraint gradients is employed to further improve the computational efficiency and accuracy of the algorithm. The required optimal thrust magnitude and direction can be determined to create the bridging trajectory that connects the invariant manifolds. A hierarchical design strategy dividing the constraint set is proposed to seek the optimal solution when the problem cannot be solved directly. Meanwhile, the solution procedure and the value ranges of the variables used are summarized. To highlight the effectiveness of the transfer scheme for different types of libration point orbits, transfer trajectories between sample orbits, including Lyapunov orbits, planar orbits, halo orbits, axial orbits, vertical orbits and butterfly orbits for collinear and triangular libration points, are investigated with various times of flight. Numerical results show that the fuel consumption varies from a few kilograms to tens of kilograms, related to the locations and the types of mission orbits as well as the corresponding invariant manifold structures, and indicate that low-thrust transfers may be a beneficial option for extended science missions around different libration points.

  19. Pointing Device Performance in Steering Tasks.

    PubMed

    Senanayake, Ransalu; Goonetilleke, Ravindra S

    2016-06-01

    Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for person-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application, and should be optimized to match the graphical user interface. © The Author(s) 2016.

  20. Optimization of cutting parameters for machining time in turning process

    NASA Astrophysics Data System (ADS)

    Mavliutov, A. R.; Zlotnikov, E. G.

    2018-03-01

    This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linear Programming Method with a dual-simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in a turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same, and that ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.
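    A sketch of a constrained machining-time minimization of the kind benchmarked above, solved here with SciPy's SLSQP rather than the MATLAB routines used in the paper; the time model t = pi*D*L/(1000*v*f), the toy constraint, and the bounds are assumptions.

      # Constrained minimization of turning machining time (illustrative data).
      import numpy as np
      from scipy.optimize import minimize

      D, L = 50.0, 120.0                           # workpiece diameter, length (mm)

      def machining_time(x):
          v, f = x                                 # cutting speed (m/min), feed (mm/rev)
          return np.pi * D * L / (1000.0 * v * f)

      # Toy surface-quality-style constraint: sqrt(v) * f <= 4.
      cons = [{"type": "ineq", "fun": lambda x: 4.0 - np.sqrt(x[0]) * x[1]}]
      res = minimize(machining_time, x0=[100.0, 0.2], method="SLSQP",
                     bounds=[(50.0, 300.0), (0.05, 0.5)], constraints=cons)
      print(res.x, machining_time(res.x))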

  1. A neural network strategy for end-point optimization of batch processes.

    PubMed

    Krothapally, M; Palanki, S

    1999-01-01

    The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch-to-batch variation in process conditions, and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes by utilizing neural networks. This strategy involves the training of two neural networks: one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin approach and the nonlinear programming approach. The efficacy of this scheme is illustrated via simulation of a fed-batch fermentation.

  2. Pursuing optimal electric machines transient diagnosis: The adaptive slope transform

    NASA Astrophysics Data System (ADS)

    Pons-Llinares, Joan; Riera-Guasp, Martín; Antonino-Daviu, Jose A.; Habetler, Thomas G.

    2016-12-01

    The aim of this paper is to introduce a new linear time-frequency transform to improve the detection of fault components in electric machines transient currents. Linear transforms are analysed from the perspective of the atoms used. A criterion to select the atoms at every point of the time-frequency plane is proposed, taking into account the characteristics of the searched component at each point. This criterion leads to the definition of the Adaptive Slope Transform, which enables a complete and optimal capture of the different components evolutions in a transient current. A comparison with conventional linear transforms (Short-Time Fourier Transform and Wavelet Transform) is carried out, showing their inherent limitations. The approach is tested with laboratory and field motors, and the Lower Sideband Harmonic is captured for the first time during an induction motor startup and subsequent load oscillations, accurately tracking its evolution.

  3. A systematic FPGA acceleration design for applications based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

    Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator while ignoring the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications like real-time video object detection. We propose a new systematic FPGA acceleration design to solve this problem. This design takes into consideration the data path between the inner accelerator and the outer system and optimizes it using techniques like hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. Together these make the final system's performance very good, reaching about 10 times the performance of the original system.
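    A sketch of the fixed-point conversion mentioned above: quantize weights to a signed Qm.n format before streaming them to the accelerator. The 16-bit Q8.8 choice is an illustrative assumption, not the paper's stated format.

      # Quantize floating-point weights to signed Q8.8 fixed point.
      import numpy as np

      def to_fixed_point(w, int_bits=8, frac_bits=8):
          scale = 1 << frac_bits
          lo = -(1 << (int_bits + frac_bits - 1))
          hi = (1 << (int_bits + frac_bits - 1)) - 1
          # Round, saturate to the representable range, store as int16.
          return np.clip(np.round(w * scale), lo, hi).astype(np.int16)

      w = np.random.default_rng(0).normal(scale=0.5, size=(3, 3))
      print(to_fixed_point(w))                     # dequantize with q / 256.0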

  4. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.

  5. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.

  6. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point geostatistics is a well-known general statistical framework with which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses the energy concept to model geological phenomena. While honoring hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel main core of our algorithm enables it to use the GPU efficiently to reduce the CPU time. A new validation method for MPS is also proposed in this paper.

  7. Is there a trade-off between longevity and quality of life in Grossman's pure investment model?

    PubMed

    Eisenring, C

    2000-12-01

    The question is posed whether an individual maximizes lifetime or trades off longevity for quality of life in Grossman's pure investment (PI)-model. It is shown that the answer critically hinges on the assumed production function for healthy time. If the production function for healthy time produces a trade-off between life-span and quality of life, one has to solve a sequence of fixed time problems. The one offering maximal intertemporal utility determines optimal longevity. Comparative static results of optimal longevity for a simplified version of the PI-model are derived. The obtained results predict that higher initial endowments of wealth and health, a rise in the wage rate, or improvements in the technology of producing healthy time, all increase the optimal length of life. On the other hand, optimal longevity is decreasing in the depreciation and interest rate. From a technical point of view, the paper illustrates that a discrete time equivalent to the transversality condition for optimal longevity employed in continuous optimal control models does not exist. Copyright 2000 John Wiley & Sons, Ltd.

  8. Distributed Learning, Extremum Seeking, and Model-Free Optimization for the Resilient Coordination of Multi-Agent Adversarial Groups

    DTIC Science & Technology

    2016-09-07

    been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. 3. ES has recently been implemented on the Mars... high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. 15... studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple

  9. Control of mechanical systems by the mixed "time and expenditure" criterion

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    The optimal controlled motion of a mechanical system, determined by a linear system of ODEs with constant coefficients and piecewise constant control components, is considered. The number of control switching points and the heights of the control steps are treated as preset. The optimized functional is a combination of the classical time criterion and an "expenditure criterion" equal to the total area of all steps of all control components. In the absence of control, the solution of the system is the sum of components (frequency components) corresponding to the different eigenvalues of the matrix of the ODE system. Admissible controls are those that drive the previously chosen frequency components of the solution to zero at a time moment that is not fixed in advance. An algorithm for finding the control switching points, based on the necessary minimum conditions for the mixed criterion, is proposed.

  10. Helping the decision maker effectively promote various experts’ views into various optimal solutions to China’s institutional problem of health care provider selection through the organization of a pilot health care provider research system

    PubMed Central

    2013-01-01

    Background The main aim of China’s Health Care System Reform was to help the decision maker find the optimal solution to China’s institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China’s health care system, and it could efficiently collect the data for determining the optimal solution to China’s institutional problem of health care provider selection from various experts, then the purpose of this study was to apply the optimal implementation methodology to help the decision maker effectively promote various experts’ views into various optimal solutions to this problem under the support of this pilot system. Methods After the general framework of China’s institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914) through the organization of a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. Results The market-oriented health care provider approach was the optimal solution to China’s institutional problem of health care provider selection from the doctors’ point of view; the traditional government’s regulation-oriented health care provider approach was the optimal solution to China’s institutional problem of health care provider selection from the pharmacists’ point of view, the hospital administrators’ point of view, and the point of view of health officials in health administration departments; the public private partnership (PPP) approach was the optimal solution to China’s institutional problem of health care provider selection from the nurses’ point of view, the point of view of officials in medical insurance agencies, and the health care researchers’ point of view. Conclusions The data collected through a pilot health care provider research system in the 2009 to 2010 national expert survey could help the decision maker effectively promote various experts’ views into various optimal solutions to China’s institutional problem of health care provider selection. PMID:23557082

  11. Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation

    PubMed Central

    Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk

    2015-01-01

    A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for interrupting current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker switch mechanism, including the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings so as to shorten the breaking time. To validate the optimization results, computational results were compared to physical tests recorded with a high speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, which was confirmed by experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this procedure. PMID:25918740

  12. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE PAGES

    Li, Mingjie; Zhou, Ping; Wang, Hong; ...

    2017-09-19

    As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined using the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective for pulp quality and SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective for pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods give the HC refining system better set-point tracking performance for pulp quality when these predictive controllers are employed. In addition, the optimal predictive controllers oriented toward the comprehensive economic objective and the SE consumption objective significantly reduce the energy consumption.

  13. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mingjie; Zhou, Ping; Wang, Hong

    As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined using the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective for pulp quality and SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective for pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods give the HC refining system better set-point tracking performance for pulp quality when these predictive controllers are employed. In addition, the optimal predictive controllers oriented toward the comprehensive economic objective and the SE consumption objective significantly reduce the energy consumption.

  14. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
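    A generic sketch of the hyperboloid-intersection idea: each sensor pair's arrival-time difference constrains the source to a hyperboloid, and a smooth objective over all pairs is minimized. This plain TDOA least-squares objective is an assumption standing in for the paper's virtual field construction.

      # Locate a source from arrival-time differences over all sensor pairs.
      import numpy as np
      from scipy.optimize import minimize

      def locate(sensors, t_arr, v):
          # sensors: (m, 3); t_arr: (m,) picked arrival times; v: wave speed.
          def cost(x):
              d = np.linalg.norm(sensors - x, axis=1)
              r = [(d[i] - d[j]) - v * (t_arr[i] - t_arr[j])
                   for i in range(len(d)) for j in range(i + 1, len(d))]
              return np.sum(np.square(r))        # sum over hyperboloid residuals
          return minimize(cost, x0=sensors.mean(axis=0), method="Nelder-Mead").x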

  15. Control strategy optimization of HVAC plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio

    In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and the environmental and occupied space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to real-time setting.

  16. Analysis of an inventory model for both linearly decreasing demand and holding cost

    NASA Astrophysics Data System (ADS)

    Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.

    2016-03-01

    This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform over time, owing to variation in the time value of money; here we consider a holding cost that decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is included.

  17. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  18. An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control

    DTIC Science & Technology

    1989-12-01

    Zicker. MMAE-Based Control with Space-Time Point Process Observations. IEEE Transactions on Aerospace and Electronic Systems, AES-21(3):292-300, 1985... Transactions of the Conference of Army Mathematicians, Bethesda MD, 1982. (AD-P001 033). 65. William L. Zicker. Pointing and Tracking of Particle

  19. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.

  20. A novel method for vaginal cylinder treatment planning: a seamless transition to 3D brachytherapy

    PubMed Central

    Wu, Vincent; Wang, Zhou; Patil, Sachin

    2012-01-01

    Purpose Standard treatment plan libraries are often used to ensure a quick turn-around time for vaginal cylinder treatments. Recently there is increasing interest in transitioning from conventional 2D radiograph-based brachytherapy to 3D image-based brachytherapy, which has resulted in a substantial increase in treatment planning time and a decrease in patient throughput. We describe a novel technique that significantly reduces the treatment planning time for CT-based vaginal cylinder brachytherapy. Material and methods The Oncentra MasterPlan TPS allows multiple sets of data points to be classified as applicator points, a capability that is harnessed in this method. The method relies on two hard anchor points - the first dwell position in a catheter and an applicator-configuration-specific dwell position serving as the plan origin - and a soft anchor point beyond the last active dwell position to define the axis of the catheter. The spatial locations of various data points on the applicator's surface and at 5 mm depth are stored in an Excel file that can easily be transferred into a patient CT data set using window operations and then used for treatment planning. The remainder of the treatment planning process remains unaffected. Results The treatment plans generated on the Oncentra MasterPlan TPS using this novel method yielded results comparable to those generated on the Plato TPS using a standard treatment plan library in terms of treatment times, dwell weights and dwell times for a given optimization method and normalization points. Less than 2% difference was observed between the treatment times generated by the two systems. Using the above method, the entire planning process, including CT import, catheter reconstruction, multiple data point definition, optimization and dose prescription, can be completed in ~5-10 minutes. Conclusion The proposed method allows a smooth and efficient transition to 3D CT-based vaginal cylinder brachytherapy planning. PMID:23349650

  1. Optimal management of non-Markovian biological populations

    USGS Publications Warehouse

    Williams, B.K.

    2007-01-01

    Wildlife populations typically are described by Markovian models, with population dynamics influenced at each point in time by current but not previous population levels. Considerable work has been done on identifying optimal management strategies under the Markovian assumption. In this paper we generalize this work to non-Markovian systems, for which population responses to management are influenced by lagged as well as current status and/or controls. We use the maximum principle of optimal control theory to derive conditions for the optimal management of such a system, and illustrate the effects of lags on the structure of optimal habitat strategies for a predator-prey system.

  2. Implementation of a dose gradient method into optimization of dose distribution in prostate cancer 3D-CRT plans

    PubMed Central

    Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł

    2014-01-01

    Aim: The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background: 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT, but none of these methods is in common use. Materials and methods: Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and maximum dose to the femoral heads. The method was tested on 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume and minimum and maximum doses were analyzed. Results: The quality of plans obtained with the proposed optimization algorithms was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the planner-prepared plans when beam weights and wedge angles were optimized. Introducing a correction factor for the patient body outline into the dose gradient at the ICRU point improved dose distribution homogeneity: on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose–volume histogram for the rectum was observed. Conclusions: Optimization greatly shortens planning time; the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
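
    The zero-gradient condition can be written as a small linear system. A minimal sketch for three beams, assuming per-beam dose gradient vectors at the ICRU point are available (the numbers below are placeholders, not measured beam data), with weights normalized to one:

    ```python
    # Sketch of the zero-gradient condition: choose three beam weights so
    # that the weighted sum of per-beam dose gradients at the ICRU
    # reference point vanishes, with the weights summing to one.
    import numpy as np

    # per-beam dose gradient at the ICRU point (a.u./cm), one column per beam
    G = np.array([[ 0.8, -0.5, -0.4],   # d(dose)/dx for beams 1..3
                  [ 0.1,  0.6, -0.6]])  # d(dose)/dy

    A = np.vstack([G, np.ones((1, 3))])  # gradient = 0 and sum(w) = 1
    b = np.array([0.0, 0.0, 1.0])
    w = np.linalg.solve(A, b)
    print("beam weights:", np.round(w, 3), "residual gradient:", G @ w)
    ```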

  3. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
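
    As a concrete instance of the scheme families reviewed, a minimal sketch of fourth-order central differencing in space with classical RK4 time marching, applied to linear advection on a periodic grid; the grid density and Courant number are illustrative choices, not the review's recommendations:

    ```python
    # Fourth-order central differences in space, RK4 in time, for
    # u_t + c u_x = 0 on a periodic grid; error after one period probes
    # how many points per wavelength the scheme needs.
    import numpy as np

    c, L, N = 1.0, 1.0, 200
    dx = L / N
    x = np.arange(N) * dx
    u = np.sin(2 * np.pi * x)      # one wavelength across the domain
    dt = 0.5 * dx / c              # Courant number 0.5, assumed safe

    def dudx(u):
        # fourth-order central difference on a periodic grid
        return (np.roll(u, 2) - 8 * np.roll(u, 1)
                + 8 * np.roll(u, -1) - np.roll(u, -2)) / (12 * dx)

    def rhs(u):
        return -c * dudx(u)

    for _ in range(int(1.0 / dt)):  # advance one period
        k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    print("max error after one period:",
          np.abs(u - np.sin(2 * np.pi * x)).max())
    ```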

  4. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
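
    For context, a minimal sketch of the underlying top-down/bottom-up combination (direction-optimizing BFS); here the switch uses a simple frontier-size threshold, whereas the paper's contribution is predicting that switching point by regression instead:

    ```python
    # Hybrid BFS: expand small frontiers top-down; for large frontiers,
    # switch to bottom-up, where each unvisited vertex scans for a
    # visited parent. The threshold alpha is an illustrative stand-in
    # for the predicted switching point.
    def hybrid_bfs(adj, source, alpha=0.05):
        n = len(adj)
        dist = [-1] * n
        dist[source] = 0
        frontier = [source]
        level = 0
        while frontier:
            level += 1
            if len(frontier) < alpha * n:
                # top-down: expand the frontier's out-edges
                nxt = []
                for u in frontier:
                    for v in adj[u]:
                        if dist[v] == -1:
                            dist[v] = level
                            nxt.append(v)
            else:
                # bottom-up: every unvisited vertex looks for a visited parent
                fset = set(frontier)
                nxt = []
                for v in range(n):
                    if dist[v] == -1 and any(u in fset for u in adj[v]):
                        dist[v] = level
                        nxt.append(v)
            frontier = nxt
        return dist

    adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]  # small undirected graph
    print(hybrid_bfs(adj, 0))
    ```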

  5. Coaching of physicians by RNs to improve diabetes care.

    PubMed

    Frederick, Mary L; Johnson, Pamela Jo; Duffee, Janelle; McCarthy, Bruce D

    2013-01-01

    The purpose of this study is to describe preliminary results of an innovative quality improvement intervention focused on improving physician practice patterns in diabetes care via Coaching Physicians by RN certified diabetes educators (CDEs), a program called "CPR for Diabetes Care." Methods: The program identified primary care physicians with optimal diabetes control rates below the system aggregate (n = 195). Physicians with the lowest rates (n = 74) were targeted for comprehensive intervention. All other low-performing physicians practicing in the same clinic system (n = 121) comprised the comparison group. Data were obtained from electronic diabetes registries for 2007 and 2008. Each physician had a set of measures from 2 points in time. Measures included optimal diabetes scores and the 5 component measures of the optimal diabetes care bundle (A1C <7, low-density lipoprotein cholesterol <100, blood pressure <130/80, aspirin use if older than 40, and no tobacco use). T tests and difference-in-difference models were used to examine changes over time. Results: Optimal diabetes scores increased 11.7 points (from 14.7% to 26.4%) for intervention physicians and 4.0 points (from 29.7% to 32.9%) for comparison physicians; the improvement was greater for the intervention group. The greatest component improvements were in control of blood pressure and cholesterol. Conclusions: Coaching low-performing physicians dramatically improved the proportion of diabetes patients with optimal diabetes control. The CPR for Diabetes Care program represents an innovative and effective way to address the long-standing problem of disseminating and sustaining quality improvement efforts by focusing on low-performing physicians.

  6. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and possibly even increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, and a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of an interpolation grid, allowed optimization of the sampling scheme, distinguishing among areas with different priority levels.
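
    A minimal sketch of spatial simulated annealing under the MMSD criterion, assuming a square field, a dense random evaluation grid as a stand-in for the interpolation grid, and illustrative cooling parameters:

    ```python
    # Spatial simulated annealing for MMSD: the expected distance from a
    # random field location to its nearest sample point is minimized by
    # jittering one sample at a time under a Metropolis acceptance rule.
    import numpy as np

    rng = np.random.default_rng(0)
    field = rng.uniform(0, 100, size=(2000, 2))   # evaluation grid stand-in
    samples = rng.uniform(0, 100, size=(15, 2))   # initial sampling scheme

    def mmsd(samples):
        d = np.linalg.norm(field[:, None, :] - samples[None, :, :], axis=2)
        return d.min(axis=1).mean()

    T, cool = 5.0, 0.995
    cost = mmsd(samples)
    for it in range(3000):
        cand = samples.copy()
        i = rng.integers(len(cand))
        cand[i] = np.clip(cand[i] + rng.normal(0, 5, 2), 0, 100)  # perturb
        c = mmsd(cand)
        if c < cost or rng.random() < np.exp((cost - c) / T):  # Metropolis
            samples, cost = cand, c
        T *= cool                                  # exponential cooling
    print("optimized MMSD:", round(cost, 2))
    ```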

  7. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time, which constitutes a first approach to the automation of mapping applications such as geometric correction, creation of orthophotos, and generation of 3D models. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated on separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
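
    A minimal sketch of tuned SIFT tie-point extraction with OpenCV; the parameter values and the ratio-test threshold are illustrative, not the paper's optimized settings, and the image file names are hypothetical (requires opencv-python, OpenCV >= 4.4):

    ```python
    # Non-default SIFT parameters biased toward large, stable features,
    # followed by brute-force matching with Lowe's ratio test.
    import cv2

    def tie_points(img1, img2):
        # fewer octave layers and a higher contrast threshold bias the
        # detector toward large, robust features
        sift = cv2.SIFT_create(nOctaveLayers=2, contrastThreshold=0.08,
                               edgeThreshold=8, sigma=2.0)
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(d1, d2, k=2)
        # keep only distinctive correspondences
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        return [(k1[m.queryIdx].pt, k2[m.trainIdx].pt) for m in good]

    img1 = cv2.imread("epoch1.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical files
    img2 = cv2.imread("epoch2.tif", cv2.IMREAD_GRAYSCALE)
    print(len(tie_points(img1, img2)), "tie points")
    ```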

  8. Shifts in Key Time Points and Strategies for a Multisegment Motor Task in Healthy Aging Subjects.

    PubMed

    Casteran, Matthieu; Hilt, Pauline M; Mourey, France; Manckoundia, Patrick; French, Robert; Thomas, Elizabeth

    2018-05-05

    In this study, we compared key temporal points in the whole body pointing movement of healthy aging and young subjects. During this movement, the subject leans forward from a standing position to reach a target. As it involves forward inclination of the trunk, the movement creates a risk of falling. We examined two strategic time points during the task: first, the crossover point, where the velocity of the center of mass (CoM) in the vertical dimension outstrips the velocity in the anteroposterior dimension, and second, the time to peak of the CoM velocity profile. Transitions to stabilizing postures occur at these time points. Both occurred earlier in aging subjects. The crossover point also showed adjustments with target distance in aging subjects, while this was not observed in younger subjects. The shifts in these key time points could not be attributed to differences in movement duration between the two groups. Investigation with an optimal control model showed that the temporal adjustment as a function of target distance in the healthy aging subjects fits a strategy that emphasizes equilibrium maintenance rather than absolute work as a control strategy.

  9. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    NASA Astrophysics Data System (ADS)

    Chen, Xudong

    2010-07-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.

  10. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful application demonstrate the usefulness of weak anticipative information. A job shop scheduling case with a makespan criterion presents a real instance of customized flexible furniture production optimization; the genetic algorithm for job shop scheduling optimization is presented. A simulation-based inventory control case describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are discussed from the optimization, modeling, and learning points of view.

  11. Shortest path problem on a grid network with unordered intermediate points

    NASA Astrophysics Data System (ADS)

    Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen

    2017-10-01

    We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments were performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute force show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution while using significantly less computation time.
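
    A minimal sketch of a two-stage heuristic of this kind, assuming stage one orders the intermediate points greedily by Manhattan distance and stage two connects consecutive points with BFS shortest paths; the grid and point set are illustrative:

    ```python
    # Stage 1: greedy (nearest-neighbor) ordering of unordered points.
    # Stage 2: BFS shortest path lengths between consecutive points.
    from collections import deque

    def bfs_path_len(grid, a, b):
        rows, cols = len(grid), len(grid[0])
        q, seen = deque([(a, 0)]), {a}
        while q:
            (r, c), d = q.popleft()
            if (r, c) == b:
                return d
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                        and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    q.append(((nr, nc), d + 1))
        return None

    def route(grid, start, points):
        order, cur, rest, total = [start], start, set(points), 0
        while rest:                      # stage 1: greedy ordering
            nxt = min(rest, key=lambda p: abs(p[0]-cur[0]) + abs(p[1]-cur[1]))
            rest.remove(nxt)
            total += bfs_path_len(grid, cur, nxt)  # stage 2: shortest leg
            order.append(nxt)
            cur = nxt
        return order, total

    grid = [[0]*6 for _ in range(6)]
    grid[2][1] = grid[2][2] = grid[2][3] = 1       # an obstacle wall
    print(route(grid, (0, 0), [(5, 5), (0, 5), (5, 0)]))
    ```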

  12. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach.
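
    A minimal sketch of the Nash selection step, assuming the Pareto front has already been computed and taking the per-objective worst values on the front as the disagreement point; the numbers are illustrative:

    ```python
    # Among non-dominated (IAE_setpoint, IAE_load) pairs, choose the
    # point maximizing the product of improvements over a disagreement
    # point (here the per-objective worst values on the front).
    import numpy as np

    pareto = np.array([  # columns: IAE set-point, IAE load disturbance
        [0.9, 2.6], [1.1, 2.1], [1.4, 1.7], [1.9, 1.4], [2.6, 1.2]])

    d = pareto.max(axis=0)               # disagreement point, assumed
    gains = np.prod(d - pareto, axis=1)  # Nash product per candidate
    nash = pareto[np.argmax(gains)]
    print("Nash-balanced tuning objectives:", nash)
    ```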

  13. Fuel optimal maneuvers for spacecraft with fixed thrusters

    NASA Technical Reports Server (NTRS)

    Carter, T. C.

    1982-01-01

    Several mathematical models, including a minimum integral square criterion problem, were used for the qualitative investigation of fuel optimal maneuvers for spacecraft with fixed thrusters. The solutions consist of intervals of "full thrust" and "coast" indicating that thrusters do not need to be designed as "throttleable" for fuel optimal performance. For the primary model considered, singular solutions occur only if the optimal solution is "pure translation". "Time optimal" singular solutions can be found which consist of intervals of "coast" and "full thrust". The shape of the optimal fuel consumption curve as a function of flight time was found to depend on whether or not the initial state is in the region admitting singular solutions. Comparisons of fuel optimal maneuvers in deep space with those relative to a point in circular orbit indicate that qualitative differences in the solutions can occur. Computation of fuel consumption for certain "pure translation" cases indicates that considerable savings in fuel can result from the fuel optimal maneuvers.

  14. [Extraction Optimization of Rhizome of Curcuma longa by Response Surface Methodology and Support Vector Regression].

    PubMed

    Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan

    2015-12-01

    To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa. On the basis of single-factor experiments, the ethanol concentration, the ratio of liquid to solid, and the microwave time were selected for further optimization. Support Vector Regression (SVR) and the Central Composite Design-Response Surface Methodology (CCD) algorithm were utilized to design and establish models, respectively, while Particle Swarm Optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The sum of curcumin, demethoxycurcumin, and bisdemethoxycurcumin determined by HPLC was used as the evaluation indicator. The optimal parameters of microwave-assisted extraction were as follows: ethanol concentration of 69%, ratio of liquid to solid of 21 : 1, and microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviation in yield was less than 1.2%.
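
    A minimal sketch of the surrogate-plus-search idea: fit an SVR on design-point data and let a bare-bones particle swarm search the fitted surface for its maximum. The training data and swarm constants are synthetic placeholders, not the study's measurements:

    ```python
    # SVR surrogate over (ethanol %, liquid:solid ratio, microwave time)
    # -> yield, then a minimal PSO searching the surrogate's maximum.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X = rng.uniform([50, 10, 30], [90, 30, 90], size=(30, 3))  # design points
    y = (29 - ((X[:, 0]-69)/20)**2 - ((X[:, 1]-21)/8)**2
         - ((X[:, 2]-55)/25)**2)                    # synthetic yields
    model = SVR(C=10.0, gamma="scale").fit(X, y)

    lo, hi = np.array([50, 10, 30]), np.array([90, 30, 90])
    pos = rng.uniform(lo, hi, size=(20, 3))          # swarm positions
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), model.predict(pos)
    for _ in range(100):
        g = pbest[np.argmax(pval)]                   # global best
        vel = (0.7*vel
               + 1.5*rng.random(pos.shape)*(pbest - pos)
               + 1.5*rng.random(pos.shape)*(g - pos))
        pos = np.clip(pos + vel, lo, hi)
        val = model.predict(pos)
        better = val > pval
        pbest[better], pval[better] = pos[better], val[better]
    print("predicted optimum (ethanol %, ratio, time s):", np.round(g, 1))
    ```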

  15. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    NASA Astrophysics Data System (ADS)

    Onut, S.; Kamber, M. R.; Altay, G.

    2014-03-01

    The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while meeting customer demands. All points should be visited exactly once, by one vehicle, on one route, and the total demand on a route should not exceed the capacity of the vehicle assigned to it. VRP variants arise from real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs; there are two types of vehicles in our problem. This study uses real-world data obtained from a company that operates in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS and the optimal solution is found in a reasonable time.

  16. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
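
    To make the upper/lower iteration concrete, a minimal sketch on a deliberately tiny problem: a repeated matrix game with discounting, where the upper iteration applies min-max and the lower max-min, and a persistent gap signals that no pure-strategy saddle-point equilibrium exists. This is far simpler than the paper's nonlinear-systems setting:

    ```python
    # Upper/lower value iteration for a discounted zero-sum matrix game.
    import numpy as np

    A = np.array([[3.0, 1.0],
                  [2.0, 4.0]])    # stage payoff: row maximizes, column minimizes
    gamma = 0.9
    V_up, V_lo = 0.0, 0.0         # arbitrary initializations
    for _ in range(200):
        Q = A + gamma * V_up
        V_up = Q.max(axis=0).min()  # upper value: min-max
        Q = A + gamma * V_lo
        V_lo = Q.min(axis=1).max()  # lower value: max-min
    print("upper:", round(V_up, 3), "lower:", round(V_lo, 3))
    # a gap between the two indicates no pure-strategy saddle point
    ```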

  17. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies, semi-quantitative assessment of lung perfusion, etc. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as the reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the optimal point x to the lines formed by the corresponding points between In. and Ex. Using the nonlinear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining the lung expansion. This lung motion analysis based on vector analysis and non-linear optimization shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
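
    A minimal sketch of that optimization step: find the point minimizing the summed distances to 3D lines through corresponding inspiration/expiration landmarks. The landmark coordinates below are synthetic:

    ```python
    # Minimize sum of point-to-line distances, each line through a pair
    # of corresponding In./Ex. landmarks.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    center = np.array([10.0, 20.0, 30.0])   # hypothetical expansion focus
    p_in = rng.uniform(-50, 50, size=(20, 3)) + center
    p_ex = center + 0.6 * (p_in - center) + rng.normal(0, 1, (20, 3))

    def cost(x):
        d = p_ex - p_in
        d = d / np.linalg.norm(d, axis=1, keepdims=True)  # line directions
        r = x - p_in
        # distance from x to each line: component of r orthogonal to d
        perp = r - np.sum(r * d, axis=1, keepdims=True) * d
        return np.linalg.norm(perp, axis=1).sum()

    res = minimize(cost, p_in.mean(axis=0), method="Nelder-Mead")
    print("estimated expansion focus:", np.round(res.x, 1))
    ```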

  18. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

    This paper discusses the real-time optimal construction of DTM by two measures. One is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10000 data points, the formula-based transformation costs 0.810 s, while the table look-up method costs 0.188 s, indicating that the latter is superior. The other is to adjust the density of the point cloud acquired from lidar: a suitable proportion of the data points is used for 3D construction to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.

  19. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.

  20. Floating-Point Modules Targeted for Use with RC Compilation Tools

    NASA Technical Reports Server (NTRS)

    Sahin, Ibrahin; Gloster, Clay S.

    2000-01-01

    Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems and, in most cases, they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time. This increased development time provides the motivation to develop an optimized module library, with an assembly-language-style instruction interface, for use with future RC systems, which will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experience show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.

  1. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
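
    A minimal sketch of the Dempster-Shafer combination step, assuming the link-based and interval-detector evidence have been reduced to mass functions over a few discretized travel-time bins; the masses are illustrative:

    ```python
    # Dempster's rule for mass functions over singleton hypotheses:
    # conflicting mass is discarded and the rest renormalized.
    def dempster_combine(m1, m2, frame):
        k = sum(m1[a] * m2[b] for a in frame for b in frame if a != b)
        if k == 1.0:
            raise ValueError("total conflict; sources cannot be combined")
        return {a: m1[a] * m2[a] / (1.0 - k) for a in frame}

    bins = ["<10 min", "10-15 min", ">15 min"]
    m_links = {"<10 min": 0.5, "10-15 min": 0.4, ">15 min": 0.1}  # point detectors
    m_path = {"<10 min": 0.3, "10-15 min": 0.6, ">15 min": 0.1}   # interval detectors
    print(dempster_combine(m_links, m_path, bins))
    ```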

  2. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  3. Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body

    NASA Astrophysics Data System (ADS)

    Wang, Xijing; Li, Jisheng

    With modern satellites trending toward both larger and smaller scales, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and as the development of computer technology enables new optimal algorithms to be used, a powerful tool for solving the problem is available. Many papers on attitude adjustment have been published; the spacecraft configurations considered are rigid bodies with flexible parts or gyrostat-type systems, and the objective function typically includes minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum exchange devices, performed with a sequential single-axis slewing strategy. Recently, the simultaneous three-axis minimum-time maneuver (reorientation) problem has been studied by many researchers. Minimum-time maneuvering of a rigid spacecraft within onboard power limits is important to research, both for potential space applications such as surveying multiple targets and for its academic value; it is also a basic problem because solutions for maneuvering flexible spacecraft build on the solution to the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis an analytical solution is possible, and the switching curve passing through the state-space origin is parabolic; for multiple axes an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are in general only quasi-time-optimal. On the basis of the minimum principle, the reorientation of an inertially symmetric spacecraft under a time cost function, from an initial state of rest to a final state of rest, is derived. The solution proceeds as follows. First, the necessary conditions for solving the problem are deduced from the minimum principle; the necessary conditions for optimality yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case the solution is a bang-bang maneuver, with the control profile characterized by saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense; physically, the larger the magnitude of the control torque, the shorter the time, so saturated controls are used in the singular case as well. Second, since the controls are always at their maximum, the key problem is to determine the switch points, so the original problem reduces to finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust optimization method, is used to determine the switching structure in the absence of gyroscopic coupling, with improvements made over the traditional GA in this work. The homotopy method for solving the resulting nonlinear algebraic equations is based on rigorous topological continuum theory; following the homotopy idea, relaxation parameters are introduced and the switch points are computed with simulated annealing. Computer simulation results using a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.
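
    For the single-axis case mentioned above, the time-optimal rest-to-rest solution can be written down directly. A minimal sketch for one double-integrator axis with an assumed inertia and torque limit, showing the single-switch bang-bang profile:

    ```python
    # Time-optimal rest-to-rest slew of one axis (I * theta'' = u):
    # full torque +u_max for half the maneuver, then -u_max, with the
    # switch time given by the parabolic switching structure.
    import numpy as np

    I = 1.0          # axis inertia, assumed
    u_max = 0.5      # torque limit, assumed
    theta_f = 1.2    # slew angle (rad)

    t_switch = np.sqrt(theta_f * I / u_max)  # switch at half of total time
    t_final = 2 * t_switch
    print(f"switch at {t_switch:.3f} s, total time {t_final:.3f} s")

    def control(t):
        """Bang-bang torque profile for the rest-to-rest slew."""
        return u_max if t < t_switch else -u_max

    # quick check by integrating the dynamics
    dt, th, w = 1e-4, 0.0, 0.0
    for t in np.arange(0, t_final, dt):
        w += control(t) / I * dt
        th += w * dt
    print(f"final angle {th:.3f} rad, final rate {w:.5f} rad/s")
    ```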

  4. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points; the maximum perturbation distance reduces linearly with the number of iterations, and the acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.

  5. Interactive Land-Use Optimization Using Laguerre Voronoi Diagram with Dynamic Generating Point Allocation

    NASA Astrophysics Data System (ADS)

    Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.

    2017-09-01

    In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods but, in many cases, their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates generating points of the Voronoi diagram to appropriate positions. We then construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.

  6. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
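
    A minimal sketch of such a forecaster: SVR on lagged load values, with the (C, gamma, epsilon) search parallelized via scikit-learn's grid search. The series, lag count, and parameter grid are synthetic stand-ins for the paper's data and tuning:

    ```python
    # SVR short-term load forecaster with parallel parameter optimization.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

    rng = np.random.default_rng(3)
    t = np.arange(2000)
    load = 100 + 30*np.sin(2*np.pi*t/96) + rng.normal(0, 3, t.size)  # 15-min data

    lags = 8
    X = np.column_stack([load[i:i - lags] for i in range(lags)])
    y = load[lags:]

    grid = GridSearchCV(
        SVR(),
        {"C": [1, 10, 100], "gamma": ["scale", 0.01], "epsilon": [0.1, 1.0]},
        cv=TimeSeriesSplit(n_splits=3),
        n_jobs=-1,               # parallel parameter optimization
    )
    grid.fit(X[:-96], y[:-96])   # hold out the last day
    pred = grid.predict(X[-96:])
    print("best params:", grid.best_params_,
          "MAE:", np.abs(pred - y[-96:]).mean().round(2))
    ```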

  7. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of load forecasting technique can provide accurate prediction of load power that will happen in future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during the longer time period instead of using the snapshot of load at the time when the reconfiguration happens, and thus it can provide information to the distribution system operator (DSO) to better operate the system reconfiguration to achieve optimal solutions. Thus, this paper proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with support vector regression (SVR) based forecaster and parallel parameters optimization. And the network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.

  8. Short-Term Load Forecasting-Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.

  9. Optimizing the Attitude Control of Small Satellite Constellations for Rapid Response Imaging

    NASA Astrophysics Data System (ADS)

    Nag, S.; Li, A.

    2016-12-01

    Distributed Space Missions (DSMs), such as formation flight and constellations, are being recognized as important solutions to increase measurement samples over space and time. Given the increasingly accurate attitude control systems emerging in the commercial market, small spacecraft now have the ability to slew and point within a few minutes of notice. In spite of hardware development in CubeSats at the payload (e.g. NASA InVEST) and subsystem (e.g. Blue Canyon Technologies) levels, software development for tradespace analysis in constellation design (e.g. Goddard's TAT-C), planning and scheduling development for single spacecraft (e.g. GEO-CAPE), and aerial flight path optimization for UAVs (e.g. NASA Sensor Web), there is a gap in open-source, open-access software tools for planning and scheduling distributed satellite operations in terms of pointing and observing targets. This paper will demonstrate results from a tool being developed for scheduling pointing operations of narrow field-of-view (FOV) sensors over mission lifetime to maximize metrics such as global coverage and revisit statistics. Past research has shown the need for at least fourteen satellites to cover the Earth globally every day using a Landsat-like sensor. Increasing the FOV threefold reduces the need to four satellites, but adds image distortion and BRDF complexities to the observed reflectance. If narrow-FOV sensors on a small satellite constellation were commanded using robust algorithms to slew their sensors dynamically, they would be able to cover the global landmass in a coordinated fashion much faster, without sacrificing spatial resolution or adding BRDF effects. Our algorithm to optimize constellation satellite pointing is based on a dynamic programming approach under the constraints of orbital mechanics and existing attitude control systems for small satellites. As a case study for our algorithm, we minimize the time required to cover the 17000 Landsat images with maximum signal-to-noise-ratio fall-off and minimum image distortion among the satellites, using Landsat's specifications. Attitude-specific constraints such as power consumption, response time, and stability were factored into the optimality computations. The algorithm can integrate cloud cover predictions, specific ground and air assets, and angular constraints.

  10. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal using the built-in camera of smartphones or a photoplethysmograph. An optimization of the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and the photoplethysmograph, and assess whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal with a cutoff between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz give the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
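
    A minimal sketch of derivative-based fiducial detection on a band-passed PPG signal; the waveform is synthetic and the band edges are taken from within the reported ranges:

    ```python
    # Band-pass the PPG, then mark pulse arrivals at peaks of the first
    # derivative (the fiducial found most accurate in the study).
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    fs = 100.0                              # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1/fs)
    ppg = (np.sin(2*np.pi*1.2*t) + 0.3*np.sin(2*np.pi*2.4*t)
           + 0.05*np.random.default_rng(4).normal(size=t.size))

    b, a = butter(2, [0.5/(fs/2), 2.7/(fs/2)], btype="band")  # 0.5-2.7 Hz
    filtered = filtfilt(b, a, ppg)

    d1 = np.gradient(filtered, 1/fs)        # first derivative
    peaks, _ = find_peaks(d1, distance=int(0.4*fs))  # >= 0.4 s between beats
    rr = np.diff(t[peaks])                  # beat-to-beat intervals for HRV
    print(f"{len(peaks)} beats, mean interval {rr.mean():.3f} s")
    ```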

  11. Real time monitoring system used in route planning for the electric vehicle

    NASA Astrophysics Data System (ADS)

    Ionescu, LM; Mazare, A.; Serban, G.; Ionita, S.

    2017-10-01

    The electric vehicle is a new consumer of electricity that is becoming more and more widespread. Under these circumstances, new strategies for optimizing power consumption and increasing vehicle autonomy must be designed. These must include route planning along with consumption, fuelling points, and points of interest. The hardware and software solution we propose allows non-invasive monitoring of power consumption with energy autonomy (it adds no extra consumption), transmission of the data to a server, and fusion of the data with the route, the points of interest along the route, and the power supply points. As a result, an optimal route planning service is provided to the driver, considering the route, the requirements of the electric vehicle, and the consumer profile. The solution can be easily installed on any type of electric car, as it does not involve any intervention on the equipment.

  12. Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey

    1995-01-01

    An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.

  13. Time-optimal Aircraft Pursuit-evasion with a Weapon Envelope Constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including a realistic weapon envelope, is analyzed using differential game theory. Sixth-order nonlinear point-mass vehicle models are employed, and an arbitrary weapon envelope geometry is allowed. The performance index is a linear combination of flight time and the square of the vehicle acceleration. A closed-form solution to this high-order differential game is then obtained using feedback linearization. The solution is in the form of a feedback guidance law together with a quartic polynomial for time-to-go. Due to its modest computational requirements, this nonlinear guidance law is useful for onboard real-time implementation.

  14. Resistance spot welding of ultra-fine grained steel sheets produced by constrained groove pressing: Optimization and characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodabakhshi, F.; Kazeminezhad, M., E-mail: mkazemi@sharif.edu; Kokabi, A.H.

    2012-07-15

    Constrained groove pressing as a severe plastic deformation method is utilized to produce ultra-fine grained low carbon steel sheets. The ultra-fine grained sheets are joined via resistance spot welding process and the characteristics of spot welds are investigated. Resistance spot welding process is optimized for welding of the sheets with different severe deformations and their results are compared with those of as-received samples. The effects of failure mode and expulsion on the performance of ultra-fine grained sheet spot welds have been investigated in the present paper and the welding current and time of resistance spot welding process according to these subjects are optimized. Failure mode and failure load obtained in tensile-shear test, microhardness, X-ray diffraction, transmission electron microscope and scanning electron microscope images have been used to describe the performance of spot welds. The region between interfacial to pullout mode transition and expulsion limit is defined as the optimum welding condition. The results show that optimum welding parameters (welding current and welding time) for ultra-fine grained sheets are shifted to lower values with respect to those for as-received specimens. In ultra-fine grained sheets, one new region is formed, named the recrystallized zone, in addition to the fusion zone, heat affected zone and base metal. It is shown that microstructures of different zones in ultra-fine grained sheets are finer than those of as-received sheets. Highlights: the resistance spot welding process is optimized for joining of UFG steel sheets; the optimum welding current and time decrease with increasing CGP pass number; microhardness at the BM, HAZ, FZ and recrystallized zone is enhanced due to CGP.

  15. Using Biomechanical Optimization To Interpret Dancers’ Pose Selection For A Partnered Spin

    DTIC Science & Technology

    2009-05-06

    optimized performance of a straight arm backward longswing on the still rings in men's artistic gymnastics. Because gymnasts lose points for excessive swing at... an actual performance and used that as the basis for their search. Yeadon determined that with timing within 15 ms, gymnasts can minimize their excess... are moving in an optimal way. 2.5 Body Modeling 2.5.1 Building the Body In his study involving gymnasts on the rings, Yeadon developed a body model com

  16. Flight Planning

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Seagull Technology, Inc., Sunnyvale, CA, produced a computer program under a Langley Research Center Small Business Innovation Research (SBIR) grant called STAFPLAN (Seagull Technology Advanced Flight Plan) that plans optimal trajectory routes for small to medium sized airlines to minimize direct operating costs while complying with various airline operating constraints. STAFPLAN incorporates four input databases (weather, route data, aircraft performance, and flight-specific information such as times, payload, crew, and fuel cost) to provide the correct amount of fuel, optimal cruise altitude, climb and descent points, optimal cruise speed, and flight path.

  17. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two-level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.

  18. Laser tissue welding in genitourinary reconstructive surgery: assessment of optimal suture materials.

    PubMed

    Poppas, D P; Klioze, S D; Uzzo, R G; Schlossberg, S M

    1995-02-01

    Laser tissue welding in genitourinary reconstructive surgery has been shown in animal models to decrease operative time, improve healing, and decrease postoperative fistula formation when compared with conventional suture controls. Although the absence of suture material is the ultimate goal, this has not been shown to be practical with current technology for larger repairs; therefore, suture-assisted laser tissue welding will likely be performed. This study sought to determine the optimal suture to be used during laser welding. The integrity of various organic and synthetic sutures exposed to laser irradiation was analyzed. Sutures studied included gut, clear Vicryl, clear polydioxanone suture (PDS), and violet PDS. Sutures were irradiated with a potassium titanyl phosphate (KTP)-532 laser or an 808-nm diode laser, with and without the addition of a light-absorbing chromophore (fluorescein or indocyanine green, respectively). A remote temperature-sensing device obtained real-time surface temperatures during lasing. The average temperature, time, and total energy at break point were recorded. Overall, gut suture achieved significantly higher temperatures and withstood higher average energy delivery at break point with both the KTP-532 and the 808-nm diode lasers compared with all other groups (P < 0.05). Both chromophore-treated groups had higher average temperatures at break point combined with lower average energy. The break-point temperature for all groups other than gut occurred at 91 degrees C or less. The optimal temperature range for tissue welding appears to be between 60 and 80 degrees C. Gut suture offers the greatest margin of error for KTP and 808-nm diode laser welding, with or without the use of a chromophore.

  19. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft are presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model are uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of the zeroth-order approximation by a special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  20. Clinical Outcomes of an Optimized Prolate Ablation Procedure for Correcting Residual Refractive Errors Following Laser Surgery.

    PubMed

    Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im

    2017-02-01

    The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.

  1. Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert

    2009-05-01

    We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.

  2. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers formed by grouping the proportional, integral, and derivative actions with fuzzy inferencing in different forms. The fractional order (FO) rate of the error signal and the FO integral of the control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF), along with the integro-differential operators, are tuned with a real-coded genetic algorithm (GA) to produce optimum closed-loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection, and minimal variation of the manipulated variable (i.e., smaller actuator requirement). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each of the controller structures in handling the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
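
    For readers unfamiliar with fractional-order operators, the sketch below shows one common way to approximate them numerically, the Grünwald-Letnikov expansion, inside a plain (non-fuzzy) FO PID law; the gains, orders, and error signal are placeholder values, not the GA-tuned controller from the paper.

        import numpy as np
        from scipy.special import binom

        def gl_fractional_derivative(signal, alpha, h):
            """Grunwald-Letnikov approximation of D^alpha on a sampled
            signal with step h; alpha < 0 acts as a fractional integral."""
            n = len(signal)
            w = np.array([(-1)**j * binom(alpha, j) for j in range(n)])
            # D^alpha f(t_k) ~ h**(-alpha) * sum_j w[j] * f(t_{k-j})
            return h**(-alpha) * np.array(
                [np.dot(w[:k+1], signal[k::-1]) for k in range(n)])

        # Toy FO PID on an example error history (assumed numbers):
        # u = Kp*e + Ki*D^(-lam) e + Kd*D^(mu) e
        h, t = 0.01, np.arange(0, 2, 0.01)
        e = np.exp(-t)                        # example error signal
        Kp, Ki, Kd, lam, mu = 1.0, 0.5, 0.2, 0.9, 0.7
        u = (Kp * e
             + Ki * gl_fractional_derivative(e, -lam, h)
             + Kd * gl_fractional_derivative(e, mu, h))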

  3. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find optimal network weights and thresholds for a BP neural network can overcome the BP network's tendency to become trapped in a local minimum. The optimized network is used both for time-series prediction and for a same-month forecast, yielding two predicted values. These two predicted values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with energy data from three buildings in Hefei.

  4. A new optimization tool path planning for 3-axis end milling of free-form surfaces based on efficient machining intervals

    NASA Astrophysics Data System (ADS)

    Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter

    2018-05-01

    A large number of studies of 3-axis end milling of free-form surfaces seek to optimize tool path planning. These approaches try to minimize machining time by reducing the total tool path length while respecting the criterion of maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is largest. The free-form surface is often treated as a single machining area, so optimization over the entire surface is limited: it is difficult to define tool trajectories whose feed directions generate the largest machined widths everywhere. Another limitation of previous approaches to reducing machining time is an inadequate choice of tool. Researchers generally use a spherical tool on the entire surface, and the gains proposed by the methods developed with these tools lead to relatively small time savings. This study therefore proposes a new method, using toroidal milling tools, for generating tool paths in different regions of the machined surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel-plane strategy is then used on the sub-surfaces, with an optimal specific feed direction for each sub-surface. This method allows the entire surface to be milled with greater efficiency than with a spherical tool. The proposed method is calculated and modeled using Maple software to find the optimal regions and feed directions in each region. The new method is tested on a free-form surface, and a comparison with a spherical cutter shows the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also presented; the results demonstrate the efficiency of the method.

  5. Optimal symmetric flight with an intermediate vehicle model

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1983-01-01

    Optimal flight in the vertical plane with a vehicle model intermediate in complexity between the point-mass and energy models is studied. Flight-path angle takes on the role of a control variable. Range-open problems feature subarcs of vertical flight and singular subarcs. The class of altitude-speed-range-time optimization problems with fuel expenditure unspecified is investigated and some interesting phenomena are uncovered. The maximum-lift-to-drag glide appears as part of the family, final-time-open, with appropriate initial and terminal transients, thrust exceeding level-flight drag, and some members exhibiting oscillations. Oscillatory paths generally fail the Jacobi test for durations exceeding a period and furnish a minimum only for short-duration problems.

  6. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

    A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on a previous finding on the search characteristics of threshold algorithms, namely, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in a heating process that starts from a quenched solution. Subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.
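
    As background, the sketch below is a plain finite-time SA run on a random Euclidean TSP instance; it illustrates what the cooling schedule (and hence the effective temperature) controls, but it does not reproduce the paper's adaptive prediction of that temperature. Problem size and schedule constants are arbitrary.

        import numpy as np
        rng = np.random.default_rng(0)

        pts = rng.random((50, 2))                 # random Euclidean TSP
        def tour_len(tour):
            return np.sum(np.linalg.norm(pts[tour] - pts[np.roll(tour, 1)],
                                         axis=1))

        def anneal(T0, T1, steps):
            """Plain finite-time SA with geometric cooling from T0 to T1;
            the paper would instead predict the effective temperature from
            a heating run and cool slowly near that point."""
            tour = rng.permutation(len(pts))
            best = cur = tour_len(tour)
            for k in range(steps):
                T = T0 * (T1 / T0) ** (k / steps)
                i, j = sorted(rng.integers(0, len(pts), 2))
                cand = tour.copy()
                cand[i:j+1] = cand[i:j+1][::-1]   # 2-opt style reversal
                d = tour_len(cand) - cur
                if d < 0 or rng.random() < np.exp(-d / T):
                    tour, cur = cand, cur + d
                    best = min(best, cur)
            return best

        print(anneal(T0=1.0, T1=1e-3, steps=20000))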

  7. TH-CD-209-10: Scanning Proton Arc Therapy (SPArc) - The First Robust and Delivery-Efficient Spot Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Li, X; Zhang, J

    Purpose: To develop a delivery-efficient proton spot-scanning arc therapy technique with robust plan quality. Methods: We developed a Scanning Proton Arc (SPArc) optimization algorithm integrated with (1) control point re-sampling by splitting control points into adjacent sub-control points; (2) energy layer re-distribution by assigning the original energy layers to the new sub-control points; (3) energy layer filtration by deleting low-MU-weighting energy layers; and (4) energy layer re-sampling by sampling additional layers to ensure the optimal solution. A bilateral head and neck oropharynx case and a non-mobile lung target case were tested. Plan quality and total estimated delivery time were compared to the original robust optimized multi-field step-and-shoot arc plan without SPArc optimization (Arcmulti-field) and to standard robust optimized Intensity Modulated Proton Therapy (IMPT) plans. Dose-Volume Histograms (DVH) of target and Organs-at-Risk (OARs) were analyzed along with all worst-case scenarios. Total delivery time was calculated based on the assumption of a 360-degree gantry room with 1 RPM rotation speed, 2 ms spot switching time, beam current 1 nA, minimum spot weighting 0.01 MU, and energy-layer-switching-time (ELST) from 0.5 to 4 s. Results: Compared to IMPT, SPArc delivered less integral dose (-14% lung and -8% oropharynx). For the lung case, SPArc reduced the skin max dose by 60%, the rib max dose by 35%, and the lung mean dose by 15%. The Conformity Index improved from 7.6 (IMPT) to 4.0 (SPArc). Compared to Arcmulti-field, SPArc reduced the number of energy layers by 61% (276 layers in lung) and 80% (1008 layers in oropharynx) while keeping the same robust plan quality. With ELST from 0.5 s to 4 s, it reduced 55%-60% of the Arcmulti-field delivery time for the lung case and 56%-67% for the oropharynx case. Conclusion: SPArc is the first robust and delivery-efficient proton spot-scanning arc therapy technique that could be implemented in routine clinical practice. For modern proton machines with ELST close to 0.5 s, SPArc would be an attractive treatment option for both single- and multi-room centers.

  8. Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project

    NASA Technical Reports Server (NTRS)

    Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)

    2001-01-01

    The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves this with two techniques from the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere combined with response surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code, and the results are stored as a database. The optimization code is then executed and refers to that database. Two applications, one to an airborne laser system and one to an aircraft optimization, illustrate the method.
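
    A minimal sketch of the surrogate idea follows: trial points are dispersed over a disk, an expensive stand-in analysis is evaluated once per point to build the database, a quadratic response surface is fitted by least squares, and the optimizer then queries only the cheap surrogate. The objective function and sample counts are invented for illustration; kriging would replace the quadratic fit in the report's second technique.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        expensive = lambda x: (x[0]-0.3)**2 + 2*(x[1]+0.2)**2  # stand-in analysis

        # 1) Trial designs dispersed within a 2-D disk, analysed up front
        X = rng.normal(size=(30, 2))
        X = X / np.linalg.norm(X, axis=1, keepdims=True) * rng.random((30, 1))
        y = np.array([expensive(x) for x in X])   # the stored database

        # 2) Quadratic response surface fitted by least squares
        def basis(x):
            return np.array([1, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])
        A = np.array([basis(x) for x in X])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)

        # 3) The optimizer now queries only the cheap surrogate
        res = minimize(lambda x: basis(x) @ c, x0=np.zeros(2))
        print("surrogate optimum:", res.x)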

  9. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and improve network survival time, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm, and individuals that fall into local optima have their positions updated by Levy flight, which enhances population diversity and preserves the global search capability of the algorithm. The new protocol avoids the premature death of intensively used nodes by balancing use across the network nodes, improving the survival time of the wireless sensor network. Simulation experiments showed that the protocol outperforms the LEACH protocol in energy consumption, and also outperforms a clustering routing protocol based on the particle swarm optimization algorithm.
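
    The Levy-flight perturbation mentioned above is commonly generated with Mantegna's algorithm, sketched below; the step scale and exponent are placeholder values, and the perturbed position stands in for a cluster-head candidate trapped in a local optimum.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(beta, size, rng):
            """Levy-flight step via Mantegna's algorithm; heavy-tailed jumps
            occasionally throw a stalled individual far from a local optimum."""
            sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                       (gamma((1 + beta) / 2) * beta
                        * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0, sigma_u, size)
            v = rng.normal(0, 1, size)
            return u / np.abs(v) ** (1 / beta)

        rng = np.random.default_rng(0)
        pos = np.zeros(2)                          # a trapped candidate position
        pos = pos + 0.1 * levy_step(1.5, 2, rng)   # Levy-flight update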

  10. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF) algorithms, which combines the advantages of both: the ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.

  11. Energy management of three-dimensional minimum-time intercept. [for aircraft flight optimization

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Cliff, E. M.; Visser, H. G.

    1985-01-01

    A real-time computer algorithm to control and optimize aircraft flight profiles is described and applied to a three-dimensional minimum-time intercept mission. The proposed scheme has roots in two well known techniques: singular perturbations and neighboring-optimal guidance. Use of singular-perturbation ideas is made in terms of the assumed trajectory-family structure. A heading/energy family of prestored point-mass-model state-Euler solutions is used as the baseline in this scheme. The next step is to generate a near-optimal guidance law that will transfer the aircraft to the vicinity of this reference family. The control commands fed to the autopilot (bank angle and load factor) consist of the reference controls plus correction terms which are linear combinations of the altitude and path-angle deviations from reference values, weighted by a set of precalculated gains. In this respect the proposed scheme resembles neighboring-optimal guidance. However, in contrast to the neighboring-optimal guidance scheme, the reference control and state variables as well as the feedback gains are stored as functions of energy and heading in the present approach. Some numerical results comparing open-loop optimal and approximate feedback solutions are presented.

  12. Application of response surface methodology (RSM) for the removal of methylene blue dye from water by nano zero-valent iron (NZVI).

    PubMed

    Khosravi, Morteza; Arabi, Simin

    In this study, zero-valent iron nanoparticles were synthesized, characterized, and studied for the removal of methylene blue dye from aqueous solution. The reactions were mathematically described as a function of parameters such as nano zero-valent iron (NZVI) dose, pH, contact time, and initial dye concentration, and were modeled using response surface methodology. The experiments were carried out as a central composite design consisting of 30 experiments determined by the 2^4 full factorial design with eight axial points and six center points. The parameter ranges studied for dye removal were NZVI dose 0.1-0.9 g/L, pH 3-11, contact time 20-100 s, and initial dye concentration 10-50 mg/L. Under the optimal values of the process parameters, a dye removal efficiency of 92.87% was predicted, which is very close to the experimental value (92.21%) obtained in a batch experiment. In the optimization, the R^2 and adjusted R^2 correlation coefficients for the model were evaluated as 0.96 and 0.93, respectively.

  13. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
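
    As a concrete illustration, the estimation can be posed as a small linear program: minimize the expected squared-Euclidean cost over flows whose marginals match the two observed distributions, then normalize rows to obtain transition probabilities. The locations and population shares below are toy one-dimensional numbers, not the marked-bird data from the paper.

        import numpy as np
        from scipy.optimize import linprog

        loc = np.array([0.0, 1.0, 3.0])          # location coordinates
        p0 = np.array([0.5, 0.3, 0.2])           # distribution at time 1
        p1 = np.array([0.2, 0.4, 0.4])           # distribution at time 2
        C = (loc[:, None] - loc[None, :]) ** 2   # squared-Euclidean cost

        n = len(loc)
        # Equality constraints: row sums of the flow equal p0,
        # column sums equal p1
        A_eq = np.zeros((2 * n, n * n))
        for i in range(n):
            A_eq[i, i*n:(i+1)*n] = 1             # row-sum constraints
            A_eq[n + i, i::n] = 1                # column-sum constraints
        b_eq = np.concatenate([p0, p1])

        res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        flow = res.x.reshape(n, n)
        trans = flow / flow.sum(axis=1, keepdims=True)  # transition matrix
        print(trans)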

  14. Speech recognition for embedded automatic positioner for laparoscope

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin

    2014-07-01

    In this paper a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL), which includes a fixed-point ARM processor as its core. The APL system is designed to assist the doctor in laparoscopic surgery by implementing the doctor's vocal control of the laparoscope. Real-time response to voice commands demands an efficient speech recognition algorithm for the APL. In order to reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied. First, relying mostly on arithmetic optimizations, a fixed-point front end for speech feature analysis is built to suit the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition algorithm. The experimental results show that the method keeps the recognition time within 0.5 s while maintaining accuracy higher than 99%, demonstrating its ability to achieve real-time vocal control of the APL.

  15. A Summary Description of a Computer Program Concept for the Design and Simulation of Solar Pond Electric Power Generation Systems

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A solar pond electric power plant comprises an electric power generation subsystem, an electric power transformer and switch yard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time depends on past operation and future planned generation. This time, or past-history, factor introduces a new dimension into the design process. The design optimization of a plant must go beyond examination of operational state points and consider the seasonal variations in solar input, solar pond energy storage, and the desired plant annual duty-cycle profile. Models or design tools will be required to optimize a plant design. These models should be developed with a proper but not excessive level of detail, targeted to a specific objective rather than conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.

  16. Two-phase strategy of neural control for planar reaching movements: I. XY coordination variability and its relation to end-point variability.

    PubMed

    Rand, Miya K; Shimansky, Yury P

    2013-03-01

    A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements was developed in our previous studies. The utilization of that model for data analysis made it possible, for the first time, to examine the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of a lower extent of control optimality. To successfully grasp the target object, the CNS increases the precision demand during the final phase, resulting in a higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between the X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude, and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating a higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy. The relationship between motor variability and XYC variability is further discussed.

  17. Illumination system development using design and analysis of computer experiments

    NASA Astrophysics Data System (ADS)

    Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter

    2015-09-01

    Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often converge to a local minimum influenced by the starting point, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are further used as input to a Gaussian process (GP) model, which provides a preliminary distribution for the expected merit space. Global optimization is then performed on the GP, which is more likely to yield optimal parameters. Next, the light-positioning case study is further investigated by varying the radius of the arc and by adding two spots symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE was six times faster than the standard simplex method at an equal uniformity of 97%. The results were successfully validated experimentally, with 10% relative error, using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources.
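
    A compact sketch of the DACE loop follows, under assumed names and a stand-in merit function (the ray tracing is replaced by an analytic expression and one design variable stands in for several): fit a GP to merit values at space-filling design points, then search the cheap GP predictor globally from multiple starts.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        merit = lambda a: np.sin(3*a[0]) + (a[0]-0.6)**2  # stand-in for ray tracing

        # Space-filling design over one source angle, simulated once per point
        X = np.linspace(0, 1, 8).reshape(-1, 1)
        y = np.array([merit(x) for x in X])

        # Gaussian process model of the merit space
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

        # Global search on the cheap GP predictor from several starts
        starts = rng.random((10, 1))
        cands = [minimize(lambda a: gp.predict(a.reshape(1, -1))[0],
                          x0=s, bounds=[(0, 1)]) for s in starts]
        best = min(cands, key=lambda r: r.fun)
        print("predicted optimal angle:", best.x)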

  18. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm, and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems are solved analytically. Some analogies are pointed out between the results obtained and results known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  19. Optimization of the fiber laser parameters for local high-temperature impact on metal

    NASA Astrophysics Data System (ADS)

    Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.

    2016-11-01

    This paper presents the local laser heating process of the surface layer of a metal sample. The aim is to create a molten pool with the required depth by laser thermal treatment. During heating, the metal temperature at any point of the molten zone should not reach the boiling point of the base material. The laser power, exposure time, and spot size of the laser beam are selected as the variable parameters. A mathematical model for heat transfer in a semi-infinite body, applicable to a finite slab, is used for a preliminary theoretical estimate of acceptable values of the laser thermal treatment parameters. The optimization problem is solved using an algorithm based on scanning the search space (a zero-order method of constrained optimization). The calculated parameter values (the optimal set of laser radiation power, exposure time, and spot radius) are used to conduct a series of physical experiments to obtain a molten pool with the required depth. The two-stage experiment consists of local laser treatment of a steel plate followed by examination of a microsection of the laser-irradiated region. The experimental results allow the adequacy of the calculations within the selected models to be judged.
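
    The sketch below illustrates the zero-order scan using a simple stand-in for the heat model: the classical surface-temperature solution for a semi-infinite solid under constant absorbed flux, T(t) = T0 + 2q*sqrt(alpha*t/pi)/k with q = P/(pi*r^2). The material constants, parameter ranges, and acceptance criterion (surface between melting and boiling) are assumed for illustration and ignore melt-depth physics.

        import numpy as np
        from itertools import product

        # Rough constants for steel (assumed values)
        k, alpha, T0 = 45.0, 1.2e-5, 300.0       # W/m/K, m^2/s, K
        T_melt, T_boil = 1700.0, 3100.0          # K

        def surface_T(P, t, r):
            """Peak surface temperature of a semi-infinite solid under a
            constant absorbed flux q = P / (pi r^2) (1-D estimate)."""
            q = P / (np.pi * r**2)
            return T0 + 2 * q * np.sqrt(alpha * t / np.pi) / k

        # Zero-order scan of the (power, exposure time, spot radius) space,
        # keeping sets that melt the surface without reaching boiling
        grid = product(np.linspace(50, 500, 10),       # W
                       np.linspace(0.01, 0.5, 10),     # s
                       np.linspace(0.2e-3, 1e-3, 5))   # m
        ok = [(P, t, r) for P, t, r in grid
              if T_melt < surface_T(P, t, r) < T_boil]
        print(len(ok), "admissible sets; example:", ok[0] if ok else None)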

  20. Optimization of Angular-Momentum Biases of Reaction Wheels

    NASA Technical Reports Server (NTRS)

    Lee, Clifford; Lee, Allan

    2008-01-01

    RBOT [RWA Bias Optimization Tool, wherein RWA signifies Reaction Wheel Assembly] is a computer program designed for computing angular-momentum biases for the reaction wheels that point the spacecraft in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, to prevent operation of the reaction wheels at unsafely high speeds while minimizing time spent in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in the bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed is a hard constraint and the cost functional increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases, based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
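
    A one-wheel caricature of this formulation follows (RBOT handles multiple wheels and real attitude profiles): the wheel-rate profile is the bias plus a pre-computed momentum demand, the speed limit enters as a heavy penalty, and a quadratic cost accrues below the low-speed threshold; Nelder-Mead then searches over the bias. All numbers are invented.

        import numpy as np
        from scipy.optimize import minimize

        W_MAX, W_LOW = 2000.0, 300.0   # rpm hard limit and low-speed threshold

        # Toy momentum-demand profile mapped onto one wheel; by conservation
        # of angular momentum, wheel rate = bias + demand.
        demand = 800.0 * np.sin(np.linspace(0, 4 * np.pi, 400))

        def cost(bias):
            w = np.abs(bias[0] + demand)                    # speed profile
            over = 1e6 * np.sum(np.maximum(w - W_MAX, 0)**2)  # hard limit
            low = np.sum(np.maximum(W_LOW - w, 0)**2)       # low-speed dwell
            return over + low

        res = minimize(cost, x0=np.array([0.0]), method="Nelder-Mead")
        print("optimal bias (rpm):", res.x[0])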

  1. The role of photographic parameters in laser speckle or particle image displacement velocimetry

    NASA Technical Reports Server (NTRS)

    Lourenco, L.; Krothapalli, A.

    1987-01-01

    The parameters involved in obtaining the multiple-exposure photographs in the laser speckle velocimetry method (to record the light scattered by the seeding particles) were optimized. The effects of the type, concentration, and dimensions of the tracer, the exposure conditions (time between exposures, exposure time, and number of exposures), and the sensitivity and resolution of the film on the quality of the final results were investigated by photographing an experimental flow behind an impulsively started circular cylinder. The velocity data were acquired by digital processing of Young's fringes, produced by point-by-point scanning of a photographic negative. Using the optimal photographing conditions, the errors involved in the estimation of the fringe angle and spacing were of the order of 1 percent for the spacing and +/-1 deg for the fringe orientation. The resulting accuracy in the velocity was of the order of 2-3 percent of the maximum velocity in the field.

  2. An optimization approach for observation association with systemic uncertainty applied to electro-optical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.

    2018-06-01

    The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method is demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.

  3. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is taken as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98%, while that of the Delaunay identification method is only 94%. The identification time of this method is at most 50 ms.

  4. Real time optimal guidance of low-thrust spacecraft: an application of nonlinear model predictive control.

    PubMed

    Arrieta-Camacho, Juan José; Biegler, Lorenz T

    2005-12-01

    Real-time optimal guidance is considered for a class of low-thrust spacecraft. In particular, nonlinear model predictive control (NMPC) is utilized for computing the optimal control actions required to transfer a spacecraft from a low Earth orbit to a mission orbit. The NMPC methodology presented is able to cope with unmodeled disturbances. The dynamics of the transfer are modeled using a set of modified equinoctial elements, because these do not exhibit singularities for zero inclination and zero eccentricity. The idea behind NMPC is the repeated solution of optimal control problems; at each time step, a new control action is computed. The optimal control problem is solved using a direct method, fully discretizing the equations of motion. The large-scale nonlinear program resulting from the discretization procedure is solved using IPOPT, a primal-dual interior point algorithm. Stability and robustness characteristics of the NMPC algorithm are reviewed. A numerical example, the transfer from low Earth orbit to a Molniya orbit, is presented that encourages further development of the proposed methodology.
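
    The receding-horizon idea reduces to a short loop, sketched here on a double integrator rather than orbital dynamics (the equinoctial-element model and IPOPT are replaced by a toy plant and scipy's general-purpose minimizer; the horizon length, weights, and disturbance level are arbitrary).

        import numpy as np
        from scipy.optimize import minimize

        dt, H = 0.1, 10                      # step size and prediction horizon

        def simulate(x, u_seq):
            """Double-integrator stand-in for the transfer dynamics."""
            xs = [x]
            for u in u_seq:
                p, v = xs[-1]
                xs.append(np.array([p + dt * v, v + dt * u]))
            return np.array(xs)

        def ocp_cost(u_seq, x):
            xs = simulate(x, u_seq)
            return np.sum(xs[:, 0]**2) + 0.1 * np.sum(np.asarray(u_seq)**2)

        x = np.array([1.0, 0.0])
        for step in range(30):               # receding-horizon loop
            res = minimize(ocp_cost, np.zeros(H), args=(x,))
            u0 = res.x[0]                    # apply only the first control
            # propagate the real plant with an unmodeled disturbance
            x = simulate(x, [u0 + 0.01 * np.random.randn()])[-1]
        print("final state:", x)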

  5. The Effects of General Social Support and Social Support for Racial Discrimination on African American Women’s Well-Being

    PubMed Central

    Seawell, Asani H.; Cutrona, Carolyn E.; Russell, Daniel W.

    2012-01-01

    The present longitudinal study examined the role of general and tailored social support in mitigating the deleterious impact of racial discrimination on depressive symptoms and optimism in a large sample of African American women. Participants were 590 African American women who completed measures assessing racial discrimination, general social support, tailored social support for racial discrimination, depressive symptoms, and optimism at two time points (2001–2002 and 2003–2004). Our results indicated that higher levels of general and tailored social support predicted optimism one year later; changes in both types of support also predicted changes in optimism over time. Although initial levels of neither measure of social support predicted depressive symptoms over time, changes in tailored support predicted changes in depressive symptoms. We also sought to determine whether general and tailored social support “buffer” or diminish the negative effects of racial discrimination on depressive symptoms and optimism. Our results revealed a classic buffering effect of tailored social support, but not general support on depressive symptoms for women experiencing high levels of discrimination. PMID:24443614

  6. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.

  7. Use of microwaves to improve nutritional value of soybeans for future space inhabitants

    NASA Technical Reports Server (NTRS)

    Singh, G.

    1983-01-01

    Whole soybeans from four different varieties at different moisture contents were microwaved for varying times to determine the conditions for maximum destruction of trypsin inhibitor and lipoxygenase activities and optimal growth of chicks. Microwaving 150-g samples of soybeans (at 14 to 28% moisture) for 1.5 min was found optimal for reduction of trypsin inhibitor and lipoxygenase activities. Microwaving 1-kg samples of soybeans for 9 min destroyed 82% of the trypsin inhibitor activity and gave optimal chick growth. It should be pointed out that the microwaving time will vary with the weight of the sample and the power of the microwave oven. The oven used in these experiments was rated at 650 W, 2450 MHz.

  8. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  9. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.

  10. Change in the Embedding Dimension as an Indicator of an Approaching Transition

    PubMed Central

    Neuman, Yair; Marwan, Norbert; Cohen, Yohai

    2014-01-01

    Predicting a transition point in behavioral data should take into account the complexity of the signal as influenced by contextual factors. In this paper, we propose to analyze changes in the embedding dimension as contextual information indicating an approaching transition point, a method called OPtimal Embedding tRANsition Detection (OPERAND). Three texts were processed and translated into time series of emotional polarity. It was found that changes in the embedding dimension preceded transition points in the data. These preliminary results encourage further research into changes in the embedding dimension as generic markers of an approaching transition point. PMID:24979691

  11. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  12. Predictive Cutoff Values of the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test for Disability Incidence in Older People Dwelling in the Community.

    PubMed

    Makizako, Hyuma; Shimada, Hiroyuki; Doi, Takehiko; Tsutsumimoto, Kota; Nakakubo, Sho; Hotta, Ryo; Suzuki, Takao

    2017-04-01

    Lower extremity functioning is important for maintaining activity in elderly people. Optimal cutoff points for standard measurements of lower extremity functioning would help identify elderly people who are not disabled but have a high risk of developing disability. The purposes of this study were: (1) to determine the optimal cutoff points of the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test for predicting the development of disability and (2) to examine the impact of poor performance on both tests on the prediction of the risk of disability in elderly people dwelling in the community. This was a prospective cohort study. A population of 4,335 elderly people dwelling in the community (mean age = 71.7 years; 51.6% women) participated in baseline assessments. Participants were monitored for 2 years for the development of disability. During the 2-year follow-up period, 161 participants (3.7%) developed disability. The optimal cutoff points of the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test for predicting the development of disability were greater than or equal to 10 seconds and greater than or equal to 9 seconds, respectively. Participants with poor performance on the Five-Times Sit-to-Stand Test (hazard ratio = 1.88; 95% CI = 1.11-3.20), the Timed "Up & Go" Test (hazard ratio = 2.24; 95% CI = 1.42-3.53), or both tests (hazard ratio = 2.78; 95% CI = 1.78-4.33) at the baseline assessment had a significantly higher risk of developing disability than participants who had better lower extremity functioning. All participants had good initial functioning and participated in assessments on their own. Causes of disability were not assessed. Assessments of lower extremity functioning with the Five-Times Sit-to-Stand Test and the Timed "Up & Go" Test, especially poor performance on both tests, were good predictors of future disability in elderly people dwelling in the community. © 2017 American Physical Therapy Association

  13. Influence of the distance between target surface and focal point on the expansion dynamics of a laser-induced silicon plasma with spatial confinement

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Chen, Anmin; Wang, Xiaowei; Wang, Ying; Sui, Laizhi; Ke, Da; Li, Suyu; Jiang, Yuanfei; Jin, Mingxing

    2018-05-01

    Expansion dynamics of a laser-induced plasma plume, with spatial confinement, for various distances between the target surface and focal point were studied by the fast photography technique. A silicon wafer was ablated to induce the plasma with a Nd:YAG laser in an atmospheric environment. The expansion dynamics of the plasma plume depended on the distance between the target surface and focal point. In addition, spatially confined time-resolved images showed the different structures of the plasma plumes at different distances between the target surface and focal point. By analyzing the plume images, the optimal distance for emission enhancement was found to be approximately 6 mm away from the geometrical focus using a 10 cm focal length lens. This optimized distance resulted in the strongest compression ratio of the plasma plume by the reflected shock wave. Furthermore, the duration of the interaction between the reflected shock wave and the plasma plume was also prolonged.

  14. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    NASA Astrophysics Data System (ADS)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g., investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (the two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
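
    Because the Tyler and Wheatcraft (1990) model theta(psi) = theta_s*(psi_a/psi)^(3-D) has two unknowns, a two-point fit is closed-form, as the sketch below shows. The saturated water content and the example numbers are assumed for illustration, and this is the algebraic two-point idea rather than the study's optimization method.

        import numpy as np

        def tyler_wheatcraft_two_point(psi1, th1, psi2, th2, th_s):
            """Fit theta = th_s*(psi_a/psi)**(3-D) from two measured
            (matric potential, water content) points."""
            e = np.log(th1 / th2) / np.log(psi2 / psi1)   # e = 3 - D
            D = 3.0 - e                                   # fractal dimension
            psi_a = psi1 * (th1 / th_s) ** (1.0 / e)      # air-entry value
            return D, psi_a

        # Example with assumed numbers at the 33 and 1500 kPa points (case 2)
        D, psi_a = tyler_wheatcraft_two_point(33.0, 0.25, 1500.0, 0.12, 0.45)
        theta = lambda psi: 0.45 * (psi_a / np.maximum(psi, psi_a)) ** (3 - D)
        print("fractal dimension:", D, "air-entry value (kPa):", psi_a)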

  15. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length-error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subjected to constraints during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  16. Time-free transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Howell, K. C.; Hiday, L. A.

    1992-08-01

    This work is directed toward the formulation of a strategy to design optimal time-free impulsive transfers between 3D libration-point orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. Inferior transfers that move a spacecraft from a large halo orbit to a smaller halo orbit are considered here. Primer vector theory is applied to nonoptimal impulsive trajectories in the elliptic restricted three-body problem in order to establish whether the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The addition of interior impulses is also considered. Results indicate that a substantial savings in fuel can be achieved by the allowance for coastal periods on the specified libration-point orbits. The resulting time-free inferior transfers are compared to time-free superior transfers between halo orbits of equal z-amplitude separation.

  17. Optimal bounds and extremal trajectories for time averages in dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.

  18. QuickVina: accelerating AutoDock Vina using gradient-based heuristics for global optimization.

    PubMed

    Handoko, Stephanus Daniel; Ouyang, Xuchang; Su, Chinh Tran To; Kwoh, Chee Keong; Ong, Yew Soon

    2012-01-01

    Predicting binding between a macromolecule and a small molecule is a crucial phase in the field of rational drug design. AutoDock Vina, one of the most widely used docking programs, released in 2009, uses an empirical scoring function to evaluate the binding affinity between the molecules and employs an iterated local search global optimizer for global optimization, achieving significantly improved speed and better accuracy of binding mode prediction compared with its predecessor, AutoDock 4. In this paper, we propose a further improvement to the local search algorithm of Vina by heuristically preventing some intermediate points from undergoing local search. Our improved version of Vina, dubbed QVina, achieved a maximum acceleration of about 25 times, with an average speed-up of 8.34 times compared to the original Vina when tested on a set of 231 protein-ligand complexes, while keeping the optimal scores mostly identical. Using our heuristics, a larger number of different ligands can be quickly screened against a given receptor within the same time frame.

  19. Time-optimal thermalization of single-mode Gaussian states

    NASA Astrophysics Data System (ADS)

    Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio

    2014-11-01

    We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potential and interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.

  20. Power-limited low-thrust trajectory optimization with operation point detection

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng

    2018-06-01

    The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data of the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee the integral accuracy for the discrete power-limited problem, a power operation detection technique is embedded in the fourth-order Runge-Kutta algorithm with fixed step. Moreover, the logarithmic homotopy method and normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.

  1. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EAs) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EAs search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling to any evaluation code. Parallel efficiency also becomes very high with a simple master-slave concept for function evaluations, since such evaluations (for example, computational fluid dynamics runs) often consume the most CPU time. Application of EAs to multiobjective design problems is also straightforward because EAs maintain a population of design candidates in parallel. Because of these advantages, EAs are a unique and attractive approach to real-world design optimization problems.
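
    A minimal sketch of the master-slave pattern mentioned above, under the assumption that fitness evaluations dominate the run time: the master process holds the population and farms evaluations out to worker processes. The objective here is a cheap stand-in for an expensive CFD run.

```python
# Master-slave parallel fitness evaluation for an evolutionary algorithm.
import numpy as np
from multiprocessing import Pool

def fitness(x):
    # Placeholder for an expensive objective evaluation (e.g., a CFD run).
    return float(np.sum((np.asarray(x) - 0.5) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = [rng.uniform(0, 1, 4) for _ in range(32)]
    with Pool(processes=4) as pool:             # the "slaves"
        scores = pool.map(fitness, population)  # master farms out evaluations
    best = population[int(np.argmin(scores))]
    print(min(scores), best)
```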

  2. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing

    2018-06-01

    The resource allocation problem is studied and reformulated by a distributed interior point method via a logarithmic barrier. By the facilitation of the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the logarithmic barrier at the boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is, respectively, verified by a numerical example and the IEEE six-bus test system-based simulations.

  3. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    The report presents an optimized space-time interpolation method; a tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces. The record also lists related presentations, including: "...Evolution Mapped by Terrestrial Laser Scanning" (talk, AGU Fall 2012); Hardin E., Mitas L., Mitasova H., "Simulation of Wind-Blown Sand for Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach" (GSA 2012); and Russ E., Mitasova H., "Time series and space-time cube analyses on..." (truncated).

  4. Commissioning of a 3D image‐based treatment planning system for high‐dose‐rate brachytherapy of cervical cancer

    PubMed Central

    Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.

    2016-01-01

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based, treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library agreed to within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8 and 1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to independently check HDR plan parameters such as active tandem or cylinder probe length, ovoid or cylinder size, source calibration and treatment date, and the difference between average Point A dose and prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image-based planning were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and of 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR BT for GYN cancer. PACS number(s): 87.55.D- PMID:27074463

  5. Detecting recurrence domains of dynamical systems by symbolic dynamics.

    PubMed

    beim Graben, Peter; Hutt, Axel

    2013-04-12

    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
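
    The recurrence-plot construction that underlies this approach can be sketched as follows; the delay-embedding parameters and the ball radius eps are illustrative choices, whereas the paper selects the ball size by a maximum entropy principle.

```python
# Minimal recurrence plot: a time series is delay-embedded in phase space
# and R[i, j] marks whether points i and j fall within an eps-ball.
import numpy as np

def recurrence_matrix(x, dim=3, delay=2, eps=0.5):
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

t = np.linspace(0, 20 * np.pi, 500)
R = recurrence_matrix(np.sin(t))
print(R.shape, R.mean())  # shape and recurrence rate
```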

  6. Joint CPT and N resonance in compact atomic time standards

    NASA Astrophysics Data System (ADS)

    Crescimanno, Michael; Hohensee, Michael; Xiao, Yanhong; Phillips, David; Walsworth, Ron

    2008-05-01

    Current development efforts toward small, low-power atomic time standards use current-modulated VCSELs to generate phase-coherent optical sidebands that interrogate the hyperfine structure of alkali atoms such as rubidium. We describe and use a modified four-level quantum optics model to study the optimal operating regime of the joint CPT- and N-resonance clock. Resonant and non-resonant light shifts as well as modulation comb detuning effects play a key role in determining the optimal operating point of such clocks. We further show that our model is in good agreement with experimental tests performed using Rb-87 vapor cells.

  7. Neighboring extremals of dynamic optimization problems with path equality constraints

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.

  8. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.

  9. Generalization techniques to reduce the number of volume elements for terrain effect calculations in fully analytical gravitational modelling

    NASA Astrophysics Data System (ADS)

    Benedek, Judit; Papp, Gábor; Kalmár, János

    2018-04-01

    Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs about twice the runtime of the rectangular prism computations. Although the "the more detailed, the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field at the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes "less" can be equivalent to "more" in a statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on the accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by three points per grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented too. The efficiency of the static approaches may provide more than a 90% reduction in computation time in favourable situations, without loss of reliability of the calculated gravity field parameters.

  10. Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease

    PubMed Central

    Jie, Biao; Liu, Mingxia; Liu, Jun

    2016-01-01

    Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points be small, and another output smoothness term which requires that the differences between outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
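
    A schematic of such an objective, under the assumption of squared smoothness penalties (the paper's exact penalty forms may differ), can be written down directly: a data-fit term per time point, an L2,1 group term that couples the same feature across time points, and fused and output smoothness terms on adjacent time points.

```python
# Schematic temporally-constrained group sparse objective (value only).
# Penalty forms are assumed squared for simplicity; not the paper's exact
# formulation or solver.
import numpy as np

def objective(W, Xs, ys, lam_g=0.1, lam_f=0.1, lam_o=0.1):
    # W: (d, T) weights, column t is the model for time point t.
    T = W.shape[1]
    fit = sum(np.sum((ys[t] - Xs[t] @ W[:, t]) ** 2) for t in range(T))
    group = np.sum(np.linalg.norm(W, axis=1))   # L2,1: same feature across time
    fused = sum(np.sum((W[:, t+1] - W[:, t]) ** 2) for t in range(T - 1))
    out = sum(np.sum((Xs[t] @ W[:, t+1] - Xs[t] @ W[:, t]) ** 2)
              for t in range(T - 1))
    return fit + lam_g * group + lam_f * fused + lam_o * out

rng = np.random.default_rng(0)
d, n, T = 10, 30, 4
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [rng.normal(size=n) for _ in range(T)]
print(objective(rng.normal(size=(d, T)), Xs, ys))
```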

  11. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

    PubMed Central

    Liu, Wanli

    2017-01-01

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated. PMID:28282897
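
    The ICP building block can be sketched compactly with nearest-neighbor correspondences and the SVD-based (Kabsch) rigid alignment; the ISPKF layer that estimates the time delay is omitted here, and the synthetic point clouds are stand-ins for LiDAR data.

```python
# Minimal point-to-point ICP: nearest-neighbor correspondences plus
# SVD-based rigid alignment, iterated to convergence.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iters=20):
    src = src.copy()
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        _, idx = tree.query(src)            # nearest-neighbor correspondences
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  # optimal rotation (Kabsch)
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, (200, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
src = (dst - 0.05) @ Rz.T   # rotated, shifted copy of the target cloud
print(icp(src, dst))
```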

  12. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.

    PubMed

    Flassig, R J; Sundmacher, K

    2012-12-01

    Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or multiple existing steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
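
    The sigma-point idea, propagating a small, deterministically chosen set of parameter samples through the nonlinear model instead of linearizing it, can be sketched as follows. The model and the parameter statistics are illustrative assumptions, not one of the paper's benchmark networks.

```python
# Sigma-point (unscented) propagation of a parameter PDF through a
# nonlinear model response, as an alternative to linearization.
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def model(theta, t=1.0):
    # Illustrative nonlinear response (Michaelis-Menten-like rate).
    return theta[0] * t / (theta[1] + t)

mean = np.array([2.0, 0.5])
cov = np.diag([0.2, 0.05])
pts, w = sigma_points(mean, cov)
ys = np.array([model(p) for p in pts])
mu_y = np.sum(w * ys)
var_y = np.sum(w * (ys - mu_y) ** 2)
print(mu_y, var_y)  # propagated response mean and variance
```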

  13. Optimal design and use of retry in fault tolerant real-time computer systems

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1983-01-01

    A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations that minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the characteristics of faults are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used in the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull, and gamma distributions; these are standard distributions commonly used for the modeling and analysis of faults.

  14. Optimal pupil design for confocal microscopy

    NASA Astrophysics Data System (ADS)

    Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.

    2010-02-01

    Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real time, as an adjunct to biopsy and pathology. Current instruments are large, complex, and expensive. A simpler, confocal line-scanning microscope may accelerate the translation of confocal microscopy in clinical and surgical dermatology. A confocal reflectance microscope may use a beamsplitter, transmitting and detecting through the full pupil, or a divided pupil ("theta" configuration), with half used for transmission and half for detection. The divided pupil may offer better sectioning and contrast. We present a Fourier optics model and compare the on-axis irradiance of a confocal point-scanning microscope in both pupil configurations, optimizing the profile of a Gaussian beam in a circular or semicircular aperture. We repeat both calculations with a cylindrical lens which focuses the source to a line. The variable parameter is the fill factor, h, the ratio of the 1/e2 diameter of the Gaussian beam to the diameter of the full aperture. The optimal values of h for point scanning are 0.90 (full aperture) and 0.66 (half-aperture). For line scanning, the fill factors are 1.02 (full) and 0.52 (half). Additional parameters to consider are the optimal location of the point-source beam in the divided-pupil configuration, the optimal line width for the line source, and the width of the aperture in the divided-pupil configuration. Additional figures of merit are field of view and sectioning. Use of optimal designs is critical in comparing the experimental performance of the different configurations.

  15. Optimization of block-floating-point realizations for digital controllers with finite-word-length considerations.

    PubMed

    Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian

    2003-01-01

    The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation was analyzed resulting from using finite word length (FWL) block-floating-point representation scheme. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.

  16. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements, and the high-dimensional design space. In order to address these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver, and numerical optimization methods that can be applied to both single-point and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving the 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The multi-point optimization method proposed in the current study is therefore formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of the choke margin at high speeds. Since capturing stall inception is numerically very expensive, the stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in better performance at stall conditions, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin; an investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents improvements of +0.41, +0.56, and +0.9 points in near-peak efficiency, near-stall efficiency, and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage, and shock/tip-leakage interactions in both the single-point and multi-point optimizations.

  17. Optimal control of epidemic information dissemination over networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng

    2014-12-01

    Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.

  18. An optimization program based on the method of feasible directions: Theory and users guide

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.

    1994-01-01

    The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960s, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed with a focus on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.

  19. Optimal intravenous infusion to decrease the haematocrit level in patient of DHF infection

    NASA Astrophysics Data System (ADS)

    Handayani, D.; Nuraini, N.; Saragih, R.; Wijaya, K. P.; Naiborhu, J.

    2014-02-01

    The optimal control of an infusion model for Dengue Hemorrhagic Fever (DHF) infection is formulated here. The infusion model is presented in terms of the haematocrit level. The control input aims to normalize the haematocrit level and is expressed as infusion volume in mL/day. The stability near the equilibrium points is analyzed. Numerical simulation shows the dynamics of each infection compartment, which gives a description of the within-host dynamics of the dengue virus. These results show, in particular, that the infected compartments tend to vanish in ±15 days after the onset of the virus. In fact, without any control added, the haematocrit level decreases, but not to the normal level. Therefore, effective haematocrit normalization should be done with the treatment control. Control treatment for a fixed time using a control input can bring the haematocrit level to the normal range of 42-47%. The optimal control in this paper is divided into three cases, i.e., fixed end point, constrained input, and tracking of the haematocrit state. Each case represents a different infection condition in the human body. However, all cases require that the haematocrit level be in the normal range at the fixed final time.

  20. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparison against the complex-step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300,000 DOF. Two design optimization problems are solved: one where takeoff gross weight (TOGW) is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.

  1. Low-Thrust Trajectory Optimization with Simplified SQP Algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, Nathan L.; Scheeres, Daniel J.

    2017-01-01

    The problem of low-thrust trajectory optimization in highly perturbed dynamics is a stressing case for many optimization tools. Highly nonlinear dynamics and continuous thrust are each, separately, non-trivial problems in the field of optimal control, and when combined, the problem is even more difficult. This paper describes a fast, robust method to design a trajectory in the CRTBP (circular restricted three-body problem), beginning with no or very little knowledge of the system. The approach is inspired by the SQP (sequential quadratic programming) algorithm, in which a general nonlinear programming problem is solved via a sequence of quadratic problems. A few key simplifications make the algorithm presented fast and robust to the initial guess: a quadratic cost function, neglecting the line search step when the solution is known to be far away, judicious use of end-point constraints, and mesh refinement on multiple shooting with fixed-step integration. In comparison to the traditional approach of plugging the problem into a “black-box” NLP solver, the methods shown converge even when given no knowledge of the solution at all. It was found that the only piece of information that the user needs to provide is a rough guess for the time of flight, as the transfer time guess will dictate which set of local solutions the algorithm converges on. This robustness to the initial guess is a compelling feature, as three-body orbit transfers are challenging to design with intuition alone. Of course, if a high-quality initial guess is available, the methods shown are still valid. We have shown that endpoints can be efficiently constrained to lie on 3-body repeating orbits, and that time of flight can be optimized as well. When optimizing the endpoints, we must make a trade between converging quickly on sub-optimal endpoints or converging more slowly on endpoints that are arbitrarily close to optimal. It is easy for the mission design engineer to adjust this trade based on the problem at hand. The biggest limitation of the algorithm at this point is that multi-revolution transfers (greater than 2 revolutions) do not work nearly as well. This restriction arises because the relationship between node 1 and node N becomes increasingly nonlinear as the angular distance grows. Transfers with more than about 1.5 complete revolutions generally require the line search to improve convergence. Future work includes: comparison of this algorithm with other established tools; improvements to how multiple-revolution transfers are handled; parallelization of the Jacobian computation; increased efficiency for the line search; and optimization of many more trajectories between a variety of 3-body orbits.

  2. Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun

    2015-10-01

    The beam pointing angle (BPA) is one of the key parameters that affect the operational performance of a laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming a mounting error within ±1.0 deg and reflectivity and roughness that vary for different scenarios, the optimized BPA is obtained in the range from 29 to 43 deg. The velocity sensitivity is then in the range of 1.25 to 1.76 MHz/(m/s), and the percentage of normalized echo power at the optimized BPA with respect to that at 0 deg is greater than 53.49%. Laboratory experiments with a rotating table were performed at BPAs of 10, 35, and 66 deg, and the results coincide with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg was conducted in comparison with a microwave radar (accuracy of ±0.5% full-scale output). The root-mean-square errors were 0.0202 m/s for the LDV and 0.1495 m/s for the Microstar II, and the mean velocity discrepancy was 0.032 m/s. It is also proven that with the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
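
    A worked example of the velocity-sensitivity scaling: a common form of the LDV Doppler shift is f_D = 2 v cos(θ)/λ, so the sensitivity per unit velocity depends only on the pointing angle and the wavelength. The 1550 nm wavelength below is an assumption; the paper's geometry and operating wavelength may differ, so these numbers only illustrate the trend with angle.

```python
# Velocity sensitivity of an LDV under the common Doppler relation
# f_D = 2 * v * cos(theta) / lambda (assumed geometry and wavelength).
import numpy as np

lam = 1.55e-6                      # assumed wavelength [m]
for theta_deg in (10, 35, 66):
    sens = 2 * np.cos(np.radians(theta_deg)) / lam   # [Hz per m/s]
    print(f"theta={theta_deg:2d} deg -> {sens / 1e6:.2f} MHz/(m/s)")
```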

  3. Helium in chirped laser fields as a time-asymmetric atomic switch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaprálová-Žďánská, Petra Ruth, E-mail: kapralova@jh-inst.cas.cz; J. Heyrovsky Institute of Physical Chemistry, Academy of Sciences of the Czech Republic, Dolejškova 3, 182 23 Prague 8; Moiseyev, Nimrod, E-mail: nimrod@tx.technion.ac.il

    2014-07-07

    By tuning the laser parameters, exceptional points in the spectrum of the laser-dressed helium atom are obtained. The weak linearly polarized laser couples the ground state and the doubly excited P-states of helium. We show here that for specific chirped laser pulses that encircle an exceptional point one can obtain a time-asymmetric phenomenon, where for a negatively chirped laser pulse the ground state is transformed into the doubly excited auto-ionization state, while for a positively chirped laser pulse the resonance state is not populated and the neutral helium atom remains in the ground state as the laser pulse is turned off. Moreover, we show that the results are very sensitive to the closed contour we choose. This time-asymmetric state-exchange phenomenon can be considered a time-asymmetric atomic switch. The optimal time-asymmetric switch is obtained when the closed loop that encircles the exceptional point is large, while for the smallest loops the time-asymmetric phenomenon does not take place. A systematic way of studying the effect of the chosen closed contour that encircles the exceptional point on the time-asymmetric phenomenon is proposed.

  4. Optimal execution with price impact under Cumulative Prospect Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jingdong; Zhu, Hongliang; Li, Xindan

    2018-01-01

    Optimal execution of a stock (or portfolio) has been widely studied in academia and in practice over the past decade, and minimizing transaction costs is a critical point. However, few researchers consider the psychological factors for the traders. What are traders truly concerned with - buying low in the paper accounts or buying lower compared to others? We consider the optimal trading strategies in terms of the price impact and Cumulative Prospect Theory and identify some specific properties. Our analyses indicate that a large proportion of the execution volume is distributed at both ends of the transaction time. But the trader's optimal strategies may not be implemented at the same transaction size and speed in different market environments.

  5. The effect of model uncertainty on some optimal routing problems

    NASA Technical Reports Server (NTRS)

    Mohanty, Bibhu; Cassandras, Christos G.

    1991-01-01

    The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.

  6. Continuous Firefly Algorithm for Optimal Tuning of Pid Controller in Avr System

    NASA Astrophysics Data System (ADS)

    Bendjeghaba, Omar

    2014-01-01

    This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an automatic voltage regulator (AVR) system. In the tuning process, the CFA is iterated to reach optimal or near-optimal PID controller parameters, with the main goal of improving the AVR step-response characteristics. The simulations show the effectiveness and efficiency of the proposed approach. Furthermore, the proposed approach can improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method gives better control system performance in terms of time-domain specifications and set-point tracking.
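
    The core CFA update, each firefly moving toward brighter (lower-cost) fireflies with a distance-attenuated attraction plus a random step, is sketched below on a generic cost function; for PID tuning, the decision vector would be (Kp, Ki, Kd) and the cost a step-response criterion such as ITAE. The AVR model itself is not reproduced here.

```python
# Minimal continuous firefly algorithm (CFA) sketch on a generic cost.
import numpy as np

def cfa(cost, dim=3, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 2, (n, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:  # firefly j is "brighter" (lower cost)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attraction decays
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    f[i] = cost(X[i])
    best = np.argmin(f)
    return X[best], f[best]

# Toy cost with a known minimizer standing in for a PID tuning criterion.
print(cfa(lambda x: np.sum((x - np.array([1.2, 0.9, 0.4])) ** 2)))
```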

  7. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

    In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
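
    The logical-comparison idea can be sketched as follows: each pixel accumulates an N-bit code from the N binary patterns, and candidate correspondences are ranked by Hamming distance computed with XOR and a popcount, avoiding floating-point similarity measures. The code layout and data below are illustrative assumptions.

```python
# Binary-code stereo matching via Hamming distance (XOR + popcount).
import numpy as np

def hamming_score(code_left, codes_right):
    # XOR then count set bits; lower = more similar.
    x = np.bitwise_xor(code_left, codes_right)
    return np.array([bin(v).count("1") for v in x])

# 10-bit codes from 10 projected binary patterns (random stand-ins).
rng = np.random.default_rng(0)
row_codes_right = rng.integers(0, 1024, size=64, dtype=np.uint16)
left_code = row_codes_right[37] ^ np.uint16(0b10)  # nearly identical code
d = hamming_score(left_code, row_codes_right)
print(int(np.argmin(d)))  # best match candidate along the row
```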

  8. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  9. Prediction of the Critical Curvature for LX-17 with the Time of Arrival Data from DNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jin; Fried, Laurence E.; Moss, William C.

    2017-01-10

    We extract the detonation shock front velocity, curvature and acceleration from time of arrival data measured at grid points from direct numerical simulations of a 50mm rate-stick lit by a disk-source, with the ignition and growth reaction model and a JWL equation of state calibrated for LX-17. We compute the quasi-steady (D, κ) relation based on the extracted properties and predicted the critical curvatures of LX-17. We also proposed an explicit formula that contains the failure turning point, obtained from optimization for the (D, κ) relation of LX-17.

  10. Determining the Intensity of a Point-Like Source Observed on the Background of AN Extended Source

    NASA Astrophysics Data System (ADS)

    Kornienko, Y. V.; Skuratovskiy, S. I.

    2014-12-01

    The problem of determining the time dependence of intensity of a point-like source in case of atmospheric blur is formulated and solved by using the Bayesian statistical approach. A pointlike source is supposed to be observed on the background of an extended source with constant in time though unknown brightness. The equation system for optimal statistical estimation of the sequence of intensity values in observation moments is obtained. The problem is particularly relevant for studying gravitational mirages which appear while observing a quasar through the gravitational field of a far galaxy.

  11. [Clinical end-points and surrogate markers of pulmonary arterial hypertension in the light of evidence-based treatment].

    PubMed

    Can, Mehmet Mustafa; Kaymaz, Cihangir

    2010-08-01

    Pulmonary arterial hypertension (PAH) is a rare, fatal and progressive disease. There is an acceleration in the advent of new therapies in parallel to the development of the knowledge about etiogenesis and pathogenesis of PAH. Therefore, to optimize the goals of PAH-specific treatment and to determine the time to shift from monotherapy to combination therapy, simple, objective and reproducible end-points, which may predict the disease severity, progression rate and life expectancy are needed. The adventure of end points in PAH has started with six minute walk distance and functional capacity, and continues with new parameters (biochemical marker, time to clinical worsening, echocardiography and magnetic resonance imaging etc.), which can better reflect the clinical outcome.

  12. Seamline Determination Based on PKGC Segmentation for Remote Sensing Image Mosaicking

    PubMed Central

    Dong, Qiang; Liu, Jinghong

    2017-01-01

    This paper presents a novel method of seamline determination for remote sensing image mosaicking. A two-level optimization strategy is applied to determine the seamline. Object-level optimization is executed first. Background regions (BRs) and obvious regions (ORs) are extracted based on the results of parametric kernel graph cuts (PKGC) segmentation. The global cost map, which consists of color difference, a multi-scale morphological gradient (MSMG) constraint, and texture difference, is weighted by the BRs. Finally, the seamline is determined in the weighted cost map from the start point to the end point. Dijkstra’s shortest path algorithm is adopted for pixel-level optimization to determine the positions of the seamline. Meanwhile, a new seamline optimization strategy is proposed for image mosaicking with multi-image overlapping regions. The experimental results show better performance than the conventional method based on mean-shift segmentation. Seamlines produced by the proposed method bypass obvious objects and take less execution time. This new method is efficient and superior for seamline determination in remote sensing image mosaicking. PMID:28749446
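
    The pixel-level step can be illustrated with a plain Dijkstra search over a weighted cost map on a 4-connected grid; the random cost map below stands in for the paper's combination of color difference, MSMG constraint, and texture difference.

```python
# Dijkstra's shortest path on a weighted cost map (4-connected grid),
# as used for pixel-level seamline determination.
import heapq
import numpy as np

def dijkstra_seamline(cost, start, end):
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [], end
    while node != start:          # backtrack the seamline
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

rng = np.random.default_rng(0)
cost = rng.uniform(0.1, 1.0, (50, 50))
print(len(dijkstra_seamline(cost, (0, 0), (49, 49))))
```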

  13. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100

  14. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.

  15. [Optimal cut-point of salivary cotinine concentration to discriminate smoking status in the adult population in Barcelona].

    PubMed

    Martínez-Sánchez, Jose M; Fu, Marcela; Ariza, Carles; López, María J; Saltó, Esteve; Pascual, José A; Schiaffino, Anna; Borràs, Josep M; Peris, Mercè; Agudo, Antonio; Nebot, Manel; Fernández, Esteve

    2009-01-01

    To assess the optimal cut-point for salivary cotinine concentration to identify smoking status in the adult population of Barcelona. We performed a cross-sectional study of a representative sample (n=1,117) of the adult population (>16 years) in Barcelona (2004-2005). This study gathered information on active and passive smoking by means of a questionnaire and a saliva sample for cotinine determination. We analyzed sensitivity and specificity according to sex, age, smoking status (daily and occasional), and exposure to second-hand smoke at home. ROC curves and the area under the curve were calculated. The prevalence of smokers (daily and occasional) was 27.8% (95% CI: 25.2-30.4%). The optimal cut-point to discriminate smoking status was 9.2 ng/ml (sensitivity=88.7% and specificity=89.0%). The area under the ROC curve was 0.952. The optimal cut-point was 12.2 ng/ml in men and 7.6 ng/ml in women. The optimal cut-point was higher at ages with a greater prevalence of smoking. Daily smokers had a higher cut-point than occasional smokers. The optimal cut-point to discriminate smoking status in the adult population is 9.2 ng/ml, with sensitivities and specificities around 90%. The cut-point was higher in men and in younger people. The cut-point increases with higher prevalence of daily smokers.
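
    A standard way to derive such a cut-point from ROC analysis is to maximize the Youden index J = sensitivity + specificity - 1 over all thresholds; the sketch below applies it to simulated cotinine-like data, not the Barcelona survey data.

```python
# Optimal cut-point from an ROC curve via the Youden index.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Simulated salivary cotinine: low in non-smokers, high in smokers (ng/ml).
nonsmokers = rng.lognormal(mean=0.0, sigma=1.0, size=800)
smokers = rng.lognormal(mean=4.5, sigma=0.8, size=300)
y = np.r_[np.zeros(800), np.ones(300)]
x = np.r_[nonsmokers, smokers]

fpr, tpr, thresholds = roc_curve(y, x)
j = tpr - fpr                      # Youden index at each threshold
best = np.argmax(j)
print(f"cut-point={thresholds[best]:.1f} ng/ml, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```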

  16. Results obtained by the application of two different methods for the calculation of optimal coplanar orbital maneuvers with time limit

    NASA Astrophysics Data System (ADS)

    Rocco, E. M. R.; Prado, A. F. B. A. P.; Souza, M. L. O. S.

    In this work, the problem of bi-impulsive orbital transfers between coplanar elliptical orbits with minimum fuel consumption, but with a time limit for the transfer, is studied. As a first method, the equations presented by Lawden (1993) were used. Those equations furnish the optimal transfer orbit, with fixed transfer time, between two coplanar elliptical orbits considering fixed terminal points. The method was adapted to cases with free terminal points, and the equations were solved to develop software for orbital maneuvers. As a second method, the equations presented by Eckel and Vinh (1984) were used; those equations provide the transfer orbit between non-coplanar elliptical orbits with minimum fuel and fixed transfer time, or minimum transfer time for a prescribed fuel consumption, considering free terminal points. In this work only the fixed-transfer-time problem was considered; the case of minimum time for a prescribed fuel consumption was already studied in Rocco et al. (2000). The method was then modified to consider cases of coplanar orbital transfer, and software for orbital maneuvers was developed. Therefore, two programs that solve the same problem using different methods were developed. The first method, presented by Lawden, uses primer vector theory. The second method, presented by Eckel and Vinh, uses the ordinary theory of maxima and minima. To test the methods, we chose the same terminal orbits and the same time as input, and we verified that we did not obtain exactly the same results. In this work, which is an extension of Rocco et al. (2002), these differences in the results are explored with the objective of determining why they occur and which modifications should be made to eliminate them.

  17. Adaptive sampling of information in perceptual decision-making.

    PubMed

    Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H

    2013-01-01

    In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.

  18. Uniscale multi-view registration using double dog-leg method

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan

    2009-02-01

    3D computer models of body anatomy can have many uses in medical research and clinical practices. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier free 2D point correspondences for DDL, we also propose a two-stage scheme where multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm is used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest points (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.

  19. Optimization of dynamic soaring maneuvers to enhance endurance of a versatile UAV

    NASA Astrophysics Data System (ADS)

    Mir, Imran; Maqsood, Adnan; Akhtar, Suhail

    2017-06-01

    Dynamic soaring is a process of acquiring energy from atmospheric wind shear and is commonly exhibited by soaring birds performing long-distance flights. This paper aims to demonstrate a viable algorithm that can be implemented in a near-real-time environment to formulate optimal trajectories for dynamic soaring maneuvers of a small-scale Unmanned Aerial Vehicle (UAV). The objective is to harness maximum energy from atmospheric wind shear to improve loiter time for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Three-dimensional point-mass UAV equations of motion and a linear wind gradient profile are used to model the flight dynamics. Using the UAV states, controls, operational constraints, and initial and terminal conditions that enforce a periodic flight, the dynamic soaring problem is formulated as an optimal control problem. Optimized trajectories of the maneuver are then generated with pseudospectral techniques for different UAV performance parameters. The discussion also encompasses the requirements for generating optimal dynamic soaring trajectories in a real-time environment and the ability of the proposed algorithm to produce solutions quickly. Coupled with the fact that dynamic soaring is all about immediately utilizing the available energy from the encountered wind shear, the proposed algorithm promises viability for practical onboard implementations requiring computation of trajectories in near real time.

  20. On the improvement of blood sample collection at clinical laboratories

    PubMed Central

    2014-01-01

    Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
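
    A minimal sketch of the kind of genetic-algorithm route optimizer described above (illustrative only: one vehicle, mutation-only evolution, no capacity or time-window handling; all names are hypothetical):

```python
# Sketch: evolutionary search over visiting orders, minimizing route length.
import random

def route_length(route, dist):
    legs = zip(route, route[1:] + route[:1])      # return to start
    return sum(dist[a][b] for a, b in legs)

def evolve_route(dist, pop_size=50, generations=500, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[:pop_size // 2]           # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = sorted(rng.sample(range(n), 2))
            child[i:j] = reversed(child[i:j])     # 2-opt style mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_length(r, dist))
```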

  1. Automation of radiation treatment planning: Evaluation of head and neck cancer patient plans created by the Pinnacle3 scripting and Auto-Planning functions.

    PubMed

    Speer, Stefan; Klein, Andreas; Kober, Lukas; Weiss, Alexander; Yohannes, Indra; Bert, Christoph

    2017-08-01

    Intensity-modulated radiotherapy (IMRT) techniques are now standard practice. IMRT or volumetric-modulated arc therapy (VMAT) allow treatment of the tumor while simultaneously sparing organs at risk. Nevertheless, treatment plan quality still depends on the physicist's individual skills, experiences, and personal preferences. It would therefore be advantageous to automate the planning process. This possibility is offered by the Pinnacle 3 treatment planning system (Philips Healthcare, Hamburg, Germany) via its scripting language or Auto-Planning (AP) module. AP module results were compared to in-house scripts and manually optimized treatment plans for standard head and neck cancer plans. Multiple treatment parameters were scored to judge plan quality (100 points = optimum plan). Patients were initially planned manually by different physicists and re-planned using scripts or AP. Script-based head and neck plans achieved a mean of 67.0 points and were, on average, superior to manually created (59.1 points) and AP plans (62.3 points). Moreover, they are characterized by reproducibility and lower standard deviation of treatment parameters. Even less experienced staff are able to create at least a good starting point for further optimization in a short time. However, for particular plans, experienced planners perform even better than scripts or AP. Experienced-user input is needed when setting up scripts or AP templates for the first time. Moreover, some minor drawbacks exist, such as the increase of monitor units (+35.5% for scripted plans). On average, automatically created plans are superior to manually created treatment plans. For particular plans, experienced physicists were able to perform better than scripts or AP; thus, the benefit is greatest when time is short or staff inexperienced.

  2. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil for similar modeling accuracy on a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent for similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
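
    The coefficient-optimization idea can be illustrated on a 1D analogue: choose the weights of a 5-point second-derivative stencil to minimize the relative phase-velocity error over a band of wavenumbers, rather than matching Taylor-series order. This is a simplified space-only sketch under our own assumptions, not the paper's 2D/3D time-space scheme:

```python
# Sketch: optimize one free stencil coefficient against phase-velocity error.
import numpy as np
from scipy.optimize import minimize_scalar

kh = np.linspace(0.01, 2.5, 400)              # normalized wavenumbers k*h

def phase_velocity_error(c2):
    c1 = 1.0 - 4.0 * c2                       # consistency: f(kh) ~ (kh)^2
    f = 2*c1*(1 - np.cos(kh)) + 2*c2*(1 - np.cos(2*kh))
    vrel = np.sqrt(np.maximum(f, 0.0)) / kh   # numerical / true velocity
    return np.sum((vrel - 1.0) ** 2)

res = minimize_scalar(phase_velocity_error, bounds=(-0.2, 0.0),
                      method="bounded")
print("optimized c2 =", res.x, "(Taylor value: -1/12 ~ -0.0833)")
```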

  3. Optimal control in adaptive optics modeling of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Herrmann, J.

    The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open-loop optimal control problem. This treatment gives a limit for the best possible correction. Aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane which minimizes the time-averaged area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point boundary-value problem, which is ill-conditioned in this case. The optimal control problem was solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. For these cases the steady-state correction becomes better than the instantaneous correction and approaches the optimum correction.

  4. Guidance of a Solar Sail Spacecraft to the Sun-Earth L2 Point.

    NASA Astrophysics Data System (ADS)

    Hur, Sun Hae

    The guidance of a solar sail spacecraft along a minimum-time path from an Earth orbit to a region near the Sun-Earth L_2 libration point is investigated. Possible missions to this point include a spacecraft "listening" for possible extra-terrestrial electromagnetic signals and a science payload to study the geomagnetic tail. A key advantage of the solar sail is that it requires no fuel. The control variables are the sail angles relative to the Sun-Earth line. The thrust is very small, on the order of 1 mm/s^2, and its magnitude and direction are highly coupled. Despite this limited controllability, the "free" thrust can be used for a wide variety of terminal conditions including halo orbits. If the Moon's mass is lumped with the Earth, there are quasi-equilibrium points near L_2. However, they are unstable so that some form of station keeping is required, and the sail can provide this without any fuel usage. In the two-dimensional case, regulating about a nominal orbit is shown to require less control and result in smaller amplitude error response than regulating about a quasi-equilibrium point. In the three-dimensional halo orbit case, station keeping using periodically varying gains is demonstrated. To compute the minimum-time path, the trajectory is divided into two segments: the spiral segment and the transition segment. The spiral segment is computed using a control law that maximizes the rate of energy increase at each time. The transition segment is computed as the solution of the time-optimal control problem from the endpoint of the spiral to the terminal point. It is shown that the path resulting from this approximate strategy is very close to the exact optimal path. For the guidance problem, the approximate strategy in the spiral segment already gives a nonlinear full-state feedback law. However, for large perturbations, follower guidance using an auxiliary propulsion is used for part of the spiral. In the transition segment, neighboring extremal feedback guidance using the solar sail, with feedforward control only near the terminal point, is used to correct perturbations in the initial conditions.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Pin, F.G.

    This paper addresses the problem of time-optimal motions for a mobile platform in a planar environment. The platform has two non-steerable, independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time-optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the acceleration on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time-optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.
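
    The switch-time parameterization lends itself to a direct simulation sketch: fix the switch times, integrate the bang-bang wheel accelerations through the differential-drive kinematics, and search over switch times for trajectories matching a target configuration (at rest: vl = vr = 0). The Euler integration and parameter values below are illustrative assumptions:

```python
# Sketch: simulate bang-bang wheel accelerations with given switch times.
import numpy as np

def simulate(switches_l, switches_r, a_max=1.0, track=0.5,
             t_final=4.0, dt=1e-3):
    t = 0.0
    x = y = theta = vl = vr = 0.0
    while t < t_final:
        # acceleration sign flips at each switch time already passed
        al = a_max * (-1) ** sum(s <= t for s in switches_l)
        ar = a_max * (-1) ** sum(s <= t for s in switches_r)
        vl += al * dt
        vr += ar * dt
        v, w = 0.5 * (vl + vr), (vr - vl) / track
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += w * dt
        t += dt
    return x, y, theta, vl, vr

# a search over (switches_l, switches_r) would match these outputs
# to a desired final configuration with zero wheel speeds
print(simulate(switches_l=[2.0], switches_r=[1.0, 3.0]))
```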

  7. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically, and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.

  8. Fuel-Optimal Altitude Maintenance of Low-Earth-Orbit Spacecrafts by Combined Direct/Indirect Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young

    2015-12-01

    This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecraft experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate an optimal fuel consumption strategy, which is then employed as an initial guess for the precise solution. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that a satellite ascends with full thrust at the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to precisely determine the switching time by using a pseudospectral method only. This is expected, since the optimal switching epoch does not, in general, coincide with one of the collocation points prescribed by the pseudospectral method. As an attempt to precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed with the initial guess obtained through the pseudospectral method. This hybrid process allows the determination of the optimal fuel consumption for LEO spacecraft and their thrust profiles efficiently and precisely.

  9. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which relies heavily on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimizing the high-fidelity model directly. However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the code of analysis models and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author addressed this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite-difference (FD) method; then, the Robust AMF is run alongside the sequential quadratic programming (SQP) optimization method using only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing the computational time for calculating derivatives and the necessity of an AMF whose optimum design point always remains in the feasible region are discussed as future work.
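
    The accuracy argument for AD over FD can be illustrated with a toy functional standing in for a CFD output; jax is used here purely as one convenient AD tool, and the function and values are invented stand-ins, not the author's models:

```python
# Sketch: automatically differentiated gradient vs. a finite difference.
import jax
import jax.numpy as jnp

def drag(alpha):                      # smooth toy model of a CFD output
    return jnp.sin(alpha) ** 2 + 0.1 * alpha

d_drag = jax.grad(drag)               # analytical derivative via AD

a, h = 0.3, 1e-6
fd = (drag(a + h) - drag(a - h)) / (2 * h)   # central finite difference
print(float(d_drag(a)), float(fd))    # AD value is exact to machine eps
```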

  10. Statistical Learning of Origin-Specific Statically Optimal Individualized Treatment Rules

    PubMed Central

    van der Laan, Mark J.; Petersen, Maya L.

    2008-01-01

    Consider a longitudinal observational or controlled study in which one collects chronological data over time on a random sample of subjects. The time-dependent process one observes on each subject contains time-dependent covariates, time-dependent treatment actions, and an outcome process or single final outcome of interest. A statically optimal individualized treatment rule (as introduced in van der Laan et al. (2005) and Petersen et al. (2007)) is a treatment rule which at any point in time conditions on a user-supplied subset of the past, computes the future static treatment regimen that maximizes a (conditional) mean future outcome of interest, and applies the first treatment action of the latter regimen. In particular, Petersen et al. (2007) clarified that, in order to be statically optimal, an individualized treatment rule should not depend on the observed treatment mechanism. Petersen et al. (2007) further developed estimators of statically optimal individualized treatment rules based on a past capturing all confounding of past treatment history on outcome. In practice, however, one typically wishes to find individualized treatment rules responding to a user-supplied subset of the complete observed history, which may not be sufficient to capture all confounding. The current article provides an important advance on Petersen et al. (2007) by developing locally efficient double robust estimators of statically optimal individualized treatment rules responding to such a user-supplied subset of the past. However, failure to capture all confounding comes at a price: the static optimality of the resulting rules becomes origin-specific. We explain origin-specific static optimality, and discuss the practical importance of the proposed methodology. We further present the results of a data analysis in which we estimate a statically optimal rule for switching antiretroviral therapy among patients infected with resistant HIV virus. PMID:19122792

  11. An automated optimization tool for high-dose-rate (HDR) prostate brachytherapy with divergent needle pattern

    NASA Astrophysics Data System (ADS)

    Borot de Battisti, M.; Maenhout, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; van Vulpen, M.; Moerland, M. A.

    2015-10-01

    Focal high-dose-rate (HDR) brachytherapy for prostate cancer has gained increasing interest as an alternative to whole-gland therapy, as it may contribute to the reduction of treatment-related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, an MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin, giving access to the focal prostate tumor lesion. Currently, there is no commercially available treatment planning system which allows optimization of the dose distribution with such a needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations, and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm³ to 23.3 cm³) using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min, and a clinically acceptable plan was reached on average using only four needle insertions.

  12. Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.

    2011-01-01

    An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed that the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) fast detection of infeasibility (i.e., control authority is not sufficient for soft landing) for subsequent fault response; 2) the use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency; 3) an enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed; 4) an additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization; and 5) explicit incorporation of the Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
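
    In the spirit of the convex (SOCP) formulation referenced above, here is a minimal sketch of a discretized minimum-fuel descent in cvxpy. All constants are illustrative, and the nonconvex lower thrust bound (handled by lossless convexification in the actual work) is omitted:

```python
# Sketch: 2D double-integrator soft landing as a convex program.
import cvxpy as cp
import numpy as np

N, dt, g = 60, 0.5, np.array([0.0, -3.71])     # Mars-like gravity
r = cp.Variable((N + 1, 2))                    # position (x, altitude)
v = cp.Variable((N + 1, 2))                    # velocity
u = cp.Variable((N, 2))                        # thrust acceleration

cons = [r[0] == [300, 900], v[0] == [-20, -30],
        r[N] == [0, 0], v[N] == [0, 0],        # pinpoint soft landing
        r[1:, 1] >= 0]                         # no sub-surface flight
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * v[k],     # forward-Euler dynamics
             cp.norm(u[k]) <= 15.0]            # thrust upper bound

prob = cp.Problem(cp.Minimize(cp.sum(cp.norm(u, axis=1))), cons)
prob.solve()
print("fuel proxy:", prob.value)
```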

  13. Robust control of flexible space vehicles with minimum structural excitation: On-off pulse control of flexible space vehicles

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Liu, Qiang

    1992-01-01

    Both feedback and feedforward control approaches for uncertain dynamical systems (in particular, with uncertainty in structural mode frequency) are investigated. The control objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant uncertainty. Preshaping of an ideal, time-optimal control input using a tapped-delay filter is shown to provide a fast settling time with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. It is shown that a properly designed feedback controller performs well, as compared with a time-optimal open-loop controller with special preshaping for performance robustness. Also included are two separate papers by the same authors on this subject.
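
    Preshaping with a tapped-delay filter can be sketched with the classic two-impulse zero-vibration shaper for a single mode; this is a standard textbook construction shown under our own assumptions, not necessarily the filter designed in the paper:

```python
# Sketch: two-impulse zero-vibration input shaper, convolved with a command.
import numpy as np

def zv_shaper(wn, zeta, dt):
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    t2 = np.pi / (wn * np.sqrt(1 - zeta**2))      # half damped period
    taps = np.zeros(int(round(t2 / dt)) + 1)
    taps[0], taps[-1] = 1 / (1 + K), K / (1 + K)  # amplitudes sum to 1
    return taps

dt = 0.001
command = np.ones(3000)                           # raw step command
shaped = np.convolve(command, zv_shaper(wn=10.0, zeta=0.05, dt=dt))
```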

  14. SU-F-P-61: Does It Matter Not to Use Optimization Points at the Apex for Vaginal Cylinder HDR Brachytherapy Planning?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Y

    2016-06-15

    Purpose: To test the impact of the use of apex optimization points for new vaginal cylinder (VC) applicators. Methods: New "ClickFit" single-channel VC applicators (Varian) that have different top thicknesses but the same diameters as the old VC applicators (2.3 cm, 2.6 cm, 3.0 cm, and 3.5 cm) were compared using phantom studies. Old VC applicator plans without apex optimization points were also compared to the plans with the optimization points. The apex doses were monitored at 5 mm depth (8 points), where a prescription dose (Rx) of 6 Gy was prescribed. VC surface doses (8 points) were also analyzed. Results: The new VC applicator plans without apex optimization points presented significantly lower 5 mm depth doses than Rx (on average −31 ± 7%, p < 0.00001) due to their thicker VC tops (3.4 ± 1.1 mm thicker, with a range of 1.2 to 4.4 mm) than the old VC applicators. Old VC applicator plans also showed a statistically significant reduction (p < 0.00001) due to the Ir-192 source anisotropic effect at the apex region, but the % reduction over Rx was only −7 ± 9%. However, by adding apex optimization points to the new VC applicator plans, the plans improved 5 mm depth doses (−7 ± 9% over Rx) that were not statistically different from old VC plans (p = 0.923), along with apex VC surface doses (−22 ± 10% over old VC versus −46 ± 7% without using apex optimization points). Conclusion: The use of apex optimization points is important in order to avoid significant additional cold doses (−24 ± 2%) at the prescription depth (5 mm) of the apex, especially for the new VC applicators that have thicker tops.

  15. Optimal harvesting of fish stocks under a time-varying discount rate.

    PubMed

    Duncan, Stephen; Hepburn, Cameron; Papachristodoulou, Antonis

    2011-01-21

    Optimal control theory has been extensively used to determine the optimal harvesting policy for renewable resources such as fish stocks. In such optimisations, it is common to maximise the discounted utility of harvesting over time, employing a constant time discount rate. However, evidence from human and animal behaviour suggests that we have evolved to employ discount rates which fall over time, often referred to as "hyperbolic discounting". This increases the weight on benefits in the distant future, which may appear to provide greater protection of resources for future generations, but also creates challenges of time-inconsistent plans. This paper examines harvesting plans when the discount rate declines over time. With a declining discount rate, the planner reduces stock levels in the early stages (when the discount rate is high) and intends to compensate by allowing the stock level to recover later (when the discount rate will be lower). Such a plan may be feasible and optimal, provided that the planner remains committed throughout. However, in practice there is a danger that such plans will be re-optimized and adjusted in the future. It is shown that repeatedly restarting the optimization can drive the stock level down to the point where the optimal policy is to harvest the stock to extinction. In short, a key contribution of this paper is to identify the surprising severity of the consequences flowing from incorporating a rather trivial, and widely prevalent, "non-rational" aspect of human behaviour into renewable resource management models. These ideas are related to the collapse of the Peruvian anchovy fishery in the 1970s. Copyright © 2010 Elsevier Ltd. All rights reserved.
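
    The contrast between a constant and a declining discount rate is easy to make concrete; the weight functions below are illustrative functional forms, not the paper's calibration:

```python
# Sketch: exponential weights imply a constant per-period discount rate,
# hyperbolic weights imply a rate that declines with horizon t.
import numpy as np

t = np.arange(0, 51)
exponential = np.exp(-0.05 * t)          # constant 5%/period rate
hyperbolic = 1.0 / (1.0 + 0.05 * t)      # implied rate 0.05/(1 + 0.05 t)

# implied instantaneous rate is -d/dt log w(t): constant vs. declining
print(exponential[[1, 50]], hyperbolic[[1, 50]])
```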

  16. Bidirectional relationship between sleep and optimism with depressive mood as a mediator: A longitudinal study of Chinese working adults.

    PubMed

    Lau, Esther Yuet Ying; Harry Hui, C; Cheung, Shu-Fai; Lam, Jasmine

    2015-11-01

    Sleep and optimism are important psycho-biological and personality constructs, respectively. However, very little work has examined the causal relationship between them, and none has examined the potential mechanisms operating in the relationship. This study aimed to understand whether sleep quality was a cause or an effect of optimism, and whether depressive mood could explain the relationship. Internet survey data were collected from 987 Chinese working adults (63.4% female, 92.4% full-time workers, 27.0% married, 90.2% Hong Kong residents, mean age = 32.59) at three time-points spanning about 19 months. Measures included a Chinese attributional style questionnaire, the Pittsburgh Sleep Quality Index, and the Depression Anxiety Stress Scale. Cross-sectional analyses revealed moderate correlations among sleep quality, depressive mood, and optimism. Cross-lagged analyses showed a bidirectional causality between optimism and sleep. Path analysis demonstrated that depressive mood fully mediated the influence of optimism on sleep quality, and it partially mediated the influence of sleep quality on optimism. Optimism improves sleep. Poor sleep makes a pessimist. The effects of sleep quality on optimism could not be fully explained by depressive mood, highlighting the unique role of sleep on optimism. Understanding the mechanisms of the feedback loop of sleep quality, mood, and optimism may provide insights for clinical interventions for individuals presented with mood-related problems. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. A multi-objective simulation-optimization model for in situ bioremediation of groundwater contamination: Application of bargaining theory

    NASA Astrophysics Data System (ADS)

    Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh

    2017-08-01

    In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), Department of Environment (DOE), and National Disaster Management Organization (NDMO) are the three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared, based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered, following previous similar studies. In the second approach, all possible points in the study area are investigated to find optimal locations, arrangement, and flow rates of injection and extraction wells. Involvement of the stakeholders, investigation of all possible points instead of a limited number of well locations, and minimization of contaminant plume fragmentation during bioremediation are the innovations of this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® is used for calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches using the third objective function. Considering all possible points in the study area for determining the optimal locations of the wells in the second approach leads to more desirable results, i.e. decreasing the contaminant concentrations to the standard level and a 20% to 40% cost reduction.
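
    The Pareto machinery underlying NSGA-II can be sketched by extracting the non-dominated set from a list of objective vectors (all objectives minimized); the fallback-bargaining step that selects one compromise solution is not reproduced here:

```python
# Sketch: boolean mask of Pareto-optimal (non-dominated) solutions.
import numpy as np

def pareto_front(costs):
    """costs: (n_solutions, n_objectives) array; returns boolean mask."""
    costs = np.asarray(costs)
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            # j is dominated by i if costs[j] >= costs[i] everywhere
            # and strictly worse somewhere
            dominated = (np.all(costs >= costs[i], axis=1)
                         & np.any(costs > costs[i], axis=1))
            keep &= ~dominated
    return keep

print(pareto_front([[1, 5], [2, 2], [3, 1], [4, 4]]))  # last is dominated
```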

  18. Optimal Timing for Early Excision in a Deep Partial Thickness Porcine Burn Model.

    PubMed

    Toussaint, Jimmy; Chung, Won Taek; Mc Clain, Steve; Raut, Vivek; Singer, Adam J

    Many deep partial thickness burns require more than 3 weeks to heal resulting in disfiguring and dysfunctional scarring. Early excision of the eschar has been shown to improve outcomes in deep burns; however, the optimal timing of the excision remains controversial. We compared wound healing and scarring of deep partial thickness burns that were excised at different time points in a porcine model. Deep partial thickness burns (2.5 by 2.5 cm each) were created on the backs of six anesthetized pigs using a previously validated model. The burns were randomly assigned to excision at days 2, 4, or 7 using an electric dermatome. Full thickness 4-mm punch biopsies were obtained at several time points for determination of re-epithelialization and at day 28 for determination of scar depth. Digital imaging was used to calculate percentage wound contraction at day 28. There were no statistically significant differences in wound re-epithelialization at any of the studied time points. Scar depth and percentage wound contraction were also similar among the wounds excised at 2, 4, and 7 days (4.4 ± 1.1 mm vs 3.9 ± 1.1 mm vs 4.1 ± 1.2 mm and 52.9 ± 17.9% vs 52.6 ± 15.6% vs 52.5 ± 13.8%, respectively). Timing of eschar excision (at 2, 4, or 7 days) does not affect the rates of re-epithelialization and scarring in a deep partial thickness porcine burn model. Timing of eschar excision may not change outcomes if performed within the first 2 to 7 days after injury.

  19. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
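
    A generic simulated-annealing loop for selecting a fixed-size subset of sample points might look like the sketch below; the real objective (prediction accuracy over spatial-variation partitions) is abstracted into a user-supplied cost function, and all names are hypothetical:

```python
# Sketch: anneal over subsets by swapping one point at a time.
import math
import random

def anneal_subset(n_points, k, cost, iters=20000, t0=1.0, seed=0):
    rng = random.Random(seed)
    current = set(rng.sample(range(n_points), k))
    cur_c = cost(current)
    best, best_c = set(current), cur_c
    for i in range(iters):
        temp = t0 * (1 - i / iters) + 1e-9            # linear cooling
        cand = set(current)
        cand.remove(rng.choice(sorted(cand)))         # drop one point...
        cand.add(rng.choice([p for p in range(n_points) if p not in cand]))
        c = cost(cand)                                # ...and add another
        if c < cur_c or rng.random() < math.exp((cur_c - c) / temp):
            current, cur_c = cand, c                  # Metropolis accept
            if c < best_c:
                best, best_c = set(cand), c
    return best, best_c
```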

  20. Identifying the Best Times for Cognitive Functioning Using New Methods: Matching University Times to Undergraduate Chronotypes

    PubMed Central

    Evans, M. D. R.; Kelley, Paul; Kelley, Jonathan

    2017-01-01

    University days generally start at fixed times in the morning, often early morning, without regard to optimal functioning times for students with different chronotypes. Research has shown that later starting times are crucial to high school students' sleep, health, and performance. Shifting the focus to university, this study used two new approaches to determine ranges of start times that optimize cognitive functioning for undergraduates. The first is a survey-based, empirical model (SM), and the second a neuroscience-based, theoretical model (NM). The SM focused on students' self-reported chronotype and times they feel at their best. Using this approach, data from 190 mostly first and second year university students were collected and analyzed to determine optimal times when cognitive performance can be expected to be at its peak. The NM synthesized research in sleep, circadian neuroscience, sleep deprivation's impact on cognition, and practical considerations to create a generalized solution to determine the best learning hours. Strikingly, the SM and NM results align with each other and confirm other recent research in indicating later start times. They add several important points: (1) They extend our understanding by showing that much later starting times (after 11 a.m. or 12 noon) are optimal; (2) Every single start time disadvantages one or more chronotypes; and (3) The best practical model may involve three alternative starting times with one afternoon shared session. The implications are briefly considered. PMID:28469566

  1. Optimizing Disaster Relief: Real-Time Operational and Tactical Decision Support

    DTIC Science & Technology

    1993-01-01

    efficiencies in completing the tasks. Allocations recognize task priorities and the logistical effects of geographic proximity. In addition...as if they are collocated. Arcs connect local pairs of zones to represent feasible direct point-to-point transportation and bear costs for...data to the desired level of aggregation. We have tested ARES manually and by replacing the decision maker with the decision simulator which

  2. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's-function-based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea-surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea-surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides a better result in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.

  3. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term ∇ of the steepest descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise (SNR = 1:5), both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
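
    The finite-difference gradient estimate described above can be sketched as follows; the signal model, step sizes, and convergence target are illustrative inventions, not the paper's cardiovascular data:

```python
# Sketch: steepest-descent weight update using a fixed +/-dw perturbation
# (the Parallel Comparison idea) to estimate the gradient of the squared
# error, avoiding a variable step in the denominator.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
noise = rng.standard_normal(n)              # common-mode reference signal
primary = np.sin(0.02 * np.arange(n)) + 0.7 * noise

w, dw, mu = 0.0, 0.01, 0.05
for k in range(n):
    e_plus = (primary[k] - (w + dw) * noise[k]) ** 2
    e_minus = (primary[k] - (w - dw) * noise[k]) ** 2
    grad = (e_plus - e_minus) / (2 * dw)    # finite-difference gradient
    w -= mu * grad                          # steepest-descent step
print("converged weight (true value 0.7):", w)
```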

  4. The Earth Phenomena Observing System: Intelligent Autonomy for Satellite Operations

    NASA Technical Reports Server (NTRS)

    Ricard, Michael; Abramson, Mark; Carter, David; Kolitz, Stephan

    2003-01-01

    Earth monitoring systems of the future may include large numbers of inexpensive small satellites, tasked in a coordinated fashion to observe both long-term and transient targets. For best performance, a tool which helps operators optimally assign targets to satellites will be required. We present the design of algorithms developed for real-time optimized autonomous planning of large numbers of small single-sensor Earth observation satellites. The algorithms will reduce requirements on the human operators of such a system of satellites, ensure good utilization of system resources, and provide the capability to dynamically respond to temporal terrestrial phenomena. Our initial real-time system model consists of approximately 100 satellites and a large number of points of interest on Earth (e.g., hurricanes, volcanoes, and forest fires), with the objective to maximize the total science value of observations over time. Options for calculating the science value of observations include: 1) total observation time, 2) number of observations, and 3) the quality (a function of, e.g., sensor type, range, and slant angle) of the observations. An integrated approach using integer programming, optimization, and astrodynamics is used to calculate optimized observation and sensor tasking plans.

  5. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm to run on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Optimization of Supercritical CO2 Extraction of Fish Oil from Viscera of African Catfish (Clarias gariepinus)

    PubMed Central

    Sarker, Mohamed Zaidul Islam; Selamat, Jinap; Habib, Abu Sayem Md. Ahsan; Ferdosh, Sahena; Akanda, Mohamed Jahurul Haque; Jaffri, Juliana Mohamed

    2012-01-01

    Fish oil was extracted from the viscera of African Catfish using supercritical carbon dioxide (SC-CO2). A Central Composite Design of Response Surface Methodology (RSM) was employed to optimize the SC-CO2 extraction parameters. The oil yield (Y) as the response variable was modeled against the four independent variables, namely pressure, temperature, flow rate, and soaking time. The oil yield varied with the linear, quadratic, and interaction effects of pressure, temperature, flow rate, and soaking time. Optimum points were observed within the ranges of the variables: temperature from 35 °C to 80 °C, pressure from 10 MPa to 40 MPa, flow rate from 1 mL/min to 3 mL/min, and soaking time from 1 h to 4 h. The extraction parameters were found to be optimized at a temperature of 57.5 °C, pressure of 40 MPa, flow rate of 2.0 mL/min, and soaking time of 2.5 h. At this optimized condition, the oil yield was 67.0% (g oil/100 g sample on a dry basis) from the viscera of catfish, which compares reasonably with the yield of 78.0% obtained using the Soxhlet method. PMID:23109854

  7. Optimized positioning of autonomous surgical lamps

    NASA Astrophysics Data System (ADS)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions for surgical lamps throughout an entire surgical procedure, under the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of such robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site but also the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data are available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention, even when trained on a small portion of it.

  8. A method to approximate a closest loadability limit using multiple load flow solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong

    A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle-node bifurcation point, using a pair of multiple load flow solutions. More strictly, the points obtainable by the method are stationary points, including not only the CLL but also farthest and saddle points. An operating solution and a low-voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. These can be used in monitoring the loadability margin, in identifying weak spots in a power system, and in examining an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair. The remaining computation time is less than that of an ordinary load flow.

  9. A dynamic model for costing disaster mitigation policies.

    PubMed

    Altay, Nezih; Prasad, Sameer; Tata, Jasmine

    2013-07-01

    The optimal level of investment in mitigation strategies is usually difficult to ascertain in the context of disaster planning. This research develops a model to provide such direction by relying on cost of quality literature. This paper begins by introducing a static approach inspired by Joseph M. Juran's cost of quality management model (Juran, 1951) to demonstrate the non-linear trade-offs in disaster management expenditure. Next it presents a dynamic model that includes the impact of dynamic interactions of the changing level of risk, the cost of living, and the learning/investments that may alter over time. It illustrates that there is an optimal point that minimises the total cost of disaster management, and that this optimal point moves as governments learn from experience or as states get richer. It is hoped that the propositions contained herein will help policymakers to plan, evaluate, and justify voluntary disaster mitigation expenditures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  10. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  11. On the complexity and approximability of some Euclidean optimal summing problems

    NASA Astrophysics Data System (ADS)

    Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.

    2016-10-01

    The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
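    The pseudopolynomial-time claim for fixed dimension can be illustrated with a small sketch for the maximization variant on the plane: with integer coordinates bounded by M, every achievable subset sum lies on an integer grid of size O((nM)^2), so enumerating reachable sums stays pseudopolynomial. This is a schematic reading of the result, not the authors' construction.

        def max_norm_subset_sum(points):
            """Maximize the Euclidean norm of a subset sum of 2D integer points.

            The set of reachable sums is capped by the integer grid of possible
            sums, which is polynomial in n and the coordinate bound.
            """
            sums = {(0, 0)}                               # sum of the empty subset
            for (x, y) in points:
                sums |= {(sx + x, sy + y) for (sx, sy) in sums}
            return max(sums, key=lambda s: s[0] ** 2 + s[1] ** 2)

        print(max_norm_subset_sum([(2, 1), (-3, 4), (1, -1), (0, 5)]))  # -> (-1, 10)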

  12. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates and for template-free implants (such as robotic templates).Methods: Eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen using different dosimetric criteria and automatically provides the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested on breast interstitial HDR brachytherapy cases.Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable, since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the computation time decreases with the number of points, and no effect on the dosimetric indices was observed when varying the number of sampling points and the number of iterations, these were fixed at 2500 and 100, respectively. The computation time to obtain ten complete treatment plans ranging from 9 to 18 catheters, with the corresponding dosimetric indices, was 90 s. However, 93% of the computation time is used by a research version of IPSA. For the breast, on average, the Radiation Therapy Oncology Group recommendations would be satisfied down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of V100, dose homogeneity index, and D90.Conclusions: The authors have devised a simple, fast and efficient method to optimize the number and position of catheters in interstitial HDR brachytherapy. The method was shown to be robust for both prostate and breast HDR brachytherapy. More importantly, the computation time of the algorithm is acceptable for clinical use. Ultimately, this catheter optimization algorithm could be coupled with a 3D ultrasound system to allow real-time guidance and planning in HDR brachytherapy.
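    A minimal sketch of the core CVT step on a stand-in 2D target (a disc in place of the PTV's maximum external contour), using the reported settings of 2500 sampling points and 100 iterations; the dose optimization (IPSA) stage and the plan-selection loop over catheter counts are omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in target contour: uniform samples over the unit disc.
        theta = rng.uniform(0, 2 * np.pi, 2500)
        r = np.sqrt(rng.uniform(0, 1, 2500))            # sqrt for uniform areal density
        pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

        n_catheters = 12
        seeds = pts[rng.choice(len(pts), n_catheters, replace=False)]

        for _ in range(100):                            # fixed iteration budget
            # assign each sample point to its nearest catheter seed
            d = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Lloyd update: move each seed to the centroid of its region
            for k in range(n_catheters):
                if (labels == k).any():
                    seeds[k] = pts[labels == k].mean(axis=0)

        print(np.round(seeds, 3))                       # uniformly spread catheter positions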

  13. Unravelling relationships: Hospital occupancy levels, discharge timing and emergency department access block.

    PubMed

    Khanna, Sankalp; Boyle, Justin; Good, Norm; Lind, James

    2012-10-01

    To investigate the effect of hospital occupancy levels on inpatient and ED patient flow parameters, and to simulate the impact of shifting discharge timing on occupancy levels. Retrospective analysis of hospital inpatient data and ED data from 23 reporting public hospitals in Queensland, Australia, across 30 months. Relationships between outcome measures were explored through the aggregation of the historic data into 21 912 hourly intervals. Main outcome measures included admission and discharge rates, occupancy levels, length of stay for admitted and emergency patients, and the occurrence of access block. The impact of shifting discharge timing on occupancy levels was quantified using observed and simulated data. The study identified three stages of system performance decline, or choke points, as hospital occupancy increased. These choke points were found to be dependent on hospital size, and reflect a system change from 'business-as-usual' to 'crisis'. Effecting early discharge of patients was also found to significantly (P < 0.001) impact overcrowding levels and improve patient flow. Modern hospital systems have the ability to operate efficiently above an often-prescribed 85% occupancy level, with optimal levels varying across hospitals of different size. Operating over these optimal levels leads to performance deterioration defined around occupancy choke points. Understanding these choke points and designing strategies around alleviating these flow bottlenecks would improve capacity management, reduce access block and improve patient outcomes. Effecting early discharge also helps alleviate overcrowding and related stress on the system. © 2012 CSIRO. EMA © 2012 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
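    One hedged way to read "occupancy choke points" computationally: scan candidate breakpoints and keep the occupancy level at which a two-segment linear fit of a performance measure best captures a slope change. The data and metric below are synthetic stand-ins, not the Queensland dataset.

        import numpy as np

        rng = np.random.default_rng(8)

        occ = np.linspace(0.5, 1.05, 200)                  # occupancy fraction
        los = 3.0 + 8.0 * np.maximum(occ - 0.88, 0.0) + rng.normal(0, 0.2, 200)  # ED LOS, h

        def sse_two_segments(x, y, bp):
            """Fit separate lines left/right of breakpoint bp; return total SSE."""
            sse = 0.0
            for m in (x <= bp, x > bp):
                if m.sum() > 2:
                    coef = np.polyfit(x[m], y[m], 1)
                    sse += ((np.polyval(coef, x[m]) - y[m]) ** 2).sum()
            return sse

        cands = np.linspace(0.6, 1.0, 81)
        choke = cands[np.argmin([sse_two_segments(occ, los, b) for b in cands])]
        print(f"estimated choke point at {choke:.2%} occupancy")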

  14. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei

    One of the most important goals in the operations of many corporations today is to maximize profit, and one important tool to that end is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities through a maintenance plan or policy, we seek the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
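    A minimal stochastic-simulation sketch of the PM-versus-CM trade-off under an assumed age-replacement policy with Weibull failure times; all costs and distribution parameters are invented for illustration, and the genetic-algorithm and fuzzy decision-making layers of the framework are omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        C_PM, C_CM, horizon = 1.0, 8.0, 10_000.0   # assumed costs and simulated hours

        def cost_rate(pm_interval, shape=2.5, scale=400.0, n_runs=200):
            """Average cost per hour of an age-replacement policy via simulation."""
            total = 0.0
            for _ in range(n_runs):
                t, cost = 0.0, 0.0
                while t < horizon:
                    life = scale * rng.weibull(shape)   # time to next failure
                    if life < pm_interval:               # failure first -> corrective
                        t, cost = t + life, cost + C_CM
                    else:                                # survive to PM age -> preventive
                        t, cost = t + pm_interval, cost + C_PM
                total += cost / t
            return total / n_runs

        intervals = np.arange(100, 801, 50)
        rates = [cost_rate(T) for T in intervals]
        print("best PM interval:", intervals[int(np.argmin(rates))], "h")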

  15. Optimization of the Bridgman crystal growth process

    NASA Astrophysics Data System (ADS)

    Margulies, M.; Witomski, P.; Duffar, T.

    2004-05-01

    A numerical optimization method for the vertical Bridgman growth configuration is presented and developed. It optimizes the furnace temperature field and the pulling rate versus time in order to decrease the radial thermal gradients in the sample. Constraints are also included in order to ensure physically realistic results. The model includes the two classical non-linearities associated with crystal growth processes: radiative thermal exchange and the release of latent heat at the solid-liquid interface. The mathematical analysis and development of the problem are briefly described. Examples show that the method works satisfactorily; however, the results depend on the numerical parameters. Improvements to the optimization model, from both the physical and numerical points of view, are suggested.

  16. Integrated solar energy system optimization

    NASA Astrophysics Data System (ADS)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  17. Safe-trajectory optimization and tracking control in ultra-close proximity to a failed satellite

    NASA Astrophysics Data System (ADS)

    Zhang, Jingrui; Chu, Xiaoyu; Zhang, Yao; Hu, Quan; Zhai, Guang; Li, Yanyan

    2018-03-01

    This paper presents a trajectory-optimization method for a chaser spacecraft operating in ultra-close proximity to a failed satellite. Based on the combination of active and passive trajectory protection, the constraints in the optimization framework are formulated for collision avoidance and successful docking in the presence of any thruster failure. The constraints are then handled by an adaptive Gauss pseudospectral method, in which the dynamic residuals are used as the metric to determine the distribution of collocation points. A finite-time feedback control is further employed in tracking the optimized trajectory. In particular, the stability and convergence of the controller are proved. Numerical results are given to demonstrate the effectiveness of the proposed methods.

  18. Corner smoothing of 2D milling toolpath using b-spline curve by optimizing the contour error and the feedrate

    NASA Astrophysics Data System (ADS)

    Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François

    2018-05-01

    In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality of the part. To optimize it, the machining time needs to be as low as possible while the quality meets its requirements. For a 2D milling toolpath defined by sharp corners, the programmed feedrate differs from the reachable feedrate due to kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath significantly reduces machining time, but dimensional accuracy must not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On the one hand, maximizing the feedrate minimizes manufacturing time; on the other hand, the maximum contour error must be kept under a threshold to meet quality requirements. This paper presents a method to optimize sharp corner smoothing using b-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and on the difference between the programmed feedrate and an estimate of the reachable feedrate, the latter derived from geometrical information. Simulation results are presented in the paper and the machining times are compared in each case.

  19. Driving external chemistry optimization via operations management principles.

    PubMed

    Bi, F Christopher; Frost, Heather N; Ling, Xiaolan; Perry, David A; Sakata, Sylvie K; Bailey, Simon; Fobian, Yvette M; Sloan, Leslie; Wood, Anthony

    2014-03-01

    Confronted with the need to significantly raise the productivity of remotely located chemistry CROs, Pfizer embraced a commitment to continuous improvement that leveraged tools from both Lean Six Sigma and queue-management theory to deliver positive, measurable outcomes. During 2012, cycle times were reduced by 48% through optimization of the work in progress and a detailed workflow analysis to identify and address pinch points. Compound flow was increased by 29% by optimizing the request process and de-risking the chemistry. Underpinning both achievements was the development of close working relationships and productive communication between Pfizer and CRO chemists. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale integration of distributed generation can relieve current environmental pressure while, at the same time, increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed generation can effectively improve the system voltage level. To this end, the specific impact of typical distributed generation on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) was proposed that improves the learning factors and the inertia weight, strengthening the local and global search performance of the algorithm when solving distributed generation planning for the distribution network. Results show that the proposed method can substantially reduce system network loss and improve the economic performance of system operation with distributed generation.
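    A generic sketch of the improvement pattern the abstract names, namely a linearly decreasing inertia weight and time-varying learning (acceleration) factors, applied to a toy objective standing in for network loss; the schedules and coefficients are common choices in the IPSO literature, not necessarily the authors' exact ones.

        import numpy as np

        rng = np.random.default_rng(2)
        f = lambda x: ((x - 3.0) ** 2).sum(axis=-1)    # toy objective (network-loss stand-in)

        n, dim, iters = 30, 5, 200
        x = rng.uniform(-10, 10, (n, dim))
        v = np.zeros((n, dim))
        pbest, pbest_f = x.copy(), f(x)
        gbest = pbest[pbest_f.argmin()].copy()

        for t in range(iters):
            w = 0.9 - 0.5 * t / iters                  # inertia weight: 0.9 -> 0.4
            c1 = 2.5 - 1.5 * t / iters                 # cognitive factor decreases
            c2 = 1.0 + 1.5 * t / iters                 # social factor increases
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            fx = f(x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best point:", np.round(gbest, 3))       # converges near the optimum at 3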

  1. Generating Data Flow Programs from Nonprocedural Specifications.

    DTIC Science & Technology

    1983-03-01

    With I-structures, Gajski points out, it is difficult to know ahead of time the optimal memory allocation scheme to partition large arrays. Memory contention problems may occur for frequently accessed elements stored in the same memory module. Gajski observes that these are the same problems which...

  2. Continuous Control Artificial Potential Function Methods and Optimal Control

    DTIC Science & Technology

    2014-03-27

    Abbreviations: CW, Clohessy-Wiltshire; CV, Chase Vehicle; TV, Target Vehicle. The chase vehicle coasts (following the Clohessy-Wiltshire equations, introduced in Section 3.5) until the time rate of change of the potential became nonnegative. At that time, a thrust impulse was applied to eliminate oscillation around the goal point [8, 9].

  3. Imaging diagnostics in ovarian cancer: magnetic resonance imaging and a scoring system guiding choice of primary treatment.

    PubMed

    Kasper, Sigrid M; Dueholm, Margit; Marinovskij, Edvard; Blaakær, Jan

    2017-03-01

    To analyze the ability of magnetic resonance imaging (MRI) and systematic evaluation at surgery to predict optimal cytoreduction in primary advanced ovarian cancer and to develop a preoperative scoring system for cancer staging. Preoperative MRI and standard laparotomy were performed in 99 women with either ovarian or primary peritoneal cancer. Using univariate and multivariate logistic regression analysis of a systematic description of the tumor in nine abdominal compartments obtained by MRI and during surgery plus clinical parameters, a scoring system was designed that predicted non-optimal cytoreduction. Non-optimal cytoreduction at operation was predicted by the following: (A) presence of comorbidities group 3 or 4 (ASA); (B) tumor presence in multiple numbers of different compartments; and (C) numbers of specified sites of organ involvement. The score includes: number of compartments involved (1-9 points); >1 subdiaphragmal location with presence of tumor (1 point); deep organ involvement of liver (1 point), porta hepatis (1 point), spleen (1 point), mesentery/vessel (1 point), cecum/ileocecal (1 point), rectum/vessels (1 point); ASA groups 3 and 4 (2 points). Use of the scoring system based on operative findings gave an area under the curve (AUC) of 91% (85-98%) for patients in whom optimal cytoreduction could not be achieved. The score AUC obtained by MRI was 84% (76-92%), and 43% of non-optimal cytoreduction patients were identified, with only 8% of potentially operable patients being falsely evaluated as suitable for non-optimal cytoreduction at the most optimal cut-off value. Tumor in individual locations did not predict operability. This systematic scoring system based on operative findings and MRI may predict non-optimal cytoreduction. MRI is able to assess ovarian cancer with peritoneal carcinomatosis with satisfactory concordance with laparotomic findings. This scoring system could be useful as a clinical guideline and should be evaluated and developed further in larger studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, that is, a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each consisting of one or more floating point operations, are generated to extract parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.

  5. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20 °C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  6. Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas

    2016-06-01

    Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, the number of trajectories, and pedestrian shape and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.

  7. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
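    A minimal sketch of the tuning loop described above: a small genetic algorithm searching integer program parameters against a black-box objective. The parameter names and the stand-in throughput function are invented; a real run would replace throughput() with a full simulation evaluation.

        import numpy as np

        rng = np.random.default_rng(3)

        def throughput(params):
            """Black-box stand-in for one simulation run (higher is better)."""
            basket, vector, depth = params           # hypothetical tunables
            return -(basket - 16) ** 2 - 0.5 * (vector - 8) ** 2 - (depth - 4) ** 2

        pop = rng.integers(1, 33, size=(20, 3)).astype(float)
        for gen in range(50):
            fitness = np.array([throughput(p) for p in pop])
            order = np.argsort(fitness)[::-1]
            parents = pop[order[:10]]                        # truncation selection
            idx = rng.integers(0, 10, size=(20, 2))
            mask = rng.random((20, 3)) < 0.5                 # uniform crossover
            children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
            children += rng.normal(0, 1.0, children.shape) * (rng.random((20, 3)) < 0.2)
            pop = np.clip(children, 1, 32)                   # mutation + box constraints

        best = pop[np.argmax([throughput(p) for p in pop])]
        print("tuned parameters ~", np.round(best))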

  8. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  10. Optimization of the synthesis process of an iron oxide nanocatalyst supported on activated carbon for the inactivation of Ascaris eggs in water using the heterogeneous Fenton-like reaction.

    PubMed

    Morales-Pérez, Ariadna A; Maravilla, Pablo; Solís-López, Myriam; Schouwenaars, Rafael; Durán-Moreno, Alfonso; Ramírez-Zamora, Rosa-María

    2016-01-01

    An experimental design methodology was used to optimize the synthesis of an iron-supported nanocatalyst as well as the inactivation of Ascaris eggs (Ae) in water using this material. A factor screening design was used to identify the significant experimental factors for the nanocatalyst support (supported %Fe (w/w), calcination temperature and time) and for the inactivation process, the heterogeneous Fenton-like reaction (H2O2 dose, Fe/H2O2 mass ratio, pH and reaction time). The significant factors were then optimized using a face-centered central composite design. The optimal operating conditions for both processes were estimated with a statistical model and implemented experimentally with five replicates. The predicted Ae inactivation rate was close to the laboratory results. Under the optimal operating conditions for nanocatalyst production and the Ae inactivation process, the Ascaris ova showed genomic damage to the point that no cell repair was possible, showing that this advanced oxidation process is highly efficient for inactivating this pathogen.

  11. Optimization of Treatment Geometry to Reduce Normal Brain Dose in Radiosurgery of Multiple Brain Metastases with Single-Isocenter Volumetric Modulated Arc Therapy.

    PubMed

    Wu, Qixue; Snyder, Karen Chin; Liu, Chang; Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Wen, Ning

    2016-09-30

    Treatment of patients with multiple brain metastases using single-isocenter volumetric modulated arc therapy (VMAT) has been shown to decrease treatment time, with the tradeoff of a larger volume of normal brain tissue receiving low doses. We have developed an efficient Projection Summing Optimization Algorithm to optimize the treatment geometry in order to reduce dose to normal brain tissue in radiosurgery of multiple metastases with single-isocenter VMAT. The algorithm: (a) measures the coordinates of the outer boundary points of each lesion to be treated using the Eclipse Scripting Application Programming Interface, (b) determines the rotations of couch, collimator, and gantry using three matrices about the cardinal axes, (c) projects the outer boundary points of each lesion onto the Beam's Eye View projection plane, (d) optimizes couch and collimator angles by selecting the least total unblocked area for each specific treatment arc, and (e) generates a treatment plan with the optimized angles. The results showed significant reductions in the mean dose and low-dose volume to normal brain, while maintaining similar treatment plan quality, for the thirteen patients treated previously. The algorithm is flexible with regard to beam arrangements and can be integrated directly into the treatment planning system for clinical application.
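    A sketch of steps (b)-(d) under stated assumptions: cardinal-axis rotation matrices composed in an assumed couch-gantry-collimator order, boundary points projected onto a BEV plane by dropping the beam axis, and the 2D convex-hull area used as a stand-in for the "total unblocked area" score; only couch and collimator are scanned here, with gantry fixed.

        import numpy as np
        from scipy.spatial import ConvexHull

        def rot(axis, a):
            c, s = np.cos(a), np.sin(a)
            m = {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
                 "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
                 "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}
            return np.array(m[axis])

        def bev_area(lesions, couch, coll, gantry):
            """Sum of projected hull areas of all lesions for one angle triple."""
            R = rot("z", coll) @ rot("y", gantry) @ rot("z", couch)   # assumed order
            area = 0.0
            for pts in lesions:                  # pts: (k, 3) lesion boundary points
                proj = (R @ pts.T).T[:, :2]      # drop beam axis -> BEV plane
                area += ConvexHull(proj).volume  # a 2D hull's "volume" is its area
            return area

        rng = np.random.default_rng(4)
        lesions = [rng.normal(c, 0.5, (40, 3)) for c in ([0, 0, 0], [4, 2, 1])]
        angles = np.linspace(0, np.pi, 18)
        best = min(((bev_area(lesions, tc, cl, 0.0), tc, cl)
                    for tc in angles for cl in angles))
        print("min projected area %.2f at couch %.2f rad, collimator %.2f rad" % best)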

  12. Comparison of optimized algorithms in facility location allocation problems with different distance measures

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Chandrawat, Rajesh Kumar; Garg, B. P.; Joshi, Varun

    2017-07-01

    Opening a new firm or branch with the desired performance is closely related to the facility location problem. For instance, when locating new ambulances and firehouses, a government seeks to minimize the average emergency response time for all residents of a city. Finding the best location is thus a recurring practical challenge; such problems are known as facility location problems, and many algorithms have been developed to handle them. In this paper, we review five algorithms that have been applied to facility location problems. The significance of clustering in facility location problems is also presented. We first compare the Fuzzy c-means clustering (FCM) algorithm with the alternating heuristic (AH) algorithm, and then with Particle Swarm Optimization (PSO) algorithms, using different types of distance functions. The data were clustered with the help of FCM, and the median model and the min-max model were then applied to the clustered data. After finding optimized locations using these algorithms, we compute the distance from each optimized location to the demand points using different distance measures and compare the results. Finally, we design a general example to validate the feasibility of the five algorithms for facility location optimization and to assess their advantages and drawbacks.

  13. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the increase in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables that approach their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be regarded as an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems called "CUTEr" to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.

  14. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    PubMed

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

    The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, namely pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w), was investigated with respect to the melting point of the lipstick formulation. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be described as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax was the most influential factor for this response (melting point), owing to its role in heat endurance. The quadratic polynomial model fit the experimental data sufficiently well.

  15. Lean processes for optimizing OR capacity utilization: prospective analysis before and after implementation of value stream mapping (VSM).

    PubMed

    Schwarz, Patric; Pannes, Klaus Dieter; Nathan, Michel; Reimer, Hans Jorg; Kleespies, Axel; Kuhn, Nicole; Rupp, Anne; Zügel, Nikolaus Peter

    2011-10-01

    The decision to optimize processes in the operating tract was based on two factors: competition among clinics and a desire to optimize the use of available resources. The aim of the project was to improve operating room (OR) capacity utilization by reducing change and throughput time per patient. The study was conducted at the Centre Hospitalier Emil Mayrisch clinic for specialized care (n = 618 beds), Luxembourg (South). A prospective analysis was performed before and after the implementation of optimized processes. Value stream analysis and design (value stream mapping, VSM) were used as tools. VSM depicts patient throughput and the corresponding information flows, and is also used to identify process waste (e.g. time, human resources, materials). For this purpose, change times per patient (extubation of patient 1 until intubation of patient 2) and throughput times (inward transfer until outward transfer) were measured. VSM, change and throughput times for 48 patient flows (VSM-A(1), actual state = initial situation) served as the starting point. An optimized VSM (VSM-O) was developed in interdisciplinary fashion and evaluated. Prospective analyses of 42 patients (VSM-A(2)) without and 75 patients (VSM-O) with an optimized process in place were conducted. The prospective analysis gave a mean (± SEM) change time of 1507 ± 100 s for VSM-A(2) versus 933 ± 66 s for VSM-O (p < 0.001). The mean (± SEM) throughput time was 151 (± 8) min for VSM-A(2) versus 120 (± 10) min for VSM-O (p < 0.05). This corresponds to a 23% decrease in waiting time per patient overall. Efficient OR capacity utilization and the optimized use of human resources allowed an additional 1820 interventions to be carried out per year without any increase in human resources. In addition, perioperative patient monitoring was increased up to 100%.

  16. Model predictive control of attitude maneuver of a geostationary flexible satellite based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    TayyebTaher, M.; Esmaeilzadeh, S. Majid

    2017-07-01

    This article presents an application of a Model Predictive Controller (MPC) to the attitude control of a geostationary flexible satellite. A SIMO model, derived using the Lagrange equations, is used for the geostationary satellite; flexibility is included in the modelling equations. The state-space equations are expressed in order to simplify the controller. There is naturally no specific tuning rule for finding the MPC parameters that best fit the desired controller behaviour. As an intelligent optimization method, a Genetic Algorithm is used to tune the MPC controller parameters for minimum rise time, settling time, and overshoot of the target point of the flexible structure and its mode-shape amplitudes, making large attitude maneuvers possible. The model includes the geosynchronous orbit environment and geostationary satellite parameters. Simulation results for the flexible satellite performing an attitude maneuver show the efficiency of the proposed optimization method in comparison with an LQR optimal controller.

  17. Antioxidant Compound Extraction from Maqui (Aristotelia chilensis [Mol] Stuntz) Berries: Optimization by Response Surface Methodology

    PubMed Central

    Quispe-Fuentes, Issis; Vega-Gálvez, Antonio; Campos-Requena, Víctor H.

    2017-01-01

    The optimum conditions for antioxidant extraction from maqui berry were determined using response surface methodology. A three-level D-optimal design was used to investigate the effects of three independent variables, namely solvent type (methanol, acetone and ethanol), solvent concentration and extraction time, on total antioxidant capacity measured by the oxygen radical absorbance capacity (ORAC) method. The D-optimal design comprised 42 experiments, including 10 central-point replicates. A second-order polynomial model explained more than 89% of the variation, with satisfactory prediction (78%). ORAC values were higher when acetone was used as the solvent at lower concentrations, and the extraction time showed no significant influence on ORAC values over the range studied. The optimal conditions for antioxidant extraction were 29% acetone for 159 min under agitation. From these results, it can be concluded that the predictive model adequately describes the antioxidant extraction process from maqui berry.

  18. Sensorimotor adaptation of point-to-point arm movements after spaceflight: the role of internal representation of gravity force in trajectory planning.

    PubMed

    Gaveau, Jérémie; Paizis, Christos; Berret, Bastien; Pozzo, Thierry; Papaxanthis, Charalambos

    2011-08-01

    After an exposure to weightlessness, the central nervous system operates under new dynamic and sensory contexts. To find optimal solutions for rapid adaptation, cosmonauts have to decide whether parameters from the world or their body have changed and to estimate their properties. Here, we investigated sensorimotor adaptation after a spaceflight of 10 days. Five cosmonauts performed forward point-to-point arm movements in the sagittal plane 40 days before and 24 and 72 h after the spaceflight. We found that, whereas the shape of hand velocity profiles remained unaffected after the spaceflight, hand path curvature significantly increased 1 day after landing and returned to the preflight level on the third day. Control experiments, carried out by 10 subjects under normal gravity conditions, showed that loading the arm with varying loads (from 0.3 to 1.350 kg) did not affect path curvature. Therefore, changes in path curvature after spaceflight cannot be the outcome of a control process based on the subjective feeling that arm inertia was increased. By performing optimal control simulations, we found that arm kinematics after exposure to microgravity corresponded to a planning process that overestimated the gravity level and optimized movements in a hypergravity environment (∼1.4 g). With time and practice, the sensorimotor system was recalibrated to Earth's gravity conditions, and cosmonauts progressively generated accurate estimations of the body state, gravity level, and sensory consequences of the motor commands (72 h). These observations provide novel insights into how the central nervous system evaluates body (inertia) and environmental (gravity) states during sensorimotor adaptation of point-to-point arm movements after an exposure to weightlessness.

  19. Effects of electron cyclotron current drive on the evolution of double tearing mode

    NASA Astrophysics Data System (ADS)

    Sun, Guanglan; Dong, Chunying; Duan, Longfang

    2015-09-01

    The effects of electron cyclotron current drive (ECCD) on the double tearing mode (DTM) in slab geometry are investigated using two-dimensional compressible magnetohydrodynamics equations. It is found that the double tearing mode is suppressed mainly by the emergence of a secondary island, due to the deposition of the driven current at the X-point of the magnetic island on one rational surface, which forms a new non-complete symmetric magnetic topology (defined as a non-complete symmetric structure, NSS). The effects of the driven current with different parameters (magnitude, initial deposition time, deposition duration, and deposition location) on the evolution of the DTM are analyzed in detail. The optimal magnitude or optimal deposition duration of the driven current is the one that makes the duration of the NSS the longest, which depends on the mutual interaction between the ECCD and the background plasma. Moreover, driven current introduced in the early Sweet-Parker phase has the best suppression effect, and an optimal moment also exists, depending on the duration of the NSS. Finally, the effect of varying the driven-current deposition location is studied. It is verified that the favorable location for the driven current is the X-point, which is completely different from the result for the single tearing mode.

  20. Optimization of Culture Conditions for Enrichment of Saccharomyces cerevisiae with Dl-α-Tocopherol by Response Surface Methodology.

    PubMed

    Mohajeri Amiri, Morteza; Fazeli, Mohammad Reza; Amini, Mohsen; Hayati Roodbari, Nasim; Samadi, Nasrin

    2017-01-01

    Designing enriched probiotic supplements may have advantages including protection of the probiotic microorganism from oxidative destruction, improved enzyme activity in the gastrointestinal tract, and probably an increased half-life of the micronutrient. In this study, Saccharomyces cerevisiae enriched with dl-α-tocopherol was produced as an accumulator and transporter of a lipid-soluble vitamin for the first time. Using one-variable-at-a-time screening studies, three independent variables were selected. The level of dl-α-tocopherol entrapment in S. cerevisiae cells was optimized using a Box-Behnken design via Design-Expert software. A modified quadratic polynomial model fit the data appropriately. The convex shape of the three-dimensional plots reveals that the optimal point of the response lies within the range of the parameters. The optimum values of the independent parameters that maximize the response were a dl-α-tocopherol initial concentration of 7625.82 µg/mL, a sucrose concentration of 6.86% w/v, and a shaking speed of 137.70 rpm. Under these conditions, the maximum level of dl-α-tocopherol per dry cell weight of S. cerevisiae was 5.74 µg/g. The closeness of the R-squared and adjusted R-squared values and the acceptable CV% revealed the acceptability and accuracy of the model.

  1. Comparative evaluation of two dose optimization methods for image-guided, highly-conformal, tandem and ovoids cervix brachytherapy planning

    NASA Astrophysics Data System (ADS)

    Ren, Jiyun; Menon, Geetha; Sloboda, Ron

    2013-04-01

    Although the Manchester system is still extensively used to prescribe dose in brachytherapy (BT) for locally advanced cervix cancer, many radiation oncology centers are transitioning to 3D image-guided BT, owing to the excellent anatomy definition offered by modern imaging modalities. As automatic dose optimization is highly desirable for 3D image-based BT, this study comparatively evaluates the performance of two optimization methods used in BT treatment planning—Nelder-Mead simplex (NMS) and simulated annealing (SA)—for a cervix BT computer simulation model incorporating a Manchester-style applicator. Eight model cases were constructed based on anatomical structure data (for high risk-clinical target volume (HR-CTV), bladder, rectum and sigmoid) obtained from measurements on fused MR-CT images for BT patients. D90 and V100 for HR-CTV, D2cc for organs at risk (OARs), dose to point A, conformation index and the sum of dwell times within the tandem and ovoids were calculated for optimized treatment plans designed to treat the HR-CTV in a highly conformal manner. Compared to the NMS algorithm, SA was found to be superior as it could perform optimization starting from a range of initial dwell times, while the performance of NMS was strongly dependent on their initial choice. SA-optimized plans also exhibited lower D2cc to OARs, especially the bladder and sigmoid, and reduced tandem dwell times. For cases with smaller HR-CTV having good separation from adjoining OARs, multiple SA-optimized solutions were found which differed markedly from each other and were associated with different choices for initial dwell times. Finally and importantly, the SA method yielded plans with lower dwell time variability compared with the NMS method.
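    A minimal SA sketch for dwell-time optimization, assuming a toy geometry and a bare inverse-square dose kernel (no anisotropy or attenuation, so not TG-43 dosimetry); it shows the perturb/accept/cool loop that lets SA start from arbitrary initial dwell times, the property highlighted above.

        import numpy as np

        rng = np.random.default_rng(5)

        n_dwell = 10                                  # stand-in dwell positions
        dwells = rng.uniform(-1, 1, (n_dwell, 3))     # invented source geometry, cm
        targets = rng.normal(0, 3, (40, 3))           # invented prescription points
        # Bare inverse-square dose kernel (no TG-43 anisotropy/attenuation).
        K = 1.0 / ((targets[:, None, :] - dwells[None, :, :]) ** 2).sum(axis=2)
        D_presc = 6.0                                 # prescription dose, Gy (arbitrary)

        def cost(t):
            """Mean squared deviation of point doses from the prescription."""
            return ((K @ t - D_presc) ** 2).mean()

        t = rng.uniform(0, 20, n_dwell)               # arbitrary initial dwell times, s
        best_t, best_c, temp = t.copy(), cost(t), 1.0
        for step in range(20000):
            cand = t.copy()
            i = rng.integers(n_dwell)
            cand[i] = max(0.0, cand[i] + rng.normal(0.0, 1.0))  # keep times nonnegative
            dc = cost(cand) - cost(t)
            if dc < 0 or rng.random() < np.exp(-dc / temp):     # Metropolis acceptance
                t = cand
                if cost(t) < best_c:
                    best_t, best_c = t.copy(), cost(t)
            temp *= 0.9997                                      # geometric cooling
        print("final RMS deviation: %.3f Gy" % np.sqrt(best_c))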

  2. Analysis Balance Parameter of Optimal Ramp metering

    NASA Astrophysics Data System (ADS)

    Li, Y.; Duan, N.; Yang, X.

    2018-05-01

    Ramp metering is a motorway control method that limits the access of ramp inflows to the main carriageway in order to avoid the onset of congestion. The optimization model of ramp metering is developed based upon the cell transmission model (CTM). Given the piecewise-linear structure of the CTM, the corresponding motorway traffic optimization problem can be formulated as a linear programming (LP) problem. LP problems can be solved to global optimality by established algorithms such as SIMPLEX or interior-point methods. The commercial software CPLEX is adopted in this study to solve the LP problem within reasonable computational time. The concept is illustrated through a case study of the United Kingdom's M25 motorway. The optimal solution provides useful insights and guidance on how to manage motorway traffic in order to maximize efficiency.
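    A toy LP in the same spirit, heavily simplified from the CTM formulation: a single metered ramp over four time steps, with queue-balance equalities and spare mainline capacity as bounds, minimizing total queueing (delay). scipy.optimize.linprog stands in for CPLEX, and all demands and capacities are invented.

        import numpy as np
        from scipy.optimize import linprog

        T = 4
        demand = np.array([30.0, 45.0, 40.0, 20.0])   # ramp arrivals per step (veh), invented
        room = np.array([25.0, 20.0, 35.0, 40.0])     # spare mainline capacity per step, invented

        # Variables x = [r_0..r_3, q_1..q_4]; r = metered inflow, q = ramp queue, q_0 = 0.
        # Queue balance (CTM-style conservation): q_{t+1} - q_t + r_t = demand_t
        A_eq = np.zeros((T, 2 * T))
        for t in range(T):
            A_eq[t, t] = 1.0                # r_t
            A_eq[t, T + t] = 1.0            # q_{t+1}
            if t > 0:
                A_eq[t, T + t - 1] = -1.0   # -q_t

        c = np.concatenate([np.zeros(T), np.ones(T)])   # minimize total queueing (delay)
        bounds = [(0.0, room[t]) for t in range(T)] + [(0.0, 100.0)] * T

        res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
        print("metered inflows:", np.round(res.x[:T], 1))   # meter at full spare capacity
        print("ramp queues:   ", np.round(res.x[T:], 1))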

  3. Robust optimization for nonlinear time-delay dynamical system of dha regulon with cost sensitivity constraint in batch culture

    NASA Astrophysics Data System (ADS)

    Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong

    2016-09-01

    Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon, with unknown time-delays, in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is the biological robustness to be minimized. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint reflecting operational requirements, and a cost sensitivity constraint ensuring that an acceptable level of system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Because of the highly complex nature of this optimization problem, the computational cost is high; a parallel algorithm based on the filled function method is therefore proposed to solve these nonlinear programming sub-problems. Finally, numerical simulations show that the obtained optimal estimates of the time-delays are highly satisfactory.

  4. Process optimization for osmo-dehydrated carambola (Averrhoa carambola L) slices and its storage studies.

    PubMed

    Roopa, N; Chauhan, O P; Raju, P S; Das Gupta, D K; Singh, R K R; Bawa, A S

    2014-10-01

    An osmotic-dehydration process protocol for carambola (Averrhoa carambola L.), an exotic star-shaped tropical fruit, was developed. The process was optimized using response surface methodology (RSM) following a central composite rotatable design (CCRD). The experimental variables selected for the optimization were soak solution concentration (°Brix), soaking temperature (°C) and soaking time (min), with six experiments at the central point. The effect of the process variables on solid gain and water loss during osmotic dehydration was studied. The data obtained were analyzed using multiple regression to generate suitable mathematical models. Quadratic models fit the data well (R², 95.58-98.64%) in describing the effect of the variables on the responses studied. The optimized levels of the process variables were 70 °Brix, 48 °C and 144 min for soak solution concentration, soaking temperature and soaking time, respectively. The predicted and experimental results at the optimized levels of the variables showed high correlation. The osmo-dehydrated product prepared under optimized conditions showed a shelf life of 10, 8 and 6 months at 5 °C, ambient (30 ± 2 °C) and 37 °C, respectively.

  5. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving its speed is proposed in this paper. First, a Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Second, the LiveWire shortest path is calculated using a direction search over the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool when optimizing shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantage of the Haar wavelet transform, whose image decomposition and reconstruction are fast and consistent with the texture features of the image, with that of the optimal path search based on a control-point-set direction search, which reduces the time complexity of the original algorithm. The algorithm thus speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of these methods play a substantial role in improving the execution efficiency and robustness of the algorithm.
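    A sketch of the first step only: a one-level 2D Haar decomposition (averaging convention) producing the quarter-size approximation image on which the boundary search would run; band names follow the usual LL/LH/HL/HH convention.

        import numpy as np

        def haar2d_level1(img):
            """One-level 2D Haar transform: returns (LL, LH, HL, HH) quarter-size bands."""
            img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
            a, b = img[0::2, :], img[1::2, :]           # row pairs
            lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0   # row-wise average / difference
            ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0  # column-wise average / difference
            lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0
            hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0
            hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0
            return ll, lh, hl, hh

        img = np.arange(64, dtype=float).reshape(8, 8)
        ll, lh, hl, hh = haar2d_level1(img)
        print(ll.shape)   # (4, 4): the low-resolution image for the boundary search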

  6. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    Nowadays, satellite-based Earth observation is a major method of obtaining ground information. With the development of space technology, fields such as the military and the economy have ever greater requirements for space technology, because of satellites' wide coverage, timeliness, and freedom from area and national boundaries. At the same time, because of the wide use of many kinds of satellites, sensors, relay satellites and ground receiving stations, ground control systems face great challenges. How to extract the best value from satellite resources so as to make full use of them is therefore an important problem for ground control systems. Satellite scheduling distributes resources to all tasks without conflict, so as to complete as many tasks as possible and meet user requirements, while respecting the constraints of satellites, sensors and ground receiving stations. Depending on task size, tasks can be divided into point tasks and area tasks; this paper considers only point targets. The paper first describes the satellite scheduling problem and briefly introduces the theory of satellite scheduling. We also analyze the resource and task constraints in satellite scheduling and describe the input and output flows of the scheduling process. On the basis of these analyses, we propose a scheduling model, a multi-variable optimization model for multi-satellite, point-target tasks in swinging mode. In this model, the scheduling problem is transformed into a parametric optimization problem; the parameter to optimize is the swinging angle of every time window. With a view to efficiency and accuracy, several important subproblems of satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of the model are put forward. To solve the model, we introduce the concept of an activity sequence map, which separates the choice of activity from the start time of the activity. Three neighborhood operators are introduced to search the solution space, and the forward and backward movement remaining times are used to analyze the feasibility of solutions generated by the neighborhood operators. Finally, an algorithm based on a genetic algorithm is put forward, with population initialization, crossover, mutation, individual evaluation, collision-decrease, selection and collision-elimination operators designed in the paper. A scheduling result and simulation for a practical example with 5 satellites and 100 point targets in swinging mode is given, and the scheduling performance is analyzed for swinging angles of 0°, 5°, 10°, 15° and 25°. The results show that the model and the algorithm are more effective than those without swinging mode.

  7. Least-mean-square spatial filter for IR sensors.

    PubMed

    Takken, E H; Friedman, D; Milton, A F; Nitzberg, R

    1979-12-15

    A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.

  8. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with clinically relevant study of new dosimetric rules.

  9. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    NASA Astrophysics Data System (ADS)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even of highly uncertain or incomplete datasets. Since predictions from a GP depend on the covariance kernel function selected to explain the data variance, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained-variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and used it to predict the time to clinical onset of subjects carrying a genetic mutation.
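
    The kernel search can be illustrated with off-the-shelf tools: compose base kernels, fit a GP for each candidate, and score it by a BIC computed from the log marginal likelihood. The sketch below uses scikit-learn on synthetic progression-like data; note that the paper's energy function also weighs the explained-variance score, which is omitted here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

        # hypothetical 1-D data: a score rising around the expected onset time
        X = np.linspace(-10, 5, 40).reshape(-1, 1)
        rng = np.random.default_rng(0)
        y = 1.0 / (1.0 + np.exp(-0.6 * X.ravel())) + 0.05 * rng.normal(size=40)

        candidates = {
            "RBF":        1.0 * RBF() + WhiteKernel(),
            "Linear":     1.0 * DotProduct() + WhiteKernel(),
            "RBF+Linear": 1.0 * RBF() + 1.0 * DotProduct() + WhiteKernel(),
        }
        for name, kernel in candidates.items():
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
            lml = gp.log_marginal_likelihood_value_
            k = gp.kernel_.n_dims                   # number of hyperparameters
            bic = -2.0 * lml + k * np.log(len(X))   # lower is better
            print(f"{name:<11} LML={lml:7.2f}  BIC={bic:8.2f}")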

  10. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.

  11. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).

  12. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance when applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design. Therefore, the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
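
    A minimal sketch of the K-means-based landmark partition, assuming one spatially compact chunk per processing core; scikit-learn stands in for the authors' Cell/B.E. implementation, and no load balancing between clusters is attempted here.

        import numpy as np
        from sklearn.cluster import KMeans

        def partition_landmarks(landmarks, n_cores):
            """Group landmarks into n_cores spatially compact chunks so each
            worker's nearest-neighbour queries touch a small, cache-friendly
            subset of the data."""
            labels = KMeans(n_clusters=n_cores, n_init=10,
                            random_state=0).fit_predict(landmarks)
            return [landmarks[labels == c] for c in range(n_cores)]

        rng = np.random.default_rng(0)
        landmarks = rng.uniform(0, 512, size=(1000, 2))  # hypothetical 2-D landmarks
        chunks = partition_landmarks(landmarks, n_cores=4)
        print([len(c) for c in chunks])   # chunk sizes, one per core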

  13. Optimization of bump and blowing to control the flow through a transonic compressor blade cascade

    NASA Astrophysics Data System (ADS)

    Mazaheri, K.; Khatibirad, S.

    2018-03-01

    Shock control bump (SCB) and blowing are two flow control methods, used here to improve the aerodynamic performance of transonic compressors. Both methods are applied to a NASA rotor 67 blade section and are optimized to minimize the total pressure loss. A continuous adjoint algorithm is used for multi-point optimization of a SCB to improve the aerodynamic performance of the rotor blade section, for a range of operational conditions around its design point. A multi-point and two single-point optimizations are performed in the design and off-design conditions. It is shown that the single-point optimized shapes have the best performance for their respective operating conditions, but the multi-point one has an overall better performance over the whole operating range. An analysis is given regarding how similarly both single- and multi-point optimized SCBs change the wave structure between blade sections resulting in a more favorable flow pattern. Interactions of the SCB with the boundary layer and the wave structure, and its effects on the separation regions are also studied. We have also introduced the concept of blowing for control of shock wave and boundary-layer interaction. A geometrical model is introduced, and the geometrical and physical parameters of blowing are optimized at the design point. The performance improvements of blowing are compared with the SCB. The physical interactions of SCB with the boundary layer and the shock wave are analyzed. The effects of SCB on the wave structure in the flow domain outside the boundary-layer region are investigated. It is shown that the effects of the blowing mechanism are very similar to the SCB.

  14. Optimizing the switching time for 400 kV SF6 circuit breakers

    NASA Astrophysics Data System (ADS)

    Ciulica, D.

    2018-01-01

    This paper presents a real-time voltage and current analysis for optimizing the point on the wave at which a 400 kV SF6 circuit breaker switches. Circuit breakers play an important role in power systems, providing protection for equipment in the substations embedded in transmission networks. The SF6 circuit breaker is a very important piece of power system equipment, used at voltages up to 400 kV due to its excellent performance. Controlled switching is used to eliminate the transients and the electrodynamic and dielectric stresses that arise in the network when capacitors, shunt reactors and power transformers are switched manually. These effects reduce the reliability and lifetime of the equipment installed on the network and may lead to erroneous protection operation.

  15. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations.

    PubMed

    Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike

    2017-01-01

    During recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, to develop a new tool for multimodal optimization which makes none of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of the robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.

  16. Minimum airflow reset of single-duct VAV terminal boxes

    NASA Astrophysics Data System (ADS)

    Cho, Young-Hum

    Single-duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve thermal comfort and indoor air quality (IAQ) while lowering overall energy costs. In principle, this minimum rate should be calculated from the minimum ventilation requirement of ASHRAE Standard 62.1 and the maximum heating load of the zone, and several factors must be carefully considered in the calculation. Terminal boxes with conventional control sequences may cause occupant discomfort and energy waste: if the minimum rate of airflow is set too high, the AHUs consume excess fan power and the terminal boxes may cause significant simultaneous room heating and cooling, while a rate that is too low results in poor air circulation and indoor air quality in the air-conditioned space. Many researchers are currently investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort and reducing overall energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed (a simplified form of the calculation is sketched below) and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures, and the vertical difference in room air temperature remained below the comfort limit. Measurements of room CO2 levels indicate that reducing the minimum airflow set point did not adversely affect indoor air quality. According to the measured energy results, the optimal control algorithms yield a lower rate of reheat energy consumption than conventional controls.
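
    A simplified sketch of the reset calculation referenced above: the candidate minimum set point is the larger of the ventilation requirement and the airflow needed to meet the zone heating load at the current discharge air temperature. The function and numbers are illustrative assumptions, not the dissertation's validated control sequence.

        def min_airflow_setpoint(v_vent, q_heat, t_discharge, t_room,
                                 rho=1.2, cp=1005.0):
            """Return a candidate minimum airflow in m^3/s.

            v_vent      -- ventilation minimum per ASHRAE 62.1 (m^3/s)
            q_heat      -- zone design heating load (W)
            t_discharge -- discharge air temperature (C)
            t_room      -- room air temperature (C)
            """
            dt = max(t_discharge - t_room, 1e-6)    # guard against divide-by-zero
            v_heat = q_heat / (rho * cp * dt)       # from Q = rho*cp*V*dT
            return max(v_vent, v_heat)

        # example: 0.05 m^3/s ventilation minimum, 800 W heating load,
        # 32 C discharge air into a 21 C room -> about 0.06 m^3/s
        print(round(min_airflow_setpoint(0.05, 800.0, 32.0, 21.0), 3))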

  17. GCaMP expression in retinal ganglion cells characterized using a low-cost fundus imaging system

    NASA Astrophysics Data System (ADS)

    Chang, Yao-Chuan; Walston, Steven T.; Chow, Robert H.; Weiland, James D.

    2017-10-01

    Objective. Virus-transduced, intracellular-calcium indicators are effective reporters of neural activity, offering the advantage of cell-specific labeling. Because there is an optimal time window for the expression of genetically-encoded calcium indicators (GECIs), a suitable tool for tracking GECI expression in vivo following transduction is highly desirable. Approach. We developed a noninvasive imaging approach based on a custom-modified, low-cost fundus viewing system that allowed us to monitor and characterize in vivo bright-field and fluorescence images of the mouse retina. AAV2-CAG-GCaMP6f was injected into a mouse eye, and the fundus imaging system was used to measure fluorescence at several time points post injection. At defined time points, we prepared wholemount retina mounted on a transparent multielectrode array and used calcium imaging to evaluate the responsiveness of retinal ganglion cells (RGCs) to external electrical stimulation. Main results. The noninvasive fundus imaging system clearly resolves individual RGCs and axons. RGC fluorescence intensity and the number of observable fluorescent cells show a similar rising trend from week 1 to week 3 after viral injection, indicating a consistent increase of GCaMP6f expression. Analysis of the in vivo fluorescence intensity trend and the in vitro neurophysiological responsiveness shows that the slope of intensity versus days post injection can be used to estimate the optimal time for calcium imaging of RGC responses to external electrical stimulation. Significance. The proposed fundus imaging system enables high-resolution digital fundus imaging in the mouse eye based on off-the-shelf components. The long-term tracking experiment with in vitro calcium imaging validation demonstrates that the system can serve as a powerful tool for monitoring the level of GECI expression and for determining the optimal time window for subsequent experiments.

  18. Optimal gains for a single polar orbiting satellite

    NASA Technical Reports Server (NTRS)

    Banfield, Don; Ingersoll, A. P.; Keppenne, C. L.

    1993-01-01

    Gains are the spatial weighting of an observation in its neighborhood versus the local values of a model prediction. They are the key to data assimilation, as they are the direct measure of how the data are used to guide the model. As derived in the broad context of data assimilation by Kalman and in the context of meteorology, for example, by Rutherford, the optimal gains are functions of the prediction error covariances between the observation and analysis points. Kalman introduced a very powerful technique that allows one to calculate these optimal gains at the time of each observation. Unfortunately, this technique is both computationally expensive and often numerically unstable for dynamical systems of the magnitude of meteorological models, and thus is unsuited for use in PMIRR data assimilation. However, the optimal gains as calculated by a Kalman filter do reach a steady state for regular observing patterns like that of a satellite. In this steady state, the gains are constants in time, and thus could conceivably be computed off-line. These steady-state Kalman gains (i.e., Wiener gains) would yield optimal performance without the computational burden of true Kalman filtering. We proposed to use this type of constant-in-time Wiener gain for the assimilation of data from PMIRR and Mars Observer.

  19. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  20. Error field optimization in DIII-D using extremum seeking control

    NASA Astrophysics Data System (ADS)

    Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.

    2016-07-01

    DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly-rotating n  =  1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices, as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
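
    The control loop can be illustrated in a few lines: inject a slow sinusoidal dither, high-pass the measured response, demodulate it against the dither to estimate the local gradient, and integrate. The scalar quadratic plant and all gains below are illustrative; the experiment's n = 1 coil currents and charge-exchange rotation measurements are replaced by a single current and a single output.

        import numpy as np

        def rotation(current):
            # toy response: rotation peaks at a coil current of 2.0 (illustrative)
            return 50.0 - 3.0 * (current - 2.0) ** 2

        dt, omega, a = 0.01, 5.0, 0.05   # time step, dither frequency, amplitude
        k, tau = 0.8, 1.0                # integrator gain, washout time constant
        i_hat, t = 0.0, 0.0
        y_avg = rotation(i_hat)
        for _ in range(200_000):
            i = i_hat + a * np.sin(omega * t)     # slowly rotating dither
            y = rotation(i)
            y_avg += (y - y_avg) * dt / tau       # washout (high-pass) filter
            # demodulate against the dither and integrate toward the maximum
            i_hat += k * (y - y_avg) * np.sin(omega * t) * dt
            t += dt
        print(round(i_hat, 2))                    # settles near the optimum, 2.0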

  1. Abort Options for Human Missions to Earth-Moon Halo Orbits

    NASA Technical Reports Server (NTRS)

    Jesick, Mark C.

    2013-01-01

    Abort trajectories are optimized for human halo orbit missions about the translunar libration point (L2), with an emphasis on the use of free return trajectories. Optimal transfers from outbound free returns to L2 halo orbits are numerically optimized in the four-body ephemeris model. Circumlunar free returns are used for direct transfers, and cislunar free returns are used in combination with lunar gravity assists to reduce propulsive requirements. Trends in orbit insertion cost and flight time are documented across the southern L2 halo family as a function of halo orbit position and free return flight time. It is determined that the maximum amplitude southern halo incurs the lowest orbit insertion cost for direct transfers but the maximum cost for lunar gravity assist transfers. The minimum amplitude halo is the most expensive destination for direct transfers but the least expensive for lunar gravity assist transfers. The on-orbit abort costs for three halos are computed as a function of abort time and return time. Finally, an architecture analysis is performed to determine launch and on-orbit vehicle requirements for halo orbit missions.

  2. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements, a heavy workload and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method is proposed in this paper, combining a normal-vector distribution feature with a coarse-feature-based iterative closest point (ICP) algorithm. The method combines the fast point feature histogram (FPFH) algorithm with a model of the adjacency regions of the point cloud and of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
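
    The refinement stage can be sketched with a plain point-to-point ICP (nearest-neighbour correspondences plus the SVD/Kabsch update); the FPFH-based coarse step is assumed to have been applied already, and the synthetic clouds are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_refine(source, target, iters=50):
            """Refine a coarse alignment; returns (R, t) with R @ p + t
            mapping source points onto the target."""
            src = source.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            tree = cKDTree(target)
            for _ in range(iters):
                _, idx = tree.query(src)            # nearest-neighbour matches
                matched = target[idx]
                mu_s, mu_t = src.mean(0), matched.mean(0)
                H = (src - mu_s).T @ (matched - mu_t)
                U, _, Vt = np.linalg.svd(H)
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                R = Vt.T @ D @ U.T                  # Kabsch rotation
                t = mu_t - R @ mu_s
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total

        rng = np.random.default_rng(0)
        target = rng.normal(size=(500, 3))
        ang = np.deg2rad(5)
        R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                           [np.sin(ang),  np.cos(ang), 0],
                           [0, 0, 1]])
        source = target @ R_true.T + np.array([0.1, -0.2, 0.05])
        R, t = icp_refine(source, target)
        cos_err = np.clip((np.trace(R @ R_true) - 1) / 2, -1.0, 1.0)
        print(round(np.degrees(np.arccos(cos_err)), 3), "deg residual rotation")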

  3. Pump-shaped dump optimal control reveals the nuclear reaction pathway of isomerization of a photoexcited cyanine dye.

    PubMed

    Dietzek, Benjamin; Brüggemann, Ben; Pascher, Torbjörn; Yartsev, Arkady

    2007-10-31

    Using optimal control as a spectroscopic tool we decipher the details of the molecular dynamics of the essential multidimensional excited-state photoisomerization - a fundamental chemical reaction of key importance in biology. Two distinct nuclear motions are identified in addition to the overall bond-twisting motion: Initially, the reaction is dominated by motion perpendicular to the torsion coordinate. At later times, a second optically active vibration drives the system along the reaction path to the bottom of the excited-state potential. The time scales of the wavepacket motion on a different part of the excited-state potential are detailed by pump-shaped dump optimal control. This technique offers new means to control a chemical reaction far from the Franck-Condon point of absorption and to map details of excited-state reaction pathways revealing unique insights into the underlying reaction mechanism.

  4. Zero-sum two-player game theoretic formulation of affine nonlinear discrete-time systems using neural networks.

    PubMed

    Mehraeen, Shahab; Dierks, Travis; Jagannathan, S; Crow, Mariesa L

    2013-12-01

    In this paper, the nearly optimal solution for discrete-time (DT) affine nonlinear control systems in the presence of partially unknown internal system dynamics and disturbances is considered. The approach is based on successive approximate solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in optimal control. A successive-approximation approach for updating the control and disturbance inputs of DT nonlinear affine systems is proposed. Moreover, sufficient conditions for the convergence of the approximate HJI solution to the saddle point are derived, and an iterative approach to approximate the HJI equation using a neural network (NN) is presented. Then, the requirement of full knowledge of the internal dynamics of the nonlinear DT system is relaxed by using a second NN online approximator. The result is a closed-loop optimal NN controller obtained via offline learning. A numerical example is provided illustrating the effectiveness of the approach.

  5. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning

    PubMed Central

    Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron

    2014-01-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474

  6. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.

    PubMed

    Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron

    2014-05-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.

  7. Efficacy and Safety of Amphetamine Extended-Release Oral Suspension in Children with Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Childress, Ann C; Wigal, Sharon B; Brams, Matthew N; Turnbow, John M; Pincus, Yulia; Belden, Heidi W; Berry, Sally A

    2018-06-01

    To determine the efficacy and safety of amphetamine extended-release oral suspension (AMPH EROS) in the treatment of attention-deficit/hyperactivity disorder (ADHD) in a dose-optimized, randomized, double-blind, parallel-group study. Boys and girls aged 6 to 12 years diagnosed with ADHD were enrolled. During a 5-week, open-label, dose-optimization phase, patients began treatment with 2.5 or 5 mg/day of AMPH EROS; doses were titrated until an optimal dose (maximum 20 mg/day) was reached. During the double-blind phase, patients were randomized to receive treatment with either their optimized dose (10-20 mg/day) of AMPH EROS or placebo for 1 week. Efficacy was assessed in a laboratory classroom setting on the final day of double-blind treatment using the Swanson, Kotkin, Agler, M-Flynn, and Pelham (SKAMP) Rating Scale and the Permanent Product Measure of Performance (PERMP) test. Safety was assessed by measuring adverse events (AEs) and vital signs. The study was completed by 99 patients. The primary efficacy endpoint (change from predose SKAMP-Combined score at 4 hours postdose) and secondary endpoints (change from predose SKAMP-Combined scores at 1, 2, 6, 8, 10, 12, and 13 hours postdose) were statistically significantly improved with AMPH EROS treatment versus placebo at all time points. Onset of treatment effect was present by 1 hour postdosing, the first time point measured, and efficacy lasted through 13 hours postdosing. PERMP data mirrored the SKAMP-Combined score data. AEs (>5%) reported during dose optimization were decreased appetite, insomnia, affect lability, upper abdominal pain, mood swings, and headache. AMPH EROS was effective in reducing symptoms of ADHD and had a rapid onset and extended duration of effect. Reported AEs were consistent with those of other extended-release amphetamine products.

  8. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied the primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit and achieved great success. Extending Prussing's work, this paper employs the primer vector theory to study trajectory optimization problems of elliptical-orbit rendezvous with arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (referred to as the T-H equations), the primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed: the simulation results confirm the analysis that the rendezvous error is small for small eccentricities and larger for higher eccentricities. For better rendezvous accuracy in highly eccentric orbits, a multiplier penalty function combined with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, with initial values supplied by the primer vector theory, the local optimization algorithm improves the rendezvous accuracy effectively and converges quickly, because the optimal results obtained by the primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.

  9. Optimization of Nanowire-Resistance Load Logic Inverter.

    PubMed

    Hashim, Yasir; Sidek, Othman

    2015-09-01

    This study is the first to demonstrate characteristics optimization of a nanowire resistance-load inverter. The noise margins and the inflection voltage of the transfer characteristics are used as the limiting factors in this optimization. The results indicate that the optimization depends on the resistance value: increasing the load resistance increases the noise margins up to a saturation point, beyond which further increases do not improve them significantly.
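
    The trade-off can be reproduced with a textbook square-law transistor standing in for the nanowire device: sweep the input, solve the resistive load line for the output, and read the noise margins off the unity-gain points of the transfer curve. All device values below are illustrative assumptions.

        import numpy as np
        from scipy.optimize import brentq

        VDD, VT, K, R = 3.0, 0.7, 2e-4, 100e3   # supply, threshold, gain, load

        def i_d(vgs, vds):
            """Square-law drain current (stand-in for the nanowire device)."""
            if vgs <= VT:
                return 0.0
            vov = vgs - VT
            return K * (vov * vds - vds ** 2 / 2) if vds < vov else 0.5 * K * vov ** 2

        def vout(vin):
            if vin <= VT:
                return VDD                  # transistor off, no drop across R
            return brentq(lambda vo: (VDD - vo) / R - i_d(vin, vo), 0.0, VDD)

        vin = np.linspace(0.0, VDD, 2001)
        vo = np.array([vout(v) for v in vin])
        gain = np.gradient(vo, vin)
        unity = np.where(np.diff(np.sign(gain + 1.0)))[0]   # slope = -1 points
        vil, vih = vin[unity[0]], vin[unity[-1]]
        voh, vol = vo[unity[0]], vo[unity[-1]]
        print("NML =", round(vil - vol, 2), "V,  NMH =", round(voh - vih, 2), "V")

    Rerunning the sweep for several values of R traces out how the margins depend on the load resistance, which is the relationship the abstract optimizes.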

  10. Mathematical model of highways network optimization

    NASA Astrophysics Data System (ADS)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the design of highway networks. Studies show that the main requirement road transport places on the road network is to realize all the transport links it serves at the least possible cost, and the goal of optimizing a network of highways is to increase the efficiency of transport. A large number of factors must be taken into account, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose constructing an optimal variant for locating the road network on the basis of a mathematical model. The article defines the optimality criteria and objective functions that reflect the requirements on the road network. The condition that most fully satisfies optimality is the minimization of road and transport costs, and we adopt this indicator as the optimality criterion in the economic-mathematical model of a highway network. Studies have shown that, in the optimal road network, each point is connected to all of its corresponding points along the directions that provide the least financial cost necessary to move passengers and cargo between them. The article presents general principles for constructing an optimal network of roads.

  11. Genetic algorithms applied to the scheduling of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.

    1989-01-01

    A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used and appropriate genetic operators have been created. Several different crossover styles (random point selection, evolving points, and smart point selection) are tested and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system and the NN solutions tend to be better.

  12. Optimization models for flight test scheduling

    NASA Astrophysics Data System (ADS)

    Holian, Derreck

    As threats around the world increase, with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military and government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous testing and deployment is due to the contracted procurement process, which is intended to provide a rapid Initial Operating Capability (IOC) release of the 5th-generation fighter. For this reason, many factors go into determining what is to be tested, in what order, and at which time, because a given system or envelope of the aircraft must be assessed before that capability is released into service. The objective of this praxis is to aid in determining what testing can be achieved on an aircraft at a point in time. Furthermore, it defines the optimum allocation of test points to aircraft and determines a prioritization of the restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information are not presented; generic, representative data are used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated with restriction removal is based on heuristic approaches, to suit the realities of flight test in both solution space and computational time. Exact methods for yielding an optimized solution are discussed; however, they are not directly applicable to the flight test problem and therefore have not been included in the system.

  13. Photogrammetry and ballistic analysis of a high-flying projectile in the STS-124 space shuttle launch

    NASA Astrophysics Data System (ADS)

    Metzger, Philip T.; Lane, John E.; Carilli, Robert A.; Long, Jason M.; Shawn, Kathy L.

    2010-07-01

    A method combining photogrammetry with ballistic analysis is demonstrated to identify flying debris in a rocket launch environment. Debris traveling near the STS-124 Space Shuttle was captured on cameras viewing the launch pad within the first few seconds after launch. One particular piece of debris caught the attention of investigators studying the release of flame trench fire bricks because its high trajectory could indicate a flight risk to the Space Shuttle. Digitized images from two pad perimeter high-speed 16-mm film cameras were processed using photogrammetry software based on a multi-parameter optimization technique. Reference points in the image were found from 3D CAD models of the launch pad and from surveyed points on the pad. The three-dimensional reference points were matched to the equivalent two-dimensional camera projections by optimizing the camera model parameters using a gradient search optimization technique. Using this method of solving the triangulation problem, the xyz position of the object's path relative to the reference point coordinate system was found for every set of synchronized images. This trajectory was then compared to a predicted trajectory while performing regression analysis on the ballistic coefficient and other parameters. This identified, with a high degree of confidence, the object's material density and thus its probable origin within the launch pad environment. Future extensions of this methodology may make it possible to diagnose the underlying causes of debris-releasing events in near-real time, thus improving flight safety.

  14. Spirituality and optimism: a holistic approach to component-based, self-management treatment for HIV.

    PubMed

    Brown, Jordan; Hanson, Jan E; Schmotzer, Brian; Webel, Allison R

    2014-10-01

    For people living with HIV (PLWH), spirituality and optimism have a positive influence on their health, can slow HIV disease progression, and can improve quality of life. Our aim was to describe longitudinal changes in spirituality and optimism after participation in the SystemCHANGE™-HIV intervention. Upon completion of the intervention, participants experienced an 11.5 point increase in overall spiritual well-being (p = 0.036), a 6.3 point increase in religious well-being (p = 0.030), a 4.8 point increase in existential well-being (p = 0.125), and a 0.8 point increase in total optimism (p = 0.268) relative to controls. Our data suggest a group-based self-management intervention increases spiritual well-being in PLWH.

  15. Reliability optimization design of the gear modification coefficient based on the meshing stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Qianqian; Wang, Hui

    2018-04-01

    Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness so as to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering random parameters, the reliability optimization design of the gear modification is researched. The dimension reduction and point estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. Comparison of the dynamic amplitude results before and after optimization indicates that the research is useful for reducing vibration and noise and improving reliability.

  16. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  17. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes, in which the sum of the weights of all the edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool that uses the most commonly known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected and weighted road network (a sketch of the algorithm follows below). The algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. Selecting an appropriate algorithm is essential, as otherwise it is very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points on the basis of a cost factor (time or distance). This paper solves the MST problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems and location-allocation problems. In this study, we have therefore developed a customized GIS tool, written as a Python script in ArcGIS software, for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and gives access to information that is varied and adapted to their needs. This GIS tool for MST can be applied to a nationwide plan in India called Prime Minister Gram Sadak Yojana to provide optimal all-weather road connectivity to unconnected villages (points). The tool is also useful for constructing highways or railways spanning several cities optimally, or for connecting all cities with minimum total road length.
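
    As referenced above, here is a minimal sketch of Prim's algorithm on an adjacency-list road network (a binary heap replaces the weight-matrix scan described in the abstract; junction names and distances are hypothetical).

        import heapq

        def prim_mst(adj):
            """Prim's algorithm; adj maps node -> list of (neighbour, weight)."""
            start = next(iter(adj))
            visited = {start}
            edges = [(w, start, v) for v, w in adj[start]]
            heapq.heapify(edges)
            mst, total = [], 0
            while edges and len(visited) < len(adj):
                w, u, v = heapq.heappop(edges)
                if v in visited:
                    continue                     # would close a cycle, skip
                visited.add(v)
                mst.append((u, v, w))
                total += w
                for nxt, w2 in adj[v]:
                    if nxt not in visited:
                        heapq.heappush(edges, (w2, v, nxt))
            return mst, total

        # hypothetical road junctions with distances (km) as edge weights
        roads = {
            "A": [("B", 4), ("C", 3)],
            "B": [("A", 4), ("C", 2), ("D", 5)],
            "C": [("A", 3), ("B", 2), ("D", 7)],
            "D": [("B", 5), ("C", 7)],
        }
        print(prim_mst(roads))   # ([('A','C',3), ('C','B',2), ('B','D',5)], 10)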

  18. Focusing light through random photonic layers by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

    The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years due to the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of a single element to the focal point is small, especially when the number of elements is large, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, and the optimization may converge to a local maximum. Whole-element optimization approaches employ all elements in the optimization, which improves the signal-to-noise ratio; however, because more randomness is introduced into the process, these optimizations take longer to converge than the single-element-based approaches. Building on the advantages of both single-element-based and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
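
    For orientation, the element-based baseline named above (the stepwise sequential algorithm) can be simulated in a few lines with a random transmission vector: each SLM element is stepped through a set of test phases and fixed at the value that maximizes the focal intensity. FEDA's four-element grouping, which raises the per-measurement signal-to-noise ratio, is not reproduced in this sketch.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 256                                           # SLM elements
        t = rng.normal(size=N) + 1j * rng.normal(size=N)  # unknown medium column
        phases = np.zeros(N)

        def focus_intensity(ph):
            return abs(np.sum(t * np.exp(1j * ph))) ** 2

        test = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
        for n in range(N):                   # stepwise sequential optimization
            trial = phases.copy()
            best_p, best_i = 0.0, -1.0
            for p in test:
                trial[n] = p
                i_p = focus_intensity(trial)
                if i_p > best_i:
                    best_p, best_i = p, i_p
            phases[n] = best_p

        ideal = np.sum(np.abs(t)) ** 2       # perfect phase conjugation
        print(round(focus_intensity(phases) / ideal, 2))   # approaches 1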

  19. Optimization of formulation of soy-cakes baked in infrared-microwave combination oven by response surface methodology.

    PubMed

    Şakıyan, Özge

    2015-05-01

    The aim of the present work is to optimize the formulation of a functional cake (soy cake) to be baked in an infrared-microwave combination oven; response surface methodology was utilized for this optimization, with the aim of also optimizing the processing conditions of combination baking. The independent variables were the baking time (8, 9, 10 min), the soy flour concentration (30, 40, 50 %) and the DATEM (diacetyltartaric acid esters of monoglycerides) concentration (0.4, 0.6 and 0.8 %). The quality parameters examined in the study were the specific volume, weight loss, total color change and firmness of the cake samples. The results were analyzed by multiple regression, and the significant linear, quadratic, and interaction terms were used in a second-order mathematical model. The optimum baking time, soy flour concentration and DATEM concentration were found to be 9.5 min, 30 % and 0.72 %, respectively. The corresponding responses at the optimum point were comparable with those of conventionally baked soy cakes, so it can be concluded that it is possible to produce high-quality soy cakes in a very short time using an infrared-microwave combination oven.
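
    The RSM workflow can be sketched as fitting a full second-order model by least squares and optimizing it within the factor bounds. The design data below are synthetic, constructed so that the fitted optimum echoes the reported one (9.5 min, 30 %, 0.72 %); they are not the study's measurements.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        # hypothetical runs: baking time (min), soy flour (%), DATEM (%)
        Xd = rng.uniform([8, 30, 0.4], [10, 50, 0.8], size=(60, 3))

        def true_response(x):   # synthetic "specific volume" surface
            return (5.0 - (x[..., 0] - 9.5) ** 2
                    - 0.01 * (x[..., 1] - 30) ** 2
                    - 20 * (x[..., 2] - 0.72) ** 2)

        y = true_response(Xd) + rng.normal(0.0, 0.05, 60)

        def quad_features(x):   # full second-order model terms
            x1, x2, x3 = x[..., 0], x[..., 1], x[..., 2]
            return np.stack([np.ones_like(x1), x1, x2, x3,
                             x1 * x2, x1 * x3, x2 * x3,
                             x1 ** 2, x2 ** 2, x3 ** 2], axis=-1)

        beta, *_ = np.linalg.lstsq(quad_features(Xd), y, rcond=None)
        res = minimize(lambda x: -(quad_features(x) @ beta), x0=[9.0, 40.0, 0.6],
                       bounds=[(8, 10), (30, 50), (0.4, 0.8)])
        print(np.round(res.x, 2))   # near (9.5, 30.0, 0.72)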

  20. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

    This paper presents an approach for co-state initialization, a critical step in solving minimum-time low-thrust trajectory optimization problems with indirect optimal control methods. Indirect methods for determining optimal space trajectories typically result in two-point boundary-value problems, which are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectories generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach on two interplanetary rendezvous missions, from Earth to Mars and from Earth to the asteroid Dionysus, is compared against three other approaches, which respectively exploit random initialization of the co-states, the adjoint-control transformation, and a standard genetic algorithm. The results indicate that, with the proposed approach, the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.

  1. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging; therefore, using the previously optimized weight vector of one point as the initial weight vector for the neighboring point improves the convergence speed and decreases the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2).
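
    A minimal sketch of the idea, assuming a Frost-style projected-gradient iteration for the constrained problem min w^H R w subject to a^H w = 1; the w0 argument is where a neighbouring imaging point's solution would be passed to warm-start the iteration. The data, step size and diagonal loading are illustrative, not the paper's exact scheme.

        import numpy as np

        def mvb_weights(R, a, w0=None, mu=0.01, iters=2000):
            """Iterative minimum-variance weights via projected gradient
            descent; pass w0 from a neighbouring point to warm-start."""
            a = a.astype(complex)
            f = a / (a.conj() @ a)                  # minimum-norm feasible point
            P = np.eye(len(a)) - np.outer(a, a.conj()) / (a.conj() @ a)
            w = f if w0 is None else w0
            for _ in range(iters):
                w = P @ (w - mu * R @ w) + f        # gradient step + projection
            return w

        rng = np.random.default_rng(0)
        L = 16
        A = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
        R = A @ A.conj().T / L + np.eye(L)          # loaded sample covariance
        a = np.ones(L, dtype=complex)               # steering vector after delays

        w_direct = np.linalg.solve(R, a)
        w_direct /= a.conj() @ w_direct             # closed-form MVB solution
        print(np.allclose(mvb_weights(R, a), w_direct, atol=1e-6))        # True
        # a warm start close to the solution needs far fewer iterations
        print(np.allclose(mvb_weights(R, a, w0=w_direct * 1.01, iters=1000),
                          w_direct, atol=1e-6))                           # True

    Each iteration costs one matrix-vector product, O(L^2), which is the source of the complexity reduction; a good warm start keeps the number of iterations small.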

  2. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  3. LINE-1 methylation in plasma DNA as a biomarker of activity of DNA methylation inhibitors in patients with solid tumors.

    PubMed

    Aparicio, Ana; North, Brittany; Barske, Lindsey; Wang, Xuemei; Bollati, Valentina; Weisenberger, Daniel; Yoo, Christine; Tannir, Nizar; Horne, Erin; Groshen, Susan; Jones, Peter; Yang, Allen; Issa, Jean-Pierre

    2009-04-01

    Multiple clinical trials are investigating the use of the DNA methylation inhibitors azacitidine and decitabine for the treatment of solid tumors. Clinical trials in hematological malignancies have shown that optimal activity does not occur at their maximum tolerated doses, but selection of an optimal biological dose and schedule for use in solid tumor patients is hampered by the difficulty of obtaining tumor tissue to measure their activity. Here we investigate the feasibility of using plasma DNA to measure the demethylating activity of the DNA methylation inhibitors in patients with solid tumors. We compared four methods of measuring LINE-1 and MAGE-A1 promoter methylation in T24 and HCT116 cancer cells treated with decitabine, and selected pyrosequencing for its greater reproducibility and higher signal-to-noise ratio. We then obtained DNA from the plasma, peripheral blood mononuclear cells, buccal mucosa cells and saliva of ten patients with metastatic solid tumors at two different time points, without any intervening treatment. DNA methylation measurements were not significantly different between time point 1 and time point 2 in patient samples. We conclude that measurement of LINE-1 methylation in DNA extracted from the plasma of patients with advanced solid tumors, using pyrosequencing, is feasible and has low within-patient variability. Ongoing studies will determine whether changes in LINE-1 methylation in plasma DNA occur as a result of treatment with DNA methylation inhibitors and parallel changes in tumor tissue DNA.

  4. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint allowing performance within a specified tolerance of the optimal objective function value, and identified a few maximally different alternatives from that region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems: start at a feasible hit point within the near-optimal region, run a random distance in a random direction to a new hit point, and repeat until the desired number of alternatives has been generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine the hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction, and we then use slice sampling to identify a new hit point along the line within the bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to the optimal solutions. We also discuss extensions to handle non-linear equality constraints.
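
    A minimal sketch of the extended hit-and-run step, assuming a black-box feasibility test for the near-optimal region: pick a random direction, then shrink a bracket along that line until a feasible step is drawn (the slice-sampling step described above). The ring-shaped region below is a stand-in for a non-linear, non-convex near-optimal set; the null-space handling of equality constraints is not reproduced.

        import numpy as np

        def hit_and_run(x0, feasible, n_samples, t_max=1.0, seed=0):
            rng = np.random.default_rng(seed)
            x, out = np.asarray(x0, float), []
            for _ in range(n_samples):
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)              # random direction
                lo, hi = -t_max, t_max
                while True:                         # shrinking slice along the line
                    t = rng.uniform(lo, hi)
                    if feasible(x + t * d):
                        x = x + t * d
                        break
                    if t < 0:
                        lo = t                      # shrink toward the current point,
                    else:                           # which is always feasible
                        hi = t
                out.append(x.copy())
            return np.array(out)

        # toy near-optimal region: f(x) = ||x||^2 within a tolerance of 1
        feasible = lambda x: 0.8 <= x @ x <= 1.2
        samples = hit_and_run(np.array([1.0, 0.0]), feasible, 2000)
        print(samples.min(0).round(2), samples.max(0).round(2))  # spans the annulus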

  5. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to decouple the equations of motion by time scale into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point boundary-value problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three locally optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  6. Path planning during combustion mode switch

    DOEpatents

    Jiang, Li; Ravi, Nikhil

    2015-12-29

    Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the target operating point.
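
    The claimed sequence can be illustrated with a small runnable sketch. The one-dimensional operating point, the mode names, the transition table, and the proportional actuator step are all illustrative assumptions, not the patent's implementation:

    ```python
    def approach(point, target, step=0.1, tol=0.01):
        """Adjust the operating point toward a target, one actuator step at a time."""
        while abs(point - target) > tol:
            point += step * (target - point)
        return point

    # Predefined optimized transition operating points, keyed by (from_mode, to_mode)
    transition_table = {("HCCI", "SI"): 0.6}

    mode, point, target_point = "HCCI", 0.3, 0.9

    point = approach(point, transition_table[(mode, "SI")])  # approach transition point
    mode = "SI"                                              # switch modes there
    point = approach(point, target_point)                    # approach target point
    print(mode, round(point, 3))
    ```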

  7. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It provides methods for generating fuel-optimal two-impulse trajectories which may originate at the earth or a point in space and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of a 3-impulse trajectory optimization problem.

  8. Longitudinal Correlates of Maternal Depression among Mothers of Children with or without Intellectual Disability

    ERIC Educational Resources Information Center

    Zeedyk, Sasha M.; Blacher, Jan

    2017-01-01

    This study identified trajectories of depressive symptoms among mothers of children with or without intellectual disability longitudinally across eight time points. Results of fitting a linear growth model to the data from child ages 3-9 indicated that child behavior problems, negative financial impact, and low dispositional optimism all…

  9. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, and (3) changes in the rates of C input. When applying approaches with robust assumptions to time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.

  10. Fabrication and Optimization of Bilayered Nanoporous Anodic Alumina Structures as Multi-Point Interferometric Sensing Platform

    PubMed Central

    Nemati, Mahdieh; Santos, Abel

    2018-01-01

    Herein, we present an innovative strategy for optimizing hierarchical structures of nanoporous anodic alumina (NAA) to advance their optical sensing performance toward multi-analyte biosensing. This approach is based on the fabrication of multilayered NAA and the formation of a differential effective medium in its structure by controlling three fabrication parameters (i.e., anodization steps, anodization time, and pore widening time). The rationale of the proposed concept is that interferometric bilayered NAA (BL-NAA), which features two layers of different pore diameters, can provide distinct reflectometric interference spectroscopy (RIfS) signatures for each layer within the NAA structure and can therefore potentially be used for multi-point biosensing. This paper presents the structural fabrication of layered NAA structures, and the optimization and evaluation of their RIfS optical sensing performance through changes in the effective optical thickness (EOT) using quercetin as a model molecule. The bilayered or funnel-like NAA structures were designed with the aim of characterizing the sensitivity of both layers to quercetin molecules using RIfS and exploring the potential of these photonic structures, featuring different pore diameters, for simultaneous size-exclusion and multi-analyte optical biosensing. The sensing performance of the prepared NAA platforms was examined by real-time screening of binding reactions between human serum albumin (HSA)-modified NAA (i.e., sensing element) and quercetin (i.e., analyte). BL-NAAs display a complex optical interference spectrum, which can be resolved by fast Fourier transform (FFT) to monitor the EOT changes, where three distinctive peaks were revealed corresponding to the top, bottom, and total layers within the BL-NAA structures. The spectral shifts of these three characteristic peaks were used as sensing signals to monitor the binding events in each NAA pore layer in real time upon exposure to different concentrations of quercetin. The multi-point sensing performance of BL-NAAs was determined for each pore layer, with an average sensitivity and low limit of detection of 600 nm (mg mL−1)−1 and 0.14 mg mL−1, respectively. BL-NAA photonic structures have the capability to be used as platforms for multi-point RIfS sensing of biomolecules that can be further extended for simultaneous size-exclusion separation and multi-analyte sensing using these bilayered nanostructures. PMID:29415436

  11. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by an American social psychologist, J. Kennedy, and an electrical engineer, R.C. Eberhart, is a stochastic global optimization method based on swarm intelligence, which was inspired by the social behavior of bird flocking or fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows: from the observed image coordinates and the space coordinates of a few control points, the equations for the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations, and the objective function to be minimized is the sum of the absolute values of these residuals. First, a coarse region for the exterior orientation elements is given, and the remaining parameters are adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. In this way, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided, which greatly improves surveying efficiency and at the same time decreases surveying cost. During such a process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points. In order to verify the effectiveness of this algorithm, two experiments were carried out. In the first experiment, images of a standard grid board were taken according to multi-intersection photography using a digital camera. Three points or six points located in the lower-left corner of the standard grid were used as control points, and the exterior orientation elements of each image were computed through PSO and compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment were used as approximate values in bundle adjustment, and the space coordinates of the other grid points on the board were then computed. The differences between these computed space coordinates and the known coordinates of the grid points were used to compute the accuracy. The point accuracies in the two experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments demonstrate the effectiveness of PSO in close range photogrammetry for computing approximate values of exterior orientation elements, and the algorithm can meet the requirement of high accuracy. In short, PSO can obtain better results in a faster, cheaper way compared with other surveying methods in close range photogrammetry.
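
    A minimal PSO sketch of the update loop described above. The toy objective stands in for the sum of absolute image-coordinate residuals, and the swarm size, inertia, and acceleration coefficients are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):                       # stand-in for the image-residual sum
        return np.sum(np.abs(x - np.array([1.0, -2.0, 0.5])))

    n_particles, n_iter, dim = 30, 200, 3
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration weights

    pos = rng.uniform(-5, 5, (n_particles, dim))   # "coarse region" initialization
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(objective, 1, pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
        pos += vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print(gbest)                            # approaches [1, -2, 0.5]
    ```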

  12. A Control Model: Interpretation of Fitts' Law

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1984-01-01

    The analytical results for several models are given: a first order model where it is assumed that the hand velocity can be directly controlled, and a second order model where it is assumed that the hand acceleration can be directly controlled. Two different types of control laws are investigated. One is a linear function of the hand error and error rate; the other is the time-optimal control law. Results show that the first and second order models with the linear control law produce a movement time (MT) function with the exact form of Fitts' law. The control-law interpretation implies that the effect of target width on MT must be a result of the vertical motion which elevates the hand from the starting point and drops it on the target at the target edge. The time-optimal control law did not produce a movement-time formula similar to Fitts' law.
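
    The first-order case can be reproduced in a few lines: with a linear control law the error decays exponentially, so the time to come within half a target width W of a target at distance A grows as log(2A/W), which is the form of Fitts' law. The gain k below is an arbitrary assumption:

    ```python
    import math

    def movement_time(A, W, k=10.0):
        # Linear control law dx/dt = -k*x gives x(t) = A*exp(-k*t);
        # the movement ends when the error |x| falls below W/2.
        return math.log(2.0 * A / W) / k

    for A, W in [(100, 10), (200, 10), (100, 5)]:
        difficulty = math.log2(2.0 * A / W)          # Fitts' index of difficulty
        print(f"A={A:3d} W={W:2d} ID={difficulty:.2f} MT={movement_time(A, W):.4f}")
    ```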

  13. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modiri, A; Hagan, A; Gu, X

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity for such optimization increases exponentially with increase in dimensionality. In order to accomplish this task in a clinically-feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of an NSCLC patient with a 54cm3 right-lower-lobe tumor (1.5cm motion). Ten corresponding 3D-IMRT plans were created in the Eclipse treatment planning system (Ver-13.6). A vendor-provided scripting interface was used to export 3D-dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks with the 14,940 weight-variables. PSO was implemented on CPU (dual-Xeon, 3.1GHz) and GPU (dual-K20 Tesla, 2496 cores, 3.52Tflops, each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean=33%; lung V20=27%; spinal cord Dmax=26%; esophagus Dmax=42%; heart Dmax=0%; heart Dmean=47%. The GPU-PSO processing time for 14,940 variables and 7 PSO-particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous. Our initial results indicate that GPU-based PSO with further software optimization can make such planning clinically feasible. This work was supported through funding from the National Institutes of Health and Varian Medical Systems.

  14. Application of particle swarm optimization in path planning of mobile robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    In order to realize optimal path planning for a mobile robot in an unknown environment, a particle swarm optimization algorithm using path length as the fitness function is proposed. The location of the globally optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.

  15. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar --- Light Detection and Ranging --- is a remote sensing technology that utilizes a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured. The distance to the object is easily calculated using the known speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and distances to objects are recorded several thousands of times per second. From this, a 3-dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y, and a z attribute. These 3 attributes represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target, or even the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle this data more efficiently has become pressing: processing, visualizing, or even simply loading lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers, and is very costly. Existing open source lidar software approaches the loading and processing of lidar in an iterative fashion that requires batch coding and processing time that can take months for a standard lidar dataset. This project attempts to build software with the best approach for creating, importing and exporting, manipulating, and processing lidar, especially in the environmental field. Development of this software is described in 3 sections: (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing), and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (e.g., hydrology feature extraction, geomorphological studies, ecology modeling, etc.).
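
    As a small illustration of the AOI extraction discussed in section (1), a bounding-box filter over an in-memory point cloud with numpy. The array layout and bounds are assumptions; a production implementation would index the file on disk rather than load every point:

    ```python
    import numpy as np

    # Synthetic cloud: one million points, columns x, y, z
    points = np.random.default_rng(1).random((1_000_000, 3)) * 100.0

    def extract_aoi(pts, xmin, xmax, ymin, ymax):
        """Return the points whose x/y coordinates fall inside the AOI box."""
        mask = ((pts[:, 0] >= xmin) & (pts[:, 0] <= xmax)
                & (pts[:, 1] >= ymin) & (pts[:, 1] <= ymax))
        return pts[mask]

    aoi = extract_aoi(points, 20, 40, 50, 70)
    print(aoi.shape)            # roughly 4% of the points
    ```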

  16. Optimization of a therapeutic protocol for intravenous injection of human mesenchymal stem cells after cerebral ischemia in adult rats.

    PubMed

    Omori, Yoshinori; Honmou, Osamu; Harada, Kuniaki; Suzuki, Junpei; Houkin, Kiyohiro; Kocsis, Jeffery D

    2008-10-21

    The systemic injection of human mesenchymal stem cells (hMSCs) prepared from adult bone marrow has therapeutic benefits after cerebral artery occlusion in rats, and may have multiple therapeutic effects at various sites and times within the lesion as the cells respond to a particular pathological microenvironment. However, the comparative therapeutic benefits of multiple injections of hMSCs at different time points after cerebral artery occlusion in rats remain unclear. In this study, we induced middle cerebral artery occlusion (MCAO) in rats using intra-luminal vascular occlusion, and infused hMSCs intravenously at a single 6 h time point (low and high cell doses) and various multiple time points after MCAO. From MRI analyses lesion volume was reduced in all hMSC cell injection groups as compared to serum alone injections. However, the greatest therapeutic benefit was achieved following a single high cell dose injection at 6 h post-MCAO, rather than multiple lower cell infusions over multiple time points. Three-dimensional analysis of capillary vessels in the lesion indicated that the capillary volume was equally increased in all of the cell-injected groups. Thus, differences in functional outcome in the hMSC transplantation subgroups are not likely the result of differences in angiogenesis, but rather from differences in neuroprotective effects.

  17. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
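
    A small sketch of how a filter design of this kind reduces to an eigenvalue problem. The matrices below are assumed stand-ins: A captures the energy of clean training windows and B the energy of the distortion, and the filter maximizing the Rayleigh quotient h'Ah / h'Bh is the top generalized eigenvector. This illustrates the reduction only, not the paper's exact objective:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    L = 9                                   # filter length (assumption)
    X = rng.normal(size=(500, L))           # windows from clean training series
    N = X + 0.3*rng.normal(size=X.shape)    # distorted counterparts

    A = X.T @ X / len(X)                    # signal energy to preserve
    B = (X - N).T @ (X - N) / len(X) + 1e-6*np.eye(L)   # distortion to suppress

    vals, vecs = eigh(A, B)                 # generalized eigenproblem A h = lambda B h
    h = vecs[:, -1]                         # filter = eigenvector of the largest lambda
    print(h)
    ```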

  18. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
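
    To make the D-optimality criterion concrete, the sketch below scores two candidate designs for a Scheffé quadratic model in three components by det(X'X). The candidate points (vertices, edge midpoints, and a centroid) are illustrative, not the paper's designs:

    ```python
    import numpy as np

    def scheffe_quadratic(points):
        # Model columns: x1, x2, x3, x1*x2, x1*x3, x2*x3
        return np.array([[p[0], p[1], p[2],
                          p[0]*p[1], p[0]*p[2], p[1]*p[2]] for p in points])

    vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    edges    = [(.5, .5, 0), (.5, 0, .5), (0, .5, .5)]
    interior = [(1/3, 1/3, 1/3)]

    minimal  = scheffe_quadratic(vertices + edges)            # 6 points, 6 parameters
    extended = scheffe_quadratic(vertices + edges + interior) # adds an interior point

    for name, X in [("minimal", minimal), ("extended", extended)]:
        print(name, np.linalg.det(X.T @ X))                   # D-criterion
    ```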

  19. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
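
    The reconstruction step can be illustrated with a 1-D iterative thresholding sketch: alternate enforcing consistency with the acquired k-space lines and soft-thresholding to promote sparsity. The sampling mask, threshold, and sparse-in-image assumption are toy choices, not the paper's learned under-sampling pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 256
    x_true = np.zeros(n)
    x_true[rng.choice(n, 10, replace=False)] = 1.0   # sparse "image"
    mask = rng.random(n) < 0.4                       # 40% of k-space acquired
    y = mask * np.fft.fft(x_true)                    # under-sampled k-space data

    x = np.zeros(n, complex)
    for _ in range(200):
        k = np.fft.fft(x)
        k[mask] = y[mask]                            # enforce data consistency
        x = np.fft.ifft(k)
        x = np.sign(x.real) * np.maximum(np.abs(x.real) - 0.01, 0)  # soft threshold

    print(np.max(np.abs(x - x_true)))                # small reconstruction error
    ```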

  20. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design, however, changes to the design become far more costly and time-consuming to enact. Primarily this has been due to a lack of detailed data usually uncovered later, during the Preliminary and Detailed design phases. In the current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing costs arising from such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design, its constraints and requirements must be known; however, as the design cycle proceeds it is all but inevitable that these conditions will change. Upon such a change, the previously optimized trajectory may no longer be optimal, or may no longer meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task, as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, since the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combating these challenges [5]. In order to bring more detailed information into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. To do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling: a parametric, polynomial equation representing a tool. A surrogate model must be informed by data from the tool, with enough points to represent the solution space for the chosen number of variables at an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space that maximize the information gained while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time under the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work, the authors discuss issues with the method and their solutions.
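
    A compact illustration of the surrogate-modeling step: sample a small design of experiments, run the analysis tool at those points, and fit a quadratic response surface by least squares. The sampling scheme, polynomial basis, and stand-in function are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def tool(x):
        """Stand-in for an expensive trajectory analysis (e.g., a POST run)."""
        return 3.0 + 2.0*x[:, 0] - x[:, 1] + 0.5*x[:, 0]*x[:, 1]

    # Latin-hypercube-style DOE over a 2-D design space in [0, 1]^2
    n = 25
    X = np.column_stack([(rng.permutation(n) + rng.random(n)) / n,
                         (rng.permutation(n) + rng.random(n)) / n])
    y = tool(X)

    # Quadratic polynomial response surface fit by least squares
    Phi = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                           X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    # Evaluate surrogate vs. tool at a new design point
    x_new = np.array([[0.3, 0.7]])
    phi_new = np.array([1.0, 0.3, 0.7, 0.09, 0.49, 0.21])
    print(phi_new @ coef, tool(x_new)[0])
    ```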

  1. A Flexible Toolkit Supporting Knowledge-based Tactical Planning for Ground Forces

    DTIC Science & Technology

    2011-06-01

    assigned to each of the Special Areas to model its temporal behaviour. In Figure 5 an optimal path going over two defined intermediate points is...which area can be reached by an armoured infantry platoon within a given time interval, which path should be taken by a support unit to minimize...al. 2008]. Although trained commanders and staff personnel may achieve very accurate planning results, time-consuming procedures are excluded when

  2. Computational photoacoustic imaging with sparsity-based optimization of the initial pressure distribution

    NASA Astrophysics Data System (ADS)

    Shang, Ruibo; Archibald, Richard; Gelb, Anne; Luke, Geoffrey P.

    2018-02-01

    In photoacoustic (PA) imaging, the optical absorption can be acquired from the initial pressure distribution (IPD). An accurate reconstruction of the IPD is therefore very helpful for the reconstruction of the optical absorption. However, the image quality of PA imaging in scattering media is degraded by acoustic diffraction, imaging artifacts, and weak PA signals. In this paper, we propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging. A linear imaging forward model was set up based on a time-and-delay method under the assumption that the point spread function (PSF) is spatially invariant. Then, an optimization equation was proposed with a regularization term expressing the sparsity of the IPD in a certain domain to solve this inverse problem. As a proof of principle, the approach was applied to reconstructing point objects and blood vessel phantoms. The resolution and signal-to-noise ratio (SNR) were compared between conventional back-projection and our proposed approach. Overall, these results show that computational imaging can leverage the sparsity of PA images to improve the estimation of the IPD.

  3. Printing line/space patterns on nonplanar substrates using a digital micromirror device-based point-array scanning technique

    NASA Astrophysics Data System (ADS)

    Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin

    2018-02-01

    This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.

  4. Starting geometry creation and design method for freeform optics.

    PubMed

    Bauer, Aaron; Schiesser, Eric M; Rolland, Jannick P

    2018-05-01

    We describe a method for designing freeform optics based on the aberration theory of freeform surfaces that guides the development of a taxonomy of starting-point geometries with an emphasis on manufacturability. An unconventional approach to the optimization of these starting designs wherein the rotationally invariant 3rd-order aberrations are left uncorrected prior to unobscuring the system is shown to be effective. The optimal starting-point geometry is created for an F/3, 200 mm aperture-class three-mirror imager and is fully optimized using a novel step-by-step method over a 4 × 4 degree field-of-view to exemplify the design method. We then optimize an alternative starting-point geometry that is common in the literature but was quantified here as a sub-optimal candidate for optimization with freeform surfaces. A comparison of the optimized geometries shows the performance of the optimal geometry is at least 16× better, which underscores the importance of the geometry when designing freeform optics.

  5. Simultaneous optimization of limited sampling points for pharmacokinetic analysis of amrubicin and amrubicinol in cancer patients.

    PubMed

    Makino, Yoshinori; Watanabe, Michiko; Makihara, Reiko Ando; Nokihara, Hiroshi; Yamamoto, Noboru; Ohe, Yuichiro; Sugiyama, Erika; Sato, Hitoshi; Hayashi, Yoshikazu

    2016-09-01

    Limited sampling points for both amrubicin (AMR) and its active metabolite amrubicinol (AMR-OH) were simultaneously optimized using Akaike's information criterion (AIC) calculated by pharmacokinetic modeling. In this pharmacokinetic study, 40 mg/m(2) of AMR was administered as a 5-min infusion on three consecutive days to 21 Japanese lung cancer patients. Blood samples were taken at 0, 0.08, 0.25, 0.5, 1, 2, 4, 8 and 24 h after drug infusion, and AMR and AMR-OH concentrations in plasma were quantitated using high-performance liquid chromatography. The pharmacokinetic profile of AMR was characterized using a three-compartment model and that of AMR-OH using a one-compartment model following a first-order absorption process. These pharmacokinetic profiles were then integrated into one pharmacokinetic model for simultaneous fitting of AMR and AMR-OH. After fitting to the pharmacokinetic model, 65 combinations of four sampling points from the concentration profiles were evaluated for their AICs. Stepwise regression analysis was applied to select the sampling points that best predict the area under the concentration-time curve (AUC) for AMR and AMR-OH. Of the three combinations that yielded favorable AIC values, 0.25, 2, 4 and 8 h yielded the best AUC prediction for both AMR (R(2) = 0.977) and AMR-OH (R(2) = 0.886). The prediction error for AUC was less than 15%. The optimal limited sampling points for AMR and AMR-OH after AMR infusion were found to be 0.25, 2, 4 and 8 h, enabling less frequent blood sampling in further expanded pharmacokinetic studies for both AMR and AMR-OH. © 2016 John Wiley & Sons Australia, Ltd.
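
    A simplified sketch of the subset-screening idea: generate synthetic one-compartment profiles, then score every four-point subset of the sampling grid by how well a linear model on those concentrations predicts AUC. The synthetic model and the R² score are assumptions standing in for the paper's AIC-based model fitting:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)
    times = np.array([0.08, 0.25, 0.5, 1, 2, 4, 8, 24])

    # Synthetic one-compartment profiles: C(t) = dose/V * exp(-k*t)
    n_pat = 21
    k = rng.uniform(0.1, 0.4, n_pat)
    V = rng.uniform(10, 20, n_pat)
    C = (40.0 / V)[:, None] * np.exp(-k[:, None] * times[None, :])
    auc = 40.0 / (V * k)                    # analytic AUC = dose / (V*k)

    best = None
    for subset in combinations(range(len(times)), 4):
        X = np.column_stack([np.ones(n_pat), C[:, subset]])
        coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
        r2 = 1 - np.sum((X @ coef - auc)**2) / np.sum((auc - auc.mean())**2)
        if best is None or r2 > best[0]:
            best = (r2, [times[i] for i in subset])

    print(best)                             # best-predicting 4-point schedule
    ```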

  6. Optimization of phase feeding of starter, grower, and finisher diets for male broilers by mixture experimental design: forty-eight-day production period.

    PubMed

    Roush, W B; Boykin, D; Branton, S L

    2004-08-01

    A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables based on a total production time of 48 d. Mixture designs are useful for proportion problems where the components of the experiment (i.e., the lengths of time the diets were fed) add up to a fixed total (48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. The birds were placed 50 per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.

  7. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  8. Optimization of Focusing by Strip and Pixel Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; White, D A; Thompson, C A

    Professor Kevin Webb and students at Purdue University have demonstrated the design of conducting strip and pixel arrays for focusing electromagnetic waves [1, 2]. Their key point was to design structures to focus waves in the near field using full wave modeling and optimization methods for design. Their designs included arrays of conducting strips optimized with a downhill search algorithm and arrays of conducting and dielectric pixels optimized with the iterative direct binary search method. They used a finite element code for modeling. This report documents our attempts to duplicate and verify their results. We have modeled 2D conducting strips and both conducting and dielectric pixel arrays with moment method and FDTD codes to compare with Webb's results. New designs for strip arrays were developed with optimization by the downhill simplex method with simulated annealing. Strip arrays were optimized to focus an incident plane wave at a point or at two separated points and to switch between focusing points with a change in frequency. We also tried putting a line current source at the focus point for the plane wave to see how it would work as a directive antenna. We have not tried optimizing the conducting or dielectric pixel arrays, but modeled the structures designed by Webb with the moment method and FDTD to compare with the Purdue results.

  9. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods with focus on using interactive design tools to create better starting points to streamline the optimization process.

  10. A noniterative greedy algorithm for multiframe point correspondence.

    PubMed

    Shafique, Khurram; Shah, Mubarak

    2005-01-01

    This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
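
    A toy version of the greedy assignment idea for a single pair of frames: sort candidate pairs by distance and accept each pair whose points are still unmatched. This is a simplification for two frames only; the paper's algorithm handles multiple frames, occlusion, missed detections, and entry/exit:

    ```python
    import numpy as np

    def greedy_match(pts_a, pts_b, gate=5.0):
        """Greedily pair points of frame A with points of frame B by distance."""
        d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
        pairs, used_a, used_b = [], set(), set()
        for flat in np.argsort(d, axis=None):          # cheapest pairs first
            i, j = divmod(int(flat), d.shape[1])
            if i in used_a or j in used_b or d[i, j] > gate:
                continue                               # gate rejects implausible jumps
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
        return pairs

    frame_a = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 5.0]])
    frame_b = np.array([[0.5, 0.2], [10.3, -0.1]])     # third point left the scene
    print(greedy_match(frame_a, frame_b))              # [(1, 1), (0, 0)]
    ```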

  11. Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.

  12. The use of locally optimal trajectory management for base reaction control of robots in a microgravity environment

    NASA Technical Reports Server (NTRS)

    Lin, N. J.; Quinn, R. D.

    1991-01-01

    A locally-optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified cost function are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, joint velocities reach their objectives at the end of the manipulation, and the CPU time is slightly more than twice the manipulation time.

  13. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.

  14. TNT Prout-Tompkins Kinetics Calibration with PSUADE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Hsieh, H

    2007-04-11

    We used the code PSUADE to calibrate Prout-Tompkins kinetic parameters for pure recrystallized TNT. The calibration was based on ALE3D simulations of a series of One Dimensional Time to Explosion (ODTX) experiments. The resultant kinetic parameters differed from TNT data points with an average error of 28%, which is slightly higher than the value of 23% previously calculated using a two-point optimization. The methodology described here provides a basis for future calibration studies using PSUADE. The files used in the procedure are listed in the Appendix.

  15. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in active distribution networks, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE33 test system.

  16. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology, and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective and the design variables. Subsequently, a genetic algorithm is adopted and applied to the surrogate model to acquire the optimal solution while satisfying the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method of modeling and optimization design performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.

  17. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
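
    A small sketch of a Cimmino-style iteration for the source-strength step: project the current fluence onto each violated prescription constraint and apply the averaged corrections to the weights. The kernel matrix, fluence window, and iteration count are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    K = rng.random((50, 8))        # kernel: fluence at 50 points per unit-strength source
    lo, hi = 0.9, 1.1              # prescribed fluence window at each point

    w = np.ones(8)                 # initial source strengths
    for _ in range(500):
        f = K @ w
        corrections = np.zeros_like(w)
        for i in range(len(f)):
            if f[i] < lo:   resid = lo - f[i]
            elif f[i] > hi: resid = hi - f[i]
            else:           continue
            # projection onto the violated half-space, scaled by the row norm
            corrections += resid * K[i] / (K[i] @ K[i])
        w = np.maximum(w + corrections / len(f), 0)   # average; keep weights non-negative

    print(np.min(K @ w), np.max(K @ w))               # fluence approaches the window
    ```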

  18. Preparedness for pandemics: does variation among states affect the nation as a whole?

    PubMed

    Potter, Margaret A; Brown, Shawn T; Lee, Bruce Y; Grefenstette, John; Keane, Christopher R; Lin, Chyongchiou J; Quinn, Sandra C; Stebbins, Samuel; Sweeney, Patricia M; Burke, Donald S

    2012-01-01

    Since states' public health systems differ as to pandemic preparedness, this study explored whether such heterogeneity among states could affect the nation's overall influenza rate. The Centers for Disease Control and Prevention produced a uniform set of scores on a 100-point scale from its 2008 national evaluation of state preparedness to distribute materiel from the Strategic National Stockpile (SNS). This study used these SNS scores to represent each state's relative preparedness to distribute influenza vaccine in a timely manner and assumed that "optimal" vaccine distribution would reach at least 35% of the state's population within 4 weeks. The scores were used to determine the timing of vaccine distribution for each state: each 10-point decrement of score below 90 added an additional delay increment to the distribution time. A large-scale agent-based computational model simulated an influenza pandemic in the US population. In this synthetic population each individual or agent had an assigned household, age, workplace or school destination, daily commute, and domestic intercity air travel patterns. Simulations compared influenza case rates both nationally and at the state level under 3 scenarios: no vaccine distribution (baseline), optimal vaccine distribution in all states, and vaccine distribution time modified according to state-specific SNS score. Between optimal and SNS-modified scenarios, attack rates rose not only in low-scoring states but also in high-scoring states, demonstrating an interstate spread of infections. Influenza rates were sensitive to variation of the SNS-modified scenario (delay increments of 1 day versus 5 days), but the interstate effect remained. The effectiveness of a response activity such as vaccine distribution could benefit from national standards and preparedness funding allocated in part to minimize interstate disparities.

  19. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of a patient's characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
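
    A toy sketch of the classification perspective at a single decision point: estimate the treatment contrast, use its sign as the class label and its magnitude as the misclassification weight, and fit a weighted linear classifier. The synthetic data and the simple regression-based contrast are assumptions, a simplification of the paper's construction:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    X = rng.normal(size=(n, 2))                  # patient characteristics
    A = rng.integers(0, 2, n)                    # observed treatments
    Y = 1 + X[:, 0]*(2*A - 1) + rng.normal(scale=0.5, size=n)  # outcome

    # Contrast f(x) = E[Y | A=1, x] - E[Y | A=0, x] via a working regression
    Z = np.column_stack([np.ones(n), X, A, X*A[:, None]])
    b, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    contrast = b[3] + X @ b[4:6]

    label = (contrast > 0).astype(int)           # optimal treatment as class label
    weight = np.abs(contrast)                    # misclassification cost

    # Weighted linear classifier via weighted least squares on +/-1 labels
    W = np.diag(weight)
    Xd = np.column_stack([np.ones(n), X])
    c = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ (2*label - 1))
    rule = (Xd @ c > 0).astype(int)              # estimated decision rule
    print(np.mean(rule == label))                # agreement with the plug-in rule
    ```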

  1. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  2. Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.

    PubMed

    Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C

    2016-11-01

Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method flags erroneous time points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
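
    The consistency condition can be illustrated with a minimal sketch (synchronous Boolean updates and the toy network are assumptions; the paper's Answer Set Programming encodings are not reproduced here): a candidate network must be able to reproduce every successive pair of states in a binarized trace.

      def step(state, rules):
          """Apply every node's update rule to the current state dict."""
          return {node: rule(state) for node, rule in rules.items()}

      def consistent_with_trace(rules, trace):
          """True if every observed transition matches the network dynamics."""
          return all(step(s, rules) == t for s, t in zip(trace, trace[1:]))

      # Toy 3-node network: a <- not c, b <- a, c <- a and b
      rules = {
          "a": lambda s: not s["c"],
          "b": lambda s: s["a"],
          "c": lambda s: s["a"] and s["b"],
      }
      trace = [
          {"a": True, "b": False, "c": False},
          {"a": True, "b": True, "c": False},
          {"a": True, "b": True, "c": True},
      ]
      print(consistent_with_trace(rules, trace))  # True for this trace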

  3. Influence of depth of neuromuscular blockade on surgical conditions during low-pressure pneumoperitoneum laparoscopic cholecystectomy: A randomized blinded study.

    PubMed

    Barrio, Javier; Errando, Carlos L; García-Ramón, Jaime; Sellés, Rafael; San Miguel, Guillermo; Gallego, Juan

    2017-11-01

To evaluate the influence of neuromuscular blockade (NMB) on surgical conditions during low-pressure pneumoperitoneum (8 mmHg) laparoscopic cholecystectomy (LC), comparing moderate and deep NMB. A secondary objective was to evaluate whether surgical conditions during low-pressure pneumoperitoneum LC performed with deep NMB could be comparable to those provided during standard-pressure pneumoperitoneum (12 mmHg) LC. Prospective, randomized, blinded clinical trial. Operating room. Ninety ASA 1-2 patients scheduled for elective LC. Patients were allocated into 3 groups: Group 1: low-pressure pneumoperitoneum with moderate NMB (1-3 TOF), Group 2: low-pressure pneumoperitoneum with deep NMB (1-5 PTC), and Group 3: standard pneumoperitoneum (12 mmHg). Rocuronium was used to induce NMB and acceleromyography was used for NMB monitoring (TOF-Watch-SX). Three experienced surgeons evaluated surgical conditions using a four-step scale at three time points: surgical field exposure, dissection of the gallbladder, and extraction/closure. Low-pressure pneumoperitoneum (Group 1 vs. 2): good conditions: 96.7 vs. 96.7%, 90 vs. 80%, and 89.6 vs. 92.3%, respectively, for the three time points, p>0.05. No differences in optimal surgical conditions were observed between the groups. Surgery completion at 8 mmHg pneumoperitoneum: 96.7 vs. 86.7%, p=0.353. Standard-pressure pneumoperitoneum vs. low-pressure pneumoperitoneum with deep NMB (Group 3 vs. 2): good conditions: 100% in Group 3 for the three time points (p=0.024 vs. Group 2 at dissection of the gallbladder), with a significantly greater percentage of optimal conditions during standard-pressure pneumoperitoneum LC at all three evaluation points. The depth of NMB was decisive neither in improving surgical conditions nor in enabling completion of low-pressure pneumoperitoneum LC performed by experienced surgeons. Surgical conditions were considered better with standard-pressure pneumoperitoneum, regardless of the depth of NMB, than during low-pressure pneumoperitoneum with deep NMB. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant opportunities for cost savings in both equipment protection and waste minimization.
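
    A hedged sketch of the downhill-simplex template-matching idea (a 1-D Gaussian indent template and all parameters below are illustrative assumptions, not the production algorithm): the simplex search adjusts template parameters until the template best overlays the measured profile.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      x = np.linspace(0, 10, 2000)
      profile = 0.02 * rng.normal(size=x.size)           # measurement noise
      profile -= 0.5 * np.exp(-((x - 6.3) ** 2) / 2.0)   # indent feature at 6.3

      def template(center, depth):
          """Hypothetical Gaussian indent template."""
          return -depth * np.exp(-((x - center) ** 2) / 2.0)

      def mismatch(params):
          center, depth = params
          return np.sum((profile - template(center, depth)) ** 2)

      # Nelder-Mead is the classic downhill simplex method
      fit = minimize(mismatch, x0=[5.0, 0.3], method="Nelder-Mead")
      print(fit.x)   # close to the true feature parameters (6.3, 0.5)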

  5. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    NASA Astrophysics Data System (ADS)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.
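
    The shooting idea behind such two-point boundary value solvers can be sketched on a scalar toy problem (this is a generic Newton-Raphson shooting illustration, not the paper's two-stage canonical-transformation algorithm): iterate on the unknown initial condition until the final boundary condition is met. Toy problem: y'' = -y, y(0) = 0, y(pi/2) = 1.

      import numpy as np
      from scipy.integrate import solve_ivp

      def shoot(slope):
          """Integrate forward with a trial initial slope, return y(pi/2)."""
          sol = solve_ivp(lambda t, s: [s[1], -s[0]], (0, np.pi / 2),
                          [0.0, slope], rtol=1e-10, atol=1e-12)
          return sol.y[0, -1]

      target, slope, h = 1.0, 0.5, 1e-6
      for _ in range(20):                  # Newton iteration on the slope
          f = shoot(slope) - target
          if abs(f) < 1e-10:
              break
          dfds = (shoot(slope + h) - f - target) / h   # finite-diff Jacobian
          slope -= f / dfds
      print(slope)   # converges to 1.0, the exact initial slope (y = sin x)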

  6. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
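
    The core computation can be sketched as follows (random stand-in data; window choices and delay count are assumptions): two covariance matrices and a generalized eigendecomposition yield the spatial filter, and a time-delay-embedded second stage yields the temporal basis function.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      data = rng.normal(size=(64, 5000))       # channels x time points
      S = np.cov(data[:, 2000:3000])           # covariance of window of interest
      R = np.cov(data)                         # broadband reference covariance

      evals, evecs = eigh(S, R)                # generalized eigendecomposition
      w = evecs[:, -1]                         # spatial filter, top eigenvalue
      component = w @ data                     # one time series from 64 channels

      # Temporal stage: time-delay embedding of the component (10 delays assumed)
      delays = 10
      emb = np.stack([component[k:component.size - delays + k]
                      for k in range(delays)])
      St, Rt = np.cov(emb[:, :1000]), np.cov(emb)   # window vs. full reference
      tvals, tvecs = eigh(St, Rt)
      print(tvecs[:, -1])                      # optimal temporal basis function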

  7. Adaptive synchronized switch damping on an inductor: a self-tuning switching law

    NASA Astrophysics Data System (ADS)

    Kelley, Christopher R.; Kauffman, Jeffrey L.

    2017-03-01

    Synchronized switch damping (SSD) techniques exploit low-power switching between passive circuits connected to piezoelectric material to reduce structural vibration. In the classical implementation of SSD, the piezoelectric material remains in an open circuit for the majority of the vibration cycle and switches briefly to a shunt circuit at every displacement extremum. Recent research indicates that this switch timing is only optimal for excitation exactly at resonance and points to more general optimal switch criteria based on the phase of the displacement and the system parameters. This work proposes a self-tuning approach that implements the more general optimal switch timing for synchronized switch damping on an inductor (SSDI) without needing any knowledge of the system parameters. The law involves a gradient-based search optimization that is robust to noise and uncertainties in the system. Testing of a physical implementation confirms this law successfully adapts to the frequency and parameters of the system. Overall, the adaptive SSDI controller provides better off-resonance steady-state vibration reduction than classical SSDI while matching performance at resonance.

  8. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
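
    The surrogate idea generalizes readily; a minimal sketch (with a stand-in analytic function in place of the aircraft analyzer, and an arbitrary polynomial regression) might look like:

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from scipy.optimize import minimize

      def expensive_analysis(x):           # stand-in for the real analyzer
          return (x[0] - 1.5) ** 2 + (x[1] + 0.5) ** 2 + 0.1 * np.sin(5 * x[0])

      rng = np.random.default_rng(0)
      X_train = rng.uniform(-3, 3, size=(200, 2))       # input-output pairs
      y_train = np.array([expensive_analysis(x) for x in X_train])

      # train the cheap approximating analyzer once...
      surrogate = make_pipeline(PolynomialFeatures(3), LinearRegression())
      surrogate.fit(X_train, y_train)

      # ...then optimize against it instead of the expensive analysis
      res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                     x0=[0.0, 0.0], method="Nelder-Mead")
      print(res.x)   # near the true optimum (~1.5, -0.5), at negligible cost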

  9. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    PubMed Central

    2011-01-01

    Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598

  10. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    PubMed

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
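
    For reference, the classic DTW recursion that both of these records build on can be written compactly (a textbook sketch, not the DTW-S package; the significance machinery described above is layered on top of an alignment like this one):

      import numpy as np

      def dtw_distance(a, b):
          """Dynamic time warping distance between two 1-D series."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  # extend the cheapest of the three admissible predecessors
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      t = np.linspace(0, 2 * np.pi, 50)
      print(dtw_distance(np.sin(t), np.sin(t - 0.5)))  # small: shifted copies align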

  11. Implementation and optimization of automated dispensing cabinet technology.

    PubMed

    McCarthy, Bryan C; Ferker, Michael

    2016-10-01

    A multifaceted automated dispensing cabinet (ADC) optimization initiative at a large hospital is described. The ADC optimization project, which was launched approximately six weeks after activation of ADCs in 30 patient care unit medication rooms of a newly established adult hospital, included (1) adjustment of par inventory levels (desired on-hand quantities of medications) and par reorder quantities to reduce the risk of ADC supply exhaustion and improve restocking efficiency, (2) expansion of ADC "common stock" (medications assigned to ADC inventories) to increase medication availability at the point of care, and (3) removal of some infrequently prescribed medications from ADCs to reduce the likelihood of product expiration. The purpose of the project was to address organizational concerns regarding widespread ADC medication stockouts, growing reliance on cart-fill medication delivery systems, and suboptimal medication order turnaround times. Leveraging of the ADC technology platform's reporting functionalities for enhanced inventory control yielded a number of benefits, including cost savings resulting from reduced pharmacy technician labor requirements (estimated at $2,728 annually), a substantial reduction in the overall weekly stockout percentage (from 3.2% before optimization to 0.5% eight months after optimization), an improvement in the average medication turnaround time, and estimated cost avoidance of $19,660 attributed to the reduced potential for product expiration. Efforts to optimize ADCs through par level optimization, expansion of common stock, and removal of infrequently used medications reduced pharmacy technician labor, decreased stockout percentages, generated opportunities for cost avoidance, and improved medication turnaround times. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Exact Identification of a Quantum Change Point

    NASA Astrophysics Data System (ADS)

    Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon

    2017-10-01

    The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty—naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.

  13. Exact Identification of a Quantum Change Point.

    PubMed

    Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Ramon

    2017-10-06

The detection of change points is a pivotal task in statistical analysis. In the quantum realm, it is a new primitive where one aims at identifying the point where a source that supposedly prepares a sequence of particles in identical quantum states starts preparing a mutated one. We obtain the optimal procedure to identify the change point with certainty, naturally at the price of having a certain probability of getting an inconclusive answer. We obtain the analytical form of the optimal probability of successful identification for any length of the particle sequence. We show that the conditional success probabilities of identifying each possible change point show an unexpected oscillatory behavior. We also discuss local (online) protocols and compare them with the optimal procedure.

  14. Freeze-drying process design by manometric temperature measurement: design of a smart freeze-dryer.

    PubMed

    Tang, Xiaolin Charlie; Nail, Steven L; Pikal, Michael J

    2005-04-01

    To develop a procedure based on manometric temperature measurement (MTM) and an expert system for good practices in freeze drying that will allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment. Freeze drying was performed with a FTS Dura-Stop/Dura-Top freeze dryer with the manometric temperature measurement software installed. Five percent solutions of glycine, sucrose, or mannitol with 2 ml to 4 ml fill in 5 ml vials were used, with all vials loaded on one shelf. Details of freezing, optimization of chamber pressure, target product temperature, and some aspects of secondary drying are determined by the expert system algorithms. MTM measurements were used to select the optimum shelf temperature, to determine drying end points, and to evaluate residual moisture content in real-time. MTM measurements were made at 1 hour or half-hour intervals during primary drying and secondary drying, with a data collection frequency of 4 points per second. The improved MTM equations were fit to pressure-time data generated by the MTM procedure using Microcal Origin software to obtain product temperature and dry layer resistance. Using heat and mass transfer theory, the MTM results were used to evaluate mass and heat transfer rates and to estimate the shelf temperature required to maintain the target product temperature. MTM product dry layer resistance is accurate until about two-thirds of total primary drying time is over, and the MTM product temperature is normally accurate almost to the end of primary drying provided that effective thermal shielding is used in the freeze-drying process. The primary drying times can be accurately estimated from mass transfer rates calculated very early in the run, and we find the target product temperature can be achieved and maintained with only a few adjustments of shelf temperature. The freeze-dryer overload conditions can be estimated by calculation of heat/mass flow at the target product temperature. It was found that the MTM results serve as an excellent indicator of the end point of primary drying. Further, we find that the rate of water desorption during secondary drying may be accurately measured by a variation of the basic MTM procedure. Thus, both the end point of secondary drying and real-time residual moisture may be obtained during secondary drying. Manometric temperature measurement and the expert system for good practices in freeze drying does allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment.

  15. Antibody labeling with Remazol Brilliant Violet 5R, a vinylsulphonic reactive dye.

    PubMed

    Ferrari, Alejandro; Friedrich, Adrián; Weill, Federico; Wolman, Federico; Leoni, Juliana

    2013-01-01

Colloidal gold is the first choice for labeling antibodies to be used in Point of Care Testing. However, there are some recent reports that a family of textile dyes, known as "reactive dyes", is suitable for protein labeling. In the present article, protein labeling conditions were optimized for Remazol Brilliant Violet 5R, and the sensitivity of the labeled antibodies was assessed and compared with that of colloidal-gold-labeled antibodies. Accelerated stability was also explored. Optimal conditions were pH 10.95, a dye:Ab molar ratio of 264, and an incubation time of 132 min. Labeled antibodies were stable and could be successfully used in a slot blot assay, detecting as little as 400 ng/mL. Therefore, the present work demonstrates that vinylsulphonic reactive dyes can be successfully used to label antibodies and are excellent candidates for the construction of a new generation of Point of Care Testing kits.

  16. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    PubMed

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 µg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  17. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
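
    A hedged one-point-at-a-time sketch of the limit-state inference idea (the paper selects multiple points per stage; the toy limit-state function and acquisition score below are assumptions): fit a GP to limit-state evaluations and sample next where the predicted sign is most ambiguous relative to the GP's uncertainty.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def limit_state(x):                      # toy model: failure where g(x) < 0
          return x[:, 0] ** 2 + x[:, 1] - 1.5

      rng = np.random.default_rng(0)
      X = rng.uniform(-2, 2, size=(10, 2))     # initial design
      for _ in range(15):                      # sequential design enrichment
          gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(X, limit_state(X))
          cand = rng.uniform(-2, 2, size=(500, 2))
          mu, sd = gp.predict(cand, return_std=True)
          score = np.abs(mu) / (sd + 1e-12)    # small = ambiguous, near boundary
          X = np.vstack([X, cand[np.argmin(score)]])

      gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(X, limit_state(X))
      samples = rng.uniform(-2, 2, size=(100000, 2))
      print((gp.predict(samples) < 0).mean())  # surrogate failure probability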

  18. LAMMPS strong scaling performance optimization on Blue Gene/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using anmore » 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.« less

  19. Electronic circuitry development in a micropyrotechnic system for micropropulsion applications

    NASA Astrophysics Data System (ADS)

    Puig-Vidal, Manuel; Lopez, Jaime; Miribel, Pere; Montane, Enric; Lopez-Villegas, Jose M.; Samitier, Josep; Rossi, Carole; Camps, Thierry; Dumonteuil, Maxime

    2003-04-01

Electronic circuitry is proposed and implemented to optimize the ignition process and the robustness of a microthruster. The principle is based on the integration of propellant material within a micromachined system. The operational concept is simply the combustion of an energetic propellant stored in a micromachined chamber. Each thruster contains three parts (heater, chamber, nozzle). Because of their one-shot character, microthrusters are fabricated in a 2D array configuration. For this kind of system, one critical point is the optimization of the ignition process as a function of the power schedule delivered by the electronic devices. Particular attention has been paid to the design and implementation of an electronic chip to control and optimize system ignition. The ignition process is triggered by electrical power delivered to a polysilicon resistor in contact with the propellant. The resistor is also used to sense the temperature of the propellant in contact with it. The temperature of the microthruster node before ignition is monitored via the electronic circuitry. A pre-heating step before ignition appears to be a good way to optimize the ignition process; pre-heating temperature and pre-heating time are critical parameters to be adjusted. Simulation and experimental results will contribute substantially to improving the micropyrotechnic system. This paper discusses all these points.

  20. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determining and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal components analysis. Then the feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of two point clouds are determined according to the descriptor's similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sampling consistency algorithm and clustering technology. Finally, singular value decomposition is applied to optimized correspondences so that the rigid transformation matrix between two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
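
    The final SVD step admits a compact standalone sketch (assuming correspondences have already been determined and optimized; this is the standard Kabsch solution on toy data, not the paper's full pipeline):

      import numpy as np

      def rigid_transform(P, Q):
          """Rotation R and translation t best mapping points P onto Q."""
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
          R = Vt.T @ np.diag([1, 1, d]) @ U.T
          return R, cQ - R @ cP

      rng = np.random.default_rng(0)
      P = rng.normal(size=(100, 3))                 # source cloud (matched)
      theta = 0.3
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]])
      Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])     # rotated + translated target
      R, t = rigid_transform(P, Q)
      print(np.allclose(R, Rz), np.round(t, 3))     # True [ 1. -2.  0.5]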

  1. Predicting Externalizing and Internalizing Behavior in Kindergarten: Examining the Buffering Role of Early Social Support

    PubMed Central

    Heberle, Amy E.; Krill, Sarah C.; Briggs-Gowan, Margaret J.; Carter, Alice S.

    2014-01-01

    Objective This study tested an ecological model predicting children’s behavior problems in kindergarten from risk and protective factors (parent psychological distress, parenting behavior, and social support) during early childhood. Method Study participants were 1161 socio-demographically diverse mother-child pairs who participated in a longitudinal birth cohort study. The predictor variables were collected at two separate time points and based on parent reports; children were an average of two years old at Time 1 and three years old at Time 2. The outcome measures were collected when children reached Kindergarten and were six years old on average. Results Our results show that early maternal psychological distress, mediated by sub-optimal parenting behavior, predicts children’s externalizing and internalizing behaviors in kindergarten. Moreover, early social support buffers the relations between psychological distress and later sub-optimal parenting behaviors and between sub-optimal parenting behavior and later depressive/withdrawn behavior. Conclusions Our findings have several implications for early intervention and prevention efforts. Of note, informal social support appears to play an important protective role in the development of externalizing and internalizing behavior problems, weakening the link between psychological distress and less optimal parenting behavior and between sub-optimal parenting behavior and children’s withdrawal/depression symptoms. Increasing social support may be a productive goal for family and community-level intervention. PMID:24697587

  2. Dynamic positioning configuration and its first-order optimization

    NASA Astrophysics Data System (ADS)

    Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu

    2014-02-01

Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of the GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It shows that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under Newton's laws of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations, the second is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It shows that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of the simplex configuration (such as the D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. In addition, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed. The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedra (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron) in three-dimensional space. It shows that the conclusions drawn by the proposed technique are more general and no longer limited by different sampling schemes. By the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples based on the GPS constellation are presented to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS of which the orbital inclination and the orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.

  3. Dose-response association between leisure time physical activity and work ability: Cross-sectional study among 3000 workers.

    PubMed

    Calatayud, Joaquin; Jakobsen, Markus D; Sundstrup, Emil; Casaña, Jose; Andersen, Lars L

    2015-12-01

Regular physical activity is important for longevity and health, but the optimal dose of physical activity for maintaining good work ability remains unknown. This study investigates the association between the intensity and duration of physical activity during leisure time and work ability in relation to the physical demands of the job. From the 2010 round of the Danish Work Environment Cohort Study, currently employed wage earners with physically demanding work (n = 2952) replied to questions about work, lifestyle and health. Excellent (100 points), very good (75 points), good (50 points), fair (25 points) and poor (0 points) work ability in relation to the physical demands of the job was experienced by 18%, 40%, 30%, 10% and 2% of the respondents, respectively. General linear models that controlled for gender, age, physical and psychosocial work factors, lifestyle and chronic disease showed that the duration of high-intensity physical activity during leisure was positively associated with work ability, in a dose-response fashion (p < 0.001). Those performing ⩾ 5 hours of high-intensity physical activity per week had on average 8 points higher work ability than those not performing such activities. The duration of low-intensity leisure-time physical activity was not associated with work ability (p = 0.5668). The duration of high-intensity physical activity during leisure time is associated in a dose-response fashion with work ability in workers with physically demanding jobs. © 2015 the Nordic Societies of Public Health.

  4. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
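
    The analytical point location that tetrahedral decomposition enables reduces to barycentric coordinates; a minimal sketch (toy tetrahedron, hypothetical helper name): a point lies inside a tetrahedron iff all four coordinates are non-negative, and the same coordinates linearly interpolate nodal velocities with no Newton-Raphson iteration.

      import numpy as np

      def barycentric(p, v0, v1, v2, v3):
          """Barycentric coordinates of p in the tetrahedron (v0, v1, v2, v3)."""
          T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
          l123 = np.linalg.solve(T, p - v0)        # one analytic 3x3 solve
          return np.concatenate([[1.0 - l123.sum()], l123])

      tet = [np.array(v, float) for v in
             [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
      inside = barycentric(np.array([0.2, 0.2, 0.2]), *tet)
      outside = barycentric(np.array([0.9, 0.9, 0.9]), *tet)
      print(np.all(inside >= 0), np.all(outside >= 0))   # True False

      # Velocity interpolation then reads u(p) = sum_i lambda_i * u_i.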

  5. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
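
    The advantage of optimal linear fusion over naive averaging can be checked with a two-scan toy calculation (the per-scan variances are assumed numbers, not values from the paper): inverse-variance weights minimize the variance of the fused estimate.

      import numpy as np

      var = np.array([1.0, 4.0])             # per-scan quantification variances
      w_opt = (1 / var) / (1 / var).sum()    # inverse-variance weights (0.8, 0.2)
      w_avg = np.array([0.5, 0.5])           # naive averaging

      fused_var = lambda w: np.sum(w ** 2 * var)
      print(fused_var(w_opt), fused_var(w_avg))   # 0.8 < 1.25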

  6. Influence of boiling point range of feedstock on properties of derived mesophase pitch

    NASA Astrophysics Data System (ADS)

    Yu, Ran; Liu, Dong; Lou, Bin; Chen, Qingtai; Zhang, Yadong; Li, Zhiheng

    2018-06-01

The composition of the raw material was optimized by vacuum distillation. The carbonization behavior of the two kinds of raw material was followed by polarizing microscopy, softening point, carbon yield, and solubility. The two kinds of mesophase pitch were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), elemental analysis, and 1H nuclear magnetic resonance (1H-NMR). The analysis results suggested that raw material B (15 wt% of A was distilled off and the residue named B) could form large-domain mesophase pitch earlier. The shortened heat-treatment time favored the retention of alkyl groups in the mesophase pitch and reduced its softening point.

  7. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  8. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  9. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
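
    The surrogate-plus-global-search workflow can be sketched with a plain particle swarm over a cheap stand-in polynomial (QPSO and the sparse-grid construction are simplified away; every function and constant below is an illustrative assumption). The point of the pattern: because the surrogate is cheap, the optimizer can afford thousands of evaluations.

      import numpy as np

      def surrogate(x):                      # stand-in polynomial surrogate
          return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.prod(x, axis=-1)

      rng = np.random.default_rng(0)
      n, dim, iters = 30, 7, 200             # 7 parameters, as in the study
      pos = rng.uniform(-1, 1, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pval = pos.copy(), surrogate(pos)
      gbest = pbest[pval.argmin()]

      for _ in range(iters):                 # many cheap surrogate evaluations
          r1, r2 = rng.random((2, n, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, -1, 1)
          val = surrogate(pos)
          better = val < pval
          pbest[better], pval[better] = pos[better], val[better]
          gbest = pbest[pval.argmin()]

      print(np.round(gbest, 3))              # near the surrogate minimizer (~0.3)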

  10. Contributions to optimization of storage and transporting industrial goods

    NASA Astrophysics Data System (ADS)

    Babanatsas, T.; Babanatis Merce, R. M.; Glăvan, D. O.; Glăvan, A.

    2018-01-01

The optimization of storing and transporting industrial goods in a factory, whether from a constructive, functional, or technological point of view, is a determining parameter in programming the manufacturing process, and the performance of the whole process depends on the correlation between these two factors (optimization and process programming). It is imperative to take each type of production program (range) into consideration, to restrict the floor area used as much as possible, and to minimize execution times, all in order to satisfy clients' needs and to classify those needs so that a global software tool (with general rules) can be defined that is expected to fulfil each client's needs.

  11. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  12. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.

  13. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106
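
    For orientation, a small transportation problem can be solved with an off-the-shelf LP solver (this only shows the problem class on toy data; the Shortlist Method itself is a specialized simplex variant and is not reproduced here):

      import numpy as np
      from scipy.optimize import linprog

      supply = np.array([20, 30, 25])            # three sources
      demand = np.array([10, 25, 15, 25])        # four sinks (sums match)
      cost = np.array([[8, 6, 10, 9],
                       [9, 12, 13, 7],
                       [14, 9, 16, 5]], float)

      m, n = cost.shape
      A_eq, b_eq = [], []
      for i in range(m):                          # each source ships its supply
          row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
          A_eq.append(row); b_eq.append(supply[i])
      for j in range(n):                          # each sink receives its demand
          col = np.zeros(m * n); col[j::n] = 1
          A_eq.append(col); b_eq.append(demand[j])

      res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq)
      print(res.fun, res.x.reshape(m, n))        # optimal cost and shipping plan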

  14. Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines

    NASA Astrophysics Data System (ADS)

    Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian

    2016-11-01

Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs) are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m² cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
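
    A hedged sketch of the sparsity-promoting selection (synthetic stand-ins for the PIV fields and turbine signal; the locations, coefficients, and penalty are arbitrary assumptions): the nonzero LASSO coefficients mark candidate ADV positions.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_snapshots, n_locations = 300, 400
      U = rng.normal(size=(n_snapshots, n_locations))  # velocity per PIV point
      true_locs = [37, 152, 290]                       # informative points (toy)
      power = (U[:, true_locs] @ [0.8, -0.5, 0.6]
               + 0.05 * rng.normal(size=n_snapshots))  # turbine signal

      model = Lasso(alpha=0.1).fit(U, power)
      chosen = np.flatnonzero(model.coef_)             # proposed sensor locations
      print(chosen)   # recovers ~[37, 152, 290], possibly with a few extras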

  15. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components that will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported, since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. Here, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulation.
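
    A brief sketch of the D-optimality criterion such designs maximize, det(X'X) of the model matrix; the six-point design and Scheffé quadratic model below are illustrative, not the designs proposed in the paper.

```python
# Evaluate the D-criterion of a candidate mixture design (illustrative).
import numpy as np

def scheffe_quadratic(points):
    """Model matrix for Scheffe's quadratic model in 3 components."""
    x1, x2, x3 = points.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Six-point minimally supported design: vertices + edge midpoints of the simplex
design = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                   [.5, .5, 0], [.5, 0, .5], [0, .5, .5]], dtype=float)
X = scheffe_quadratic(design)
print("D-criterion det(X'X) =", np.linalg.det(X.T @ X))
```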

  16. Initial Correction versus Negative Marking in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Van Hecke, Tanja

    2015-01-01

    Optimal assessment tools should measure students' knowledge in a limited time, in a correct and unbiased way. One method for automating the scoring is multiple choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: number right scoring, the initial correction (IC) and…

  17. Simultaneous saccharification and ethanol fermentation of oxalic acid pretreated corncob assessed with response surface methodology

    Treesearch

    Jae-Won Lee; Rita C.L.B. Rodrigues; Thomas W. Jeffries

    2009-01-01

    Response surface methodology was used to evaluate optimal time, temperature and oxalic acid concentration for simultaneous saccharification and fermentation (SSF) of corncob particles by Pichia stipitis CBS 6054. Fifteen different conditions for pretreatment were examined in a 2³ full factorial design with six axial points. Temperatures ranged from 132 to 180º...
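
    A sketch of the experimental layout described above: a 2³ full factorial in coded units plus six axial (star) points, as in a central composite design. The axial distance and the factor order are assumptions, not values from the paper.

```python
# Generate a 2^3 factorial plus six axial points in coded units.
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
alpha = 1.682                                    # rotatable axial distance, 2**(3/4)
axial = np.vstack([v * alpha * np.eye(3)[i]
                   for i in range(3) for v in (-1, 1)])
design = np.vstack([factorial, axial])           # 8 factorial + 6 axial runs
print(design)                                    # columns: time, temp, acid conc.
```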

  18. Random regression analysis for body weights and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Xu, Pao; Yang, Runqing

    2018-02-01

    To genetically analyse growth traits in genetically improved farmed tilapia (GIFT), the body weight (BWE) and main morphological traits, including body length (BL), body depth (BD), body width (BWI), head length (HL) and length of the caudal peduncle (CPL), were measured six times over the growth period on 1451 fish from 45 mixed families of full and half sibs. A random regression model (RRM) was used to model genetic changes of the growth traits with days of age and to estimate the heritability at any growth point and the genetic correlations between pairwise growth points. Using the covariance function based on optimal RRMs, the heritabilities were estimated to be from 0.102 to 0.662 for BWE, 0.157 to 0.591 for BL, 0.047 to 0.621 for BD, 0.018 to 0.577 for BWI, 0.075 to 0.597 for HL and 0.032 to 0.610 for CPL between 60 and 140 days of age. All genetic correlations between pairwise growth points exceeded 0.5. Moreover, the traits at early days of age showed less correlation with those at later days of age. With phenotypes observed repeatedly, model choice showed that the optimal RRMs could more precisely predict breeding values at a specific growth time than repeatability models or multiple-trait animal models, which enhanced the efficiency of selection for BWE and the main morphological traits.

  19. Sub-optimal control of unsteady boundary layer separation and optimal control of Saltzman-Lorenz model

    NASA Astrophysics Data System (ADS)

    Sardesai, Chetan R.

    The primary objective of this research is to explore the application of optimal control theory in nonlinear, unsteady, fluid dynamical settings. Two problems are considered: (1) control of unsteady boundary-layer separation, and (2) control of the Saltzman-Lorenz model. The unsteady boundary-layer equations are nonlinear partial differential equations that govern the eruptive events that arise when an adverse pressure gradient acts on a boundary layer at high Reynolds numbers. The Saltzman-Lorenz model consists of a coupled set of three nonlinear ordinary differential equations that govern the time-dependent coefficients in truncated Fourier expansions of Rayleigh-Bénard convection and exhibit deterministic chaos. Variational methods are used to derive the nonlinear optimal control formulations based on cost functionals that define the control objective through a performance measure and a penalty function that penalizes the cost of control. The resulting formulation consists of the nonlinear state equations, which must be integrated forward in time, and the nonlinear control (adjoint) equations, which are integrated backward in time. Such coupled forward-backward time integrations are computationally demanding; therefore, the full optimal control problem is solved for the Saltzman-Lorenz model, while the more complex unsteady boundary-layer case is treated using a sub-optimal approach. The latter is a quasi-steady technique in which the unsteady boundary-layer equations are integrated forward in time, and the steady control equation is solved at each time step. Both sub-optimal control of the unsteady boundary-layer equations and optimal control of the Saltzman-Lorenz model are found to be successful in meeting the control objectives for each problem. In the case of boundary-layer separation, the control results indicate that it is necessary to eliminate the recirculation region that is a precursor to the unsteady boundary-layer eruptions. In the case of the Saltzman-Lorenz model, it is possible to control the system about either of the two unstable equilibrium points representing clockwise and counterclockwise rotation of the convection rolls in a parameter regime for which the uncontrolled solution would exhibit deterministic chaos.

  20. Radar - ESRL Wind Profiler with RASS, Wasco Airport - Derived Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaffrey, Katherine

    Profiles of turbulence dissipation rate for 15-minute intervals, time-stamped at the beginning of the 15-minute period, during the final 30 minutes of each hour. During that time, the 915-MHz wind profiling radar was in an optimized configuration with only a vertically pointing beam, for measuring accurate spectral widths of vertical velocity. A bias-corrected dissipation rate was also profiled (described in McCaffrey et al. 2017). Hourly files contain two 15-minute profiles.

  1. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    PubMed

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with a sampling intensity of 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%; SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
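
    A minimal sketch of the point-counting estimate, assuming a Cavalieri-type calculation: the fat volume is the number of points hitting the tissue, times the area represented by one grid point, times the slice spacing. All numbers are made up.

```python
# Cavalieri-style volume estimate from point counts on sampled CT slices.
points_hit_per_slice = [42, 55, 61, 48]   # points falling on visceral fat
area_per_point_cm2 = 0.25                 # area each grid point represents
slice_spacing_cm = 0.5                    # distance between sampled slices

volume_cm3 = sum(points_hit_per_slice) * area_per_point_cm2 * slice_spacing_cm
print(f"estimated VAF volume: {volume_cm3:.1f} cm^3")
```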

  2. Optimization of the Divergent method for genotyping single nucleotide variations using SYBR Green-based single-tube real-time PCR.

    PubMed

    Gentilini, Fabio; Turba, Maria E

    2014-01-01

    A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This novel PCR technique utilizes a set of four primers and a particular denaturation temperature for simultaneously amplifying two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis downstream of a closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further develop the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations.

  3. The Effect of Transcutaneous Electrical Nerve Stimulation of Sympathetic Ganglions and Acupuncture Points on Distal Blood Flow.

    PubMed

    Kamali, Fahimeh; Mirkhani, Hossein; Nematollahi, Ahmadreza; Heidari, Saeed; Moosavi, Elahesadat; Mohamadi, Marzieh

    2017-04-01

    Transcutaneous electrical nerve stimulation (TENS) is a widely practiced method of increasing blood flow in clinical practice. The best location for stimulation to achieve optimal blood flow has not yet been determined. We compared the effect of TENS application at sympathetic ganglions and acupuncture points on blood flow in the foot of healthy individuals. Seventy-five healthy individuals were randomly assigned to three groups. The first group received cutaneous electrical stimulation at the thoracolumbar sympathetic ganglions. The second group received stimulation at acupuncture points. The third group received stimulation in the mid-calf area as a control group. Blood flow was recorded at time zero as baseline and every 3 minutes after baseline during stimulation, with a laser Doppler flow-meter. Individuals who received sympathetic ganglion stimulation showed significantly greater blood flow than those receiving acupuncture point stimulation or those in the control group (p<0.001). Data analysis revealed that blood flow at different times during stimulation increased significantly from time zero in each group. Therefore, the application of low-frequency TENS at the thoracolumbar sympathetic ganglions was more effective in increasing peripheral blood circulation than stimulation at acupuncture points.

  4. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model shows good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, which makes it suitable for real-time traffic flow prediction. PMID:27872637
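
    A hedged sketch of the flow-propagation idea: the flow on the target road at the next time step is the probability-weighted sum of the current upstream flows. The transfer probabilities here are assumed rather than estimated from data, and the interior-point parameter fit is not shown.

```python
# Predict target-road flow from upstream flows and assumed transfer probabilities.
import numpy as np

upstream_flow = np.array([120.0, 80.0, 45.0])   # vehicles/interval, roads A, B, C
transfer_prob = np.array([0.30, 0.55, 0.10])    # P(vehicle turns onto target road)

predicted_flow = float(upstream_flow @ transfer_prob)
print(f"predicted flow at next time step: {predicted_flow:.1f} vehicles")
```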

  5. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  6. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

    The corresponding FIM can be estimated by $\hat{F}(\tau) = \hat{F}(\tau, \hat{\theta}_{OLS}) = (\hat{\Sigma}^N(\hat{\theta}_{OLS}))^{-1}$ (13). The asymptotic standard errors are given by $SE_k(\theta_0) = \sqrt{(\Sigma^N_0)_{kk}}$, $k = 1, \ldots, p$ (14). These standard errors are estimated in practice (when $\theta_0$ and $\sigma_0$ are not known) by $SE_k(\hat{\theta}_{OLS}) = \sqrt{(\hat{\Sigma}^N(\hat{\theta}_{OLS}))_{kk}}$, $k = 1, \ldots, p$, and $SE_k(\hat{\theta}_{boot}) = \sqrt{\mathrm{Cov}(\hat{\theta}_{boot})_{kk}}$. We will compare the optimal design methods using the standard errors resulting from the optimal time points each
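
    A short sketch of the estimator in equations (13)-(14): the standard errors are the square roots of the diagonal of the inverse Fisher information matrix. The FIM values below are made up.

```python
# Standard errors from an estimated Fisher information matrix.
import numpy as np

F_hat = np.array([[50.0, 5.0],
                  [5.0, 20.0]])        # estimated FIM at the OLS estimate
cov = np.linalg.inv(F_hat)             # Sigma-hat = inverse of the FIM
se = np.sqrt(np.diag(cov))             # SE_k = sqrt(Sigma_kk)
print("standard errors:", se)
```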

  7. SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance

    NASA Technical Reports Server (NTRS)

    White, Joseph; Dutta, Soumyo; Striepe, Scott

    2015-01-01

    The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analysis was performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, its results, and a comparison with flight video from SFDT-1.

  8. Real-Time Control of an Ensemble of Heterogeneous Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Bouman, Niek J.; Le Boudec, Jean-Yves

    This paper focuses on the problem of controlling an ensemble of heterogeneous resources connected to an electrical grid at the same point of common coupling (PCC). The controller receives an aggregate power setpoint for the ensemble in real time and tracks this setpoint by issuing individual optimal setpoints to the resources. The resources can have continuous or discrete nature (e.g., heating systems consisting of a finite number of heaters that each can be either switched on or off) and/or can be highly uncertain (e.g., photovoltaic (PV) systems or residential loads). A naive approach would lead to a stochastic mixed-integer optimization problem to be solved at the controller at each time step, which might be infeasible in real time. Instead, we allow the controller to solve a continuous convex optimization problem and compensate for the errors at the resource level by using a variant of the well-known error diffusion algorithm. We give conditions guaranteeing that our algorithm tracks the power setpoint at the PCC on average while issuing optimal setpoints to individual resources. We illustrate the approach numerically by controlling a collection of batteries, PV systems, and discrete loads.
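
    An illustrative sketch (with assumed on/off resources) of the error-diffusion idea: each discrete device rounds its continuous optimal setpoint to the nearest feasible level and passes the rounding error to the next device, so the ensemble tracks the aggregate setpoint on average.

```python
# Error-diffusion quantization of continuous setpoints to discrete levels.
def dispatch(continuous_setpoints, levels=(0.0, 1.0)):
    """Quantize setpoints to discrete levels, diffusing the error forward."""
    carry = 0.0
    discrete = []
    for s in continuous_setpoints:
        target = s + carry                        # add accumulated error
        q = min(levels, key=lambda l: abs(l - target))
        carry = target - q                        # error passed to next device
        discrete.append(q)
    return discrete

# Five heaters each asked for 0.4: two switch on, matching 2.0 in aggregate.
print(dispatch([0.4, 0.4, 0.4, 0.4, 0.4]))        # -> [0.0, 1.0, 0.0, 1.0, 0.0]
```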

  9. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

    The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. Such algorithms do not need the objective function in analytical form: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize convergence time under the worst initial conditions. Results are very promising.
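
    A hedged sketch of the min-max formulation with a derivative-free optimizer: minimize the worst-case convergence time over a sample of initial conditions. Here convergence_time() is a hypothetical stand-in for the closed-loop attitude simulation, and Nelder-Mead stands in for the (unspecified) derivative-free algorithm.

```python
# Min-max tuning of design parameters with a derivative-free optimizer.
import numpy as np
from scipy.optimize import minimize

def convergence_time(gains, x0):
    # Placeholder: in practice, simulate the spacecraft with these feedback
    # gains from initial condition x0 and return the settling time.
    return float(np.sum(gains**2) + np.sum(x0**2) / (1.0 + np.sum(np.abs(gains))))

initial_conditions = [np.array([0.5, -0.2]), np.array([-1.0, 0.8])]

def worst_case(gains):
    # Worst convergence time over the sampled initial conditions.
    return max(convergence_time(np.asarray(gains), x0) for x0 in initial_conditions)

res = minimize(worst_case, x0=[1.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
print("robust design parameters:", res.x)
```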

  10. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation, the inertial information generated by the motion platform is most of the time different from the visual information in the simulator displays. This occurs due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. It has been shown in previous studies that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. Such a result seems related to the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also try to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain are different from the ones obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration on either the coherence zone or the optimal gain measurements.

  11. Joint optimization of a partially coherent Gaussian beam for free-space optical communication over turbulent channels with pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali

    2013-02-01

    Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.

  12. Challenges in Maintaining Emotion Regulation in a Sleep and Energy Deprived State Induced by the 4800Km Ultra-Endurance Bicycle Race; The Race Across AMerica (RAAM).

    PubMed

    Lahart, Ian M; Lane, Andrew M; Hulton, Andrew; Williams, Karen; Godfrey, Richard; Pedlar, Charles; Wilson, Mathew G; Whyte, Gregory P

    2013-01-01

    Multiday ultra-endurance races present athletes with a significant number of physiological and psychological challenges. We examined emotions, the perceived functionality (optimal-dysfunctional) of emotions, strategies to regulate emotions, sleep quality, and energy intake-expenditure in a four-man team participating in the Race Across AMerica (RAAM), a 4856 km continuous cycle race. Cyclists reported experiencing an optimal emotional state for less than 50% of the total competition, with emotional states differing significantly between cyclists over time. Coupled with this emotional disturbance, each cyclist experienced progressively worsening sleep deprivation and daily negative energy balances throughout the RAAM. Cyclists managed less than one hour of continuous sleep per sleep episode, high sleep latency and a high percentage of moving time. Of note, actual sleep and sleep efficiency were better maintained during longer rest periods, highlighting the importance of a race strategy that seeks to optimise the balance between average cycling velocity and sleep time. Our data suggest that future RAAM cyclists and crew should: 1) identify beliefs on the perceived functionality of emotions in relation to best (functional-optimal) and worst (dysfunctional) performance as the starting point for intervention work; 2) create a plan to support sufficient sleep and recovery; 3) create nutritional strategies that maintain energy intake and thus reduce energy deficits; and 4) prepare for the deleterious effects of sleep deprivation so that they are able to respond appropriately to unexpected stressors and foster functional working interpersonal relationships. Key points: completing the RAAM was associated with sleep disturbance, an energy-deficient state, and intense unwanted emotions; an optimal emotional state was reported for less than 50% of the competition, and actual sleep and sleep efficiency were better maintained during longer rest periods; future RAAM cyclists and crew should identify individual beliefs on the perceived functionality of emotional states in relation to best (optimal) and worst (dysfunctional) performance as the starting point for deciding whether emotion regulation strategies should be initiated, plan for enhanced sleep and recovery rather than only for maintaining a high average velocity, create nutritional strategies that maintain energy intake and thus reduce energy deficits, and psychologically prepare for the deleterious effects of sleep deprivation so that cyclists and crew can respond appropriately to unexpected stressors and foster functional working relationships.

  13. Multiple-hopping trajectories near a rotating asteroid

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Zhang, Tian-Jiao; Li, Zhao; Li, Heng-Nian

    2017-03-01

    We present a study of the transfer orbits connecting landing points on irregular-shaped asteroids. The landing points do not touch the surface of the asteroids and are chosen several meters above it. The ant colony optimization technique is used to calculate multiple-hopping trajectories near an arbitrary irregular asteroid. The new method has three steps: (1) searching for the maximal clique of candidate target landing points; (2) optimizing the legs connecting all landing point pairs; and (3) optimizing the hopping sequence. In particular, the method is applied to asteroids 433 Eros and 216 Kleopatra. We impose a critical constraint on the target landing points to allow for extensive exploration of the asteroid: the relative distance between all the target positions reached should be larger than a minimum allowed value. Ant colony optimization is applied to find the set and sequence of targets, and the differential evolution algorithm is used to solve for the hopping orbits. The minimum velocity-increment tours of hopping trajectories connecting all the landing positions are obtained by ant colony optimization. Results for asteroids of different sizes indicate that the cost of the minimum velocity-increment tour depends on the size of the asteroid.

  14. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.

  15. Is there a bone-nail specific entry point? Automated fit quantification of tibial nail designs during the insertion for six different nail entry points.

    PubMed

    Amarathunga, J P; Schuetz, M A; Yarlagadda, K V D; Schmutz, B

    2015-04-01

    Intramedullary nailing is the standard fixation method for displaced diaphyseal fractures of the tibia. Selection of the correct nail insertion point is important for axial alignment of the bone fragments and to avoid iatrogenic fractures. However, the standard entry point (SEP) may not always optimise the bone-nail fit due to geometric variations between bones. This study aimed to investigate the optimal entry point for a given bone-nail pair using the fit quantification software tool previously developed by the authors. The misfit was quantified for 20 bones with two nail designs (ETN and ETN-Proximal Bend) for the SEP and 5 entry points located 5 mm and 10 mm away from the SEP. The SEP was the optimal entry point for 50% of the bones used. For the remaining bones, the optimal entry point was located 5 mm away from the SEP, which improved the overall fit by 40% on average. However, entry points 10 mm away from the SEP doubled the misfit. An optimised bone-nail fit can be achieved through the SEP and within a 5 mm radius of it, except posteriorly. The results suggest that the optimal entry point should be selected by considering the fit during insertion and not only at the final position.

  16. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.

  17. Weighted optimization of irradiance for photodynamic therapy of port wine stains

    NASA Astrophysics Data System (ADS)

    He, Linhuan; Zhou, Ya; Hu, Xiaoming

    2016-10-01

    Planning of the irradiance distribution (PID) is one of the foremost factors in the on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID is proposed according to the grading of PWS with a three-dimensional digital illumination instrument. First, the point clouds of the lesions were filtered to remove erroneous or redundant points, triangulation was carried out, and the lesion was divided into small triangular patches. Second, the optimization parameters of each triangular patch, such as area, normal vector and orthocenter, were calculated, and the weighting coefficients were determined from the erythema indexes and areas of the patches. Then, the initial optimization point was calculated based on the normal vectors and orthocenters to optimize the light direction. Finally, the irradiation was optimized according to the cosines of the irradiance angles and the weighting coefficients. Comparing the irradiance distribution before and after optimization shows that the proposed weighted optimization method makes the irradiance distribution match the characteristics of the lesions better and has the potential to improve therapeutic efficacy.

  18. Simulation of the elementary evolution operator with the motional states of an ion in an anharmonic trap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Ludovic; Vaeck, Nathalie; Justum, Yves

    2015-04-07

    Following a recent proposal of L. Wang and D. Babikov [J. Chem. Phys. 137, 064301 (2012)], we theoretically illustrate the possibility of using the motional states of a Cd+ ion trapped in a slightly anharmonic potential to simulate the single-particle time-dependent Schrödinger equation. The simulated wave packet is discretized on a spatial grid and the grid points are mapped on the ion motional states, which define the qubit network. The localization probability at each grid point is obtained from the population in the corresponding motional state. The quantum gate is the elementary evolution operator corresponding to the time-dependent Schrödinger equation of the simulated system. The corresponding matrix can be estimated by any numerical algorithm. The radio-frequency field which is able to drive this unitary transformation among the qubit states of the ion is obtained by multi-target optimal control theory. The ion is assumed to be cooled in the ground motional state, and the preliminary step consists in initializing the qubits with the amplitudes of the initial simulated wave packet. The time evolution of the localization probability at the grid points is then obtained by successive applications of the gate and reading out the motional state population. The gate field is always identical for a given simulated potential; only the field preparing the initial wave packet has to be optimized for different simulations. We check the stability of the simulation against decoherence due to fluctuating electric fields in the trap electrodes by applying dissipative Lindblad dynamics.

  19. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    NASA Astrophysics Data System (ADS)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser scanning is widely used to collect surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the Just-Noticeable-Difference perception metric to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments were undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  20. Two Point Exponential Approximation Method for structural optimization of problems with frequency constraints

    NASA Technical Reports Server (NTRS)

    Fadel, G. M.

    1991-01-01

    The two point exponential approximation method was introduced by Fadel et al. (1990) and tested on structural optimization problems with stress and displacement constraints. The results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.

  1. Optimal cutoff points for HOMA-IR and QUICKI in the diagnosis of metabolic syndrome and non-alcoholic fatty liver disease: A population based study.

    PubMed

    Motamed, Nima; Miresmail, Seyed Javad Haji; Rabiee, Behnam; Keyvani, Hossein; Farahani, Behzad; Maadi, Mansooreh; Zamani, Farhad

    2016-03-01

    The present study was carried out to determine the optimal cutoff points for the homeostatic model assessment of insulin resistance (HOMA-IR) and the quantitative insulin sensitivity check index (QUICKI) in the diagnosis of metabolic syndrome (MetS) and non-alcoholic fatty liver disease (NAFLD). The baseline data of 5511 subjects aged ≥18 years from a cohort study in northern Iran were analyzed. Receiver operating characteristic (ROC) analysis was conducted to determine the discriminatory capability of HOMA-IR and QUICKI in the diagnosis of MetS and NAFLD. The Youden index was utilized to determine the optimal cutoff points of HOMA-IR and QUICKI in the diagnosis of MetS and NAFLD. The optimal cutoff points for HOMA-IR in the diagnosis of MetS and NAFLD were 2.0 [sensitivity=64.4%, specificity=66.8%] and 1.79 [sensitivity=66.2%, specificity=62.2%] in men and 2.5 [sensitivity=57.6%, specificity=67.9%] and 1.95 [sensitivity=65.1%, specificity=54.7%] in women, respectively. Furthermore, the optimal cutoff points for QUICKI in the diagnosis of MetS and NAFLD were 0.343 [sensitivity=63.7%, specificity=67.8%] and 0.347 [sensitivity=62.9%, specificity=65.0%] in men and 0.331 [sensitivity=55.7%, specificity=70.7%] and 0.333 [sensitivity=53.2%, specificity=67.7%] in women, respectively. Not only were the optimal cutoff points of HOMA-IR and QUICKI different for MetS and NAFLD, but different cutoff points were also obtained for men and women for each of these two conditions.
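
    A sketch of the cutoff-selection procedure, assuming simulated data in place of the cohort: the Youden index J = sensitivity + specificity - 1 equals tpr - fpr along the ROC curve, and the optimal cutoff maximizes it.

```python
# Youden-index optimal cutoff from an ROC curve (simulated data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
homa_ir = np.concatenate([rng.normal(1.6, 0.6, 500),    # subjects without MetS
                          rng.normal(2.6, 0.9, 200)])   # subjects with MetS
has_mets = np.concatenate([np.zeros(500), np.ones(200)])

fpr, tpr, thresholds = roc_curve(has_mets, homa_ir)
j = tpr - fpr                                  # Youden index at each threshold
best = np.argmax(j)
print(f"optimal cutoff: {thresholds[best]:.2f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```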

  2. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C² continuous surface depending on the method of subdivision chosen) creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.
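
    An illustrative sketch of subdivision as a smoothing step. The paper does not specify the scheme, so Chaikin's corner-cutting rule is used here as a stand-in: it refines the control polygon toward a smooth curve without adding design variables to the optimization.

```python
# Chaikin corner-cutting subdivision of a control polygon.
import numpy as np

def chaikin(points, iterations=3):
    """Repeatedly replace each segment with its 1/4 and 3/4 points."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point near segment start
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point near segment end
        pts = np.empty((2 * len(q), pts.shape[1]))
        pts[0::2] = q
        pts[1::2] = r
    return pts

control = [[0, 0], [1, 2], [3, 2], [4, 0]]      # 4 design control points
print(chaikin(control).shape)                   # -> (18, 2): refined, smoother curve
```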

  3. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for the automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize the original registration parameters and avoids the manual interventions required by control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from the panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in pixel values and on the registration model, respectively. Third, a brute-force optimization method was used to search for the optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method on a sequence of panoramic/fish-eye images. The results showed that: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431

  4. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.

  5. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah

    PubMed Central

    Tian, Le; Latré, Steven

    2017-01-01

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT); it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of dense IEEE 802.11ah networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major step towards applying the RAW mechanism in real-life IoT networks. PMID:28677617

  6. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.

    PubMed

    Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen

    2017-07-04

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT); it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of dense IEEE 802.11ah networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major step towards applying the RAW mechanism in real-life IoT networks.

  7. Satellite scheduling considering maximum observation coverage time and minimum orbital transfer fuel cost

    NASA Astrophysics Data System (ADS)

    Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi

    2010-01-01

    In the case of an emergency like the Wenchuan earthquake, it is impossible to observe a given target on earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so a way must be found to reasonably utilize the existing satellites to rapidly image the affected area during a short time period. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as the standard of reference. Two factors must be taken into consideration simultaneously in the orbit design, i.e., the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. Because the operational orbit for observing the disaster area is reached with impulsive maneuvers, we take the operational orbit elements as the parameters to be optimized and obtain the minimum objective function by comparing the results derived from primer vector theory with those derived from the Hohmann transfer. Primer vector theory is utilized to optimize the three-impulse transfer trajectory, and the Hohmann transfer is utilized for the coplanar and small-inclination non-coplanar cases. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that primer vector and Hohmann transfer theory are effective for multi-objective orbit optimization.
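
    A minimal sketch of the coplanar Hohmann transfer cost that serves as one of the candidate solutions; the radii are illustrative and mu is Earth's gravitational parameter.

```python
# Two-impulse Hohmann transfer delta-v between coplanar circular orbits.
import math

mu = 3.986004418e14            # Earth's gravitational parameter, m^3/s^2
r1, r2 = 6778e3, 7178e3        # initial and target orbit radii, m

dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # first burn
dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularize
print(f"total delta-v: {dv1 + dv2:.1f} m/s")
```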

  8. SAMPEX science pointing with velocity avoidance. [solar anomalous and magnetospheric particle explorer

    NASA Technical Reports Server (NTRS)

    Frakes, Joseph P.; Henretty, Debra A.; Flatley, Thomas W.; Markley, F. L.; San, Josephine K.; Lightsey, E. G.

    1992-01-01

    The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) science pointing mode is presented with the additional constraint of velocity avoidance. This constraint has been added in light of the orbital debris and micrometeoroid fluxes that have been revealed by the Long Duration Exposure Facility (LDEF) recovered in January 1990. These fluxes are 50-100 times higher than the flux tables that were used in the September 1988 proposal to NASA for the SAMPEX mission. The SAMPEX Heavy Ion Large Telescope (HILT) sensor includes a flow-through isobutane proportional counter that is susceptible to penetration by orbital debris and micrometeoroids. Thus, keeping the HILT sensor pointed away from the velocity vector, the direction of maximum flux, will compensate for the higher than expected fluxes. Using an orbital debris model and a micrometeoroid model developed at the Johnson Space Center (JSC), and a SAMPEX dynamic simulator developed by the Guidance and Control Branch at the Goddard Space Flight Center (GSFC), an 'optimal' minimum ram angle (the angle between the HILT boresight and the velocity vector) of 90 degrees has been determined. It is optimal in the sense of minimizing the science pointing performance degradation while providing approximately an 89 percent chance of survival for the HILT sensor over a three year period.

  9. Point-of-care ultrasonography by pediatric emergency physicians. Policy statement.

    PubMed

    Marin, Jennifer R; Lewiss, Resa E

    2015-04-01

    Point-of-care ultrasonography is increasingly being used to facilitate accurate and timely diagnoses and to guide procedures. It is important for pediatric emergency physicians caring for patients in the emergency department to receive adequate and continued point-of-care ultrasonography training for those indications used in their practice setting. Emergency departments should have credentialing and quality assurance programs. Pediatric emergency medicine fellowships should provide appropriate training to physician trainees. Hospitals should provide privileges to physicians who demonstrate competency in point-of-care ultrasonography. Ongoing research will provide the necessary measures to define the optimal training and competency assessment standards. Requirements for credentialing and hospital privileges will vary and will be specific to individual departments and hospitals. As more physicians are trained and more research is completed, there should be one national standard for credentialing and privileging in point-of-care ultrasonography for pediatric emergency physicians.

  10. Pressure optimization of an EC-QCL based cavity ring-down spectroscopy instrument for exhaled NO detection

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng; Han, Yanling; Li, Bincheng

    2018-02-01

    Nitric oxide (NO) in exhaled breath has gained increasing interest in recent years, driven mainly by the clinical need to monitor inflammatory status in respiratory disorders such as asthma and other pulmonary conditions. Mid-infrared cavity ring-down spectroscopy (CRDS) using an external-cavity, widely tunable continuous-wave quantum cascade laser operating at 5.3 µm was employed for NO detection. The detection pressure was reduced in steps to improve the sensitivity, and the optimal pressure was determined to be 15 kPa based on the fitting residual analysis of the measured absorption spectra. A detection limit (1σ, i.e., one standard deviation) of 0.41 ppb was experimentally achieved for NO detection in human breath under the optimized condition, with a total acquisition time of 60 s (2 s per data point). A diurnal measurement session was conducted for exhaled NO. The experimental results indicate that the mid-infrared CRDS technique has great potential for various applications in health diagnosis.
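
    A brief sketch of how a ring-down measurement yields absorption, assuming the standard CRDS relation: the absorption coefficient follows from the fitted decay time relative to the empty-cavity decay. The values below are illustrative, not the paper's.

```python
# Absorption coefficient from ring-down decay times (standard CRDS relation).
c = 2.998e10                 # speed of light, cm/s
tau0 = 8.0e-6                # empty-cavity ring-down time, s
tau = 7.6e-6                 # ring-down time with absorber present, s

alpha = (1.0 / c) * (1.0 / tau - 1.0 / tau0)   # absorption coefficient, cm^-1
print(f"alpha = {alpha:.3e} cm^-1")
```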

  11. Investigation of Optimal Control Allocation for Gust Load Alleviation in Flight Control

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Bodson, Marc

    2012-01-01

    Advances in sensors and avionics computation power suggest real-time structural load measurements could be used in flight control systems for improved safety and performance. A conventional transport flight control system determines the moments necessary to meet the pilot's command, while rejecting disturbances and maintaining stability of the aircraft. Control allocation is the problem of converting these desired moments into control effector commands. In this paper, a framework is proposed to incorporate real-time structural load feedback and structural load constraints in the control allocator. Constrained optimal control allocation can be used to achieve desired moments without exceeding specified limits on monitored load points. Minimization of structural loads by the control allocator is used to alleviate gust loads. The framework to incorporate structural loads in the flight control system and an optimal control allocation algorithm will be described and then demonstrated on a nonlinear simulation of a generic transport aircraft with flight dynamics and static structural loads.
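
    A hedged sketch of the allocation step, with a made-up effectiveness matrix and effector limits: find effector commands u that best reproduce the desired moments B u = m_des within bounds. Structural-load limits would enter as additional constraints; a bounded least-squares solve stands in here for the paper's optimal allocator.

```python
# Bounded least-squares control allocation (illustrative stand-in).
import numpy as np
from scipy.optimize import lsq_linear

B = np.array([[0.8, 0.4, 0.0, -0.4],    # roll effectiveness of 4 surfaces
              [0.1, 0.9, 0.9, 0.1],     # pitch effectiveness
              [0.0, -0.3, 0.3, 0.0]])   # yaw effectiveness
m_des = np.array([0.2, 0.5, 0.0])       # commanded moments

res = lsq_linear(B, m_des, bounds=(-0.5, 0.5))   # deflection limits, rad
print("effector commands:", res.x)
```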

  12. A feedback control for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Seywald, Hans; Cliff, Eugene M.

    1991-01-01

    A robust feedback algorithm is presented for a near-minimum-fuel ascent of a two-stage launch vehicle operating in the equatorial plane. The development of the algorithm is based on the ideas of neighboring optimal control and can be divided into three phases. In phase 1, the formalism of optimal control is employed to calculate fuel-optimal ascent trajectories for a simple point-mass model. In phase 2, these trajectories are used to numerically calculate gain functions of time for the control(s), the total flight time, and possibly other variables of interest. In phase 3, these gains are used to determine feedback expressions for the controls associated with a more realistic model of a launch vehicle. With the Advanced Launch System in mind, all calculations are performed on a two-stage vehicle with a fixed thrust history, but this restriction is by no means essential to the approach taken. The performance and robustness of the algorithm are found to be excellent.

  13. A mesh gradient technique for numerical optimization

    NASA Technical Reports Server (NTRS)

    Willis, E. A., Jr.

    1973-01-01

    A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.

  14. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex when more than five precision points of the coupler must be matched, which leads to large structural error. The way to arrive at a better-precision solution is to use optimization techniques. The work presented herein considers methods for optimizing the structural error of a single-degree-of-freedom closed kinematic chain generating functions such as log(x), e^x, tan(x), and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop a five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative analysis of the structural error obtained with the least-squares method and with the extended Freudenstein-Chebyshev method is presented.
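
    To make the synthesis step concrete: the Freudenstein equation K1 cos(th4) - K2 cos(th2) + K3 = cos(th2 - th4) is linear in K1, K2, K3, so five precision points (here placed by Chebyshev spacing) give an overdetermined system solvable by least squares. The sketch below assumes illustrative input/output angle ranges; it is not the paper's extended formulation:

    ```python
    import numpy as np

    # Five-precision-point synthesis of a four-bar function generator for
    # y = log10(x) on x in [1, 10], via the Freudenstein equation.
    x0, xf, n = 1.0, 10.0, 5
    j = np.arange(1, n + 1)
    x = 0.5 * (x0 + xf) - 0.5 * (xf - x0) * np.cos(np.pi * (2 * j - 1) / (2 * n))
    y = np.log10(x)                                   # Chebyshev-spaced points

    th2 = np.radians(30 + (x - x0) / (xf - x0) * 90)        # input range: assumed
    th4 = np.radians(60 + (y - y.min()) / np.ptp(y) * 90)   # output range: assumed

    A = np.column_stack([np.cos(th4), -np.cos(th2), np.ones(n)])
    rhs = np.cos(th2 - th4)
    (K1, K2, K3), *_ = np.linalg.lstsq(A, rhs, rcond=None)  # least squares

    d = 1.0                      # normalize the ground link
    a, c = d / K1, d / K2        # input and output links
    b2 = a**2 + c**2 + d**2 - 2 * a * c * K3
    print("K:", K1, K2, K3)
    # A negative radicand signals that the assumed angle ranges are infeasible.
    print("coupler b:", np.sqrt(b2) if b2 > 0 else "infeasible for these ranges")
    ```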

  15. Structure-kinetic relationships--an overlooked parameter in hit-to-lead optimization: a case of cyclopentylamines as chemokine receptor 2 antagonists.

    PubMed

    Vilums, Maris; Zweemer, Annelien J M; Yu, Zhiyi; de Vries, Henk; Hillger, Julia M; Wapenaar, Hannah; Bollen, Ilse A E; Barmare, Farhana; Gross, Raymond; Clemens, Jeremy; Krenitsky, Paul; Brussee, Johannes; Stamos, Dean; Saunders, John; Heitman, Laura H; Ijzerman, Adriaan P

    2013-10-10

    Preclinical models of inflammatory diseases (e.g., neuropathic pain, rheumatoid arthritis, and multiple sclerosis) have pointed to a critical role of the chemokine receptor 2 (CCR2) and chemokine ligand 2 (CCL2). However, one of the biggest problems of high-affinity inhibitors of CCR2 is their lack of efficacy in clinical trials. We report a new approach for the design of high-affinity and long-residence-time CCR2 antagonists. We developed a new competition association assay for CCR2, which allows us to investigate the relation between the structure of a ligand and its receptor residence time [i.e., the structure-kinetic relationship (SKR)] alongside a traditional structure-affinity relationship (SAR). By applying combined knowledge of the SAR and SKR, we were able to re-evaluate the hit-to-lead process of cyclopentylamines as CCR2 antagonists. Affinity-based optimization yielded compound 1 with good binding (Ki = 6.8 nM) but a very short residence time (2.4 min). However, when the optimization was also based on residence time, the hit-to-lead process yielded compound 22a, a new high-affinity CCR2 antagonist (3.6 nM) with a residence time of 135 min.

  16. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve reentry trajectory optimization problems for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.
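
    Pseudospectral transcriptions rest on collocating the dynamics at Gauss-type nodes and integrating costs with the matching quadrature weights. A minimal sketch of that building block (generic, not the authors' multistage implementation):

    ```python
    import numpy as np

    # Legendre-Gauss nodes/weights on [-1, 1]; pseudospectral transcriptions
    # collocate dynamics at such nodes and integrate costs with the weights.
    N = 16
    tau, w = np.polynomial.legendre.leggauss(N)

    t0, tf = 0.0, 100.0
    t = 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)   # map nodes to physical time
    scale = 0.5 * (tf - t0)

    # Example running cost L(t) = t**2, integrated exactly by the quadrature.
    cost = scale * np.dot(w, t ** 2)
    print(cost, "vs exact", tf ** 3 / 3)
    ```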

  17. Optimization of Pressurized Liquid Extraction of Three Major Acetophenones from Cynanchum bungei Using a Box-Behnken Design

    PubMed Central

    Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui

    2012-01-01

    In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal conditions for extraction of ACB were obtained using a Box-Behnken design consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under these optimal conditions, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079
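
    For reference, a three-factor Box-Behnken design with five center replicates reproduces the 17 experimental points mentioned here: 12 edge-midpoint runs plus 5 center runs. A sketch of its construction, with hypothetical factor ranges:

    ```python
    import numpy as np
    from itertools import combinations

    # Three-factor Box-Behnken design: 12 edge-midpoint runs plus
    # 5 center replicates = 17 experimental points.
    def box_behnken_3(n_center=5):
        runs = []
        for i, j in combinations(range(3), 2):   # each pair of factors
            for a in (-1, 1):
                for b in (-1, 1):
                    row = [0, 0, 0]
                    row[i], row[j] = a, b        # third factor at center
                    runs.append(row)
        runs += [[0, 0, 0]] * n_center
        return np.array(runs, dtype=float)

    coded = box_behnken_3()
    # Map coded levels to hypothetical ranges: temp (degC), pressure (psi), time (min)
    lo = np.array([80.0, 1000.0, 5.0])
    hi = np.array([120.0, 1500.0, 20.0])
    real = (coded + 1) / 2 * (hi - lo) + lo
    print(coded.shape, real[0])   # (17, 3) and the first run's settings
    ```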

  18. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve reentry trajectory optimization problems for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  19. Transient responses' optimization by means of set-based multi-objective evolution

    NASA Astrophysics Data System (ADS)

    Avigad, Gideon; Eisenstadt, Erella; Goldvard, Alex; Salomon, Shaul

    2012-04-01

    In this article, a novel solution to multi-objective problems involving the optimization of transient responses is suggested. It is claimed that the common approach of treating such problems by introducing auxiliary objectives overlooks trade-offs that should be presented to the decision makers: if one of the responses is optimal at some time during the transient, it should not be overlooked. An evolutionary multi-objective algorithm is suggested in order to search for these optimal solutions. For this purpose, state-wise domination is utilized, and a new crowding measure for ordered sets is suggested. The approach is tested on both artificial and real-life problems in order to explain the methodology and demonstrate its applicability and importance. The results indicate that, from an engineering point of view, the approach possesses several advantages over existing approaches. Moreover, the applications highlight the importance of set-based evolution.
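
    The state-wise domination used here can be stated simply: for minimization, response A dominates response B if, at every sampled time, A's objective vector is no worse than B's and at least once strictly better. A minimal sketch with hypothetical responses:

    ```python
    import numpy as np

    def statewise_dominates(A, B):
        """A, B: arrays of shape (n_times, n_objectives), minimization assumed.
        A state-wise dominates B if it is no worse everywhere, better somewhere."""
        return np.all(A <= B) and np.any(A < B)

    t = np.linspace(0, 5, 50)
    resp_a = np.column_stack([np.exp(-t), 0.5 * t])        # hypothetical responses
    resp_b = np.column_stack([np.exp(-0.5 * t), 0.5 * t])
    print(statewise_dominates(resp_a, resp_b))  # True: faster decay, same cost
    ```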

  20. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms faster and more robust. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm, which adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to regions where the solution exhibited sharp features and fewer points to regions where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed. For such problems, high accuracy is desirable only in the immediate future, yet the ultimate mission objectives should be accommodated as well. Intelligent trajectory generation for such situations is thus enabled by introducing the idea of multigrid temporal resolution, solving the associated trajectory optimization problem on a non-uniform grid across time that is adapted to: (i) the immediate future, and (ii) potential discontinuities in the state and control variables.
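
    The local refinement idea can be sketched in a few lines: a finer grid point is retained only where interpolation from the coarser level fails to predict the function, which is exactly where switchings and other irregularities live. The following toy version (not the dissertation's scheme) refines around a bang-bang switch:

    ```python
    import numpy as np

    # One multiresolution refinement pass: keep a dyadic midpoint only where
    # linear interpolation from the coarser level misses the function value
    # by more than a tolerance (an irregularity indicator).
    def refine(f, t_coarse, tol=1e-3):
        t_new = []
        for a, b in zip(t_coarse[:-1], t_coarse[1:]):
            mid = 0.5 * (a + b)
            interp = 0.5 * (f(a) + f(b))
            if abs(f(mid) - interp) > tol:      # detail coefficient is large
                t_new.append(mid)               # keep the finer point here
        return np.sort(np.concatenate([t_coarse, t_new]))

    bang_bang = lambda t: np.sign(t - 0.37)     # control with a switch at t=0.37
    grid = np.linspace(0, 1, 9)
    for _ in range(4):
        grid = refine(bang_bang, grid)
    print(len(grid), grid[np.abs(grid - 0.37) < 0.1])  # points cluster at the switch
    ```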

  1. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is the one that exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.

  2. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo-Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  3. NEMA NU 4-Optimized Reconstructions for Therapy Assessment in Cancer Research with the Inveon Small Animal PET/CT System.

    PubMed

    Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas

    2015-06-01

    We compared conventional filtered back-projection (FBP), two-dimensional ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between control and treated groups at all time points, with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.

  4. Optimal Control of Thermo-Fluid Phenomena in Variable Domains

    NASA Astrophysics Data System (ADS)

    Volkov, Oleg; Protas, Bartosz

    2008-11-01

    This presentation concerns our continued research on adjoint-based optimization of viscous incompressible flows (the Navier-Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free-boundary problems requires the use of the shape-differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two-phase Stefan problem with contact point singularities, where our approach allows us to obtain a thermodynamically consistent solution.

  5. An improved 2D MoF method by using high order derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2017-11-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous research. In this paper, the higher-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval in which the optimal solution is located can be determined. Since the higher-order derivatives of the objective function are continuous within the target interval, iteration schemes based on higher-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. This direct update further improves efficiency, especially as the number of edges of the polygon increases. Halley's method, which is based on the first three derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on quadrilateral cells and about one sixth on decagon cells.
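
    For context, Halley's method updates a root estimate using the function and its first two derivatives, x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f''), and converges cubically where those derivatives are continuous, which is why the target-interval localization above matters. A generic sketch (not the MoF objective itself):

    ```python
    import numpy as np

    def halley(f, df, d2f, x0, tol=1e-12, max_iter=20):
        """Halley's method: cubically convergent root finding using f, f', f''."""
        x = x0
        for _ in range(max_iter):
            fx, dfx, d2fx = f(x), df(x), d2f(x)
            dx = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
            x -= dx
            if abs(dx) < tol:
                break
        return x

    # Toy problem: find the angle where g(a) = cos(a) - a crosses zero.
    g   = lambda a: np.cos(a) - a
    dg  = lambda a: -np.sin(a) - 1
    d2g = lambda a: -np.cos(a)
    print(halley(g, dg, d2g, 1.0))   # ~0.7390851332
    ```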

  6. Physical constraints, fundamental limits, and optimal locus of operating points for an inverted pendulum based actuated dynamic walker.

    PubMed

    Patnaik, Lalit; Umanand, Loganathan

    2015-10-26

    The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.
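
    The tangency construction can be reproduced numerically: fixing a target average velocity and minimizing power along that velocity contour yields the point where the two contour families touch. The sketch below uses placeholder velocity and power maps, not the paper's energy model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-ins for the walker's average-velocity and mechanical-
    # power maps over (v0, phi_m); the paper derives these from its energy model.
    vx_avg = lambda v0, phi: v0 * np.cos(phi)
    power  = lambda v0, phi: v0 ** 3 * phi + 0.05 / np.maximum(phi, 1e-6)

    def optimal_point(v_target):
        # Minimize power subject to the velocity contour vx_avg = v_target;
        # at the optimum the power and velocity contours are tangent.
        cons = {"type": "eq", "fun": lambda z: vx_avg(*z) - v_target}
        res = minimize(lambda z: power(*z), x0=[v_target, 0.3],
                       bounds=[(0.1, 2.0), (0.05, 1.0)], constraints=[cons])
        return res.x

    locus = np.array([optimal_point(v) for v in np.linspace(0.3, 0.9, 7)])
    print(locus.round(3))   # columns: v0, phi_m along the optimal locus
    ```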

  7. [Optimization of cultural condition of genetic engineering strain for antibiotic peptide adenoregulin and research on its fed-batch cultivation].

    PubMed

    Zhou, Yu-Xun; Cao, Wei; Wei, Dong-Zhi; Luo, Qing-Ping; Wang, Jin-Zhi

    2005-07-01

    The 33-amino-acid antibiotic peptide adenoregulin (ADR), first isolated from the skin of the South American arboreal frog Phyllomedusa bicolor, forms an amphipathic alpha-helical structure in apolar media and has a wide spectrum of antimicrobial activity and high lytic potency. The adr gene was cloned into pET32a and transformed into Escherichia coli BL21(DE3). The culture and induction conditions of E. coli BL21(DE3)/pET32a-adr were optimized. The effects of three factors, namely the time point of induction, the concentration of IPTG in the culture, and the duration of induction, on the expression level of Trx-ADR were investigated. The results indicated that the expression level was affected most strongly by the time point of induction. Nine varieties of media in which BL21(DE3)/pET32a-adr was cultured and induced were tested to achieve a high expression level of the target protein. It was found that glucose in the medium played an important role in maintaining a stable, high expression level of Trx-ADR. The optimal induction conditions are as follows: the culture medium is 2 x YT + 0.5% glucose, induction starts at OD600 = 0.9, the final concentration of IPTG in the culture is 0.1 mmol/L, and the induction time is 4 h. BL21(DE3)/pET32a-adr was cultivated with constant pH at the early stage and exponential feeding at the later stage to obtain high cell density. During the entire fed-batch phase, by controlling the feeding of glucose, the specific growth rate of the culture was held at about 0.15 h(-1) and the accumulation of acetic acid was kept at a low level (<2 g/L), but plasmid stability could not be maintained well: at the end of the cultivation, 40% of the bacteria in the culture had lost their plasmids. As a result, the expression level of the target protein declined dramatically, although 90% of Trx-ADR remained in soluble form. The expressed fusion protein showed no antibacterial activity, while the native form of ADR cleaved from Trx-ADR showed distinct antibacterial activity.

  8. Grand Tour outer planet missions definition phase. Part 2: Minutes of meetings and official correspondence

    NASA Technical Reports Server (NTRS)

    Belton, M. J. S.; Aksnes, K.; Davies, M. E.; Hartmann, W. K.; Millis, R. L.; Owen, T. C.; Reilly, T. H.; Sagan, C.; Suomi, V. E.; Collins, S. A., Jr.

    1972-01-01

    A variety of imaging systems proposed for use aboard the Outer Planet Grand Tour Explorer are discussed and evaluated in terms of optimal resolution capability and efficient time utilization. It is pointed out that the planetary and satellite alignments at the time of encounter dictate a high degree of adaptability and versatility in order to provide sufficient image enhancement over earth-based techniques. Data compression methods are also evaluated according to the same criteria.

  9. [Problems and solutions of implementing plan environmental impact assessment in China].

    PubMed

    Liang, Xue-gong; Liu, Juan

    2004-11-01

    At present, there are two forms of Plan Environmental Impact Assessment (PEIA): self-assessment and entrusted assessment. The appropriate objects of assessment and the timing for starting a PEIA are discussed. It is pointed out that self-assessment should realize the essential roles of PEIA, such as 'starting as early as possible' and 'optimization of options'. The concept and function of alternatives, PEIA methodology, and the role of public participation are also briefly discussed.

  10. On the existence of touch points for first-order state inequality constraints

    NASA Technical Reports Server (NTRS)

    Seywald, Hans; Cliff, Eugene M.

    1993-01-01

    The appearance of touch points in state constrained optimal control problems with general vector-valued control is studied. Under the assumption that the Hamiltonian is regular, touch points for first-order state inequalities are shown to exist only under very special conditions. In many cases of practical importance these conditions can be used to exclude touch points a priori without solving an optimal control problem. The results are demonstrated on a simple example.

  11. Walking with a Slower Friend

    ERIC Educational Resources Information Center

    Bailey, Herb; Kalman, Dan

    2011-01-01

    Fay and Sam go for a walk. Sam walks along the left side of the street while Fay, who walks faster, starts with Sam but walks to a point on the right side of the street and then returns to meet Sam to complete one segment of their journey. We determine Fay's optimal path minimizing segment length, and thus maximizing the number of times they meet…

  12. Planned Missing Designs to Optimize the Efficiency of Latent Growth Parameter Estimates

    ERIC Educational Resources Information Center

    Rhemtulla, Mijke; Jia, Fan; Wu, Wei; Little, Todd D.

    2014-01-01

    We examine the performance of planned missing (PM) designs for correlated latent growth curve models. Using simulated data from a model where latent growth curves are fitted to two constructs over five time points, we apply three kinds of planned missingness. The first is item-level planned missingness using a three-form design at each wave such…

  13. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay.

    PubMed

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter; Troldborg, Carl Peder; Fodgaard, Henrik; van Os, Sylvana; Pedersen, Christian

    2017-08-01

    High-sensitivity cardiac troponin assay development enables determination of biological variation in healthy populations and more accurate interpretation of clinical results, and points towards earlier diagnosis and rule-out of acute myocardial infarction. In this paper, we report preliminary tests of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by a factor of 5 for the standard troponin I assay and by a factor of 3 for the research high-sensitivity troponin I assay, compared to flash-lamp excitation. The obtained limit of detection was 0.22 ng/L measured in plasma with the research high-sensitivity troponin I assay and 1.9 ng/L measured in tris-saline-azide buffer containing bovine serum albumin with the standard troponin I assay. We discuss the optimization of time-resolved detection of lanthanide fluorescence based on the time constants of the system and analyze the background and noise sources in a heterogeneous fluoroimmunoassay. We determine the limiting factors and their impact on measurement performance. The suggested model can be generally applied to fluoroimmunoassays employing the dry-cup concept.

  14. Optimization of experimental conditions for composite biodiesel production from transesterification of mixed oils of Jatropha and Pongamia

    NASA Astrophysics Data System (ADS)

    Yogish, H.; Chandrashekara, K.; Pramod Kumar, M. R.

    2012-11-01

    India is looking at renewable alternative sources of energy to reduce its dependence on imported crude oil. As India imports 70% of its crude oil, the country has been greatly affected by increasing cost and uncertainty. Biodiesel fuel derived by two-step acid transesterification of mixed non-edible oils from Jatropha curcas and Pongamia (karanja) can meet the requirements of diesel fuel in the coming years. In the present study, different proportions of methanol, sodium hydroxide, and sulfuric acid, and variations of reaction time and reaction temperature, were adopted in order to optimize the experimental conditions for maximum biodiesel yield. The preliminary studies revealed that biodiesel yield varied widely, in the range of 75-95%, using the laboratory-scale reactor. An average yield of 95% was obtained. The fuel and chemical properties of the biodiesel, namely kinematic viscosity, specific gravity, density, flash point, fire point, calorific value, pH, acid value, iodine value, sulfur content, water content, glycerin content and sulfated ash values, were found to be within the limits suggested by the Bureau of Indian Standards (BIS 15607: 2005). The optimum combination of methanol, sodium hydroxide, sulfuric acid, reaction time and reaction temperature is established.

  15. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

  16. Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Gu, L.

    2016-12-01

    Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projections of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and that the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in CLM4.5 for the US-MOz site.
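
    The surrogate idea can be illustrated with an ordinary quadratic response surface in place of sparse-grid interpolation: fit a cheap polynomial to a few expensive runs, then search the polynomial from many starting points. Everything below (the stand-in model, ranges, sample counts) is hypothetical:

    ```python
    import numpy as np
    from itertools import combinations_with_replacement
    from scipy.optimize import minimize

    # Stand-in for an expensive model run (e.g., a CLM simulation); illustrative.
    def expensive_model(p):
        return np.sum((p - 0.3) ** 2) + 0.1 * np.sin(5 * p).sum()

    def quad_features(P):
        """Constant, linear, and quadratic terms of a response surface."""
        cols = [np.ones(len(P))]
        cols += [P[:, i] for i in range(P.shape[1])]
        cols += [P[:, i] * P[:, j]
                 for i, j in combinations_with_replacement(range(P.shape[1]), 2)]
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    train = rng.uniform(0, 1, size=(40, 5))           # 40 sampled parameter sets
    y = np.array([expensive_model(p) for p in train])
    coef, *_ = np.linalg.lstsq(quad_features(train), y, rcond=None)

    surrogate = lambda p: float(quad_features(p[None, :]) @ coef)  # fast to evaluate
    best = min((minimize(surrogate, x0, bounds=[(0, 1)] * 5)
                for x0 in rng.uniform(0, 1, (20, 5))),
               key=lambda r: r.fun)                    # multistart global search
    print(best.x.round(3))
    ```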

  17. Design of optimal impulse transfers from the Sun-Earth libration point to asteroid

    NASA Astrophysics Data System (ADS)

    Wang, Yamin; Qiao, Dong; Cui, Pingyuan

    2015-07-01

    The lunar probe Chang'E-2 is the first to achieve both a transfer to a Sun-Earth libration point orbit and a flyby of the near-Earth asteroid Toutatis. This paper, taking Chang'E-2's asteroid flyby mission as an example, provides a method to design low-energy transfers from a libration point orbit to an asteroid. The method includes the analysis of transfer families and the design of optimal impulse transfers. Firstly, one-impulse transfers are constructed by correcting initial guesses, which are obtained by perturbing in the direction of the unstable eigenvector. Secondly, the optimality of the one-impulse transfers is analyzed and optimal impulse transfers are built using primer vector theory. After optimization, the transfer families, including the slow and the fast transfers, are refined into continuous and lower-cost transfers. The method proposed in this paper can also be used for designing transfers from an arbitrary Sun-Earth libration point orbit to a near-Earth asteroid in the Sun-Earth-Moon system.

  18. Optimal platform design using non-dominated sorting genetic algorithm II and technique for order of preference by similarity to ideal solution; application to automotive suspension system

    NASA Astrophysics Data System (ADS)

    Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud

    2018-03-01

    Unlike conventional approaches where optimization is performed on a unique component of a specific product, optimum design of a set of components for use across a product family can significantly reduce costs. Increasing commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods are reported to solve these MOPs. However, what is less discussed is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes the technique for order of preference by similarity to ideal solution (TOPSIS) to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method in obtaining trade-off points with the best possible performance while maximizing the common parts.
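
    TOPSIS itself is compact enough to sketch: normalize and weight the objective matrix, locate the ideal and anti-ideal points, and rank candidates by relative closeness to the ideal. The front and weights below are invented for illustration:

    ```python
    import numpy as np

    def topsis(F, weights, benefit):
        """Rank candidates: F is (n_points x n_objectives); benefit[j] is True
        if objective j is to be maximized. Returns closeness to the ideal."""
        Z = F / np.linalg.norm(F, axis=0)            # vector-normalize columns
        Z = Z * weights
        ideal = np.where(benefit, Z.max(axis=0), Z.min(axis=0))
        nadir = np.where(benefit, Z.min(axis=0), Z.max(axis=0))
        d_best = np.linalg.norm(Z - ideal, axis=1)
        d_worst = np.linalg.norm(Z - nadir, axis=1)
        return d_worst / (d_best + d_worst)

    # Hypothetical non-dominated front: commonality (maximize), perf. error (minimize)
    front = np.array([[0.9, 0.40], [0.7, 0.25], [0.5, 0.15], [0.3, 0.10]])
    score = topsis(front, weights=np.array([0.5, 0.5]),
                   benefit=np.array([True, False]))
    print("trade-off design:", front[np.argmax(score)])
    ```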

  19. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol-assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.

  20. Method optimization for drug impurity profiling in supercritical fluid chromatography: Application to a pharmaceutical mixture.

    PubMed

    Muscat Galea, Charlene; Didion, David; Clicq, David; Mangelings, Debby; Vander Heyden, Yvan

    2017-12-01

    A supercritical fluid chromatography (SFC) method for the separation of a drug and its impurities has been developed and optimized applying an experimental design approach and chromatogram simulations. Stationary phase screening was followed by optimization of the modifier and injection solvent composition. A design-of-experiments (DoE) approach was then used to optimize column temperature, back-pressure and the gradient slope simultaneously. Regression models for the retention times and peak widths of all mixture components were built. The factor levels at different grid points were then used to predict the retention times and peak widths of the mixture components using the regression models, and the best separation for the worst-separated peak pair in the experimental domain was identified. A plot of the minimal resolutions was used to help identify the factor levels leading to the highest resolution between consecutive peaks. The effects of the DoE factors were visualized in a way that is familiar to the analytical chemist, i.e. by simulating the resulting chromatogram. The mixture of an active ingredient and seven impurities was separated in less than eight minutes. The approach discussed in this paper demonstrates how SFC methods can be developed and optimized efficiently using simple concepts and tools.
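
    The grid-prediction step described here, evaluating the worst-pair resolution at every factor combination and keeping the best, can be sketched as follows; the linear retention models stand in for the fitted DoE regressions and are purely hypothetical:

    ```python
    import numpy as np
    from itertools import product

    # Hypothetical per-component regressions predicting retention time and
    # peak width from temperature, pressure, and gradient slope.
    def predict(temp, pressure, slope, comp):
        rt = 2.0 + comp * 0.8 - 0.02 * temp + 0.5 * slope + 0.01 * pressure * comp / 100
        w = 0.15 + 0.01 * comp
        return rt, w

    def min_resolution(temp, pressure, slope, n_comp=8):
        """Resolution of the worst-separated pair of adjacent peaks."""
        peaks = sorted(predict(temp, pressure, slope, c) for c in range(n_comp))
        res = [2 * (b[0] - a[0]) / (a[1] + b[1]) for a, b in zip(peaks, peaks[1:])]
        return min(res)

    grid = product(np.linspace(20, 40, 5),     # temperature levels
                   np.linspace(100, 200, 5),   # back-pressure levels
                   np.linspace(0.5, 2, 5))     # gradient slope levels
    best = max(grid, key=lambda g: min_resolution(*g))
    print("best (temp, pressure, slope):", best)
    ```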

  1. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving in parallel each of the sub-problems in the set and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert from the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.

  2. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur at some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest-weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower-weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.

  3. Defect-free atomic array formation using the Hungarian matching algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2017-05-01

    Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where, by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations, so the process is slow, while heuristic methods are time-efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative for this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7 × 7 lattice to a 3 × 3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
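
    The pairing step maps directly onto a linear sum assignment problem, for which SciPy ships a Hungarian-style solver used in the sketch below; the collision-aware cost terms of the paper are omitted here, leaving plain squared move distances:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Pair loaded atoms with target sites so total squared move distance is
    # minimized (the paper augments this cost with collision-avoidance terms).
    rng = np.random.default_rng(7)
    atoms = rng.uniform(0, 7, size=(20, 2))   # occupied positions in a 7x7 region
    targets = np.array([(x, y) for x in (2, 3, 4) for y in (2, 3, 4)], float)

    cost = np.sum((atoms[:, None, :] - targets[None, :, :]) ** 2, axis=-1)  # 20 x 9
    rows, cols = linear_sum_assignment(cost)  # optimal atom-to-site pairing
    for r, c in zip(rows, cols):
        print(f"atom {r} at {atoms[r].round(2)} -> site {targets[c]}")
    ```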

  4. Singular values behaviour optimization in the diagnosis of feed misalignments in radioastronomical reflectors

    NASA Astrophysics Data System (ADS)

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro

    2016-07-01

    This communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimization of the number and distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, so as to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions, and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and the position of the field samples are then determined by optimizing the singular-value behaviour of the relevant operator.

  5. An explicit solution to the exoatmospheric powered flight guidance and trajectory optimization problem for rocket propelled vehicles

    NASA Technical Reports Server (NTRS)

    Jaggers, R. F.

    1977-01-01

    A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if convergence is physically possible. A form of this algorithm has been chosen for onboard guidance, as well as for real-time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.

  6. Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.

    2013-01-01

    Collinear Earth-Moon libration points have emerged as locations with immediate applications. These libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincare maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.

  7. A diagnostic algorithm to optimize data collection and interpretation of Ripple Maps in atrial tachycardias.

    PubMed

    Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa

    2015-11-15

    Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I, high confidence with a clear pattern of activation, through Grade IV, non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest <100% of cycle length (CL); <95% of the tachycardia CL mapped; variability of CL and/or an unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform.

  8. Palm-Based Neopentyl Glycol Diester: A Potential Green Insulating Oil.

    PubMed

    Raof, Nurliyana A; Yunus, Robiah; Rashid, Umer; Azis, Norhafiz; Yaakub, Zaini

    2018-01-01

    The transesterification of high oleic palm oil methyl ester (HOPME) with neopentyl glycol (NPG) has been investigated. The present study revealed the application of low-pressure technology as a new synthesis method to produce NPG diesters. Single-variable optimization and response surface methodology (RSM) were implemented to optimize the experimental conditions to achieve the maximum composition (wt%) of NPG diesters. The main objective of this study was to optimize the production of NPG diesters and to characterize the optimized esters with typical chemical, physical and electrical properties to study their potential as insulating oil. The transesterification reaction between HOPME and NPG was conducted in a 1 L three-neck flask reactor at specified temperature, pressure, molar ratio and catalyst concentration. For the optimization, four factors were studied, and the diester product was characterized by gas chromatography (GC) analysis. The synthesized esters were then characterized with typical transformer-oil properties such as flash point, pour point, viscosity and breakdown voltage, and were compared with mineral insulating oil and commercial NPG dioleate. For formulation, different samples of NPG diesters with different concentrations of pour point depressant were prepared, and each sample was tested for its pour point. The optimum conditions inferred from the analyses were: molar ratio of HOPME to NPG of 2:1.3, temperature = 182°C, pressure = 0.6 mbar and catalyst concentration of 1.2%. The synthesized NPG diesters showed a very important improvement in fire safety compared to mineral oil, with flash points of 300°C and 155°C, respectively. NPG diesters also exhibit a relatively good viscosity of 21 cSt. The most striking observation to emerge from the data comparison was the breakdown voltage of the NPG diester, which at 67.5 kV was higher than that of mineral oil and in conformance with the IEC 61099 limit. The formulation of the synthesized NPG diesters with VISCOPLEX® pour point depressant successfully lowered the pour point of the NPG diester from -14°C to -48°C. The reaction time for the transesterification of HOPME with NPG to produce NPG diester was successfully reduced to 1 hour from the 14 hours required by the earlier synthesis method. A main highlight of this study is that the excess reactant is no longer the methyl ester but the alcohol (NPG). The optimum reaction conditions for the synthesis were a molar ratio of 2:1.13 for NPG:HOPME, 182°C, 0.6 mbar and a catalyst concentration of 1.2 wt%. The maximum NPG diester yield of 87 wt% was consistent with the predicted yield of 87.7 wt% obtained from RSM. The synthesized diester exhibited better insulating properties than the commercial products, especially with regard to breakdown voltage, flash point and moisture content.

  9. Evaluation of 2-point, 3-point, and 6-point Dixon magnetic resonance imaging with flexible echo timing for muscle fat quantification.

    PubMed

    Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus

    2018-06-01

    The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) for measuring the proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly, sarcopenic men, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences, each acquired once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences), and once with higher image resolution and the shortest possible TEs (highRes sequences). Intra-fascia PDFF content was determined. To evaluate the comparability among the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequence served as reference, as a high correlation of this sequence to magnetic resonance spectroscopy has been shown before. The PDFF difference between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt sequences was low (between 2.2% and 4.4%), but not for the highRes 2pt Dixon sequence (33%). For the optTE sequences, the difference decreased with the number of echoes used. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences it was reliable only with dedicated in- and opposed-phase echo timing.

  10. Digital Investigations of AN Archaeological Smart Point Cloud: a Real Time Web-Based Platform to Manage the Visualisation of Semantical Queries

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.

    2017-05-01

    While virtual copies of the real world tend to be created faster than ever through point clouds and derivatives, their effective use by professionals demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.

  11. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to maintain primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very robust and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
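
    The paper's specific quasi-Newton algorithm is not reproduced here, but a toy primal-dual interior-point iteration for a convex quadratic program illustrates the property the abstract emphasizes: each iteration reduces to linear solves with a single KKT-style matrix rather than a quadratic programming subproblem. This is a minimal sketch under that framing, not the authors' method.

```python
import numpy as np

def toy_interior_point_qp(Q, c, A, b, x0, tol=1e-8, max_iter=50):
    """Toy primal-dual interior-point method for
        min 0.5 x^T Q x + c^T x   s.t.   A x <= b.
    Not the paper's quasi-Newton algorithm -- just a sketch of the idea
    that each iteration costs only linear solves, not a QP subproblem."""
    m, _ = A.shape
    x, s, lam = x0.copy(), b - A @ x0, np.ones(m)
    assert (s > 0).all(), "x0 must be strictly feasible"
    for _ in range(max_iter):
        mu = 0.1 * (s @ lam) / m                 # target complementarity
        r_dual = Q @ x + c + A.T @ lam           # stationarity residual
        r_comp = s * lam - mu                    # perturbed complementarity
        # Eliminate ds = -A dx and dlam, leaving one symmetric system in dx.
        M = Q + A.T @ ((lam / s)[:, None] * A)
        dx = np.linalg.solve(M, -r_dual + A.T @ (r_comp / s))
        ds = -A @ dx
        dlam = -(r_comp + lam * ds) / s
        # Fraction-to-boundary step keeps the iterates strictly feasible.
        alpha = 1.0
        for v, dv in ((s, ds), (lam, dlam)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * float(np.min(-v[neg] / dv[neg])))
        x, s, lam = x + alpha * dx, s + alpha * ds, lam + alpha * dlam
        if max(np.linalg.norm(r_dual), s @ lam) < tol:
            break
    return x

# Example: minimize x^2 subject to x >= 1 (written as -x <= -1).
x_opt = toy_interior_point_qp(np.array([[2.0]]), np.array([0.0]),
                              np.array([[-1.0]]), np.array([-1.0]),
                              x0=np.array([2.0]))
```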

  12. Robust Airfoil Optimization in High Resolution Design Space

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon L.

    2003-01-01

    Robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet yields a fairly smooth airfoil shape, and (3) it allows the user to trade off the level of optimization against the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that this strategy produces reasonable airfoil shapes that are similar to the original airfoils but provide drag reduction over the specified range of Mach numbers. We have tested the strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that it produces airfoils as good as or better than designs produced by traditional design methods.
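
    As a hedged sketch of the parameterization idea (illustrative values, not the paper's model), the snippet below represents an airfoil upper surface as a clamped B-spline whose control-point ordinates are the design variables; moving one control point changes the shape locally while the curve stays smooth.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
ctrl_y = np.array([0.0, 0.02, 0.06, 0.08, 0.06, 0.03, 0.0])  # upper-surface ordinates
n = len(ctrl_y)
# Clamped knot vector so the curve passes through the end control points.
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, 1.0, n - degree + 1),
                        np.ones(degree)])
surface = BSpline(knots, ctrl_y, degree)

x = np.linspace(0.0, 1.0, 200)        # chordwise stations
y = surface(x)                        # smooth upper-surface shape
ctrl_y_perturbed = ctrl_y.copy()
ctrl_y_perturbed[3] += 0.005          # one design-variable move
y_new = BSpline(knots, ctrl_y_perturbed, degree)(x)  # still smooth, locally changed
```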

  13. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time-consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, and techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
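
    A minimal sketch of the surrogate-based loop, with a cheap analytic function standing in for the expensive high-fidelity model and a Gaussian process as one possible surrogate (the paper surveys several surrogate types):

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Placeholder "expensive" model; in practice this is a high-fidelity simulation.
expensive = lambda x: np.sin(3 * x) + 0.5 * x**2

X = np.linspace(-2.0, 2.0, 8).reshape(-1, 1)   # design of experiments
y = expensive(X).ravel()                       # costly samples

# Fit the cheap surrogate, then optimize it instead of the expensive model.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.array([0.0]), bounds=[(-2.0, 2.0)])
print("surrogate minimum at x =", res.x)       # candidate for the next expensive run
```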

  14. SU-E-T-539: Fixed Versus Variable Optimization Points in Combined-Mode Modulated Arc Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kainz, K; Prah, D; Ahunbay, E

    2014-06-01

    Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6 MV beam at 300 MU/min. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among >95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck case planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose in the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery times were 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep the delivery time per fraction short, the fixed-OP planning approach is preferred.

  15. Locally optimal transfer trajectories between libration point orbits using invariant manifolds

    NASA Astrophysics Data System (ADS)

    Davis, Kathryn E.

    2009-12-01

    Techniques from dynamical systems theory and primer vector theory have been applied to the construction of locally optimal transfer trajectories between libration point orbits. When two libration point orbits have different energies, it has been found that the unstable manifold of the first orbit can be connected to the stable manifold of the second orbit with a bridging trajectory. A bounding sphere centered on the secondary, with a radius less than the radius of the sphere of influence of the secondary, was used to study the stable and unstable manifold trajectories. It was numerically demonstrated that within the bounding sphere, the two-body parameters of the unstable and stable manifold trajectories could be analyzed to locate low transfer costs. It was shown that as the two-body parameters of an unstable manifold trajectory more closely matched those of a stable manifold trajectory, the total ΔV necessary to complete the transfer decreased. Primer vector theory was successfully applied to a transfer to determine the optimal maneuvers required to create the bridging trajectory connecting the unstable manifold of the first orbit to the stable manifold of the second orbit. Transfer trajectories were constructed between halo orbits in the Sun-Earth and Earth-Moon three-body systems. Multiple solutions were found between the same initial and final orbits, where certain solutions retraced interior portions of the trajectory. All of the trajectories created satisfied the conditions for optimality. The costs of transfers constructed using invariant manifolds were compared with the costs of transfers constructed without them, where data were available. In all cases, the total cost of the transfer was significantly lower when invariant manifolds were used in the construction. In many cases, the transfers that employed invariant manifolds were three to four times more efficient, in terms of fuel expenditure, than those that did not. The decrease in transfer cost was accompanied by an increase in transfer time of flight. Transfers constructed in the Earth-Moon system were shown to be particularly viable for lunar navigation and communication constellations, as excellent coverage of the lunar surface can be achieved during the transfer.
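
    Manifold trajectories of the kind discussed here are generated by integrating the circular restricted three-body problem (CR3BP) equations of motion in rotating, nondimensional coordinates. The sketch below is a generic CR3BP propagator, not the author's code; the mass ratio and initial state are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # mass ratio, approximately the Earth-Moon value

def cr3bp(t, state, mu=MU):
    """CR3BP equations of motion in the rotating, nondimensional frame."""
    x, y, z, vx, vy, vz = state
    r1 = np.sqrt((x + mu)**2 + y**2 + z**2)        # distance to primary
    r2 = np.sqrt((x - 1 + mu)**2 + y**2 + z**2)    # distance to secondary
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    az = -(1 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

# Integrate an arbitrary sample state for one nondimensional time unit.
sol = solve_ivp(cr3bp, (0.0, 1.0), [0.85, 0.0, 0.0, 0.0, 0.2, 0.0],
                rtol=1e-10, atol=1e-12)
```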

  16. Protein Folding Free Energy Landscape along the Committor - the Optimal Folding Coordinate.

    PubMed

    Krivov, Sergei V

    2018-06-06

    Recent advances in simulation and experiment have led to dramatic increases in the quantity and complexity of produced data, which makes the development of automated analysis tools very important. A powerful approach to analyzing the dynamics contained in such data sets is to describe/approximate them by diffusion on a free energy landscape - free energy as a function of reaction coordinates (RCs). For the description to be quantitatively accurate, RCs should be chosen in an optimal way. Recent theoretical results show that such an optimal RC exists; however, determining it for practical systems is a very difficult unsolved problem. Here we describe a solution to this problem. We describe an adaptive nonparametric approach to accurately determine the optimal RC (the committor) for an equilibrium trajectory of a realistic system. In contrast to alternative approaches, which require a functional form with many parameters to approximate an RC and thus extensive expertise with the system, the suggested approach is nonparametric and can approximate any RC with high accuracy without system-specific information. To avoid overfitting for a realistically sampled system, the approach performs RC optimization in an adaptive manner, focusing optimization on the less optimized spatiotemporal regions of the RC. The power of the approach is illustrated on a long equilibrium atomistic folding simulation of the HP35 protein. We have determined the optimal folding RC - the committor - which was confirmed by passing a stringent committor validation test. It allowed us to determine the first quantitatively accurate protein folding free energy landscape. We have confirmed the recent theoretical results that diffusion on such a free energy profile can be used to compute exactly the equilibrium flux, the mean first passage times, and the mean transition path times between any two points on the profile. We have shown that the mean squared displacement along the optimal RC grows linearly with time, as for simple diffusion. The free energy profile allowed us to obtain a direct, rigorous estimate of the pre-exponential factor for the folding dynamics.
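
    A hedged sketch of the basic object in this abstract: given a long equilibrium trajectory projected onto a reaction coordinate q, the free energy profile follows from the Boltzmann inversion F(q) = -kT ln P(q), with P(q) estimated by a histogram. The trajectory below is synthetic; in the paper, q would be the optimized committor-based RC.

```python
import numpy as np

kT = 1.0                                    # energy in units of kT
rng = np.random.default_rng(0)
q = rng.normal(0.0, 1.0, 100_000)           # placeholder projected trajectory

counts, edges = np.histogram(q, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                           # avoid log(0) in empty bins
q_axis = centers[mask]
F = -kT * np.log(counts[mask] / counts.sum())   # F(q) = -kT ln P(q)
F -= F.min()                                # shift so the global minimum is zero
```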

  17. Radar antenna pointing for optimized signal to noise ratio.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin Walter; Marquette, Brandeis

    2013-01-01

    The signal-to-noise ratio (SNR) of a radar echo signal will vary across a range swath due to spherical wavefront spreading, atmospheric attenuation, and antenna beam illumination. The antenna beam illumination depends on antenna pointing. Calculations of the geometry are complicated by the curved earth and atmospheric refraction. This report investigates optimizing antenna pointing to maximize the minimum SNR across the range swath.
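
    A toy version of the trade-off studied in the report (a flat-earth, Gaussian-beam stand-in, not the report's model, which accounts for earth curvature and refraction): scan the beam pointing angle and keep the pointing that maximizes the minimum SNR over the swath.

```python
import numpy as np

ranges = np.linspace(10e3, 15e3, 200)     # range swath, meters (placeholder)
angles = np.arctan2(5e3, ranges)          # look angle to each bin, 5 km altitude
beamwidth = np.radians(3.0)               # one-way half-power beamwidth (placeholder)

def min_snr_db(pointing):
    """Relative SNR floor over the swath for a given pointing angle."""
    gain = np.exp(-4 * np.log(2) * ((angles - pointing) / beamwidth)**2)
    snr = gain**2 / ranges**3             # two-way beam pattern, range spreading
    return 10 * np.log10(snr.min())

candidates = np.linspace(angles.min(), angles.max(), 500)
best = candidates[np.argmax([min_snr_db(p) for p in candidates])]
print("best pointing (deg):", np.degrees(best))
```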

  18. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, holes, and variable point densities, posing great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to the geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy at reasonable computational cost, and that it segments pole-like objects particularly well.
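
    A hedged sketch of the per-point feature step the method builds on (a fixed neighborhood size is used here, whereas the paper selects an optimal size per point): eigen-decompose each neighborhood's covariance and form dimensionality features that an SVM can consume.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))            # placeholder scan
tree = cKDTree(points)
_, idx = tree.query(points, k=20)           # 20 nearest neighbors per point

features = np.empty((len(points), 3))
for i, nbrs in enumerate(idx):
    # Eigenvalues of the local covariance, sorted l1 >= l2 >= l3.
    evals = np.linalg.eigvalsh(np.cov(points[nbrs].T))[::-1]
    l1, l2, l3 = np.maximum(evals, 1e-12)
    features[i] = [(l1 - l2) / l1,          # linearity  (pole-like)
                   (l2 - l3) / l1,          # planarity  (facade/ground-like)
                   l3 / l1]                 # scattering (vegetation-like)
# `features` would then feed an SVM classifier, per the paper's pipeline.
```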

  19. An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan

    NASA Astrophysics Data System (ADS)

    Mulia, I. E.; Gusman, A. R.; Satake, K.

    2017-12-01

    Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentation on how and where to optimally place such measurement devices is limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterization, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations for observation points and to determine the initial number of observations. These points are generated from the extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search (MADS) to remove redundant measurements from the points generated in the first stage. A combinatorial search by MADS improves the accuracy and reduces the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first two leading modes with four extrema on each mode, results in 30 observation points spread along the trench; this is obtained after replacing clustered points within a radius of 30 km with a single representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The result shows that the optimized design, with a smaller number of observations, produces better source characterizations, with accuracy improvements of approximately 20-60% in all 11 hypothetical cases. It should be noted, however, that our design is a tsunami-based approach; some of the existing observing systems are equipped with additional devices to measure other parameters of interest, e.g., for monitoring seismic activity.
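
    A simplified sketch of the initialization stage (shapes and data are placeholders; the paper's fields come from tsunami simulations): extract EOF spatial modes by SVD of the event ensemble and take the extrema of the leading modes as candidate observation points.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_grid = 11, 5000
fields = rng.random((n_events, n_grid))     # e.g., max tsunami amplitude per node

anomalies = fields - fields.mean(axis=0)    # remove the ensemble mean
_, _, Vt = np.linalg.svd(anomalies, full_matrices=False)
modes = Vt[:2]                              # two leading EOF spatial modes

candidates = set()
for mode in modes:
    # Four strongest extrema (by absolute amplitude) of each mode.
    candidates.update(np.argsort(np.abs(mode))[-4:].tolist())
print(sorted(candidates))                   # candidate grid indices for gauges
```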

  20. Optimization of Supersonic Transport Trajectories

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Windhorst, Robert; Phillips, James

    1998-01-01

    This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary-value problem and transforms the problem from a single functional optimization into multiple function optimizations. It is shown that such an approach reproduces well-known aircraft performance results, such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies of the traditional energy-state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors is sufficient to smooth the jump.
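
    A toy illustration of the energy-state idea behind the energy climb path (placeholder thrust, drag, and weight values, not a supersonic transport model): at each fixed specific energy E = h + V^2/(2g), pick the altitude/speed split that maximizes specific excess power.

```python
import numpy as np

g = 9.81
W = 1.5e6                                    # aircraft weight, N (placeholder)
S, CD0, K = 360.0, 0.02, 0.05                # wing area and drag polar (placeholder)

def excess_power(h, V):
    """Specific excess power P_s = (T - D) V / W with toy thrust/drag models."""
    rho = 1.225 * np.exp(-h / 8500.0)        # crude exponential atmosphere
    q = 0.5 * rho * V**2
    thrust = 8.0e5 * (rho / 1.225)           # placeholder thrust lapse with altitude
    drag = q * S * CD0 + K * W**2 / (q * S)  # parasite + induced drag
    return (thrust - drag) * V / W

def best_split(E, n=200):
    """At fixed specific energy E, find the h/V split maximizing P_s."""
    hs = np.linspace(0.0, 0.98 * E, n)       # keep some energy as airspeed
    Vs = np.sqrt(2.0 * g * (E - hs))
    i = int(np.argmax([excess_power(h, V) for h, V in zip(hs, Vs)]))
    return hs[i], Vs[i]

for E in (5_000.0, 10_000.0, 20_000.0):      # specific energy heights, m
    h, V = best_split(E)
    print(f"E = {E:7.0f} m: climb at h = {h:7.0f} m, V = {V:5.0f} m/s")
```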
